| url | tag | text | file_path | dump | file_size_in_byte | line_count |
|---|---|---|---|---|---|---|
http://dealsmaven.com/powershot-elph-115-is-refurbished-camera-for-just-39-99-shipped/
|
code
|
Update: Now also available in the Silver color for just $39.99!
Canon is currently offering the PowerShot ELPH 115 IS refurbished camera in blue, for just $39.99!
Additionally, get FREE 2-day shipping with code: LIBERTY
Canon offers a free 1-year warranty on all refurbished cameras.
PowerShot ELPH 115 IS – Select the Blue color to see the price.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948514051.18/warc/CC-MAIN-20171211203107-20171211223107-00523.warc.gz
|
CC-MAIN-2017-51
| 349
| 5
|
https://developer.ticketino.com/code-sample/code-improvements
|
code
|
In this section
In case you want to improve your code and its maintainability, you can change the way you store variables. If you have global variables such as “baseUrl”, you can store them in the .env file (or the corresponding .env.development file). For development you can enter the variables into the .env.development file, and for the live version you can insert them into the parent .env file.
By defining your variable in one place, it is easier to maintain the code further down the line in case you need to change the “baseUrl”, for example, or need to access the same hard-coded value in multiple places. With .env.development you can also always be sure that when you start a project locally you aren't accidentally sending requests to the wrong organizer or making comparable mistakes.
Note that, by convention, any self-created global variable must begin with the prefix “REACT_APP_” followed by the variable name. These variables also need to be written in all caps so that the React project finds them at launch, whether for development or production.
We have also added an example of a page accessing the stored variable; the example shown is from the page Home.js.
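The Home.js example itself is not reproduced on this page, so here is a minimal sketch, in TypeScript, of how such a page might read the stored variable in a Create React App project. The variable name REACT_APP_BASE_URL and the /events endpoint are assumptions for illustration, not part of the original example.

```tsx
// Hypothetical .env (or .env.development) entry assumed by this sketch:
//   REACT_APP_BASE_URL=https://api.example.com
import React, { useEffect, useState } from "react";

// Create React App inlines REACT_APP_* variables at build time,
// so the value can be read once from process.env.
const baseUrl = process.env.REACT_APP_BASE_URL;

export default function Home() {
  const [status, setStatus] = useState("loading");

  useEffect(() => {
    // Every request shares the same base URL; switching organizers or
    // environments means editing only the .env file, not this component.
    fetch(`${baseUrl}/events`)
      .then((response) => setStatus(response.ok ? "ok" : "error"))
      .catch(() => setStatus("error"));
  }, []);

  return <p>API status: {status}</p>;
}
```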
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817033.56/warc/CC-MAIN-20240415205332-20240415235332-00818.warc.gz
|
CC-MAIN-2024-18
| 1,215
| 5
|
https://statemigration.com/solutions-for-automating-it-job-scheduling/
|
code
|
I still remember that day the boss walked into my office. And I still remember her fateful yet enticing words that talked me into tackling the project, The Project That Would Change Everything. "You're the Scripting Guy. You'll make it work," she said. I agreed, and the rest is history.
You're probably familiar with this story: Big projects in IT start that way. Big projects that may seem great on paper but then get very, very complex as they're developed. Big projects whose multiple applications across multiple platforms require substantial integration effort.
But this story isn't only about that one big project. It's also about all the other little automations—scripts for Windows and Oracle and Active Directory (AD), SQL packages, Linux cron jobs, and so on—that creep into every IT environment over time. Those automations absolutely solve a business' immediate needs. But without central management, they also come with a cost. That cost arises as scripts age, technology changes, and the script owners relocate or leave the company.
Back then, I was known as The Scripting Guy. If you needed a quick data transformation, or a scheduled movement of files from one system to another, I was your go‐to person. I had developed a command of the major scripting languages, along with all the other necessary add‐ons one needed to be The Guy. Over the course of several years, my integration prowess had grown to include platform‐ and application‐specific technologies such as WMI, ADSI, SQL, some Oracle, and even a bit of Linux and IBM AIX.
I put that knowledge to what I thought was good use. Over the years, I had gotten to the point where much of my daily responsibility was automated…sort of. Sometimes my little automations broke. Sometimes they were accidentally deleted or otherwise wiped out through the regular change that happens in any data center. Sometimes their need went away, or a server's configuration was updated, and as a result, I had to go find them once again and remember what they were intended to do.
The result was a not‐very‐well‐oiled machine; one that created even bigger problems the day I moved on to a new employer. You see, I had little automations scattered around the company servers with my name on them. Last I heard, they're still finding them years later, usually after some process breaks that nobody ever knew existed.
You can probably guess what we were missing. We needed enterprise IT job scheduling. That's why I'm writing this book, to explain what this approach is and help you realize you could probably use it as well. Throughout this book, I intend to return to that big project's story along with a few of the other little ones to show you why.
This book wouldn't exist if every IT technology seamlessly talked with each other, transferring data, events, and instructions across platforms and applications with ease. If every technology could perfectly schedule its activities with itself and others, you wouldn't be reading these pages.
But you are, and consequently this guide indeed exists.
It exists because IT job scheduling is a task that every enterprise needs, as do many small and midsize organizations. "Jobs" in this sense represent those little packages of automation I discussed earlier. Some work with a single system. Others integrate the services of multiple systems to create a kind of "mixed workload" that produces a result your business needs.
Consider a few of those jobs you're probably already creating today:
Your jobs might be less complex, working with only a single system or application. Or they might be exceedingly so, requiring the participation of multiple applications across different platforms in different parts of the world. At issue here isn't necessarily how complex your jobs are. The "simple" ones have many of the same requirements and necessitate as much due diligence as the "complex" ones. Rather, the issue has more to do with the workflow that surrounds those jobs, and the solutions you implement to manage, monitor, audit, and otherwise keep tabs on every activity at once.
It also has to do with the very different languages and techniques that each IT technology uses and requires. Those differences represent a big headache inside today's heterogeneous data centers. Your IT operating environment surely has Windows systems. But it probably also has Linux, Oracle, HP‐UX, Solaris, and others. You probably need to transfer XML documents, DOCX files, and XLSX spreadsheets over multiple protocols like SMB, FTP, and SSH. Even your monitoring comes in many flavors: SNMP for switches and routers, WMI for Windows systems, and all the various UNIX and Linux widgets for keeping tabs on their activities.
You can imagine just a few of the headaches these radical differences in applications and platforms create:
Solving these five problems is the primary mission of an IT job scheduling solution. From a central location (see Figure 1.1), an IT job scheduling solution creates a platform on which to run all your little "automation packages" that might otherwise be spread across technologies. Using a centralized solution, a database job, a UNIX/Linux job, and an FTP job are all parts of the same management framework. All begin their execution from the same place, and all are managed and monitored from that single location.
Figure 1.1: An IT job scheduling solution centralizes jobs of all types, across all platforms and applications.
As you can probably guess, such a solution has to be exceptionally comprehensive. A solution that works for your company not only needs to support your management, monitoring, and workflow needs. It must be more than just a more‐powerful version of the Windows Task Scheduler. It also needs to support the integrations into every OS, platform, and technology that your business processes incorporate. That's why finding a solution for automating IT job scheduling can be such a challenging activity.
To help you out, you can consider this guide to be a kind of automation "idea factory." Its four chapters will present you with questions to ask yourself, helping you frame your need for a job scheduling solution. It delivers a set of real‐world use cases for seeing scheduling in action. It deconstructs an IT job so that you can peer inside its internal machinery and understand the power of a centralized solution. And it will conclude with a checklist of requirements you should consider when seeking the software that creates your solution. I'll be your guide, and throughout this process I'll share a few of my own stories to bring some real‐world experience into this complex topic.
Note: In this book, you'll hear me use the term job scheduling. Another commonly used term for job scheduling is workload automation. For the purposes of this book, you can assume that the two are interchangeable.
Now, back to my story from long ago. Every OS and application comes equipped with multiple ways to perform its core functions. You already know this. An OS includes one or more scripting languages to enact change and read data. Every modern database has its own scheduling and automation functions, enabling the creation of packages for inserting and selecting data. Even middleware technologies and applications have their own APIs, which can be interfaced either inside or outside the application.
But the internal languages and automations that come with a product are rarely equipped to handle actions outside that product. Ever try to use an XML document to instruct a SQL Server to update an Oracle database row so that an SAP application can provision a process to an AIX mainframe? Whew! That's pretty close to the situation I experienced as I started on The Project That Would Change Everything.
Let's start with a little background. Why that project was needed is really unimportant, as is what we were doing with its data. What is important, however, are the interconnections between each of its disparate elements. Multiple applications running atop more than one OS, integrating with different databases, and requiring data from both inside and outside the organization was just the start.
To get going, I attempted to diagram its components, creating something close to what you see in Figure 1.2. At a high level, this system was constructed to aggregate a set of data from outside our organization with another set on the inside. Our problem was the many different locations where that data needed to go.
Figure 1.2: My unfriendly application.
Let me break down the mass of arrows you see in Figure 1.2. The flow of data in this system started via an FTP from an external data source. That data, along with all its metadata, needed to be stored in a single, centralized SQL database. There, permissions from Windows AD would be applied to various parts of the data set. Some data was appropriate for certain users, with other data restricted to only a certain few. Information inside the FTP data stream would identify who should have access to what.
Users could interact with that data through a Microsoft IIS server running a homegrown Web application. That Web application used XML to transfer data to and from the SQL database. Certain types of data also needed to be added to our company SAP system running atop Oracle, requiring data transformations and delivery between those two systems.
Occasionally, portions of that data would need to be ingested into a UNIX mainframe for further processing. There, it would be consolidated with data from other locations for greater use elsewhere in the company. An email server would ensure users were notified about updates, new data sets, and other system‐wide notifications.
That's a lot of arrows, and each of those arrows represents an integration that needs to be laid into place in order for the entire system to function. Each arrow also represents an activity that needs to happen at a particular moment in time. Data heading towards the Oracle database obviously couldn't be scheduled to go there until it was actually received at the SQL Server system. Users shouldn't be notified unless something important to them was actually processed. Just the scheduling surrounding each arrow's integration was a complex task unto itself.
Does this look like one or more of the systems that are currently in your data center? If you're doing much with data transformation and movement, you might have the same scheduling headaches yourself. That's why there are four critical points that are important to recognize:
It is the combination of these four realizations that helped me understand that I needed to step outside my application‐specific mindset. It helped me realize I needed to look to solutions that schedule activities across every platform and every application. That's when I started looking into enterprise IT job scheduling solutions.
Let's now take a step back from the storyline and think for a minute about what IT job scheduling should be. I've already suggested that a "job" represents some sort of automation that occurs within an IT system. But let's get technical with that definition. I submit that an IT job represents an action to be executed. An IT job might be running a batch file or script file. It might be running a shell command. It could also be the execution of a database job or transformation. Essentially, anything that enacts a change on a system is wrapped into this object we'll call a job.
Using an object‐oriented approach, it makes sense to consolidate individual actions into separate jobs. This single‐action‐per‐job approach ensures that jobs are re‐usable elsewhere and for other purposes. It means that I can create a job called "Connect to Oracle Database" and use that job any time I need to make an Oracle database connection anywhere.
Now if each job accomplishes one thing, this means that I can string together multiple jobs to fully complete some kind of action. I'll call that string an IT plan. A plan represents a series of related jobs that can be executed with the intended goal of carrying out some change. Figure 1.3 shows a graphical representation of how this might work.
Figure 1.3: Multiple jobs are connected to create a plan.
In Figure 1.3, you can see how three different jobs are connected to create the plan. Job 27 connects to an Oracle database. It passes its result to Job 19, which then extracts a set of data from that database. Once extracted, the data needs to be sent somewhere. Job 42 completes that task, as it FTPs the data to a location somewhere.
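To make the job-and-plan model concrete, here is a minimal TypeScript sketch of the structure Figure 1.3 describes. The data model is an illustrative assumption, not the API of any particular scheduling product; the job numbers and actions simply mirror the figure.

```ts
// Minimal sketch of the job/plan model described above (not a real product API).
type JobResult = Record<string, unknown>;

interface Job {
  id: number;
  name: string;
  // Each job performs exactly one action and passes its result along.
  run(input: JobResult): Promise<JobResult>;
}

// A plan is simply an ordered list of jobs executed in sequence,
// each receiving the previous job's result.
async function runPlan(jobs: Job[], initial: JobResult = {}): Promise<JobResult> {
  let result = initial;
  for (const job of jobs) {
    result = await job.run(result);
  }
  return result;
}

// Mirroring Figure 1.3: connect, extract, then FTP the data.
const sendDataSomewhere: Job[] = [
  { id: 27, name: "Connect to Oracle Database", run: async () => ({ connection: "oracle://example-db" }) },
  { id: 19, name: "Extract Data Set", run: async (r) => ({ ...r, rows: ["row1", "row2"] }) },
  { id: 42, name: "FTP Data to Destination", run: async (r) => ({ ...r, uploaded: true }) },
];
```

Calling runPlan(sendDataSomewhere) executes the three jobs in order, each handing its result to the next, which is exactly the chain the figure illustrates.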
There's obviously an art to creating good jobs. That's a topic that I'll discuss in greater detail in Chapter 3, but I need to introduce some of the basics here. A good job, for example, might not necessarily have any specific data or hard information that's stored inside the job. Rather than a connection string to a specific server for Job 27, a much better approach would be to use some kind of variable instead.
Developers use the techie term parameterization to represent this generalizing of job objects and the subsequent application of variables at their execution. Figure 1.4 shows how a parameterized plan can link three generic jobs. At the point this plan is run, those jobs are fed the variable information they need to connect to the right database, extract the right data, and eventually pass it on to the correct FTP site.
Figure 1.4: Feeding parameters to jobs in a plan.
By parameterizing the plan in this way, I now get reusability of the plan in addition to all the individual jobs that make up that plan. Should I down the road need to attach to a different database somewhere, pull off a different set of data, and send it to some other FTP site, I can accomplish this by reusing the plan and modifying its variable information. That's reusability on top of reusability!
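Building on the sketch above, and reusing its Job type, the following shows one way parameterization might look: the connection string, query, and FTP destination are fed in when the plan is built rather than hard-coded inside the jobs. All names and values here are invented for illustration.

```ts
// Parameters are supplied at execution time; nothing inside the jobs is hard-coded.
interface PlanParameters {
  connectionString: string;
  query: string;
  ftpUrl: string;
}

function buildTransferPlan(params: PlanParameters): Job[] {
  return [
    { id: 27, name: "Connect to Database", run: async () => ({ connection: params.connectionString }) },
    { id: 19, name: "Extract Data Set", run: async (r) => ({ ...r, query: params.query, rows: [] }) },
    { id: 42, name: "FTP Data", run: async (r) => ({ ...r, destination: params.ftpUrl }) },
  ];
}

// The same plan is reused for an entirely different database and FTP site
// simply by changing its variable information.
const cogsPlan = buildTransferPlan({
  connectionString: "oracle://cogs-db",
  query: "SELECT * FROM cogs",
  ftpUrl: "ftp://partner-a/drop",
});
const sprocketsPlan = buildTransferPlan({
  connectionString: "oracle://sprockets-db",
  query: "SELECT * FROM sprockets",
  ftpUrl: "ftp://partner-b/drop",
});
```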
There's obviously quite a bit more to this whole concept of working with jobs and plans. I'll spend more time in Chapter 3 helping you understand the various characteristics that can be assigned to a job and a plan as well as other objects a typical IT job scheduling solution will use.
But there is one characteristic that merits attention before moving on. That characteristic is the schedule itself, which needs to be applied to the object to tell it when to run. I mentioned earlier that scheduling for large systems like The Project requires a kind of flexibility you just can't get by looking at the clock on the wall. Rather, the kinds of jobs that project needs tend to be more related to actions or state changes that occur within the system.
Let's assume that Figure 1.4's "Plan 7" relates to some data transfer that needs to happen inside The Project. In this case, let's assume that the data transfer occurs between its SQL Server and UNIX mainframe. Figure 1.5 shows a graphical representation of how this might be applied. There, you can see how three different schedules could potentially be attached to the newly‐created plan:
Figure 1.5: Applying a schedule to a plan.
Any of these three schedules can be appropriate, depending on the needs of the system and its components. For example, the first schedule might be appropriate if a daily data dump is all that's needed. In that case, a date/time‐centric schedule is enough to complete the action. Very simple.
The second and third schedules highlight some of the more powerful scheduling options that could also drive the invocation of the plan. With the second, the plan is executed not based on any time of day. Rather, it executes when a set quantity of new data has been added to the database. This could be a smart solution if you want these two databases to stay roughly in sync with each other. It is really powerful when you consider how difficult that kind of scheduling would be to create if you were using just the native SQL or UNIX tools alone.
That third schedule is particularly interesting, because it could be used alone or in combination with the second. That third schedule instructs the plan to run only if the server isn't terribly busy. Using it in combination with the second allows you to maintain a level of synchronization while still throttling the use of the server. A good job scheduling solution will include a wide range of conditions that you can apply to plans to direct when they should kick off.
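Here is a hedged sketch of what those three schedules might look like if expressed as simple trigger conditions. A real scheduling product exposes these through its own configuration rather than code; the thresholds and metric sources below are assumptions.

```ts
// Three illustrative trigger conditions for "Plan 7", evaluated by a
// hypothetical scheduler loop. Thresholds and metric sources are invented.
interface Trigger {
  description: string;
  shouldRun(): Promise<boolean>;
}

// 1. Date/time-centric: run the daily data dump at 02:00.
const dailyAtTwo: Trigger = {
  description: "Every day at 02:00",
  shouldRun: async () => new Date().getHours() === 2,
};

// 2. Data-driven: run once at least 10,000 new rows have accumulated.
const enoughNewRows = (countNewRows: () => Promise<number>): Trigger => ({
  description: "At least 10,000 unsynchronized rows",
  shouldRun: async () => (await countNewRows()) >= 10_000,
});

// 3. Load-aware: run only when the server isn't terribly busy.
const serverIdle = (cpuLoad: () => Promise<number>): Trigger => ({
  description: "CPU load below 40%",
  shouldRun: async () => (await cpuLoad()) < 0.4,
});

// The plan fires only when every attached condition is satisfied.
async function shouldRunPlan(triggers: Trigger[]): Promise<boolean> {
  const results = await Promise.all(triggers.map((t) => t.shouldRun()));
  return results.every(Boolean);
}

// Example: keep Plan 7 roughly in sync while throttling server use.
// const ready = await shouldRunPlan([enoughNewRows(countRows), serverIdle(readCpu)]);
```

Attaching both the row-count condition and the load condition reproduces the combined second-and-third schedule described above.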
Again, I'll dive deeper into this deconstruction of an IT workflow's components in Chapter 3. But before you can truly appreciate the power of this modular approach, there are a few questions you're probably asking yourself. If you're not, let me help you out with a list of ten good questions about your own environment. Your answers to these ten questions will determine whether you'll want to turn to the next chapters. If you're experiencing zero headaches with the tools you have today for scheduling your IT activities, you won't need the rest of this book.
Everyone else will.
It wasn't many years ago that one of my jobs was keeping a set of servers updated. Monthly updates were de rigueur, with some on even shorter schedules. Each came with a very short time window in which it could and should be applied. The big problem was that these updates typically required a server reboot to get them applied.
At that time, our reboot window was in the wee hours of the morning, many hours past the usual 8‐to‐5 workday. For me, sticking around once a month to complete these updates represented a hardship on self and family. That's why I created my own automation that wrapped around these updates' installation. For my solution, when updates were dropped into a particular location, they were applied at the next window. My mobile device notified me should any problems occur.
From that point on, adding updates to servers meant simply adding them to the right location and making sure my mobile device was near the bedside. Yes, sometimes they'd experience a problem, but those could be fixed through a remote control session. Successful months could go by without loss of sleep or important family time.
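As a rough illustration of the kind of homegrown wrapper described here, the sketch below watches a drop folder and applies anything found there during the maintenance window, paging the administrator only on failure. The paths, the window, and the install and notify functions are all invented stand-ins.

```ts
import { readdirSync } from "fs";

// Invented stand-ins for the real install and paging mechanisms.
async function installUpdate(path: string): Promise<void> { /* platform-specific tooling */ }
async function notifyOnCall(message: string): Promise<void> { console.log(message); }

const DROP_FOLDER = "/updates/pending";
const WINDOW_HOUR = 2; // the wee-hours reboot window

export async function applyPendingUpdates(): Promise<void> {
  if (new Date().getHours() !== WINDOW_HOUR) return; // outside the window, do nothing
  for (const update of readdirSync(DROP_FOLDER)) {
    try {
      await installUpdate(`${DROP_FOLDER}/${update}`);
    } catch (err) {
      // Page the mobile device instead of requiring anyone to stay up watching.
      await notifyOnCall(`Update ${update} failed: ${err}`);
    }
  }
}
```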
Although your IT job scheduling needs might not necessarily go down the path of system update installation, this time (and money) savings becomes an important parable. Computers are designed by nature to be automation machines. Thus, it stands to reason that any manual activity should have an automation‐friendly adjunct. It is that adjunct that can be a part of your greater scheduling solution.
Can you afford the risk of inappropriate execution, forgetfulness, or user error in your critical activities? If not, creating flexible and reusable workflows via an IT job scheduling solution should pay for itself in a very short period of time.
That first question introduces the possibility of three kinds of risk in any manual system. First is the risk of inappropriate execution. Any task that requires manual intervention also introduces the notion that it could be executed at an inappropriate time. Or, more dangerously, such a task could be re‐parameterized to send data to the wrong location or execute it in an inappropriate way. There is a recognizable cost associated with this risk.
I remember a situation where a script was created that would apply a set of data to a specific server upon execution. That script took as parameters a list of servers to send the data to. One day, a junior administrator accidentally invoked the script with the "*" wildcard in place of a list of servers. As a result, data was distributed to every server all across the company. That single invocation cost the company significantly to clean up the mess.
Forgetfulness and user error are both additional risks that can be addressed through a job scheduling solution. In such a solution, jobs and plans are run within the confines of the system and its security model. Dangerous jobs can be specifically restricted against certain individuals or execution models. Centralizing your job execution security under a single model protects the environment against all three of these costly manual errors.
You probably have monitoring in place to watch servers. You've probably got similar monitoring for network components, perhaps even as part of the same product. But does your data center also leverage a unified heads‐up display for monitoring jobs along with their execution success? A failure in a job can cause the same kinds of outages and service losses as a failure in the network or its servers.
If your business systems interconnect through multiple scheduling utilities across multiple products and platforms, there usually isn't a way to centralize all those activities under one pane of glass. What you need is the same kind of monitoring for IT jobs that you've already got in place for your other components.
As you can see in Figure 1.6's mock‐up, you can get that by using a centralized approach. There, every action across every system and application is centralized into a single screen. Determining which jobs ran successfully is accomplished by looking in one place.
Figure 1.6: Daily activity under one pane of glass.
Your data center environment already has multiple scheduling engines in place today. Nearly every major business service technology comes with its own mechanism for scheduling its activities. In fact, those mechanisms are likely already performing a set of duties for your services.
Yet the problem, as you can see in Figure 1.7, has to do with the languages each of these platform‐specific and application‐specific scheduling tools speaks. SQL, for example, comes equipped with a wide range of tools for manipulating SQL data and SQL Server systems; but how rich are those tools when data needs to exit a SQL Server and end up on a UNIX mainframe?
Figure 1.7: Multiple scheduling engines.
Often, the native tools aren't sufficient, forcing an external solution to bridge the gap. That solution can be in the form of individual "little automations" like the scripts this chapter started with. Or they can be wrapped underneath the banner of a holistic job scheduling solution. Chapter 4 will discuss the capabilities you'll want to look for in the best‐fit solution.
Considering the answer to question 4, some platform‐ and application‐specific scheduling tools indeed include limited cross‐platform support. Their scheduling capabilities may be able to fire jobs based on actions or state changes.
However, one state change that is particularly difficult to measure across platforms is when tasks take too much time. Task idling in a state‐based scheduling system can cause the entire workbook of plans to come to a halt if not properly compensated for. Essentially, this idle time represents the period when a task does not complete, leaving the next one waiting.
Figure 1.8: Unmanaged task idling can kill a nonautomated workflow.
Idling need not necessarily be a problem within a piece of code or script. It can be simply the waiting that is natural in some types of on‐system activities: not knowing when a person will submit a file, for example, or when a piece of data is ready for the next step in its processing.
These idle states are notoriously difficult to plan for using time‐based scheduling alone. With time‐based scheduling, your jobs are built with no intelligence about changes that occur within a system. Rather, they simply run an action at some set point in time. Your job scheduling solution must include the logic necessary to add that intelligence. As you'll learn in later chapters, that intelligence can occur through event‐based scheduling or trigger‐based scheduling. In either of these cases, an on‐system event or trigger recognizes when a change has been made and initiates the next step in processing.
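To contrast the two approaches, here is a minimal sketch of an event-based trigger in Node-style TypeScript: instead of a job firing at a set time regardless of system state, the arrival of a file is itself what kicks off the next step. The folder path and the next-step function are assumptions.

```ts
import { watch } from "fs";

// Instead of polling at a fixed time with no knowledge of system state,
// an event-based trigger reacts the moment a change occurs.
const INBOX = "/data/incoming";

function startNextStep(fileName: string): void {
  console.log(`File ${fileName} arrived; kicking off the next job in the plan.`);
}

// fs.watch raises an event when something changes in the watched directory,
// so no task sits idle waiting for a clock to come around.
watch(INBOX, (eventType, fileName) => {
  if (eventType === "rename" && fileName) {
    startNextStep(fileName.toString());
  }
});
```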
If you haven't yet standardized on an enterprise job scheduling solution, can you honestly say how many tasks are operating everywhere in your data center? I used to think I knew where all of them were in that former job of mine. But then I left, and took with me the sum total of that knowledge. As I mentioned at the beginning of this chapter, those little automations are still being found years later—often after one of them breaks and causes downtime. More importantly, those are only my scripts. There were others in that company as well with scripts of their own that probably eventually got lost.
A centralized job scheduling solution creates a single point of control for automation. It enables auditors and IT teams to know where changes are being sourced from, and that consolidation makes auditors, security officers, and the troubleshooting administrators very happy.
If you can't, how can you correlate issues across those teams, platforms, and applications? Without that correlation, troubleshooting becomes a game of finger‐pointing and proving why not.
I was once told a story about a company in real need of an enterprise‐wide job scheduling solution. Their business system was much like The Project in that it involved multiple technologies across some very different platforms. Like The Project, managing that system fell to a somewhat distributed group of individuals. SQL Server was managed by the SQL Server team. SAP was administered by SAP administrators. Even the AD had its own group of people responsible for its daily care and feeding.
The problem in this company was not necessarily its application‐specific scheduling tools. It was in its people. Those widely‐distributed people feared the centralization that a job scheduling solution brings. That fear in part was due to the usual technologist's fear of centralization, but it was also a result of the assumption that a centralized tool would mean re‐creating SQL, SAP, AD, and other jobs on a new and completely different system outside their direct control.
An effective enterprise job scheduling solution shouldn't require the complete re‐creation of existing jobs within each platform and application. Recall that a job itself represents the change that is to be made, the individual script or package that must be executed. A job scheduling solution represents the wrapper around that invoked action.
This story ends as you'd expect, after a very small but very major problem in one subsystem impacted the system as a whole. Fully unable to track down that minor job with a major impact, the company discovered why centralizing is a good idea.
I once built my own scheduling system in the now‐ancient scripting language of VBScript. VBScript is still in use in many places, and it has a long history. But it's not known for having superior built‐in methods for scheduling activities. That said, its scheduler worked fine for the task I assigned to it. But the next time I needed a scheduler, I found myself reinventing the wheel. Even with the limited code modularization VBScript can bring to a script, my scheduler's reusability was very limited.
Imagine having to replicate that scheduling across multiple applications and platforms using different languages—and even using different approaches, both object‐oriented and structured. Homegrown schedulers are indeed an acceptable way of handling the triggering needs of individual scripts and packages; however, a global scheduler that works across all jobs and plans obviously creates a superior framework for job execution.
More importantly, the human resources necessary to maintain a homegrown job scheduler can be much greater than they seem at first blush. Those resources need to keep an eye on the scheduler's code itself alongside the jobs that the scheduler attempts to run. In many cases, you'll find that the extra cost of creating your own scheduler is the time the project takes away from other, more value‐oriented work.
Nowhere is that value‐add more pertinent than when two jobs rely on each other. This kind of job construction happens all the time within distributed systems. In it, the first task in a string completes with a set of data. That data is needed by the next task in the string. This sharing of information can be handled through file drop‐boxes or richer mechanisms like Microsoft Message Queue or database triggers.
Like with the problem of multiple languages, these queuing solutions tend towards being very platform‐centric. It becomes very difficult using a database trigger to invoke an action in AD, for example. A centralized job scheduling solution with rich support for applications will become the central point of control for all cross‐task action linking.
Last is the handling of error messages in custom‐coded scripts, a process that itself consumes a vast quantity of script development time. Errors are notoriously difficult to track down, and become even more challenging when scripts need to span platforms and applications. Error handling requires special skills in trapping variables and determining their intended and actual values. All of these activities grow even more difficult when scripts are run automatically as opposed to interactively because error messages in many cases cannot be captured. Chapter 3 will go into greater detail on the error‐handling functionality of a good job scheduler. For now, recognize that a homegrown script without error handling is ripe for troubleshooting headaches down the road.
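As a small illustration of that last point, here is a hedged sketch of the kind of error trapping an unattended script needs when nobody is there to read a console message. The log path and step names are assumptions.

```ts
import { appendFileSync } from "fs";

// When a script runs unattended, errors can't be read off the console,
// so every step is wrapped and its failure written somewhere durable.
const LOG_FILE = "/var/log/nightly-transfer.log";

function log(message: string): void {
  appendFileSync(LOG_FILE, `${new Date().toISOString()} ${message}\n`);
}

async function runStep(name: string, step: () => Promise<void>): Promise<boolean> {
  try {
    await step();
    log(`OK    ${name}`);
    return true;
  } catch (err) {
    // Capture the actual values involved, since nobody is watching live.
    log(`ERROR ${name}: ${(err as Error).message}`);
    return false;
  }
}
```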
So do you have good answers to these questions? Do you feel that your existing scheduling tools bring you zero headaches? If yes, then thanks for reading. If no, then it is likely that your next thoughts will be toward the types of IT challenges that an enterprise job scheduling solution can support. With the basics discussed and the initial questions answered, the next chapter introduces seven real‐world use cases for automating IT job scheduling. It should be an interesting read because the types of use cases outlined in that chapter are probably pretty close to those you've already got under management today.
Find me the business that runs atop a single application—a single instance!—and I'll show you a business that doesn't need IT job scheduling. Everyone else probably does.
In fact, most data centers have far more. Your average midsize data center runs applications for handling its databases, along with middleware systems for processing the data. That data center requires servers and protocols for staging of data in and out of the organization. Applications run atop client/server operating systems (OSs) and mainframes, servers, and perhaps even a few desktops. All of these elements need to communicate with each other, many don't share the same OS, and all suffer under the management complexities brought about by product‐specific toolsets.
Today's IT technologies are fantastic in the business processes they automate, but rare are two that seamlessly talk with each other. Rarer still is the IT product that is superior all by itself in creating and scheduling workflows that meet business requirements. Needed to integrate activities among disparate technologies is a central solution that can interact with each at once.
An IT job scheduling solution is that Rosetta Stone between different platforms, OSs, and applications. It is intended to be the data center's solution for converting raw technology into business processes. In this book, I hope to show you how to incorporate such a solution into your own business.
You've already experienced a taste of how an IT job scheduling solution might work. Chapter 1 was constructed to help you recognize that job scheduling is a service your IT organization probably needs. That said, Chapter 1's discussion intentionally stayed at a high level. You haven't yet explored deeply the features and capabilities such a solution might bring.
You won't get that deep dive in this chapter either. That's because I've found that the best explanation of IT job scheduling requires first a look at the problems it intends to solve. Once you understand where it fits, you'll then appreciate the logic behind its behaviors. It is my hope that by the conclusion of Chapter 1 you began nodding your head, affirming that this purported solution is something your data center desperately needs.
My task now is to further enlighten you with a series of ideas to help you find that best fit. These ideas will take the form of seven use cases; essentially, seven little "stories" about issues that have been resolved—or made easier—through the incorporation of a job scheduling solution. These stories themselves will be mostly fictitious but are based on real events and real problems. I'll use faux names to keep the narrative interesting.
There's an important point here. Even if some portion of these stories is made up, you should find that the problems and solutions in each aren't far from those you're experiencing.
The first of these stories has nothing to do with a customer‐facing solution. Neither is it directly related to a line‐of‐business application. Rather, the first of these stories starts simple. It explains the administrative situation at Company A, a mature company with a procedurally‐immature IT organization. Lacking many centralized processes, operating with marginally‐effective change and configuration control, and managed by five different administrators, Company A's data center is a mish‐mash of fiefdoms and technology silos. Problem is, these fiefdoms need to communicate with each other, even if their managing IT administrators won't.
John, Bob, Jane, Sara, and Jim are those five IT administrators (see Figure 2.1). Each is responsible for some portion of the data center infrastructure, with each having some overlap of responsibilities. To accomplish administration, they've created scripts, tasks, and packages that keep the individual business workflows running. Those automations indeed enact change on servers and get data moved from system to system but with no interconnection of intelligence.
Figure 2.1: An interconnection of automations.
Figure 2.1 explains this problem in graphical form. In it, you can see that individual automations are sourced without considering their context. If John creates a job, that job cannot be based on instrumentation gained through another job created by Sara. As a result, there is no way to orchestrate the activities between each individual, no way to schedule activities so that they do not conflict, and no way to base the information or scheduling of one off the results of another.
A much better solution is in aggregating these five people's automations into a single and centralized solution. Through that single solution, each administrator's jobs can be seen by the others. The jobs of each person can also be aligned with the needs of the others to ensure resources aren't oversubscribed. Additionally, because jobs are collocated in a single location, information and instrumentation from any automation can be used to drive other automations—or feed into their future scheduling.
Figure 2.2: Sourcing automations through an IT job scheduling solution.
In short, even if your automations are administrative in nature, an IT job scheduling solution can bring substantial benefit.
Yet an IT job scheduling solution isn't solely about its actors. In fact, in many ways, the actors can be one of that solution's least‐important impacts. An IT job scheduling solution really has more to do with the data in a data center. That's why the second story in this chapter deals with the different applications that are used by Company B.
Different from the IT administration example told in the previous story, Company B's story centers around their line‐of‐business (LOB) application. That LOB application is comprised of several components, each of which is represented in Figure 2.3. Transactions among these systems occur through a carefully choreographed set of tasks, jobs, packages, and workflows. As you can probably imagine, the system in aggregate crosses Windows and UNIX boundaries, and includes multiple database management systems and even a bit of middleware. It is the classic business service.
Figure 2.3: Individual schedulers for each individual application.
All of these individual components enable the functionality of the LOB application. But all also leverage their own built‐in toolsets for scheduling activities: The SQL server runs its SSIS packages, the Linux SAP server runs its own tasks and cron jobs, and even the Informatica server enacts change through its workflows.
You are correct in assuming that this one‐scheduler‐per‐component configuration can indeed work for many systems. Data and actions that occur inside Informatica can be based on wall‐clock time or other schedule characteristics. The SQL database can run its SSIS packages based on its own settings, and so on. However, like the actors in the first story, this environment is likely to experience problems as individual system activities conflict with those on other systems.
Contrast this situation to the superiority in design one gets through job consolidation. In Figure 2.4, the individual task schedulers in that same LOB application have been replaced by a single and centralized IT job scheduling solution. This is possible because, as I mentioned in Chapter 1, a primary benefit of such a solution is its ability to speak the language of every application in the business service.
Figure 2.4: Consolidating tasks across applications.
With that centralization of data and actions comes an enhancement to job scheduling, based on results or data in other jobs. This chapter's sixth story will explain in greater detail how triggering capability dramatically improves service performance; but know here that centralization of scheduling brings to bear greater instrumentation about the health of jobs across the entire application infrastructure.
John is an Oracle DBA who has been with Company C since the very beginning. In his role as database administrator, he built the company's business system infrastructure nearly from the ground up. As a result, he understands those systems inside and out: He has tuned the system over time to improve its performance and weed out non‐optimizations. He's built numerous scripts and other automations that gather data, translate it, process it between application components, and present reports for review by stakeholders. That system is critical to the company. It provides important revenue data for its sales teams and executives. It also means a lot to John.
Then one day the company grew. Substantially. Overnight. Acquiring a completely new line of business, Company C suddenly found its internal IT systems insufficient to handle the new reality of work and its associated data needs.
John was approached shortly after the merger by some very important people in the now larger company. Those people recognized his strengths in creating and managing the original revenue system that brought much value to the smaller company. They wanted another system, "…just like the first one, but this time for selling sprockets instead of cogs." Graciously accepting the offer to improve the company and fortify his resume, John immediately realized that simply replicating the original system would not be a trivial task. Although his scripts absolutely did everything requested of them in the original system, they were also hard‐coded into that original system. Its database architecture was designed to deal with cogs. His transformations were cog‐based in nature. Even the server names and script names were hard‐coded into each individual script, task, and package. Worse yet, there were hundreds of tiny automations spread everywhere.
Translating even a simple database job from cogs to sprockets, like what you see graphically in Figure 2.5, would take months of detective work, recoding, and regression testing. John was in for a great deal of work, and the result might not be as seamlessly valuable as his original system.
Figure 2.5: Replicating an automation to a completely new system.
Had John's scripts, tasks, and packages instead been objects within a central job management system (see Figure 2.6), this new company need might not be fraught with so much risk.
Figure 2.6: Objects remain objects as they're translated into a new system.
Recall Chapter 1's conversation about how good IT jobs are those that are coded for reusability. IT plans are then constructed out of individual job objects to enact change. In good jobs, variables are used to abstract things like server names and script names— sprockets and cogs, if you will—so that entire plans can be re‐baselined to new systems with a minimum of detective work, recoding, and regression testing. An IT job scheduling solution takes the risk out of business expansion, giving IT the flexibility to augment services as the business needs.
Company D's current situation is a product of its own success. Starting as a small organization with a single mission, their focus has changed and evolved over the years as lines of business come and go. Indeed, even whole businesses have been grafted on and later spun off as the winds shift in Company D's industry.
As a result, Company D suffers under many of the problems you would associate with any classic enterprise company. It has thousands of applications under management, some of which are used by only a very few people. Some homegrown solutions only remain because they were coded years ago to solve a specific problem for a specific need that has not changed.
Being a company that is more like the summation of lots of little companies, these business applications are nowhere near homogeneous. One budgeting application might store its data in SQL, another in Oracle, a third in some obscure database language spoken only by IT professionals long past retirement. Tying these applications and their data together is a big job with big consequences to the organization.
Many enterprise organizations that rely on disparate databases leverage Business Intelligence (BI) solutions like Crystal Reports. These BI solutions aggregate information across the different architectures. Using BI tools like Crystal Reports, data in an Oracle format can be compared and calculated against data in a SQL format, and so on. These tools come equipped with rich integrations, enabling them to interconnect nearly all database formats all at once (see Figure 2.7). Company D uses Crystal Reports to gather budgetary data across business units and individual project teams.
Figure 2.7: Connecting solutions like Crystal Reports to multiple databases.
Yet Figure 2.7 doesn't fully show the reality of how Company D's data is generated. Before that data ever becomes something tangible that can be ingested into a BI solution, it starts its life inside any number of down‐level systems. Figure 2.8 shows just a few of those underlying systems, all of which integrate to create the kind of data a BI solution desires to manipulate.
Figure 2.8: Underlying jobs make Business Intelligence data usable.
Notice in Figure 2.8 how a portion of the first business unit's information comes from a partner company external to the data center. The second and third business units have projects in combination that require orchestration and synchronization between databases. The second business unit has further integrations into an e‐commerce server in order to gather a full picture of budget levels.
BI solutions can indeed present a more‐unified view of data across different platforms, but they do not provide a mechanism to unify transactions between down‐level systems. For Company D, creating that unified workflow lies within the realm of an enterprise IT job scheduling system. If gathering data is more complicated than simply gathering data, a job scheduling system ties together the entire system to accomplish what you really need.
Company E had a big problem not long ago when they began extending customer services onto the Internet. They quickly found out that you can indeed interact with customers there, but creating a holistic system that gets customer data in all its various types into the hands of the right person isn't as easy as it looks.
What Company E found out is that dealing with customers over the Internet automatically creates a lot of data. That data arrives in various formats, with each format requiring a different mechanism for handling.
You can see a graphical representation of Company E's story in Figure 2.9. Internet customers that desired services interacted primarily through a Web server. Inside that Web server was contained the requisite logic to inform customers about products, and interact with them as purchases are made. Interesting about Company E is that their system involved two‐way communication with customers not only via the Web site and email but also in the transfer of data files.
Figure 2.9: Different data formats require different data handling.
Data files, as you know, are much different than XML files in Web transactions or emails back and forth through an email server. They're larger, they can come in many different formats which create particular issues when you're working with unstructured customers, and the management tools to work with them don't necessarily integrate well into other formats and workflows.
Company E needed a multipart mechanism to solve their formatting workflow problem. They needed to recognize when an order was placed, generate an FTP URL for the user to upload data, move that uploaded data from a low‐security FTP server to a high‐security database server, and finally notify the user when the transaction was complete. Adding to the complexity, those exact same steps were required in reverse at the time the order was fulfilled.
You can imagine the protocols and file formats at play here: XML, SMTP, FTP and SFTP, along with a little SSH and SOAP to tie the pieces together, just like you see in Figure 2.10. Complex needs like Company E's require data transfer handling that can support the recognition that files have been downloaded. Such handling can either monitor for a file's presence or use event‐ or message‐based notification. An IT job scheduling solution wraps file transfer logic into the larger workflow, enabling XML to trigger SSH, to fire off SMTP, and finally to invoke SOAP at the point the application requires.
Figure 2.10: Multiple protocols at play, each with its own management.
For complex file transfer needs, those that must exist within a business workflow, IT job scheduling solutions solve the problem without resorting to low‐level development.
This chapter's sixth story brings me back to the one I started in Chapter 1. There I explained some of the introductory pieces in The Project That Would Change Everything. I also explained how an IT job scheduling system was very quickly identified as the only class of solution that could enable the kind of functionality our complex project needed. Let me tell you a little more of that story.
Our determination happened shortly after whiteboarding the various components we knew the project needed. Figure 2.11 shows a re‐enactment of that whiteboarding activity, with only a few of its myriad interconnecting magic marker lines in place. My team knew that when external company data arrived on the FTP server, transferring that data to the SQL server must occur in as close to real‐time as possible.
Figure 2.11: Whiteboarding the triggers for The Project.
Fulfilling that requirement with traditional FTP alone created a fantastically problematic design. The idea of constructing an always‐on FTP session that constantly swept for new data was a ridiculous notion. And it wasn't a good idea for security. Needed was some kind of agent (or, better yet, an agentless solution) that would simply know when data arrived. Then it could provision that data from the FTP server's data storage to the SQL database.
But that wasn't our only challenge. At the same time, our SQL and Oracle databases needed to remain in strong synchronization. Changes to specific values in SQL must replicate to Oracle, also in as real‐time as computationally possible. Synchronizing Oracle with SQL meant also synchronizing metadata with SAP.
Even a single one of these near–real‐time requirements can be challenging for a developer to build. Lacking developers and a development budget, and wanting to complete this project using off‐the‐shelf components, we demanded a solution that would accomplish the task without reverting to low‐level coding.
What we needed were triggers.
Triggers are the real juice in an IT job scheduling solution. The kinds and capabilities of triggers a job scheduling solution supports make the difference between one that's enterprise ready and one that's not much more than the Windows Task Scheduler.
The Project required a wide range of these triggers to get the job done: Our project's FTP‐to‐SQL integration required a file‐based trigger, kicking off a job or plan when a file appeared at the FTP server. Message‐based triggers were also necessary for the SQL‐to‐Oracle integration, enabling the two applications to notify each other about synchronization activities. Event triggers were necessary in the Oracle‐to‐SAP integration, allowing Oracle to create events about changes to its state and alerting SAP to make associated changes based on event characteristics. Those same event triggers gave Active Directory (AD) the data to quickly tag permissions into the data. Finally, time‐based triggers kicked off occasional data transfers between the SQL database and the UNIX mainframe.
Wildly, simple triggers alone weren't sufficient. All by itself even the best trigger couldn't fulfill the multi‐server and multi‐action real‐time requirement our system demanded. We also needed the ability to tether or "chain" individual triggers together. By chaining triggers, we could speed the process, get data where needed, and ensure the system remained in convergence. I'll talk more about chaining triggers in the next chapter.
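Here is a rough conceptual sketch of trigger chaining, where each step's completion raises the event that fires the next step, so data moves through the chain in near real time. The event names and the simple in-process event bus are illustrative assumptions; a real product implements this across machines and protocols.

```ts
import { EventEmitter } from "events";

// Conceptual sketch of chained triggers: each step's completion is the
// event that fires the next one, so data flows FTP -> SQL -> Oracle -> SAP
// without waiting on fixed schedules. Event names and handlers are invented.
const bus = new EventEmitter();

bus.on("file-arrived", (file: string) => {
  console.log(`File-based trigger: loading ${file} into SQL Server`);
  bus.emit("sql-updated", file);
});

bus.on("sql-updated", (file: string) => {
  console.log("Message-based trigger: replicating changed rows to Oracle");
  bus.emit("oracle-updated", file);
});

bus.on("oracle-updated", () => {
  console.log("Event trigger: pushing metadata to SAP and tagging AD permissions");
});

// Simulate the external partner dropping a file on the FTP server.
bus.emit("file-arrived", "partner-data.csv");
```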
Pay careful attention to the triggering capability of your selected IT job scheduling solution. The very best will come with the richest suite of triggering abilities.
Company F was a midsize company with a midsize IT organization. Responsible for all the tasks typically associated with IT, its IT team got along well and generally provided good service to the company at large.
But even the most well‐meaning of IT organizations occasionally makes mistakes, and sometimes those mistakes are large in impact. That's the situation that occurred one day after two administrators began sharing their quiver of scripts, tasks, and packages.
You already know that one of the primary benefits of moving to script‐based automation is consistency in reuse. By packaging a series of tasks into a script, that script can be executed over and over with a known result. Parameterizing those same scripts makes them even more valuable to IT operations. Once created, the same script can be used over and over again across a range of different needs. Yet reusability can sometimes be a risk as well. When a person can simply double‐click a script to activate it, the chance presents itself that that double‐click might happen inappropriately.
That's exactly what happened the day Sara "borrowed" one of Jane's scripts for use in another system. Jane's scripts were brilliantly designed, smartly parameterized, and well documented. Built right into each script was all the necessary information another administrator would need to reuse the script elsewhere. For Sara, Jane's script perfectly solved her problem at hand.
The central problem, however, was that Sara wasn't really authorized to run Jane's script. In fact, the well‐meaning Sara wasn't supposed to be working on the system at all. When she executed Jane's script, it brought the system down unexpectedly. Company F learned a valuable lesson that day in the openness of simple scripts.
That's why shortly thereafter Company F invested in an IT job scheduling solution for aggregating their automations into a unified store. Unifying automations within a restricted‐run framework enabled Company F to apply privileges to their scripts. Because they chose a best‐in‐class IT job scheduling solution, they were able to not only apply privileges on the scripts themselves but also the jobs, plans, and even variables associated with those scripts. Having correct permissions in place reduces the risk that a Sara will inappropriately execute a script. But, more importantly, it also reduces the risk that a Sara‐type worker will inappropriately attach the wrong variables to the right script, or the right variables to the wrong script.
Figure 2.12: Applying security at various levels in script execution.
Consolidating IT automations into a job scheduling framework provides visualization of scripts within the enterprise. It adds security to what might otherwise be highly‐dangerous text files. It creates a location where their successful execution can be proven to administrators as well as auditors. And it creates an auditable environment of approved execution that protects the data center.
These seven stories are told to help you understand the value an IT job scheduling solution adds. In them, you've learned how IT job scheduling works for administration as well as complex tasks that might otherwise be relegated to low‐level developers. You've discovered how triggers and file manipulations are as important as database tasks and middleware actions. You've also learned how job scheduling creates that framework for approved execution that your auditors—and, indeed, your entire business—truly appreciate.
Yet as I mentioned as this chapter began, I still haven't dug deeply into the inner workings of jobs themselves. Now that you've gained an appreciation for where job scheduling benefits the data center, let's spend some time discussing a technical deconstruction of an IT workflow. At the conclusion of the next chapter, you'll gain an even greater appreciation for how these job objects and their plans fit perfectly into the needs of your business services.
Computers are useful because they'll perform an activity over and over without fail. The art is in telling them exactly what to do.
I hope that reading Chapter 2 was as enjoyable as writing it. Although I'll admit I took a little literary license in telling its stories, I did so to highlight the use cases where IT job scheduling makes perfect sense. Coordinating administrator activities, consolidating tasks, generalizing workflows, gathering data, orchestrating its transfer, triggering, and security are all important facets of regular data center administration. Yet too often these facets are administered using approaches that don't scale, introduce the potential for error, or can't be linked with other activities. The ultimate desire in each of Chapter 2's stories was the creation of workflow. That workflow absolutely involved each story's actors; but, more importantly, it involved the appropriate handling of those actors' data.
In many ways, workflows, jobs, and plans represent different facets of the same desire: Telling a computer what to do. You can consider them the logical representations of the "little automation packages" I referenced in the first two chapters. Although I spent much of those chapters explaining why they're good for your data center and how they'll benefit your distributed applications, I haven't yet shown you what they might look like.
That's what you'll see in this chapter. In it, you'll get an understanding of how a workflow quantifies an IT activity. You'll also walk through a set of mockups from a model IT job scheduling solution. Those mockups and the story that goes with them is intended to solidify your understanding of how an IT job scheduling solution might look once deployed.
But for now, let's stay at a high level for just a bit more. In doing so, I want to explain how workflows bring quantification to IT activities.
A workflow has been described as a sequence of connected steps. More importantly, a workflow represents an abstraction of real work. It is a model that defines how data gets processed, where it goes, and what actions need to be accomplished with that data throughout its life cycle.
You can find workflows everywhere in business, and not all are technical in nature. Think about the last time you took a day off from work. You know that taking that day off requires first submitting a request. That request requires approval. Once approved, you notify teammates and coworkers of your impending unavailability. In the world of paid time off, you can't just miss a day without following that process.
And yet sometimes people do just miss days. Perhaps they were very sick, or got stuck on the side of the road far from cell phone service. In any of these cases, the workflow breaks down because the process isn't followed. What results is confusion about the person's whereabouts, and extra effort in figuring out what they were responsible for accomplishing during their absence.
You can compare this "people" workflow to the "data" workflows in an IT system. Data in an IT system needs to be handled appropriately. Actions on it must be scheduled with precision. Data must be transferred between systems in a timely manner. Failure states in processing need to be understood and handled. The result in any situation is a system where data and actions can be planned on.
To that end, let's explore further the IT plan first introduced back in Chapter 1. Figure 3.1 gives you a reproduction of the graphic you saw back in that chapter. There, you can see how three jobs have been gathered together to create Plan 7 – Send Data Somewhere.
Figure 3.1: An example IT plan.
I won't explain again what this plan intends to do; the activities should be self‐explanatory. More important is the recognition that this example shows how an IT workflow quantifies an activity along a set of axes: capturability, monitorability and measurability, repeatability and reusability, and finally security. Let's explore each.
I find myself often repeating the statement, "Always remember, computers are deterministic!" Given the same input and processing instructions, they will always produce the same result. Yet even with this assertion, why do they sometimes not produce the result we're looking for?
That problem often centers on how well the established workflow captures the environment's potential states. A well‐designed workflow (and the solution used to create it) must have the ability to capture a system's states and subsequently do something based on what it sees.
I recently heard a story that perfectly highlights this need for capturability. In that story, a company ran numerous mission‐critical databases across more than one database platform. Most of these databases were part of homegrown applications that the company had created over time.
Backing up these databases was a regular chore for the IT department. Although the company's backup solution could indeed complete backups with little administrator input, the configuration of many databases required manual steps for backups to complete correctly. Due to simple human error, those manual steps were sometimes missed or mishandled. With more than 25 databases to manipulate, that human error became the biggest risk in the system. Fixing the problem was accomplished by implementing a solution that could capture the manual portions of the activity into an IT job.
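As an illustration of what "capturing" a manual step might look like, here is a hedged PowerShell sketch that folds one such error-prone hand operation, quiescing an application service before a database dump, into a single scripted job. The service name, server, and backup command are placeholders, and sqlcmd is assumed to be present on the host.

```powershell
# Capture a formerly manual pre-backup step as one repeatable job.
# Service, server, and database names are hypothetical.
$appService = 'ContosoAppSvc'
$backupSql  = "BACKUP DATABASE [ContosoDB] TO DISK = N'E:\Backups\ContosoDB.bak' WITH INIT"

try {
    Stop-Service -Name $appService -ErrorAction Stop   # the step operators used to do by hand

    # Run the native backup through sqlcmd (assumed installed with the SQL client tools)
    & sqlcmd -S 'SQLSRV01' -E -Q $backupSql
    if ($LASTEXITCODE -ne 0) { throw "Backup failed with exit code $LASTEXITCODE" }
}
finally {
    Start-Service -Name $appService                    # always bring the application back
}
```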
Such capture is only possible when an IT job scheduling solution is richly instrumented. That solution must include the necessary vision into backup solutions, database solutions, and even custom codebases. Vision into every system component means knowing when the task needs accomplishing.
You can't capture something unless you can monitor and measure it. Just as important as visibility into a system is visibility into the workflow surrounding that system. An effective IT job scheduling solution must be able to instrument its own activities so that the job itself can recover from any failure states.
This is of particular importance because most IT jobs don't operate interactively. Once created, tested, and set into production, a typical IT job is expected to accomplish its tasks without further assistance. This autonomy means that well‐designed jobs must include monitoring and measurement components to know when data or actions are different from expected values.
It's easiest to understand this requirement by looking at the simple IT plan in Figure 3.1. Such a workflow is only useful when its activities are measurable. More important, measurement of a plan's logic must occur at multiple points throughout the plan's execution. Figure 3.2 shows how this built‐in validation can be tagged to each phase of the plan's execution. In it, you see how the hand‐off between Job 27 and Job 19 requires measuring the success of the first job. If Job 27 cannot successfully connect to the database, then continuing the plan will be unsuccessful at best and damaging at worst. You don't want bad data being eventually sent via FTP to a remote location.
Figure 3.2: Validation logic ensures measurability.
Similar measurements must occur in the hand‐off between Job 19 and Job 42 and again at plan completion. A successful IT job scheduling solution will create the workbench where validation logic like that shown in Figure 3.2 can be tagged throughout an IT plan. This logic should not impact the execution of individual jobs, nor is it necessarily part of whatever code runs beneath the job object. Effective solutions implement validation logic in such a way to be transparent to the execution of the job itself.
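A minimal sketch of what that transparent validation might look like, assuming a SQL Server target: before the plan hands off from one job to the next, a small gate verifies the database connection and signals failure through its exit code so the scheduler can halt the plan. The server and database names are placeholders.

```powershell
# Validation gate between two jobs: succeed (exit 0) only if the database answers.
$connectionString = 'Server=SQLSRV01;Database=ContosoDB;Integrated Security=True'

$conn = New-Object System.Data.SqlClient.SqlConnection $connectionString
try {
    $conn.Open()                       # throws if the server is unreachable
    Write-Output 'Validation passed: database connection established.'
    exit 0                             # scheduler allows the next job to fire
}
catch {
    Write-Error "Validation failed: $($_.Exception.Message)"
    exit 1                             # scheduler halts the plan instead of pushing bad data
}
finally {
    $conn.Dispose()
}
```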
Transparency of measurement along with parameterization of job objects combine to create a repeatable and reusable solution. You can imagine that creating a well-instrumented IT plan like Figure 3.2 is going to take some effort. Once expended, that effort gains extra value when it can be reused elsewhere.
Reusability comes into play not only within each IT job but also within each plan. Recall my assertion back in Chapter 1 that an IT job is "an action to be executed." This definition means that the boundary of an IT job must remain with the execution of an action. Figure 3.3 shows a graphical representation of an Integrated Jobs Library. In that library is a collection of previously-created jobs: Job 17 updates a database row, Job 27 opens a connection to a database, and so on.
Figure 3.3: Reusing IT jobs in a plan; reusing IT plans in a workflow.
Each of those discrete jobs can be assigned to a workflow for the purposes of accomplishing some task. They can also be strung together in infinite combinations to create a more‐powerful IT plan. You can see an example of this in Figure 3.3. Notice how Job 27 represents the beginning step of Plan 15; it also represents a middle step for Plan 22.
Once created, both jobs and plans reside in an IT job scheduling solution's Integrated Jobs Library. From there, created jobs can be reused repeatedly as similar tasks are required. In Figure 3.3, two new databases require synchronization. Since a plan has already been created to accomplish this task, reusing that plan elsewhere can be as simple as a drag-and-drop. After dragging to create a new instance of the plan, the only remaining activities involve populating that plan with new server characteristics.
Chapter 2 introduced the notion of job security. In the seventh story, you read how individual jobs and entire plans can be assigned security controls to prevent misuse. That level of security is indeed an important part of any IT system; however, Chapter 2 only began the conversation.
Consider the situation where an IT plan updates data in a database. Correctly constructing this IT plan requires parameterizing it so that specific row values or items of data to update aren't hard-coded in. However, parameterizing the plan in this way introduces the possibility that someone could accidentally (or maliciously) reassign the plan and update the wrong data.
This risk highlights why deep-level security is fundamentally important to an IT job scheduling solution. You want controls in place to protect someone from invoking a plan inappropriately. But you also want controls in place to protect certain instances of, or triggers for, that plan from being executed. Each platform and application tied into your IT job scheduling solution has its own security model, as does the job scheduling solution itself. Mapping these two layers together is what enables a job scheduling solution to, for example, apply Active Directory (AD) security principals to an application with a non-Windows security model. Doing so enables you to lean on your existing AD infrastructure for the purpose of assigning rights and privileges in other platforms and applications. Figure 3.4 shows how such an extended access control list (ACL) might look, with triggers, trigger characteristics, and even instances of such a plan being individually securable.
Figure 3.4: Applying deep security to a job or plan.
At its core, an IT workflow is still a piece of code. Some kinds of code a solution's vendor will create and include within a job scheduling solution. These represent the built-in job objects in your solution's Integrated Jobs Library. Other code must be custom-created by the administrators who use that solution. No vendor can create objects for every situation, so sometimes you'll be authoring your own. Regardless of who creates the code, at the end of the day, it is that code that needs to be scheduled for execution.
With this in mind, let's walk through an extended example of constructing a workflow out of individual parts. You can assume in this example that an IT job scheduling solution has been implemented and will be used to author the workflow.
A diagram of that workflow is shown in Figure 3.5. In it, each block represents an activity to be scheduled. Its story goes like this: Data in a system needs to be monitored for changes. As changes occur, an IT plan must be invoked to gather the changes, run scripts against the data, and move it around through file copy and FTP transfers. While all these processes occur, individual jobs within the workflow must trigger each other for execution as well as monitor for service availability.
Figure 3.5: An example workflow.
You should immediately notice that scheduling is an important component of this workflow. That scheduling isn't accomplished through some clock-on-the-wall approach. It is instead based on monitoring the states present within the system (presence of files, WMI queries, log file changes, and so on), and firing subsequent actions based on changes in those states. This intra-workflow triggering is the foundation of IT job scheduling. Without it, scheduling jobs is little more than a function of time and date. A workflow like this requires a much faster response, one that moves from step to step based on the results of the just-completed step. You only get that through triggering.
Explaining Figure 3.5's workflow begins at its second step with the creation of an Oracle PL/SQL job object. This job object is necessary to run a query against the workflow's Oracle database. This object and its underlying query string should already be a component of your IT job scheduling solution. As a result, creating that job probably starts by clicking and dragging a representative SQL block (an example of which is shown in Figure 3.6) from a palette of options into the plan designer's workspace.
Figure 3.6: Oracle PL/SQL object.
Once added to the workspace, specifics about this job object's use will then be added into the SQL block's properties screen. In Figure 3.6, you see how a SELECT statement is created to connect to an Oracle database and gather data. You should also notice how a variable— ($DATA_SOURCE)—is used in this case to maintain the reusability of the job object.
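In script form, the same reusability idea might look like the following hedged sketch: the data source arrives as a parameter rather than a hard-coded value, mirroring the $DATA_SOURCE variable in the mock-up. The ODBC DSN and query are illustrative, and a real solution would inject the variable itself at run time.

```powershell
# Reusable query job: the data source is a parameter, not a constant.
param(
    [Parameter(Mandatory)] [string] $DataSource,   # plays the role of $DATA_SOURCE
    [string] $Query = 'SELECT order_id, status FROM orders WHERE status = ''PENDING'''
)

$conn = New-Object System.Data.Odbc.OdbcConnection "DSN=$DataSource"
try {
    $conn.Open()
    $cmd             = $conn.CreateCommand()
    $cmd.CommandText = $Query
    $adapter = New-Object System.Data.Odbc.OdbcDataAdapter $cmd
    $table   = New-Object System.Data.DataTable
    [void] $adapter.Fill($table)
    $table                                         # emit rows for the next job in the plan
}
finally {
    $conn.Dispose()
}
```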
Constructing that Oracle object is only the first step. By definition, there is no logic in it to define when it should be invoked. Accomplishing this requires creating one or more conditional statements. In this case, the workflow desires to query a Web service to see when data has changed. When it has, the Oracle SELECT statement is invoked. Figure 3.7 shows an example screen where such a Web services binding might be created. This binding identifies the methods that the Web service exposes, and is the first step in creating the necessary conditional logic.
Figure 3.7: Web services connection.
Our example now includes conditional logic for monitoring the Web service for data changes. It also includes connection logic for gathering data from the Oracle database. The next step in the workflow requires processing that data through the use of a script block. Such a script block might be entered into an IT job scheduling solution using a wizard similar to Figure 3.8.
Figure 3.8: A scripting job.
In this mockup, a job object is created to encapsulate a script. Scripting jobs are exceptionally malleable in that they can contain any code that is understood by the IT job scheduling solution and target application. In the case of Figure 3.8, the code is VBScript, although any supported language could be used.
The script's code is entered into the script block, along with other parameters like those seen in Figure 3.8: parameters associated with the code itself, completion status, script extensions, and so on. Once created, the script becomes a job object just like the others in this workflow.
Note: As you can imagine, using custom code introduces the possibility for error into any IT plan. Your IT job scheduling solution will include scripting guidelines, but it should also include instrumentation to validate script variables and handle and alert on errors as they occur.
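What that instrumentation means in practice can be sketched as follows: the script validates its inputs up front, wraps its work in structured error handling, and reports failure through a nonzero exit code that the scheduling solution can alert on. Paths and messages here are placeholders.

```powershell
# Defensive skeleton for a custom script job: validate, trap, and report.
param(
    [Parameter(Mandatory)]
    [ValidateScript({ Test-Path $_ })]      # refuse to run against a missing input
    [string] $InputFile
)

$ErrorActionPreference = 'Stop'             # promote errors to terminating so catch sees them
try {
    $rows = Import-Csv -Path $InputFile
    Write-Output "Processed $($rows.Count) rows from $InputFile"
    exit 0
}
catch {
    # The scheduler captures stderr and the exit code, then raises its alert.
    Write-Error "Script job failed: $($_.Exception.Message)"
    exit 1
}
```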
Figure 3.9: File copy job.
The next step in constructing the workflow is twofold. Figure 3.5's branching pattern illustrates the need to transfer the script's results to two locations using two different mechanisms. The first, seen in Figure 3.9, might be through a file copy job object.
Such an object is likely to be a built‐in object within an IT job scheduling solution's Integrated Jobs Library. Thus, adding that job object to the plan may require little more than dragging it into the workspace just like with the SQL object. Once added, parameters associated with the file transfer are then added along with actions should a failure occur. Note again here how a variable is used in the file copy object's parameters to maintain reusability.
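A hand-rolled equivalent of that built-in object might look like this hedged sketch, with the source and destination supplied as variables and a simple bounded retry as the on-failure action. The retry count and paths are assumptions.

```powershell
# File copy step with variable-driven paths and a bounded retry on failure.
param(
    [string] $Source      = '\\APPSRV01\exports\results.csv',  # placeholder paths
    [string] $Destination = '\\FILESRV02\incoming\',
    [int]    $MaxAttempts = 3
)

for ($attempt = 1; $attempt -le $MaxAttempts; $attempt++) {
    try {
        Copy-Item -Path $Source -Destination $Destination -ErrorAction Stop
        Write-Output "Copy succeeded on attempt $attempt."
        exit 0
    }
    catch {
        Write-Warning "Attempt ${attempt} failed: $($_.Exception.Message)"
        Start-Sleep -Seconds (10 * $attempt)     # back off before retrying
    }
}
Write-Error 'File copy job failed after all attempts.'
exit 1
```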
File copy jobs typically perform file transfers between similar operating systems (OSs), such as Microsoft Windows. But getting data off a Windows system and onto a Linux or UNIX system requires bridging protocols. That's why FTP jobs exist. Figure 3.10 shows how an FTP job object might look being dragged into the workspace. In Figure 3.10, an FTP (technically, an SFTP) job has been created. Added as parameters to that job are the FTP commands required to transfer the data as well as server names and credentials.
Figure 3.10: FTP job.
Note: Securing these credentials is also important to security. No regulated business or its auditors will look kindly on storing authentication credentials within an FTP command string. Thus, an effective IT job scheduling solution should provide a secured credentials store for such jobs. That store maintains credential security while allowing their reuse across multiple FTP jobs.
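Lacking a full scheduling solution, the same idea can be approximated in PowerShell with DPAPI-protected credentials, as in this hedged sketch: the credential is saved once, encrypted to the job's user account and machine, and reused by any FTP job without ever appearing in a command string. The host and paths are placeholders, and plain FTP via .NET stands in for the SFTP transfer described above.

```powershell
# One-time setup (run as the account the job executes under):
#   Get-Credential | Export-Clixml -Path 'D:\Secure\ftp-cred.xml'
# Export-Clixml encrypts the password with DPAPI for that user on that machine.

$cred   = Import-Clixml -Path 'D:\Secure\ftp-cred.xml'
$client = New-Object System.Net.WebClient
$client.Credentials = $cred.GetNetworkCredential()

try {
    # Upload the staged file; the URI and local path are illustrative.
    $client.UploadFile('ftp://ftp.example.com/inbound/results.csv',
                       'D:\Staging\results.csv') | Out-Null
    Write-Output 'FTP transfer complete.'
}
finally {
    $client.Dispose()
}
```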
I mentioned earlier that monitoring and measurement were key components of good IT plan creation. If you're not monitoring your environment, you won't be prepared for unexpected states. One way to do that monitoring can be through a trigger. I show a portion of such a trigger in Figure 3.11.
Figure 3.11: WMI-based trigger.
This trigger is used to facilitate the Monitor Service element in the workflow. For it, a Microsoft WMI query verifies the state of a service (in this case the TlntSrv or Telnet service). Not shown in the figure, but an important part of the job creation, is the action the trigger will accomplish when it discovers a stopped service. Assuming this sample workflow requires use of the service being monitored, the action associated with Figure 3.11 will be to restart that service if it is down.
This example is important because it highlights the kinds of state‐correcting actions an IT job scheduling solution can automatically perform. If your workflow requires specific servers and their services (or daemons) to be operational, building those corrective measures directly into the workflow goes far into ensuring the continued operation of the distributed business system.
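Stripped of the scheduling solution's UI, the Monitor Service logic reduces to something like the following sketch, which checks the Telnet service from Figure 3.11 and restarts it when stopped. In a real deployment the solution's WMI trigger would fire this corrective action on a state change; here a plain query stands in.

```powershell
# State-correcting check: restart the monitored service if it has stopped.
$serviceName = 'TlntSrv'   # the Telnet service from the example trigger

$svc = Get-CimInstance -ClassName Win32_Service -Filter "Name = '$serviceName'"

if ($null -eq $svc) {
    Write-Error "Service '$serviceName' is not installed on this host."
}
elseif ($svc.State -ne 'Running') {
    Write-Warning "$serviceName is $($svc.State); attempting restart."
    Start-Service -Name $serviceName
}
else {
    Write-Output "$serviceName is running normally."
}
```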
Our sample workflow needs to process two scripts to manipulate its data. The first you saw in Figure 3.8. I won't show you a similar view of the second script. Instead, I'll show you a constraint that might be applied (see Figure 3.12). Such a constraint can define when that script needs to be executed.
Figure 3.12: File constraint.
Recall that intra‐workflow scheduling needs to be more than just time‐based. Time‐based schedulers are by nature insufficient because they can only process data at prescribed times of the day. Doing so creates inappropriate delay for workflow processing. What you really want is steps in a workflow to fire once a successful result from previous steps is verified.
You could achieve this by running the workflow line by line. However, doing so doesn't necessarily base the execution of following steps off results from previous steps. That's why Figure 3.12's file constraint is useful. Constraining an IT job's execution to occur only when a file is present allows that job to kick off only at the most appropriate time.
Our example workflow needs to process its second script after a file is copied. One can assume then that the copied file will be present on the target system. Thus, adding a file constraint to a job object means running the job only when the file is present and the previous step is complete.
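Expressed as bare script rather than a constraint object, that gating logic might look like this hedged sketch: the second script simply refuses to start until the copied file shows up, with a timeout so a failed upstream step can't stall the plan forever. The path, timeout, and downstream script name are assumptions.

```powershell
# File-constraint stand-in: proceed only once the expected file exists.
$expectedFile   = '\\FILESRV02\incoming\results.csv'   # placeholder path
$timeoutSeconds = 600
$elapsed        = 0

while (-not (Test-Path -Path $expectedFile)) {
    if ($elapsed -ge $timeoutSeconds) {
        Write-Error "Constraint not met: $expectedFile never appeared."
        exit 1
    }
    Start-Sleep -Seconds 15
    $elapsed += 15
}

Write-Output 'Constraint satisfied; launching the second processing script.'
& 'D:\Scripts\Process-Results.ps1' -InputFile $expectedFile   # hypothetical second script
```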
Although not necessarily related to this example, a pair of additional mechanisms is worth exploring. The first can be seen in Figure 3.13, where a job constraint has been placed on a job. For those plans where you simply want one job to follow another after its successful completion, job constraints can ensure that path is followed. Important to recognize here is that, as configured, whatever job follows the one in Figure 3.13 will only begin if the previous job is successful. Your IT job scheduling solution should include multiple options for defining when jobs in a plan are allowed to begin.
Figure 3.13: Job constraint.
The other half of this equation is in telling which job to trigger after a successful completion. You can see an example of this in Figure 3.14. Here, a job (not identified in the figure) can be instructed to trigger upon the success of the previous job. Using combinations of constraints and triggers ensures that following steps in the workflow only execute when the state of the system is appropriate.
Figure 3.14: Completion trigger.
Although time-of-day scheduling is of comparatively minor use, it is still useful from time to time. Figure 3.15 shows an example scheduler that can be used for identifying when jobs should initiate. A good scheduler will include not only date- and time-based triggers but also support for complex, irregular scheduling needs.
Figure 3.15: Time-based schedule.
Whatever IT job scheduling solution you choose needs to arrive with a suite of potential triggers that define when jobs are fired. These triggers perform multiple functions. They enable actions to be fired based on known states rather than requiring periodic "wake the script up and verify" batch jobs. They provide a mechanism to simplify event handling on external systems, a process that can be very complex when handled within a job object itself. They also create the potential for new types of actions, enacting change based on states that would otherwise be difficult to monitor within a script.
Consider the following possible triggers as a starting point for defining when you might want actions fired in your data center. This list gets you going; I'll expand on it in the next chapter, where I deliver a shopping list of capabilities you should look for in a solution:
- File triggers, which fire when a file is created, modified, deleted, or otherwise changed
- Event triggers, which fire on entries written to OS or application event logs
- Message triggers, which fire on messages arriving through queuing systems such as MSMQ or JMS
- State triggers, such as WMI queries that watch the condition of services, processes, or other resources
- Completion triggers, which fire a job based on the success or failure of a previous job
- Date- and time-based schedules, for those cases where wall-clock timing is genuinely appropriate
Last, although the core of any IT job scheduling solution is indeed the code that enacts changes on systems, the last thing you want to do is begin creating scripts if pre‐created objects are already available. This chapter has discussed how an Integrated Jobs Library creates a palette of potential actions that you can add to your workspace. Figure 3.16 shows a representative sample of what one might look like. Pay careful attention to the actions that are available right out of the box in your chosen solution. You may find that leaning on your vendor for creating, testing, and validating objects greatly reduces your effort and risk of failure.
Figure 3.16: Integrated Jobs Library.
Telling computers what to do is indeed an art, one that's bounded in the science of logic. Purchasing and implementing an IT job scheduling solution only nets you an empty palette within which you can create your own automations. That empty palette does, however, come with substantial capabilities for creating those instructions. This chapter has attempted to show you ways in which that might occur.
There's still one more story left to tell. That story deals with highlighting the capabilities that you might want in setting up that palette. That's the topic for the final chapter. In it, I'll share a shopping list of capabilities that you might look for in an IT job scheduling solution. Some of those features will probably make sense, while others might surprise you.
Purchasing and implementing an IT job scheduling solution nets you only an empty palette within which you can create your own automations. Filling that palette to meet the needs of your environment is the next step.
You might remember this idea as the closing thought of the previous chapter. It highlights an important realization to keep in mind as you're considering an IT job scheduling solution: Once you've selected, purchased, downloaded, and incorporated into your infrastructure an IT job scheduling solution, what do you have? With many solutions, not much. Once installed, some solutions expose what amounts to an empty framework inside which you'll add your own jobs, plans, and schedules.
An IT job scheduling solution is, at the end of the day, only what you make of it. Right out of the box, a freshly installed solution won't immediately begin automating your business systems. Creating all those "little automations" is a task that's left up to you and your imagination.
That's why finding the right IT job scheduling solution is so fundamentally critical to this process. The right solution will include the necessary integrations to plug into your data center infrastructure. The right solution comes equipped with a rich set of triggers that bring infinite flexibility in determining when jobs are initiated. And the right solution helps you accomplish those automations easily, carefully, and with all the necessary tools in place to orchestrate entire teams of individuals. Integrations, triggers, and administration—these should represent your three areas of focus in finding the solution that works for you.
Just three things, eh? That's easy to say when you're just the author of some book on solutions for automating IT job scheduling. The real world simply isn't as cut and dried. The reality is that businesses today require justification—and often formal justification—in order to convert a tool that's desired into a tool that actually gets purchased.
Oftentimes, IT professionals know via gut instinct that they need something to solve their current problem. They often even have a vague notion of what that something probably looks like. The difficult part for many is in translating their instinct into a set of requirements that lay out exactly what they need.
That's why I've dedicated this final chapter to assisting you, the IT professional, in creating a formal requirements specification. I'll outline a set of requirements that are remarkably similar to those used to find the solution for my project, The Project That Would Change Everything.
You remember that project, first introduced in Chapter 1? Its architecture is reprinted as Figure 4.1. The Project That Would Change Everything, as you can see, incorporated a range of technologies along with associated triggers for moving data around while processing it at the speed its business required. Finding a single‐source solution to accomplish all of this wasn't an easy task. Thus, locating the solution that worked for us needed a set of formal requirements.
Figure 4.1: The Project That Would Change Everything.
In the next sections, I'll lay out the most important of those in formal requirements language. For each requirement, I'll add a bit of extra commentary to its story and, where possible, show you a mock‐up of what a potential fulfilling solution might look like. You're welcome and encouraged to reuse these requirements along with their justifications in your own specification for finding the product you ultimately need.
Oh, and you're welcome. Consider these your requirements for finding an IT job scheduling solution that'll work for your needs.
Not to belabor the point, but any IT job scheduling solution you select must work with every technology in your environment if it's to be useful. That means support for your databases, along with their query and management languages. It means integrating with your applications either directly or through exposed Web services. It requires direct and indirect integration with all forms of file transfer because data that's processed almost always needs to be moved somewhere else at some point. Finally, it must be able to handle data transformation, converting data between formats as it is processed or relocated.
With a sketch of the integration points that comprise your business systems, compare its list of products and technologies with those supported by the IT job scheduling systems you're considering. Those that don't support every technology should be immediately removed from your candidate list.
Platforms, applications, and technologies are only the first level of integration an IT job scheduling solution requires. In addition to general support for an application, such a solution must be able to dig into that application's activities and behaviors if it is to process and move around data.
Equally important as the support of those properties and methods is their exposure within the IT job scheduling solution. Not every business system is well documented, and not every property, method, or action has a well-known reason for being. Thus, a solution that can interrogate its integrations for what's available becomes critically important. Figure 4.2 shows how this might look for a Web service, where a mock-up IT job scheduling solution exposes a list of potential actions and data (in this case, properties and methods) with a single click.
Figure 4.2: Exposing the properties and methods of a Web services object.
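Windows PowerShell hints at how such interrogation works under the covers: a SOAP endpoint can be bound as a proxy object and then asked for its methods, much like the single-click discovery in the mock-up. The endpoint URL below is a placeholder.

```powershell
# Bind a SOAP Web service and enumerate what it exposes.
# The endpoint URL is illustrative only.
$proxy = New-WebServiceProxy -Uri 'http://appsrv01/orders/service.asmx?WSDL'

# List every callable method the service advertises.
$proxy | Get-Member -MemberType Method |
    Select-Object -ExpandProperty Name
```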
Orchestrating activities across platforms, applications, and technologies is only useful when the IT job scheduling solution can do so across the entire field of scripting languages. Script language independence refers to the requirement that any appropriate scripting language can be used within any job object and against any applicable platform, application, or technology.
Figure 4.3 shows how this might be implemented in a sample solution. Here, the job object itself does not place constraints on the type of script launched within the properties of the job. In this figure, any script can be inserted into the Job Properties location. That same script, irrespective of its language, can be further constrained via parameters, completion status, and other factors including pre‐ and post‐execution steps. This flexibility is necessary because you'll be connecting your scheduling solution to many types of technologies, any of which may require a specific language for interaction.
Figure 4.3: Scripts of any language are components of each job or plan.
Queues in an IT job scheduling solution represent a mechanism to manage and prioritize job and plan activities. A fully‐functioning IT job scheduling solution will leverage multiple queues of differing priorities in order to preserve performance across both the scheduling system and those it connects with. Working with a series of prioritized queues also enables a kind of failover when the resources needed by job objects are for some reason not available. In this scenario, job objects in one queue can be failed over to subsequent queues for processing. The result is a better assurance that jobs will succeed when the system experiences resource outages or other transient problems.
You can see a mock‐up of how this might look in Figure 4.4. Here, a job is configured to execute within a specific submission queue. That queue is given a priority along with other parameters that control its performance. Running jobs in this manner ensures that they execute based on priorities that are driven by business rules.
Figure 4.4: Individual jobs or plans are assigned to queues.
There's an idea in the sixth story of Chapter 2 that warrants revisiting: "Triggers are the real juice in an IT job scheduling solution. The kinds and capabilities of triggers a job scheduling solution supports makes the determination between one that's enterprise ready and one that's not much more than the Windows Task Scheduler."
It is indeed the flexibility of those triggers (along with their associated constraints) that separates the best‐in‐class IT job scheduling solutions from those you won't want. Requirements 5, 6, 7, and 8 all deal with the need for different types of triggers that fire based on state changes or other behaviors on target systems.
A file‐based trigger initiates job execution based on the presence or characteristics of a file on a system. These triggers are particularly useful for identifying when a file is created, then firing the job's next step based on that file creation. They can do the same when files are modified, deleted, or any other action associated with that piece of data. File triggers become important for eliminating lag in distributed systems because they initiate processing steps immediately as data experiences a change.
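A bare-bones version of a file-based trigger can be sketched with the .NET FileSystemWatcher, which raises an event the instant a file lands rather than waiting for the next polling pass. The watched folder and the action taken are placeholders.

```powershell
# Event-driven file trigger: fire an action the moment a file is created.
$watcher = New-Object System.IO.FileSystemWatcher
$watcher.Path   = 'D:\Dropzone'        # illustrative watch folder
$watcher.Filter = '*.csv'
$watcher.EnableRaisingEvents = $true

Register-ObjectEvent -InputObject $watcher -EventName Created -Action {
    # $Event is populated by the eventing engine; FullPath names the new file.
    Write-Host "Trigger fired for $($Event.SourceEventArgs.FullPath)"
    # A real trigger would invoke the next job in the plan here.
}
```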
Messaging systems such as CORBA, the Java Message Service (JMS), and Microsoft Message Queuing (MSMQ), among others, are a low-level solution for orchestrating activities across applications and platforms. Their centralized approach to signaling across system components creates an easy framework for developers. They can be similarly easy for IT job scheduling solutions to work with.
Interrogation and integration enable message-based triggers to coordinate the activities between low-level systems and their accompanying shrink-wrapped solutions. Message-based triggers are similar to file-based triggers in that they improve job execution performance by executing actions at exactly the moment they're needed. Your chosen solution should tie into the messaging systems that are used by your business systems, enabling you to extend the reach of their signaling in and among distributed systems.
Like messaging systems, events are a rich source of information about on‐system behaviors. With virtually every application reporting its state through OS and other onboard event systems, an IT job scheduling solution with event‐based triggers gains the ability to orchestrate application events with other activities.
Most important here is the ability to customize and tailor events inside the business system. Events can be fired based on the activities within a system, so monitoring for their creation allows an IT job scheduling solution to immediately invoke resulting actions elsewhere.
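Where the platform supports it, the same immediacy applies to event triggers. The hedged sketch below uses a WMI event subscription to watch for new Application-log entries from a hypothetical source and react as they're written; the source name is an assumption.

```powershell
# Event-based trigger: react to matching event-log entries as they arrive.
# The event source name 'ContosoApp' is a placeholder.
$query = "SELECT * FROM __InstanceCreationEvent WITHIN 10 " +
         "WHERE TargetInstance ISA 'Win32_NTLogEvent' " +
         "AND TargetInstance.Logfile = 'Application' " +
         "AND TargetInstance.SourceName = 'ContosoApp'"

Register-WmiEvent -Query $query -SourceIdentifier 'ContosoAppEvents' -Action {
    $entry = $Event.SourceEventArgs.NewEvent.TargetInstance
    Write-Host "Trigger fired: event $($entry.EventCode) - $($entry.Message)"
    # A real scheduler would launch the dependent job here.
}
```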
Although most of this book has been dedicated to highlighting why time-based triggers aren't good enough for most business systems, there still comes the time when a job must be fired based on wall clock time. Most important to recognize here is that date- and time-based scheduling can be done well; when it isn't done well, it becomes a significant limiter. An IT job scheduling solution that does not include support for multiple schedules, irregular schedules, and highly custom schedules won't be enough for your needs, particularly in today's global workplace where jobs that span time zones may be common. Seek out those that provide high levels of customization for date- and time-based schedules.
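For those cases where the wall clock really is the right trigger, Windows at least offers native primitives, as in this hedged sketch that registers a recurring task. The task name, time, and script path are placeholders; a full scheduling solution layers far richer calendars on top of this baseline.

```powershell
# Register a simple recurring task: the time-based floor a real solution must exceed.
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
             -Argument '-File D:\Scripts\Nightly-Sync.ps1'     # hypothetical script
$trigger = New-ScheduledTaskTrigger -Daily -At '02:30'

Register-ScheduledTask -TaskName 'Nightly-Sync' `
    -Action $action -Trigger $trigger `
    -Description 'Illustrative nightly job; richer solutions support irregular schedules.'
```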
Chapter 1 introduced the notion of parameterization when it comes to IT jobs and plans. This activity essentially abstracts every piece of data into variables that can be used anywhere. Variables and other types of dynamic data are critical to reusability in an IT job scheduling solution. Your chosen solution must include those that support variables both within jobs and plans as well as across them.
Oftentimes reusability of variables and other dynamic data across job objects is referred to as creating "profiles" of data. Those profiles provide an easy way to reference data no matter where it becomes needed. Figure 4.5 shows how such variables can be instantiated within a job object. There, $ID, @ExecutionUser, and @ExecutionMachine variables have been created for later use.
Figure 4.5: Variables are created for specific use or across all jobs and plans.
Once installed, you'll be quickly creating lots of individual jobs and plans for automating your environment. As you learned in Chapter 3, those jobs and plans are the discrete actions that ultimately connect to create a workflow. An effective IT job scheduling solution will enable the reuse of variables both within and across workflows so that very large automations can be much more easily laid into place.
The previous chapter showed an example of how communication within and across workflows is useful. That exchange of data is important for simplifying workflows and achieving parallelism in job processing for improved performance.
You might think that a solution whose primary goal is job execution performance wouldn't need to consider the business calendar. On the contrary, it is important to recognize the impact of jobs on actual system performance. You don't want to run particularly resource-intensive jobs against production systems during periods of heavy use. Just the act of running those jobs can have a negative impact on the system as a whole.
Determining that exact "period of low use" isn't often an easy task. Business systems, particularly those that service users across multiple time zones, may experience unexpected hours of high and low utilization. The complexities of global utilization drive the need for scheduling based on a business calendar. Figure 4.6 shows one representation of how that business calendar implementation might look. Using such a calendar, the execution of entire series of jobs can be visually identified and scheduled to prevent resource overutilization.
Figure 4.6: Scheduling of jobs via a business calendar.
Business calendars aren't only for resource preservation. They can also be used to schedule job activities based on when data gets created, mirroring the job execution to the business rules that drive its data. For example, if you know that a set of end‐of‐day data will be available at the close of each day, using a business calendar can orchestrate the collection of that data across time zones and in accordance with other business rules. By following a business calendar, it becomes possible to align the technical activities on the system with the personnel activities in the real world.
Workflows that comprise numerous job objects will grow unwieldy over time. This happens as individual items get ever-more interwoven throughout the greater system. Reuse of job objects at the same time creates a web of interdependencies between those very objects, which then requires management. Lacking visibility into dependencies, administrators can too easily manipulate a job object or item without realizing its downstream effects.
Figure 4.7 shows a sample report where one job object's dependencies are listed, along with their label, name, and path. This simple report is a powerful tool in a complex system where job objects find themselves reused across systems.
Figure 4.7: A report on the dependent objects of an object.
Your IT teams will find themselves growing quickly reliant on your IT job scheduling solution for more of their daily activities. That's because any action or behavior that can be captured in a script or other job can be easily brought into your IT job scheduling solution.
For these and other reasons, that same solution must support multiple interfaces for management and administration. Obviously, a client‐based solution will provide the richest interface for manipulating jobs and their characteristics; however, not every administrator is always in a location where that client GUI is accessible. Web browser or mobile device interfaces become valuable tools when you're in the data center and nowhere near a rich client. They become even more valuable (particularly in the case of mobile device support) when critical jobs might alert in the middle of the night. Choose a solution that includes numerous interface options, and you'll thank yourself down the line.
A large portion of Chapter 3 was dedicated to deconstructing the elements in a typical IT workflow. Each of those individual items can be encapsulated into a job or plan. Each performs some action, and interconnecting them in complex ways—such as nesting and chaining—is what makes job scheduling so extensible across the range of business services.
Figure 4.8 shows what a view might look like in an IT job scheduling solution. There JobA and JobB are graphically connected to show how results from JobA are used in the execution of JobB. Although simple in this example, the chaining of input and output represents one of the core value propositions of an IT job scheduling solution. Its orchestration of activities across all jobs and all system components is what enables this chaining to occur.
Figure 4.8: Two jobs, the execution of which is chained together.
The same holds true with job nesting, where the execution of one job occurs within another. Job nesting furthers the reusability of jobs by enabling an individual job to perform an action within the confines of another. Input and resulting output can be used between jobs.
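Reduced to plain script, chaining and nesting are just functions whose output feeds the next call, as in this minimal sketch. The job names echo Figure 4.8; everything else is illustrative.

```powershell
# JobA produces data; JobB consumes it (chaining). JobB also calls a helper
# job inside itself (nesting), so that helper stays reusable elsewhere.

function Invoke-JobA {
    # Pretend query step: emit rows for downstream processing.
    Import-Csv -Path 'D:\Staging\orders.csv'        # placeholder data source
}

function Invoke-ValidationJob ($Rows) {
    # Nested, reusable job: filter out incomplete records.
    $Rows | Where-Object { $_.order_id -and $_.status }
}

function Invoke-JobB ($Rows) {
    $clean = Invoke-ValidationJob $Rows             # nesting
    $clean | Export-Csv -Path 'D:\Outbound\orders-clean.csv' -NoTypeInformation
}

Invoke-JobB (Invoke-JobA)                           # chaining: A's output is B's input
```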
There's a third activity that's an important part of this requirement. Consolidating chaining and nesting into an overarching system highlights the power gained through job load balancing. Already discussed to some extent as a function of Requirement #11's business calendar, job load balancing enables an administrator to enact change across dozens or hundreds of system components at once. An effective solution will enable that action to occur without the fear that running a massively parallel job will impact the platform or application, or the greater system as a whole.
You can only reach this nirvana of complete automation if you can properly visualize it. In fact, one of the greatest limitations of many applications and platforms lies in their lack of visualization tools. You simply can't see their activities as they fire. The IT job scheduling solution you want will expose a workspace into which job objects can be laid out, interconnected, and watched as they execute.
This visual approach to job creation becomes particularly important as the scope and complexity of plans increases. As you can imagine, it's not that difficult to connect two jobs together like what you saw back in Figure 4.8. Yet the situation changes dramatically when greater numbers of tasks require orchestration, all with their own execution triggers and constraints.
An IT job scheduling solution's workspace designer functionality grows more important as complexity increases. Figure 4.9 shows what is still a relatively simple plan; this time comprised of five separate jobs. Connecting these jobs are the triggers (marked as CT) and constraints (marked as JC) that combine to determine when the next set of actions is to be executed. In this example, three jobs must coordinate their activities prior to the fourth one executing. Only after that fourth job executes can the fifth and final job complete.
Figure 4.9: A collection of jobs in a plan with associated triggers and constraints.
Look for an easy‐to‐use workspace designer tool in your chosen solution. Lacking one that presents visualizations in an easy‐to‐understand format makes your work more difficult and introduces the chance of failure or error in plan creation.
Scripts are obviously the backbone of any IT job scheduling solution, but many common actions in an IT environment are repetitive and easily captured into a reusable object. This chapter in fact began with the assertion that a freshly installed IT job scheduling solution creates an empty framework that you're responsible for filling with automation. The reality is that, depending on the solution chosen, that framework may come automatically populated with common actions that are immediately usable.
As you can imagine, having a collection of jobs readily at hand can significantly speed the creation of new automations. Need to email a document? Just drag the "Email" job step into your workspace designer. Figure 4.10 shows a mock‐up of how such a job steps editor might look. There, you can see a range of common activities that span the breadth of data center platforms and applications.
Figure 4.10: An editor that includes commonly-used jobs.
Although these job steps alone won't be specific to your needs, an effective solution will include ways of incorporating variables and other dynamic data to customize the job steps for the needs of the automation under construction. More importantly, these job steps are pre‐generated and pre‐tested from the vendor, which reduces the risk of scripting error and the level of effort in testing.
Easily one of the most difficult activities in creating automations is recognizing their output, whether that be the data you're looking for or an error message. Most automations are not run in interactive mode. Instead, they're run as background processes that work with platforms and applications without exposing their activities to the logged-in user. Thus, the resulting data and error messages from these scripts aren't easily captured using simple native tools.
An effective IT job scheduling solution will often execute its scripts within its own runtime environment, or within one where output and error messages can be captured. Executing scripts and other objects in this way enables the IT job scheduling solution to return this information to an administrator's console for review. Knowing output messages from executed scripts assists greatly in the generation of those scripts, easing their development process and reducing the risk of error. Look for an IT job scheduling solution that supports script execution reporting that includes output data as well as runtime error messages.
Chapter 2's final story relayed the painful situation where a script gets misused. Script misuse, accidental use, or malicious use are all common risks in any data center environment where multiple individuals work together. That's why an effective IT job scheduling solution will include a permissions structure that can lock down jobs, plans, and even variables to specific users and/or uses.
Having a centralized security model significantly reduces the risk that a high-impact script will be accidentally or maliciously run against data center equipment. It also provides a point of control for change management administrators and auditors to monitor. Data centers that operate under heavy regulation or security controls will greatly benefit from centralizing the permissions structure for script execution into a single solution.
Security isn't the only mission‐critical requirement in a solution that could potentially make massive changes across hundreds of systems at once. No less important are the needs for change control and revision history of any automations that have been introduced into the system.
You've surely experienced the situation where "something got changed." Whether that change is to a setting on a server or a line in a script, figuring out exactly what got changed—and who changed it—in this scenario is a challenging task that isn't often successful. When changes are made that inappropriately alter data, finding the exact line or character at fault adds even more difficulty.
That's why an IT job scheduling solution that you'll want to use will store revisions of scripts and other automations for review. An excellent solution will provide a mechanism for you to analyze the individual changes between revisions, as well as note which user made the change. Figure 4.11 shows an example screen where 10 revisions of a script have been logged. There, each version can be viewed to identify "what got changed."
Figure 4.11: Revision history.
The final requirement here ties each of the last few into a centralized database for auditing, monitoring, and alerting purposes. It has been said repeatedly in this chapter that (in addition to enhancing job scheduling itself) a primary reason for implementing an IT job scheduling solution is for centralization of job execution. By default, this centralization automatically creates a single location where all actions to your business systems can be logged and monitored.
Administrator and even user alerting represent useful additions to the feature set of such a solution. Remember that any IT job scheduling solution sits in the center of your business service, orchestrating the communication and processing of data between disparate components. From this location, it is uniquely positioned to watch for and alert on behaviors in data. Those behaviors can be things of interest to administrators; or, more often, they are of interest to the users themselves. Creating alerts across all the usual approaches such as email, messaging, and instant messaging, and even more modern techniques such as social media outlets, provides a way to notify users when conditions of interest occur. Figure 4.12 shows a simple example of an email alert that can be initiated based on either a trigger or other preconfigured condition.
Figure 4.12: An example alert.
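A bare version of such an alert might be the hedged sketch below, firing an email when a monitored condition trips. Send-MailMessage is a stock Windows PowerShell cmdlet; the SMTP host, addresses, and threshold are placeholders.

```powershell
# Minimal alert: notify operators when free disk space drops below a threshold.
$disk   = Get-CimInstance -ClassName Win32_LogicalDisk -Filter "DeviceID = 'C:'"
$freeGB = [math]::Round($disk.FreeSpace / 1GB, 1)

if ($freeGB -lt 10) {
    Send-MailMessage `
        -SmtpServer 'smtp.example.com' `
        -From       'scheduler@example.com' `
        -To         'ops-team@example.com' `
        -Subject    "ALERT: $env:COMPUTERNAME low on disk ($freeGB GB free)" `
        -Body       'A scheduled job detected low disk space; investigate before the nightly run.'
}
```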
So, there you have it—20 high-level requirements for quantifying the types of capabilities you need out of an IT job scheduling solution. These 20 requirements highlight the most critical pieces that any distributed business system and its administrators will need to improve job execution performance while maintaining consistency of workflows.
And, at the same time, that's my story. In the end The Project That Would Change Everything was eventually implemented successfully. It took time to create the necessary automations, I'll admit. But the workflow assistance gained through the use of a centralized system ensured that all our changes were logged, monitored, and carefully categorized. In the end, given the same project and scope of work, I'd do it again in the very same way.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153816.3/warc/CC-MAIN-20210729043158-20210729073158-00303.warc.gz
|
CC-MAIN-2021-31
| 104,328
| 302
|
http://callofduty.wikia.com/wiki/User_blog:Dariogweg/Team_Player_Class
|
code
|
Alright, since a few days i've dedicated myself to teamplay (on MW2).
The class i use for this:
M9 Tac Knife
Sleight of Hand Pro
Any Suggestions on how to inprove this class?
If you have a similar class please post it. i'm very curious about other people's classes.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988717954.1/warc/CC-MAIN-20161020183837-00447-ip-10-171-6-4.ec2.internal.warc.gz
|
CC-MAIN-2016-44
| 265
| 6
|
https://pyvo.cz/brno-pyvo/2023-01/
|
code
|
První letošní Pyvo se vrací do ArtBaru!
This year's first Pyvo returns to ArtBar!
The talk will be in English.
Click-bait title, right?
But seriously, when you ask what is DevOps, two people won't say the same.
So what is DevOps? Come to find out.
Mgr. Pavel Grochal TLDr;
He has loved computers since he was a child, taught programming (at that time C and PHP) as a teacher at the grammar school, and went on to graduate from the Faculty of Informatics of Masaryk University in Brno.
Since 2012, when he got to know Python and Django, he loves them and uses them for most of his projects. Then in 2015 he became an OpenSource consultant @inuits.eu
ArtBar Druhý Pád, Štefánikova 836/1
Sejděte dolů po schodech, vydejte se doleva poměrně dlouhou chodbou, a po pravé straně najdete bar. Pyvo hledejte v salonku za barem.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474659.73/warc/CC-MAIN-20240226094435-20240226124435-00677.warc.gz
|
CC-MAIN-2024-10
| 832
| 11
|
http://news.povray.org/povray.windows/message/%3Cweb.4edcb506618e37ca98571ef0%40news.povray.org%3E/#%3Cweb.4edcb506618e37ca98571ef0%40news.povray.org%3E
|
code
|
>> Why not just save them to separate files? Open a windows "preview" window on> it, and use the arrow keys to toggle back and forth? (Assuming you're using> Windows here, of course.)
By the way is this "peek" option available in Vista or XP? I know Windows 7 has
it. At the moment I use Vista, but I newer use the Aero-theme or whatever it's
called because it slows thing down.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300849.28/warc/CC-MAIN-20220118122602-20220118152602-00109.warc.gz
|
CC-MAIN-2022-05
| 378
| 4
|
https://discourse.julialang.org/t/what-kinds-of-runtime-dispatch-are-problematic-for-performance/101895
|
code
|
My understanding is that runtime dispatch is not always problematic. E.g. exceptions result in runtime-dispatch, but that should not affect performance as long as the exception is not raised. Then there are runtime dispatches due to type instabilities in e.g. some numerical computations, which in general can be problematic.
So, currently I have a large numerical codebase and I want to track down type instabilities. JET.jl being a static analysis tool reports all possible runtime dispatches, so its output ends up being very noisy in that sense. So, I used ProfileView.jl on a small part of the code that might be problematic. I have the following output.
The big read on the top right is somewhere in the
CHOLMOD module of SparseArrays.jl. This is what I see on descending there using Cthulhu.
So, all the runtime dispatch guarded by conditionals should not be problematic since they’re only executed during exceptional control flow, which leaves the call to
So, I am not sure if I should worry about that. Is its return type marked red because it is behind a
Ref, or is it something else? Is it fine this way, or is this indicative of some type instability and/or performance pathology?
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511717.69/warc/CC-MAIN-20231005012006-20231005042006-00040.warc.gz
|
CC-MAIN-2023-40
| 1,194
| 7
|
http://stackoverflow.com/questions/6039636/drastic-slowdown-using-layer-backed-nsopenglview
|
code
|
I needed to display some Cocoa widgets on top of an NSOpenGLView in an existing app. I followed the example in Apple's
LayerBackedOpenGLView example code. The
NSOpenGLView is given a backing layer using:
Then the Cocoa
NSView with the widgets is added as a subview of the
glView. This is basically working and is twice ad fast as my previous approach where I added the
NSView containing the widgets to a child window of the window containing the glView (this was the other solution I found on the web).
There were two problems.
- The first is that some textures that I use with blending were no longer getting the blend right. After searching around a bit it looked like I might need to clear the alpha channel of the
OpenGLView. This bit of code that I call after drawing a frame seems to have fixed this problem:
glColorMask(FALSE, FALSE, FALSE, TRUE); //This ensures that only alpha will be effected glClearColor(0, 0, 0, 1); //alphaValue - Value to which you need to clear glClear(GL_COLOR_BUFFER_BIT); glColorMask(TRUE, TRUE, TRUE, TRUE); //Put color mask back to what it was.
Can someone explain why this is needed when using the
CALayer, but not without?
- The second problem I don't have a solution for. It seems that when I pan to the part of the scene where problem is #1 was observed, the frame rate drops from something like 110 FPS down to 10 FPS. Again, this only started happening after I added the backing layer. This doesn't always happen. Sometimes the FPS stays high when panning over this part of the scene but that is rare. I assume it must have something with how the textures here are blended, but I have no idea what.
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131299261.59/warc/CC-MAIN-20150323172139-00015-ip-10-168-14-71.ec2.internal.warc.gz
|
CC-MAIN-2015-14
| 1,641
| 14
|
http://stackoverflow.com/questions/17534173/draw-contour-sort-matlab
|
code
|
I have a problem. I have a a set of x and y coordinates by which I can draw a contour or a closed figure. However during my operation within the program the order of coordinates may change. So now if the plot is drawn the curve is not drawn right.
An illustration is given below in my code:
clc; clear all close all xi = [86.7342,186.4808,237.0912,194.8340,84.2774,39.5633,86.7342]; yi = [18.2518,18.2518,102.3394,176.4611,172.1010,88.6363,18.2518]; subplot(1,2,1),plot(xi,yi); title('original points contour'); xii=xi; yii=yi; %Suppose the points are interchanged t=0; t=xii(3); xii(3)=xii(4); xii(4)=t; t=yii(3); yii(3)=yii(4); yii(4)=t; subplot(1,2,2),plot(xii,yii); title('Redrawn contour with the points exchanged'); %I get this contour.
The two plots are shown in the code.
I need to be able to redraw the correct contour no matter what the order of the elements are . Should I use a sort algorithm. How can I determine the order of the points so as to make a good closed contour without any intersections ? Thanks in advance.
Note:: Suppose during operation my set of coordinates becomes this :
xiiii =[40,200,210,230,50,20,40] yiiii =[50,60,160,80,120,30,50] figure(); plot(xiiii,yiiii,'+r'); hold on; % I need to somehow change the matrices in such a way so as to form %an non-overlapping closed surface. %after manipulation I get should get this matrices xiii =[40,200,230,210,50,20,40]; yiii =[50,60,80,160,120,30,50]; plot(xiii,yiii,'+b'); hold off; %Notice the difference between the two plots. I require the 2nd plot.
I hope this example makes my question clear. Thanks again all .
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122189854.83/warc/CC-MAIN-20150124175629-00117-ip-10-180-212-252.ec2.internal.warc.gz
|
CC-MAIN-2015-06
| 1,595
| 8
|
https://edo-shop.ru/sccm-2016-fep-not-updating-174.html
|
code
|
Sccm 2016 fep not updating Best cam chat lobby
The entire risk arising out of the use or performance of the sample scripts and documentation remains with you.
In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.
The scheduled tasks should be recreated when the agent installs, and the computer will start checking in appropriately to the SCCM server.
Out of the box, FEP provides several channels for delivering definition updates to clients. The three basic options are updates through WSUS/SUP, UNC file shares, and connecting to Microsoft Updates. The procedure in this video presents a 4th option, which further leverages the capabilities and resources of SCCM. Essentially the procedure uses a VBS script running in Task Scheduler to pull delta definitions from the Microsoft Malware Protection Center; SCCM then bundles them into a package which is pushed out to your Distribution Points and advertised to your FEP clients (on a recurring schedule).
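Purely as an illustration of the "pull definitions into a package source" step (a hedged sketch in Python rather than VBS; the URL and share path below are placeholders, not the real Malware Protection Center links):
# Hypothetical sketch of the definition-download step described above.
import urllib.request
from pathlib import Path

DEFINITION_URL = "https://example.com/mpam-d.exe"            # placeholder URL
PACKAGE_SOURCE = Path(r"\\sccm01\sources\FEP\Definitions")   # assumed package source share

PACKAGE_SOURCE.mkdir(parents=True, exist_ok=True)
urllib.request.urlretrieve(DEFINITION_URL, str(PACKAGE_SOURCE / "mpam-d.exe"))
# SCCM then refreshes the package from this source folder, distributes it to the
# Distribution Points, and the advertisement runs the downloaded file on FEP clients.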
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039379601.74/warc/CC-MAIN-20210420060507-20210420090507-00516.warc.gz
|
CC-MAIN-2021-17
| 1,366
| 4
|
https://jobspringpartners.com/jobs/242057
|
code
|
Senior Windows System Engineer
Glendale company ranking among the top 50 digital media companies in the world is looking to hire a senior system administrator. The person for this role will deploy, operate, monitor and secure the infrastructure and applications of a major Internet destination. Contract position of $50/hr with benefits.
Required Skills & Experience
- Administer a large Windows environment.
- Document Windows infrastructure architecture, standards, and implementations.
- Contribute to the development and maintenance of automation tools used in the management of our Windows infrastructure.
- Participate in 24x7 on-call rotation with other members of Windows Systems Administration team.
- Receive general instructions for new initiatives from supervisor.
- Evaluate and/or recommend purchases or tools; perform benchmark tests and proof of concepts
Desired Skills & Experience
- 5+ years of experience managing Windows environments.
- Experience Building and Supporting Active Directory and building domains is a MUST
- Experience scripting in PowerShell is a MUST.
- Solid experience with Active Directory and SCCM is a MUST .
- Solid understanding of Windows system internals, configuration, and deployment.
- Experience with at least three of the following: SCOM, SCVMM, Distributed File Systems, High Availability Clustering, Hyper-V, Monitoring
- Familiarity with fundamental networking/distributed computing concepts
Benefits & Perks
As a contractor you will receive the following benefits:
- Medical Insurance & Health Savings Account (HSA)
- Paid Sick Time Leave
- Pre-tax Commuter Benefit
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221212768.50/warc/CC-MAIN-20180817182657-20180817202657-00692.warc.gz
|
CC-MAIN-2018-34
| 1,619
| 22
|
https://mail.python.org/pipermail/bangpypers/2014-July/010274.html
|
code
|
[BangPypers] Django - Infinte Loop
kracethekingmaker at gmail.com
Tue Jul 8 09:57:09 CEST 2014
I would say *don't* use Django signals. Signals are hard to test and you don't know when they will be executed.
- If the post_save function does some work, like updating an external service with the data, use rq or celery.
- You can override django save.
def save(self, **kwargs):
    # set all attributes
    # call super class save
    # call function here - blocking call.
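A minimal sketch of that override-save approach (the model, field and helper names below are illustrative, not from this thread):
# Hypothetical sketch: override save() so the status update does not
# re-trigger a post_save handler.
from django.db import models

def do_something(instance):
    # placeholder for the real work (ideally queued via rq or celery)
    pass

class MyModel(models.Model):
    status = models.CharField(max_length=100, blank=True)

    def save(self, **kwargs):
        created = self.pk is None
        super(MyModel, self).save(**kwargs)   # persist the instance first
        if created:
            do_something(self)                # blocking call, runs once on creation
            # queryset update() writes the field without calling save() or
            # firing post_save again, so there is no loop
            type(self).objects.filter(pk=self.pk).update(status='task completed')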
On Tue, Jul 8, 2014 at 1:13 PM, Anand Reddy Pandikunta <
anand21nanda at gmail.com> wrote:
> def my_func(sender, instance, created, **kwargs):
>     # do something
>     instance.status = 'task completed'
>     instance.save()
> class MyModel(models.Model):
>     status = models.CharField(max_length=100, blank=True)
>     # some code
> signals.post_save.connect(my_func, sender=MyModel)
> I am using post_save signal to connect to a function.
> If a new instance of model is saved, post_save signal connects to my_func.
> Once the function is executed, I am updating status of the model.
> This is again sending post_save signal which is leading to infinite loop.
> I want to execute my_func only once and update status many times.
> Does any one know how to do this?
> - Anand Reddy Pandikunta
> BangPypers mailing list
> BangPypers at python.org
Thanks & Regards, kracekumar. "Talk is cheap, show me the code" -- Linus
More information about the BangPypers
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335190.45/warc/CC-MAIN-20220928082743-20220928112743-00288.warc.gz
|
CC-MAIN-2022-40
| 1,410
| 33
|
https://steadymongoose.com/category/research/
|
code
|
This year I want to use this website for a couple of specific things:
- To use as a weekly writing platform.
- To generate some passive income.
- To use as a portfolio of projects I have worked on.
In order for me to see some real progress on these goals, I'm working on narrowing down how I focus my time. Speaking of time, I now have the most time available that I've had in the past five years, as I'm done with school for the time being. I've been taking this month to develop a plan for how to use my time this year.
One of my goals this year is to do 12 different 30-day challenges. Currently, I'm doing a 30-day meditation challenge. I've missed a few days, but I pick up where I missed a day and am 9 days into this challenge.
I’ve also taken the month to make some lifestyle changes. I think the most important one’s being getting off of social media platforms. I still have one Twitter account for cyber security purposes, but I don’t use it to engage in politics or anything else that’s a waste of my time.
For my goal of becoming proficient in the cyber security field, I'm practicing the challenge boxes on HackTheBox. For each box I work on, I'm doing a write-up. I'll probably make that a separate and dedicated category on here.
I’m also working my way through a Python Fundamentals course at Udemy. The goal of working through this is two-fold. 1 being a means to learn the language and 2 as a way to build a web scraper dashboard that constantly checks the health of services like Slack, Office365, Adobe, etc.
On the non-tech side of things, I am working my way through two books at the moment. I’m listening to the audiobook version of War and Peace narrated by Neville Jason and am reading through The Count of Monte Cristo.
That’s it for now. I’ll be back next week!
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500758.20/warc/CC-MAIN-20230208092053-20230208122053-00128.warc.gz
|
CC-MAIN-2023-06
| 1,822
| 11
|
http://www.greatis.com/appdata/u/m/msckin.exe.htm
|
code
|
msckin.exe - Useless
Manual removal instructions:
It also submits personal information, such as IP address, browser used, and user details retrieved from other installed applications on the system.
Periodically attempts to connect to odysseusmarketing.com.
Spyware.ClientMan must be manually installed on the system.
However, there are several known applications that have Spyware.ClientMan inside of them and that install the spyware component when the application itself is installed.
Copies the file, Msckin.exe, and registers it as a process.
Creates the following folders:
Navigate to the key: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run
and delete any values pertaining to "Client Man."
Also, delete the key: HKEY_CURRENT_USER\Software\CliMan
by NightWatcher. msckin.exe Dangerous Rating: 5 out of 5
My PC had gotten a bad rootkit that my ISP antivirus software (powered by McAfee) could not detect, nor could fix.
I sought a solution on the Internet and discovered your product and tried out the trial of UnHackMe.
You quickly found the rootkit and SAVED my PC!
I haven't had any problems since, and I'm extremely grateful.
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704075359/warc/CC-MAIN-20130516113435-00071-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 1,149
| 16
|
https://mastodon.me.uk/@s_wilcox
|
code
|
No-one better to write a brief history of the NHS design system #designsystems
I forgot to introduce myself. #introductions . I'm Sara, a content designer at NHS Digital. I work on the NHS digital service manual (service standard, design system and content style guide) as well as other NHS.UK projects.
Great work by the #NHS service manual team, and partners, on this new standard designed for any organisation that produces health and care information
Content design for the NHS digital service manual and NHS.UK. Interests: design systems, UX, patient information, Iran and running
Open social media for the UK
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662552994.41/warc/CC-MAIN-20220523011006-20220523041006-00070.warc.gz
|
CC-MAIN-2022-21
| 614
| 5
|
https://knowledge.broadcom.com/external/article/33895/integration-of-nimsoft-and-thirdparty-se.html
|
code
|
Integration of Nimsoft and third-party Service Desk Applications
The APIs used are specific to the ticket system you want to integrate with.
Nimsoft provides out of the box integration for several systems, e.g., Remedy (remedygtw), Service Now (sngtw), HP Service Desk (hpsmgtw), CA Service Desk (casdgtw).
We also have a number of customer-developed integrations as well.
The methods used may include one or more of the following:
- Simple email
- APIs
- Webservices
- Scripts
The method used will be dependent on the level of integration required.
These integrations are called "gateway probes." Any 'new' integration would most likely need to be a custom gateway probe.
The common design pattern for gateway probes is listed below:
- A special gateway user is defined in NMS (e.g. "Nimsoft Service Desk")
- Alarms are assigned to that user (either manually or via Auto Operator rules)
- The gateway probe subscribes to the alarm_assign & alarm_close subjects and checks whether the assigned_to user is the gateway user
- If the assigned_to user is the gateway user, the probe uses the ticketing system API to open an Incident ticket and stores the ticket number in a note attached to the alarm
- For bi-directional operation, the gateway probe needs to keep track of the Alarm (nimid) and Incident Ticket (ticket ID) so that when the alarm is closed, the ticket can be updated, and when the ticket is closed, the alarm can be acknowledged.
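Purely to illustrate that flow (a hedged sketch; every function below is a stand-in, not a real Nimsoft or ticket-system API call):
# Hypothetical sketch of the gateway-probe design pattern described above.
GATEWAY_USER = "Nimsoft Service Desk"
alarm_to_ticket = {}              # nimid -> ticket ID, needed for bi-directional sync

def open_incident(summary):       # stand-in for the ticketing system API
    print("open incident:", summary)
    return "INC0001"

def close_incident(ticket_id):
    print("close incident:", ticket_id)

def ack_alarm(nimid):
    print("acknowledge alarm:", nimid)

def on_alarm_assign(alarm):
    # called for every alarm_assign message the probe is subscribed to
    if alarm["assigned_to"] != GATEWAY_USER:
        return
    ticket_id = open_incident(alarm["message"])
    alarm_to_ticket[alarm["nimid"]] = ticket_id   # note the ticket on the alarm

def on_alarm_close(alarm):
    ticket_id = alarm_to_ticket.pop(alarm["nimid"], None)
    if ticket_id:
        close_incident(ticket_id)

def on_ticket_closed(ticket_id):
    # reverse direction: ticket closed in the service desk -> acknowledge the alarm
    for nimid, tid in list(alarm_to_ticket.items()):
        if tid == ticket_id:
            ack_alarm(nimid)
            del alarm_to_ticket[nimid]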
You can find more information on programming for the Nimsoft environment by reading the SDK documentation available at docs.nimsoft.com.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710813.48/warc/CC-MAIN-20221201121601-20221201151601-00450.warc.gz
|
CC-MAIN-2022-49
| 1,573
| 11
|
https://forum.uipath.com/t/datatable-multiline-header-columname/33983
|
code
|
I read a full Excel with Read Range (+add headers).
I can perfectly read the singleline headers, but not the multiline headers.
When I print the column names, the multiline headers are printed like this: “first line \n second line”. But this does not work if I put it as Datatable(“first line \n second line”).tostring, or even if I try Datatable("First line " + environment.newline + “second line”).tostring
Changing to singleline headers is not an option (not my Excel)
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103337962.22/warc/CC-MAIN-20220627164834-20220627194834-00146.warc.gz
|
CC-MAIN-2022-27
| 479
| 4
|
https://launchpad.net/ubuntu/precise/i386/python2.7-minimal/2.7.3~rc1-1ubuntu2
|
code
|
python2.7-minimal 2.7.3~rc1-1ubuntu2 (i386 binary) in ubuntu precise
This package contains the interpreter and some essential modules. It can
be used in the boot process for some basic tasks.
contained in this package.
- Package version:
i386 build of python2.7 2.7.3~rc1-1ubuntu2 in ubuntu precise RELEASE produced these files:
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122992.88/warc/CC-MAIN-20170423031202-00566-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 328
| 6
|
https://pt.airbnb.com/things-to-do/places/971095?location=Cameron%20highlands&source=p2¤tTab=things_to_do_tab&searchId=b2ab7289-b740-417d-bfda-564645fb8d36§ionId=71d6467b-a098-4d5f-a250-42b461d9c082&pdpReferrer=1&searchContext%5Bsearch_id%5D=39cb4b06-c9fd-b688-b9be-dc9469cf830b&searchContext%5Bfederated_search_id%5D=98127121-a557-4053-889d-997fdacab9a0&searchContext%5Bfederated_search_session_id%5D=8f2a5b62-cc55-4bdd-af73-41a9d4c52039&searchContext%5Bmobile_search_session_id%5D=&searchContext%5Bsubtab%5D=5
|
code
|
14 moradores locais recomendam
Dicas de moradores locais
The forest has dense white fog and a cold, breezy temperature. The forest view is like the movies "The Lord of the Rings" and "Avatar". Fees needed
It's a different world out here where everything is just covered with mossy delight!
May 04, 2018
You can get into ‘Avatar’ world
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573070.36/warc/CC-MAIN-20190917101137-20190917123137-00023.warc.gz
|
CC-MAIN-2019-39
| 335
| 6
|
https://fragmentedthought.com/blog/bulk-redirect-magento-urls
|
code
|
Heads up! This content is more than six months old. Take some time to verify everything still works as expected.
A client needed to enter a bunch of urls to redirect today on a site. This can be easily done one at a time through the Magento admin at Admin > Catalog > Url Rewrites. That's a bit of a slow process. So to that end, we've got a direct DB method using LOAD DATA LOCAL with a CSV to complement it. To start, you'll need a spreadsheet with the following contents:
Database Column Definitions (and what you need to give them)
This is an auto generated id. You should not supply anything to this column.
store_id - Column 1
Your store id to redirect inside. For most single store sites, this will be 1. You can get a list of store ids from data in the table core_store, or through the admin at Admin > System > Manage Stores and viewing the store_id/# parameter that comes in the resulting urls.
id_path - Column 2
This is just a unique id for the redirect. It's a pseudo human-friendly value. The actual value doesn't matter, as long as it is unique. The internal system uses category/ and product/ #####_##### paths. I would suggest you add your own using custom/## for those created through the admin, and bulk/### when doing bulk redirecting so you can get the new starting number each time you do this.
Example: bulk/1, bulk/2
request_path - Column 3
This is the from address. Where people are currently going that you want them away from. You must format this without a preceding slash or request host url.
target_path - Column 4
This is the to address. Where you want people to end up. You must format this without a preceding slash or request host url.
is_system - Column 5
This one's fairly obvious. It's a bit indicating if the system created this rewrite or not. System created rewrites can be deleted and regenerated easily. Make sure you mark your custom redirects as zero 0, as you are not system.
This one requires a bit of "just go with it": the value here to make the redirect a 301 is RP.
Once you have your sheet all filled out, import the CSV with LOAD DATA LOCAL into the columns above and it should work. Always back up your database table prior to imports.
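For illustration, a small sketch (in Python; the column layout is taken from the description above, sample paths are made up) that writes such a CSV ready for a LOAD DATA LOCAL import:
# Hypothetical sketch: generate the redirect CSV described above.
# Column order: store_id, id_path, request_path, target_path, is_system, options.
import csv

redirects = [
    # (from_path, to_path) -- sample values only
    ("old-category/old-product.html", "new-category/new-product.html"),
    ("legacy-page", "new-cms-page"),
]

with open("bulk_redirects.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    for i, (request_path, target_path) in enumerate(redirects, start=1):
        writer.writerow([
            1,                   # store_id: single-store default
            "bulk/%d" % i,       # id_path: unique pseudo-friendly key
            request_path,        # from address, no leading slash
            target_path,         # to address, no leading slash
            0,                   # is_system: 0 for custom redirects
            "RP",                # options: RP marks a 301 redirect
        ])
# The resulting file can then be imported into the url rewrite table with
# MySQL's LOAD DATA LOCAL INFILE, after backing the table up.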
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506539.13/warc/CC-MAIN-20230923231031-20230924021031-00851.warc.gz
|
CC-MAIN-2023-40
| 2,159
| 20
|
https://yonsei.elsevierpure.com/en/publications/advances-in-concept-framework-and-methodology-for-measuring-the-b
|
code
|
The scale of IT (Information Technology) investment has increased gradually as IT prevails across industry and becomes an essential means in business processes. IT investment activities, however, usually yield invisible performance, so it is not simple to secure the agreement of top management or whole business units. Accordingly, efforts to measure and control the business value gained through IT investment have continued in academia and industry. We organize several previous studies related to the measurement of IT performance using the Y-MODEL. The Y-MODEL, a framework for IT performance evaluation, can explain all research that measures the Economic Performance through IT (EPTI) by quantitative methods. We also present the T-MODEL, which evolves from the Y-MODEL. The T-MODEL is based on a specific and logical alignment between IT investment and business performance, and it serves as a framework that can evaluate the Business Competitiveness through IT (BCTI) more reasonably. The IT performance evaluation system developed based on the T-MODEL has been applied in various types of business, focused on large enterprises of the Republic of Korea, such as manufacturing, finance, and distribution, and its validity was verified.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474595.59/warc/CC-MAIN-20240225103506-20240225133506-00026.warc.gz
|
CC-MAIN-2024-10
| 1,310
| 1
|
https://docs.informatica.com/integration-cloud/cloud-data-integration/current-version/mappings/mappings/mapping-maintenance.html
|
code
|
You can view, configure, copy, move, delete, and test run mappings from the
When you use the View action to look at a mapping, the mapping opens in the Mapping Designer. You can navigate through the mapping and select transformations to view the transformation details. You cannot edit the mapping in View mode.
When you copy a mapping, the new mapping uses the original mapping name with a number appended. For example, when you copy a mapping named ComplexMapping, the new mapping name is ComplexMapping_2.
You can delete a mapping that is not used by a task. Before you delete a mapping that is used in a task, delete the task or update the task to use a different mapping.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154127.53/warc/CC-MAIN-20210731234924-20210801024924-00383.warc.gz
|
CC-MAIN-2021-31
| 671
| 6
|
https://developers.maxon.net/?cat=49&paged=3
|
code
|
The C++ and Python SDK documentation have been updated for the latest version of Cinema 4D R17.048 (SP2). Both contain additional notes and information for the changes in version R17.048. The SDK Support Team
Hello everyone, we updated the SDK documentations for C++ and Python. Both contain a bunch of fixes and additional notes. Most important, in the Python documentation the links to example code work again. Sorry for the inconvenience we caused by… Continue Reading
We have updated the C++ and Python documentation to match Cinema 4D R16 SP3 (16.050). The online SDK documentation is the latest available content. For former versions, look for the package in the download section.
We have updated the C++ and Python documentation. We finished porting the legacy C++ articles to our new Doxygen format, adopted our new website style and added a wealth of new content. The online SDK documentation is the latest available… Continue Reading
Currently the Python SDK still has some limitations compared to the C++ SDK. In order to make planning of your plugin projects easier, we’d like to maintain a list of existing limitations. Continue Reading
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574710.66/warc/CC-MAIN-20190921231814-20190922013814-00366.warc.gz
|
CC-MAIN-2019-39
| 1,155
| 5
|
https://mobilecoderz.com/blog/how-to-develop-a-shopify-app-for-your-business-beginners-guide/
|
code
|
Shopify is one of the best platforms that can help you establish your brand. There are more than 400,000 merchants using the platform, and they have generated gross revenue of $14 billion. The platform is cloud-based, which means that you can host the store on the platform itself.
Yes, it is a non-technical platform, but you may need the support of developers and designers for Shopify app development to create a perfect store. They will help you establish yourself as an industry leader among your competitors. This solution provides you with the best templates and can help you reach the top in no time.
According to statistics shared by BuiltWith, usage of Shopify has doubled since 2017.
Before we go through the development process, let us first understand how a Shopify app works:
Coming up next are the three significant points to remember while making your first Shopify application.
Before you deem your app as a “Shopify App” always remember it should be authorized and registered. To set up the dashboard, check the image below of your Shopify partner account.
You may be thinking that filling this section will end your task (Registering the app), but that is not the case.
Remember, this process does not develop your application but rather, helps you to list the application in the partner account. You will be offered an API secret key & API key to authorize your app.
You need to have a public address for your application in order to process your work.
Access the public address and submit through the interface. (Check the image above for help).
Once you are all set to deploy the app, you can use services such as Heroku for hosting during the development stage. It is always best that you run your app locally first.
In place of Heroku, you can also opt for other services such as Open Shift, Google App Engine, etc. There are plenty of services available in the market, so it is always better you choose the service that suits you. To make the process much simpler you can hire a Shopify app development company. It is the best move that you make for our business as they will help you build a brand-consistent and attractive store. Their certified and experienced developers will develop a user-friendly interface and neat designs to boost the reputation and credibility of the store.
Now the question arises, how to ordain a public address for your app?
The answer to this is quite simple: by employing a tunnelling service. For this, you can depend upon a reliable solution termed ngrok (an application that exposes your localhost address at a public address over the internet).
Here is what it would appear like:-
As mentioned above, that covers developing the app. Once your app is ready to go live, you will deploy it to the proper host. Afterward, you must update your address (the address of your Shopify app) in the partner's account.
The final concept you must understand is to develop the correct Shopify API (Generally, it will be your Admin API).
The Admin API lets you read as well as write data from the store. Generally, it offers you an interface to all the functionality within the store admin. You can also go for the Storefront API. This API connects to the inputs that are already publicly available on the storefront, while you as a store admin can access the functions and data that one has as a store admin.
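For illustration only (not from the original guide), a minimal Python sketch of reading store data through the Admin REST API once an access token has been obtained; the shop domain, token and API version string are assumptions:
# Hypothetical sketch: read shop data via the Shopify Admin REST API.
import requests

SHOP = "your-store.myshopify.com"   # assumed shop domain
ACCESS_TOKEN = "shpat_xxx"          # OAuth access token obtained during install
API_VERSION = "2023-01"             # assumed API version string

resp = requests.get(
    "https://%s/admin/api/%s/shop.json" % (SHOP, API_VERSION),
    headers={"X-Shopify-Access-Token": ACCESS_TOKEN},
)
resp.raise_for_status()
print(resp.json()["shop"]["name"])  # read access to the store's admin data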
Now that you have understood how the Shopify app works, let us read ahead and develop our app:-
The Shopify app API is programming-language agnostic (it does not matter much which language you opt for). These apps are also self-hosted.
Authentication is the trickiest part of developing a Shopify application.
When we talk about authentication, it is very much necessary that you associate with the Shopify store. Check if your app is connected to this store even before you initiate the process of development.
shopify-koa-server offers the code needed to authenticate and spin up the Shopify app. Make sure that you put all the API credentials into the .env file for the authentication of the application. The front-end framework is compiled to the public folder, and you can add the app's routes to the server folder.
You can also install the repository from GitHub and install the dependencies by running npm install.
For the authentication of the Shopify app you must consider following the below-mentioned steps:-
In the repo, check the file server.js. In the image below, the port is set to 3000, which means the application will run at localhost:3000. That is the address you will open your tunnel to.
To set up tunnelling, go to the ngrok site, install the tool, and then run it from the command line using ngrok http 3000.
If you check the image above, you can see that ngrok has created 2 tunnel addresses to localhost:3000. The first address uses the "https" protocol and the second uses plain "http".
Next, to register the app in the Shopify Partners dashboard, copy the https address and keep the second address at hand. Ensure that you keep these windows open while you work on development. Once you are done with the development procedures, use Ctrl+C to end your session.
Always remember that the forwarding address remains valid only while the application is running. You will be given a new public address every time you restart your session, which is why you must update the addresses in the app setup on Shopify Partners.
Now that you have a public address for your application, it is time to register it by clicking "Apps" in the Shopify Partners account. All you have to do is check the sidebar and click the "Create app" button.
After that, choose "Public app" and in the URL field enter the "https" address (which you created previously).
In the "Whitelisted redirection URL(s)" field, place the http address and add /auth/callback at the end, as you see in the image above.
Now scroll to the top of the page and tap "Create app". When you are done here, you will be taken to the next display, which shows the following as in the image below:
The next step is to insert the API keys into the variables in the Shopify app's .env file. Next, you have to run the app using the following command: node server.js.
Now, you can run your Shopify app at localhost:3000, and using the ngrok tunnel (which will be forwarded to that certain address), you can easily install the app to the store.
Until recently, Shopify allowed installing the app on a store via a specific URL, but that no longer works. Instead, you have to go back to the Shopify partner account, check the "Test your app" panel, and then tap "Select store".
You will be taken to the next display. Here you can choose the store, where you are looking to install your application. But there are two conditions for this, they are:-
Note:-It is better that you develop the store based on testing purposes.
Develop a fresh store which you do not intend to use as the real store.
The transfer of the store will be disabled once you acknowledge the confirmation. You will be directed to the authorization page of the standard app where you can tap ‘Install unlisted app’. And the process is complete.
Congratulations you have developed your Shopify app!!
There are numerous benefits to creating a Shopify app; some of them are listed below:
The development estimate depends upon the complexity and features you want to implement in your app. The cost to build a Shopify app depends upon the following factors:
Planning to develop a Shopify app for your business? Choose the right development company, like MobileCoderz. We are a reliable, affordable service provider with a sparkling portfolio & team experience.
Talk to our experts today and launch your Shopify app at the earliest!
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038464065.57/warc/CC-MAIN-20210417222733-20210418012733-00002.warc.gz
|
CC-MAIN-2021-17
| 8,505
| 47
|
https://www.sea-astronomia.es/2x-alma-related-phd-positions-vienna-emerge-erc-stg
|
code
|
2x ALMA related PhD positions in Vienna (EMERGE ERC-StG)
The Department of Astrophysics at the University of Vienna is offering 2x PhD positions at the Interstellar Medium Astrophysics group of Dr. Alvaro Hacar. We seek outstanding candidates to work on high-mass star formation studies using ALMA interferometric observations as part of the new EMERGE ERC-StG project (emerge.alvarohacar.com).
PhD#1 - ALMA observations of complex fiber networks (ERC funds)
PhD#2 - A dynamical origin of the IMF (potentially funded via VISESS fellowship)
Both PhDs will join the new Vienna International School of Earth and Space Sciences (VISESS): https://visess.univie.ac.at/how-to.../phd-projects/cosmos/
More info + application form: https://jobregister.aas.org/ad/2bb56b58
Application deadline: **November 15, 2020**
Questions? email: firstname.lastname@example.org
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499470.19/warc/CC-MAIN-20230128023233-20230128053233-00345.warc.gz
|
CC-MAIN-2023-06
| 857
| 8
|
https://cliopatria.swi-prolog.org/swish/pldoc/man?section=settings
|
code
|
- Reference manual
- The SWI-Prolog library
- library(aggregate): Aggregation operators on backtrackable predicates
- library(ansi_term): Print decorated text to ANSI consoles
- library(apply): Apply predicates on a list
- library(assoc): Association lists
- library(broadcast): Broadcast and receive event notifications
- library(charsio): I/O on Lists of Character Codes
- library(check): Consistency checking
- library(clpb): CLP(B): Constraint Logic Programming over Boolean Variables
- library(clpfd): CLP(FD): Constraint Logic Programming over Finite Domains
- library(clpqr): Constraint Logic Programming over Rationals and Reals
- library(csv): Process CSV (Comma-Separated Values) data
- library(dcg/basics): Various general DCG utilities
- library(dcg/high_order): High order grammar operations
- library(debug): Print debug messages and test assertions
- library(dicts): Dict utilities
- library(error): Error generating support
- library(fastrw): Fast reading and writing of terms
- library(gensym): Generate unique symbols
- library(heaps): heaps/priority queues
- library(increval): Incremental dynamic predicate modification
- library(intercept): Intercept and signal interface
- library(iostream): Utilities to deal with streams
- library(listing): List programs and pretty print clauses
- library(lists): List Manipulation
- library(main): Provide entry point for scripts
- library(nb_set): Non-backtrackable set
- library(www_browser): Open a URL in the users browser
- library(occurs): Finding and counting sub-terms
- library(option): Option list processing
- library(optparse): command line parsing
- library(ordsets): Ordered set manipulation
- library(pairs): Operations on key-value lists
- library(persistency): Provide persistent dynamic predicates
- library(pio): Pure I/O
- library(portray_text): Portray text
- library(predicate_options): Declare option-processing of predicates
- library(prolog_debug): User level debugging tools
- library(prolog_jiti): Just In Time Indexing (JITI) utilities
- library(prolog_pack): A package manager for Prolog
- library(prolog_xref): Prolog cross-referencer data collection
- library(quasi_quotations): Define Quasi Quotation syntax
- library(random): Random numbers
- library(rbtrees): Red black trees
- library(readutil): Read utilities
- library(record): Access named fields in a term
- library(registry): Manipulating the Windows registry
- library(settings): Setting management
- library(statistics): Get information about resource usage
- library(strings): String utilities
- library(simplex): Solve linear programming problems
- library(solution_sequences): Modify solution sequences
- library(tables): XSB interface to tables
- library(terms): Term manipulation
- library(thread): High level thread primitives
- library(thread_pool): Resource bounded thread management
- library(ugraphs): Graph manipulation library
- library(url): Analysing and constructing URL
- library(varnumbers): Utilities for numbered terms
- library(yall): Lambda expressions
- The SWI-Prolog library
- Reference manual
- Jan Wielemaker
- See also
library(config) distributed with XPCE provides an alternative aimed at graphical applications.
This library allows management of configuration settings for Prolog applications. Applications define settings in one or multiple files using the directive setting/4 as illustrated below:
:- use_module(library(settings)).
:- setting(version, atom, '1.0', 'Current version').
:- setting(timeout, number, 20, 'Timeout in seconds').
Settings are local to a module. This implies they are defined in a two-level namespace. Managing settings per module greatly simplifies assembling large applications from multiple modules that manage their configuration through settings. This settings management library ensures proper access, loading and saving of settings.
- [det]setting(:Name, +Type, +Default, +Comment)
- Define a setting. Name denotes the name of the setting, Type its type. Default is the value before it is modified. Default can refer to environment variables and can use arithmetic expressions as defined by eval_default/4.
If a second declaration for a setting is encountered, it is ignored if Type and Default are the same. Otherwise a permission_error is raised.
Name: Name of the setting (an atom)
Type: Type for the setting. One of any or a type defined by must_be/2.
Default: Default value for the setting.
Comment: Atom containing a (short) descriptive note.
- [nondet]setting(:Name, ?Value)
- True when Name is a currently defined setting with Value. setting(Name, Value) only enumerates the settings of the current module. All settings can be enumerated using setting(Module:Name, Value). This predicate is det if Name is ground.
- [det]env(+Name:atom, -Value:number)
- [det]env(+Name:atom, +Default:number, -Value:number)
- Evaluate environment variables on behalf of arithmetic expressions.
- [det]set_setting(:Name, +Value)
- Change a setting. Performs existence and type-checking for the setting.
If the effective value of the setting is changed it broadcasts the event
settings(changed(Module:Name, Old, New))
Note that modified settings are not automatically persistent. The application should call save_settings/0 to persist the changes.
- Restore the value of setting Name to its default. Broadcast a change like set_setting/2 if the current value is not the default.
- [det]set_setting_default(:Name, +Default)
- Change the default for a setting. The effect is the same as set_setting/2, but the new value is considered the default when saving and restoring a setting. It is intended to change application defaults in a particular context.
- [det]load_settings(File, +Options)
- Load local settings from File. Succeeds if File does not exist, setting the default save-file to File. Options:
- Define how to handle settings that are not defined. When error, an error is printed and the setting is ignored. When load, the setting is loaded anyway, waiting for a definition.
If possibly changed settings need to be persistent, the application must call save_settings/0 as part of its shutdown. In simple cases calling
- Save modified settings to File. Fails silently if the settings file cannot be written. save_settings/0 only attempts to save the settings file if some setting was modified. Raises context_error(settings, no_default_file) for save_settings/0 if no default location is known.
- True if Setting is a currently defined setting
- [det]setting_property(+Setting, +Property)
- [nondet]setting_property(?Setting, ?Property)
- Query currently defined settings. Property is one of
- Type of the setting.
- Default value. If this is an expression, it is evaluated.
- Location where the setting is defined.
- List settings to current_output. The second form only lists settings on the matching module.
- To be done
- Compute the required column widths
- convert_setting_text(+Type, +Text, -Value)
- Converts from textual form to Prolog Value. Used to convert values obtained from the environment. Public to provide support in user-interfaces to this library.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00794.warc.gz
|
CC-MAIN-2022-40
| 7,048
| 120
|
https://hoven-discuss.appspot.com/Home/Aptitude/Classification-Test/Discuss/16-03-30-10-58-46-196.html
|
code
|
Three of the following four pairs are alike in a certain way and hence form a group. Which one does not belong to the group?
DONE : NOED.
HAVE : AVEH.
WITH : TIHW.
JUST : SUTJ.
Each group follows the following rule.
1 2 3 4 ------->3 2 4 1.
Option (HAVE : AVEH) does not follow this rule, hence it does not belong to this group.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400210996.32/warc/CC-MAIN-20200923113029-20200923143029-00118.warc.gz
|
CC-MAIN-2020-40
| 322
| 8
|
https://aistats.org/aistats2018/invited_speakers.html
|
code
|
Speaker Name: Professor David Blei, Columbia University
David Blei is a Professor of Statistics and Computer Science at Columbia University, and a member of the Columbia Data Science Institute. His research is in statistical machine learning, involving probabilistic topic models, Bayesian nonparametric methods, and approximate posterior inference with massive data. He works on a variety of applications, such as text, images, music, social networks, user behavior, and scientific data. David has received several awards for his research, including a Sloan Fellowship (2010), Office of Naval Research Young Investigator Award (2011), Presidential Early Career Award for Scientists and Engineers (2011), Blavatnik Faculty Award (2013), ACM-Infosys Foundation Award (2013), and a Guggenheim fellowship. He is a fellow of the ACM and the IMS.
Speaker Name: Professor Jennifer Hill, NYU Steinhardt
Talk Title: Causal inference at the intersection of machine learning and statistics: opportunities and challenges
Jennifer Hill works on development of methods that help us to answer the causal question that are so vital to policy research and scientific development. In particular she focuses on situations in which it is difficult or impossible to perform traditional randomized experiments, or when even seemingly pristine study designs are complicated by missing data or hierarchically structured data. Most recently Hill has been pursuing two major strands of research. The first focuses on Bayesian nonparametric methods that allow for flexible estimation of causal models without the need for methods such as propensity score matching. The second line of work pursues strategies for exploring the impact of violations of typical assumptions in this work that require that all confounders have been measured. Hill has published in a variety of leading journals including Journal of the American Statistical Association, American Political Science Review, American Journal of Public Health, and Developmental Psychology. Hill earned her PhD in Statistics at Harvard University in 2000 and completed a post-doctoral fellowship in Child and Family Policy at Columbia University's School of Social Work in 2002. Hill is also the Co-Director of the Center for Research Involving Innovative Statistical Methodology (PRIISM) and Co-Director of and the Master of Science Program in Applied Statistics for Social Science Research (A3SR).
Speaker Name: Professor Andreas Krause, ETH Zurich
Andreas Krause is a Professor of Computer Science at ETH Zurich, where he leads the Learning & Adaptive Systems Group. He also serves as Academic Co-Director of the Swiss Data Science Center. Before that he was an Assistant Professor of Computer Science at Caltech. He received his Ph.D. in Computer Science from Carnegie Mellon University (2008) and his Diplom in Computer Science and Mathematics from the Technical University of Munich, Germany (2004). He is a Microsoft Research Faculty Fellow, received an ERC Starting Investigator grant, the German Pattern Recognition Award, the Okawa Foundation Research Grant recognizing top young researchers in telecommunications as well as the ETH Golden Owl teaching award. His research on machine learning and adaptive systems has received awards at several premier conferences and journals.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500056.55/warc/CC-MAIN-20230203122526-20230203152526-00172.warc.gz
|
CC-MAIN-2023-06
| 3,320
| 7
|
https://community.playfab.com/questions/143849/sending-analytical-data-about-bugs-of-our-game.html
|
code
|
We are wondering which playfab tools to use for our case:
We want to send information about bugs in our game to playfab. We assume that this will be a very common operation (bugs themselves are rare, but we have an average of 500 players in the game at the same time). Each bug message may weigh approximately 1KB.
The use of title data is obviously not recommended due to the frequency of queries and their size. We considered using player data to record these errors. We would then need to somehow export all this data from playfab.
Is there a better way to handle our bugs reporting system in playfab?
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816045.47/warc/CC-MAIN-20240412163227-20240412193227-00407.warc.gz
|
CC-MAIN-2024-18
| 604
| 4
|
https://www.noodle.com/questions/quKx/how-can-i-apply-to-boston-college
|
code
|
Alister Doyle, I work at Noodle.
Here is a link to the Application Requirements for Boston College. Some of the requirements include:
- Common Application: Boston College uses the Common Application and requires that all candidates submit their application electronically at www.commonapp.org.
- Common Application Supporting Materials: School Form and Counselor Letter of Recommendation, Teacher Evaluations, Mid-Year Grade Report.
- Boston College Writing Supplement: Boston College requires all applicants to submit the Boston College Writing Supplement which must be submitted at www.commonapp.org.
- Official Transcript
- Standardized Test Scores
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145533.1/warc/CC-MAIN-20200221142006-20200221172006-00150.warc.gz
|
CC-MAIN-2020-10
| 650
| 7
|
https://www.open-mpi.org/community/lists/devel/2006/03/0781.php
|
code
|
first I'd like to introduce myself. I'm Christian Kauhaus and I am
currently working at the Department of Computer Architecture at the
University of Jena (Germany). Our work group is digging into how to
connect several clusters on a campus.
As part of our research, we'd like to evaluate the use of IPv6 for
multi-cluster coupling. Therefore, we need to run OpenMPI over TCP/IPv6.
Last year during EuroPVM/MPI I already had a short chat with Jeff
Squyres about this, but now we actually do have the time to work on it.
First we are interested in integrating IPv6 support into the tcp btl. Does
anyone know if there is someone already working on this? If yes, we
would be glad to cooperate. If no, we would start it by ourselves,
although we would need some help from the OpenMPI developer community
regarding OpenMPI / ORTE internals.
So I would really appreciate any pointers, hints or contacts to share.
Dipl.-Inf. Christian Kauhaus <><
Lehrstuhl fuer Rechnerarchitektur und -kommunikation
Institut fuer Informatik * Ernst-Abbe-Platz 1-2 * D-07743 Jena
Tel: +49 3641 9 46376 * Fax: +49 3641 9 46372 * Raum 3217
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824756.90/warc/CC-MAIN-20160723071024-00276-ip-10-185-27-174.ec2.internal.warc.gz
|
CC-MAIN-2016-30
| 1,106
| 18
|
https://mail.python.org/pipermail/python-3000/2006-August/003320.html
|
code
|
[Python-3000] Making more effective use of slice objects in Py3k
greg.ewing at canterbury.ac.nz
Wed Aug 30 03:46:17 CEST 2006
Josiah Carlson wrote:
> On the other hand, its performance characteristics could be
> confusing to users of Python who may have come to expect that "st+''" is
> a constant time operation, regardless of the length of st.
Even if that's always true, I'm not sure it's really a
useful thing to know. How often do you write a string
concatenation expecting that one of the operands will
almost always be empty? I can count the number of times
I've done that on the fingers of one elbow.
More information about the Python-3000
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318952.90/warc/CC-MAIN-20190823172507-20190823194507-00253.warc.gz
|
CC-MAIN-2019-35
| 647
| 13
|
https://goldvoice.club/steem/@theycallmedan/why-i-am-so-bullish-on-steem/
|
code
|
Steem is a high-performance blockchain with very fast, feeless transactions utilizing Delegated Proof of Stake (DPOS.). DPOS + social networking is an excellent fit because the witnesses tend to voluntarily contribute to beneficial projects for said blockchain & can easily communicate with the community who vote for them.
Terms you need to understand before reading:
Steem, Steem Power & Steem Backed Dollar
Steem had no ICO and has been around since mid-2016, reaching the top 1,000 in website rankings on Alexa, and has had over 1,000,000 accounts created (each account costing valuable resources). The Steem blockchain rewards the liquid decentralized cryptocurrency Steem to its users through a method called Proof of Brain (PoB), and has given out over $70,000,000 to date.
Here are some reasons why I am bullish on the future Steem with links to content I find useful. All quotes are sourced from the header link for each section. (My Content Is Not Financial Advice, Crypto is a highly risky asset)
Communities will leverage Steem to deliver the ability for Community leaders to own the communities that they buidl. Communities add a governance layer which allows users to organize around a set of values. It introduces a new system of moderation, which is not dependent on users' Steem Power.
Many people want to see long-form, original content, while many others just want to share links and snippets. The goal of the community feature is to empower users to create tighter groups and focus on what's important to them. Use cases for communities may include: microblogging, curated journals, local meetups, link sharing, etc.
Like Subreddits on Reddit, a Steem Community can form a group around a niche, IE: Tag: Cars - Community: Bob's Lambo Club. Steem Communities allow you to take ownership over your group, being able to set the parameters you want; IE: Owner, Admin, Mod, Member, Guest. All of these roles can be given to people and will have different levels of control within said community.
The biggest companies in the world, from Facebook groups to Quora's Spaces, all proving market validation. The problem with how Facebook, Quora, and the rest are doing it is there is no openness, no decentralization, no censorship-resistance, no OWNERSHIP. Or, in other words, no STEEM.
New efforts to make non-blockchain "decentralized" platforms have led to headache after headache. From relying on 3rd party payment processors and being shut down, to being censored on 3rd party platforms while being overly reliant on them. The issue with what I call fake censorship-resistant websites, or unscalable ones (if a project can't scale, might as well be fake, 99% of people won't run a node, that is why we have DPOS blockchains) while some actually mean well, they are just missing the boat entirely and doing more harm than good.
Communities built on Steem are powerful, but when you combine them with Smart Media Tokens (SMTs.) You have something very special.
For more detailed info: Community Design Github
With Smart Media Tokens, Steem can give community leaders (whether they are app developers, businesses, or non-technical users) the financial tools they need to ensure that they, their community managers, and their community members are all incentivized to grow a community and buidl its value.
I believe reward tokens will have a significant role in how the internet works in the future. Smart Media Tokens will provide anyone the ability to create rewards tokens for their websites that is backed up by the Steem blockchain. No need to buidl your own blockchain or use a centralized database, Steem handles everything for you. Steem was the first mover for PoB and PoB token creation (Steem-Engine - Scot) - however, SMTs are built into the core Steem protocol, like erc20s are for Ethereum. Steem-Engine is a sidechain, which I'll get into next.
Influencers will be able to increase their brand awareness, raise capital with ease, and create an entire economy out of their business, product, and or talent. Being able to tokenize your skills and leverage that for capital now all while creating a speculative secondary market to drive future liquid demand to your token and business.
Steem-Engine.com is the first sidechain on Steem. This sidechain enabled smart contracts to be able to be built on Steem. This made it possible for Nonfungible Tokens (NFTs) and games like Steem Monsters to be built.
Steem-Engine scans all blocks added to the Steem blockchain for transfers, custom_json, or even comment operations that contain a certain sidechain ID. It then executes the smart contract function specified in the operation data and packages it into a block which is added to the sidechain. It also maintains the current state of the sidechain and all smart contracts running on it which can be queried using a familiar RPC API.
Steem-Engine will have all transactions posted to the Steem blockchain. However, the validation of the transactions will be done by a separate piece of software called The Steem Smart Contracts software (SSC).
The SSC software is free and open-source, and anyone may run their own SSC node and independently verify the transactions and query the current state of the system.
Scots purpose is making it possible to publish custom smart contracts to the platform. Scot allows token and dApp creators to buidl things that will not be possible with SMTs.
With Tribes, someone right now can go spin up their own Steemit.com, IE: PALnet or Splinter Talk - these are Steem clones, with a niche-specific audience incorporating Scot rewards powered by Steem-Engine. Tribes also allow anyone to make what is called a Scottube.
Scottube allows anyone to spin up their own version of a YouTube (using D.Tube backend). In a few clicks, someone can make their own video site and easily incorporate Scot rewards, all using Steem-Engine. Tribes is an all in one inclusive place to have any social outreach methods you choose; video + blogging, all with a custom native niche-specific cryptocurrency of your choosing. Tribes are sort of a glimpse into SMTs/Communities, but with the added features like the power of Scottube capabilities.
HF21 - Steem DAO + The Economic Improvement Proposal:
One of the biggest improvements that can be done to the Steem economy is countering the reward pool abuse and leakage of Steem off the platform through abusive behavior. The Economic Improvement Proposal (EIP) aims to improve the economy of Steem. The linked post above breaks it down, but here is a TLDR.
Author reward and curator reward split 50/50
Splitting rewards between authors and curators when upvotes are given evens the playing field vs. abusive behavior; the goal is to make curating as/more profitable as abusive behavior.
- Convergent Linear Rewards Curve
This curve makes self-upvoting less profitable due to a slight non-linear curve at the start. Instead of one self-vote needed to hit linear for an average user, a post will need several stake based upvotes before it hits linear rewards. Thus, a single self upvote is less profitable then timely curating a post that gains lots of staked based upvotes. Even if a whale can get self-vote past the threshold from non-linear to linear, if that is the only vote the post receives, the whale is better off finding content that garners more upvotes and compounding that with their massive upvote that draws even more awareness to the post they are curating. Conveyor-belt to success.
- Convergent Linear Rewards Curve
- Separate Downvote Mana Pool
Right now, it costs voting mana to downvote, meaning you must sacrifice an upvote to downvote. Thus, there are very few downvotes because it is seen as an opportunity cost. With the new system, everyone will get 2.5 free downvotes a day; after that it starts to cost voting mana again. The aim here is that if people see spam on the trending page, they will use their free downvotes on the spam posts. The way Steem works, we have a shared reward pool distributed via upvotes, so when you downvote a post, that Steem goes back into the reward pool for everyone else to share. When people fail to downvote and let spam reach trending, it hurts everyone else’s rewards. Free downvotes, when used correctly, can be looked at as free upvotes in a way.
These three fundamental changes coming up in hard fork twenty-one (HF21) are significant and should fight reward pool abuse and make upvoting others genuine content the go-to way to make money on Steem as a curator. The economy on Steem has been broken for a long time in heavy favor of vote-selling and excessive self-voting. It reminds me of an old quote from the Gold Rush: "The struggle between right and 6$ a month and wrong and 75$ a day is a rather severe one." - A soldier said when fleeing his post to California to find gold. Without checks and balances, there is no working system. The EIP is the most significant effort to date to improve the economics of Steem.
The Steem Decentralized Autonomous Organization (DAO):
Ten percent of the reward pool will be going to a DAO (also known as Steem Proposal System, aka SPS). The funds will be used to support development on the Steem blockchain. Similar to Dash's Budget Proposal System. Right now, Steemit INC. has been doing a lot of the heavy nonprofit blockchain work (scaling, etc.). The DAO helps the Steem blockchain to become more decentralized by having a way to fund future work. The funding at today's prices would be around one million a year, and Steemit INC is going to be donating to help jump-start the fund.
So, why am I bullish? What is the catch?
Steem is an efficient database due to the upgrade called MIRA (Multi-Index RocksDB Adapter). MIRA lets data on Steem be stored on regular commodity hard drives, not in RAM. The scaling problem that EOS struggles with to this day, Steem has quietly solved. With the recent upgrades to Steem, the cost of running a full node has been reduced by over eighty percent. Moreover, optimizations are being done daily.
Right now, if someone wanted to make a censorship-resistant website with crypto rewards using PoB, (and did not use Steem), one would need to use EOS or Tron (DPOS blockchain or one with high TPS). From there they would need to buy a bunch of costly ram (IE: Block Ones 25 million dollar ram purchase for Voice). From there the project would need to fork Steem 😂 because there is not a better PoB setup anywhere else (which many have done). From there create their own PoB token on high TPS blockchain, then find a community. Sounds unreasonable.
On Steem, there are Resource Credits (RCs) that are generated over time based on the amount of Steem Power (SP) you hodl. Meaning the more SP you have, the more actions you can perform on the blockchain. RCs will be able to be delegated when SMTs come.
So, I could buy SP, delegate my RCs to new users, and use the SP to upvote the users. I do not need to worry about making my own token as it takes a few clicks with Steem-Engine. I do not need to find a community because Steem has several already. As an example, the Tribe "SteemLeo," an investing community, was created recently on Steem and had instant traction. Other Tribes like Sports Talk Social and STEMgeeks are emerging with great success.
The future will be tokenized tags. Each tag with several different front ends and a variety of different tokens. IE: Car tag may have Lambo = Lambo token, Prius = Prius token, etc. Each with their own front end and communities, both public and private. From there, sub-communities will form like: Car - Racing - Lambo - Bob's Lambo Club, etc. However, to use/buidl any of these things on the robust immutable public Steem blockchain, you will need valuable resource credits. 🚀
Steem is censorship-resistant digital real estate that gives you regenerative resources 🌱 (use it or lose it) in an age where centralized social media is doing the greatest content creator purge we have ever seen. Any public business that hodls users' data must use industry best practices. Robust, immutable, censorship-resistant public blockchains are the industry's best standard when it comes to data protection (Bitcoin and Steem have never been hacked on the blockchain level). There are only a few public blockchains built for web 3.0, and among those, there is a tiny fraction on which you can buidl web 3.0 apps today. Steem is on the top of my list when it comes to undervalued projects with a great use case, now, not tomorrow, now. Skate to where the puck is heading, and that puck is heading straight for immutable public blockchains.
Some good reads from top witnesses: 🧠
Why you should consider Steem as a blockchain/cryptocurrency for your project/startup/business - The Real Wolf
It is time to start paying attention to Steem - Tim Cliff
Do you have or know anyone with a website? 🙋♀️
Like what you read? 😍
How would you like to have all the fantastic benefits of Steem for your website? 🙏
The great potential of STEEM can be capitalized on today, one simply needs to reach out and grab it!
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057083.64/warc/CC-MAIN-20210920161518-20210920191518-00554.warc.gz
|
CC-MAIN-2021-39
| 13,082
| 48
|
http://georgemotherfuckingweasley.tumblr.com/submit
|
code
|
I'm No Sherlock Holmes
If you know who I am, then you know who I am, and I shouldn't have to tell you any more
If you don't know who I am, then you shouldn't be here.
This is a private RP blog.
Something you wanna share?
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657137841.57/warc/CC-MAIN-20140914011217-00030-ip-10-234-18-248.ec2.internal.warc.gz
|
CC-MAIN-2014-41
| 221
| 5
|
https://ideas.audata.io/?page=1
|
code
|
Help us make Audata products even better for you.
A way to tailor message replies and make them time-specific, including the ability to create individual replies for each time frame.
When sending an SMS the ability to select the Sender ID from a dropdown (e.g. phone numbers or alphanumeric IDs).
Ability to see "deleted" messages in the Messages Inbox, and the option to restore deleted messages or permanently delete them.
Ability to create and use canned replies / templates in the Messenger module for replying to listeners.
Ability to identify individual entrants, e.g. when you have a contest that can be entered multiple times by one entrant, the ability to see how many people have entered as opposed to how many entries there are.
Ability to connect prohibition rules to campaigns so that listeners can only enter/win once per campaign.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103640328.37/warc/CC-MAIN-20220629150145-20220629180145-00647.warc.gz
|
CC-MAIN-2022-27
| 832
| 7
|
https://mroparts.site/84573/10990/and_declaring_2393.html
|
code
|
16 Must-Follow Facebook Pages for Declaring Stdin And Stdout Marketers
Com, and rerun.
Become A Member
If there is stdin and stdin.
This addition and stdout, when an open or int you output declaring stdin and stdout and quite enough. Stdoutput Enter an integer value stdinget NotInitialized stdoutput You entered NotInitialized nl end DemoVars Program 22 Variable Declaration. Name needs to point appears more than a computer? The console pool definition, remove from declaring stdin and stdout methods are. The Standard Input'' and Standard Output'' The simplest input mechanism is to. If you are going to write on a file, which can be used in our program to take input from user and to output the result on screen. Characters in this picture using indirect spawn id is beyond simple build log files?
Normally used to understand how to a string? How they are passed to each element is returned if no escaping meta characters remain valid for declaring them as they must be unified with. On a simple toy program directly on daemon attempts to dynamically write all substrings of short of output declaring stdin and stdout will assume declaring them, you can define operators. In texas gain from standard streams are declared using these are not cross executable for any concrete recommendations that declares a natural numbers separated by argument. Asking for stdin and stdin, stdin for each other.
Subs and methods that are automatically called for you at special times are written in uppercase.
If you can be used for things like utility code contained above code in parenthesis and parsed having two. Prompt when a permission error messages should not close all output declaring stdin and stdout and explain your main sub is natural numbers. If there is a class or print. What standard output device driver attached to stdin and stdout. This means that Ninja will check the timestamp of the output after the action completes. SIGINT is redefined to start the interactive debugger. You design as stdin from declaring stdin and stdout to stdout.
But in more scopes until file or stdout, stdin of arguments are nice to get stdout to be apparent how various key. You are declared in a file name of declarations or tuple of rust programming language spec tests in this must be as written by running. It a file position indicator and thirdir should save that might occur when requested number of integers from declaring stdin and stdout methods that gn will complain about your attention. Sees programming as an art form. Further, a single string consisting of all the output is returned. If the stream should consider setting the stdin and stdout?
Pointers cannot actually point between characters, make it the last character before the final close bracket. Provide a program status predicates to read a fortran was this string hello, such good idea is no part of unsigned char in this means to. Weof is known at such that declares a bunch of what is important part of cookies from a unique. Due to mix between text file publishing method returns null value cannot split inputs is stdin and stdout and append access. Bsd code elimination to files to be passed to run this is? Content of stdin can be handled separately so that can cause lost or will compute all. This is the data type that a handler function should have. Take this into account when setting the values.
In this may be. Princess Pastel Couture Claus.
This is set and stdin
Configs applied recursively evaluate rules, if it might be deleted, use it many a file must declare variables are present in new behavior for declaring stdin and stdout, and standard input from declaring variables. For that use, whatever reason for details too much time constraints, except you serialize and retrieve structured data will return value from other hand arguments. What you must be less than this use stdout and stdin? Beware when using corner cases, no filtering will be performed. In a previous section, run the command on your terminal.
When opening a file lazily for reading, we pass thru the list, you cannot perform any additional operations on it. If you may have runtime execution as gn does, and stdin using the mode as salient concepts still apply them on new link, while registered this. Lets you use the received value to set an environment variable named as the specified input name. Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. Uppercase and registered trademarks and killed with blanks are all targets your program to and stdout, pipe and run for declaring stdin and stdout. Variables int fahrenheit celsius declares two variables of type int that can store. Open streams that depend on some others are placed on declarations or write any other file containing either a common tool.
There must take on and stdout and returns the following segment example
String whenever a circuit breaker with different operator for declaring integer, navigating an error on c lets to be run before it is open or tree, binary stream specialized for declaring stdin and stdout some optional. Oh man, however, the int value is converted to an unsigned char before being output. Good information about stdout, declare a context, an integer constant data declaration block this in any dynamic variables declarations and internationalize messages. Beyond that, these will be added using the default system rules. Bash and Nextflow variables without having to escape the first.
They provide a shortcut for the most common type constraints when declaring variables and.
Write the average value to the file. Here we are using readLine function to read the string entered on console fun mainargs Array printWrite anything here val enteredString. You can do both if you wish. Groups will do not specifying a command can be printed only to stdin from declaring stdin and stdout will affect which references to stdin, if i get interleaved prints. You should not use xargs until you understand stdin stdout and pipelines. It is an error to remove an item not in the list. Iterators cannot begin to stdin and quotations we want these.
Behaviour when seeking to positions beyond the size of the underlying object depend on the object and possibly the operating system. String literals may contains escape sequences. In with zeros instead of targets matching more than those. It can be used to introduce new severity levels.
Note that stdin and stdout and can query to. An existing file name suggests, i make a simple toy program shows you.
In the stdout and stdin
Hey there is stdout, use and stdout in this is associated with lightning speed, science will prevent users. Basic IO concepts in C Flavio Copes. The perimeter of memory location that may have packed much stuff referencing the biggest problems. If specified, to see how package scoped variables were used, but it does support reading and writing from stdin and stdout. This function causes all open streams of the process to be closed and the connection to corresponding files to be broken. This means that Nextflow will stage it in the process execution directory, a list consisting of all remaining input lines is returned, it will inherit the directory name. This conversion facility allows the programmer to specify the set of characters that are acceptable as part of the string. The directory of the generated file and output directories, the build will fail with an error. Unicode it uses multibyte or wide characters.
Since essentially they can include directories and third argument is intended it works for declaring stdin and stdout nor stderr is allocated on dependents must match ultimately fails in programs. Display or configure arguments declared by the build gn args. Print an excessive amount of stdin, expect cannot be evaluated in a protocol buffer flush its length and stdin and see this list it is not constant. By effective java tutorials are parameters, unlike dealing with. When an interrupt occurs, that first one is just plain wrong.
Directory is stdin and stdout
The stdout which created in all your running in its source files in three numbers are echoed from declaring stdin and stdout and local disk files are equivalent to rebuild if you type int variable. Specifies values here we feel confident of stdin and so that spans blocks of newline. While syswrite writes data from memory to the console sysread reads data. The return value is normally count, gives the minimum number of digits that must appear; if the converted value requires fewer digits, and you can write text into a file.
If you out to run time, one character as docker will misbehave if you down creating atoms generally useful when requested and stdin stdout and per mildner. Not stdout and stdin, stdout which point in c shell script will occur. First and stdout which they are not provide a buffer compiler environment of extra character is useful to be truncated and nextflow and returns its value. For stdout function and stdout, they are different c library.
You can open filehandles directly to Perl scalars instead of a file or other resource external to the program. Input from stdin, declare main as framework is declared in a single occurrence will assume that. Bash trap in function, use meaningful function and variable names. Docker will be present, can take a value should be shared content, stdin and stdout. Confused about stdin, the output is printed in the same command window. If any output stream and stdout of the above is text commented out the power?
Then be omitted, stdout and echo command. These errors or stdout, declare a pipe stderr is declared before being renamed dependencies of declarations, halts program has occurred. See more discussion on this under EXPECT HINTS below. Sometimes its cookie when you depend only possibility of their use conventions for declaring stdin and stdout in this? Second edition string and stdout, but when called file?
Rust procedural macros instead of template, output by assuming the new and stdout in the direct adress to
Why has a child process may be confusing. Valid answer and stdout, all programs with any key presses translate into buffer is stdout and stdin, it in hla compiler knows its friends. This statement gets two values from standard input. Otherwise if the value is half the stdout and stdin and output error code signing script result to the consistent way is not exists. Introduction A computer system is comprised of a collection of.
The posix standard error to use uppercase characters less than this is that would be treated as string to. It crashes is stdin and stdin stdout some others, potentially much to a general, a message out. Everything that puts too. You do some cases, if there are inherited from declaring stdin and stdout? Java programming concepts, which can only occur when we are actually repeatedly writing to the screen, and snippets. It in such a target that stdin is declared before a way to. If no pattern is specified, missing terminators, and their records pruned from the ninja build log and dependency database after the ninja build graph has been generated.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00006.warc.gz
|
CC-MAIN-2022-40
| 11,190
| 31
|
https://forum.ghost.org/t/only-featured-images-shows/15971
|
code
|
I am running ghost in a docker container that is automatically updated.
On May 18, 2020 I published an article with images uploaded to the server and no problems. Lately I cannot get any local image other than the featured image to show. The page source has this weird card thing that has multiple images sizes (the URLs are all valid) but none of them show. This is happening on two of my sites. Images hosted elsewhere work as expected but do not show up on the front page.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439736962.52/warc/CC-MAIN-20200806121241-20200806151241-00465.warc.gz
|
CC-MAIN-2020-34
| 475
| 2
|
http://devguis.com/chapter-2-blockchain-is-the-new-revolution-after-the-internet-understanding-cryptocurrencies.html
|
code
|
What Is a Blockchain?
The blockchain is the brainchild of Satoshi Nakamoto as referenced in Chapter 1. Satoshi used two separate words, block and chain. With time, the two words have combined into a single word, blockchain. Originally, blockchain was devised for bitcoin (a cryptocurrency), but it has since evolved into something much bigger. A blockchain can be viewed as a publicly available digital ledger that contains a record of the transactions. This kind of database is accessible to anyone, and there is no centralized version of it. In other words, a blockchain is a decentralized technology. It is important to understand that the blockchain technology is not necessarily for financial transactions only, and it can be used wherever uniqueness of records is required.
A blockchain is presented by Blockgeeks.com in the diagram shown on page 8.
The users of the network participate in the blockchain. This user-to-user (peer-to-peer) participation makes the blockchain decentralized. This kind of recordkeeping can be extended to any business domain. The full potential of the blockchain technology is still under investigation. The most attractive part of the blockchain is the removal of the intermediary party between two users. Currently, finance and identity management are at the top of the applications of a blockchain.
The white paper by Satoshi refers to blockchain as follows:
…system based on cryptographic proof instead of trust, allowing any two willing parties to transact directly with each other without the need for a trusted third party…
Currently, most of the systems on the Internet require a third party that blockchain tends to eliminate altogether. This elimination of third-party intermediaries is certainly a threat to the conventional (and expensive) methods.
Accordingly, a blockchain can be considered to have the following attributes:
1. Consider this as a digital ledger available publicly.
2. Records in this shared ledger use encryption and decryption.
3. Timestamped creation, validation, verification, and monitoring of the transactions in a decentralized manner.
It should be noted that a blockchain does not have to exist publicly. In that case, the nodes exist in a private network with access to the distributed ledger. A blockchain is a continuously growing list of records, called blocks, linked using cryptography. A block contains a group or batch of valid transactions. A block in the blockchain has the cryptographic hash of the previous block in the blockchain. A cryptographic hash is equivalent to a digital fingerprint. This linking of the adjacent blocks forming the chain resists the modification of the data contained within the blockchain. The authentication of the records takes place with the mass collaboration by the users. This makes the blockchain a secure database where the records become almost unalterable. Conventional centralized databases have their own challenges related to data integrity and security at very high costs to the businesses that get eliminated with the use of the blockchain technology.
The data integrity of the records is verified through an iterative process tracing back to the genesis block. Consider the genesis block as the very first block of the blockchain, also called block 0. As mentioned earlier, cryptocurrencies are based on open-source code that anyone may update to create newer digital currencies. A genesis block is generally hardcoded in the software, that is, already present in the base software. It is the only block that is not linked with any previous block via a cryptographic hash. A blockchain can be visualized as a vertical stack that is ever growing with new blocks, where every new block is back-linked with the previous one. The first block is the base of this vertical stack. The latest block is called the top block. The distance between two blocks is called the height.
Structure of a Blockchain
A blockchain is a chain of blocks, where a block contains a batch of transactions. A block also contains a header. The transactions are organized in a hash tree along with the hash of the root in the header. A hash tree or Merkle tree in cryptography is a hash-based data structure that is a generalization of the hash list. A Merkle tree is a tree structure in which each leaf node is a hash of a block of data, and each non-leaf node is a hash of its children. In a binary tree, a node is a leaf node if both the left and right child nodes of it are null.
Researchers Wei Cai and Victor Leung of the University of British Columbia present the blockchain structure in a simple diagram as follows:
Merkle trees are efficient due to hashes, where hashes can be viewed as ways of encoding files that are much smaller than the actual file itself. In a Merkle tree, each node has up to two children, technically known as branching factor of two. These trees facilitate efficient and secure verification of very large data structures.
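To make the hashing step concrete, here is a minimal sketch (an editorial illustration, not taken from the original text) of computing a Merkle root with Python's hashlib; it assumes the common convention of duplicating the last hash when a level has an odd number of nodes:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(transactions: list[bytes]) -> bytes:
    # Leaf nodes are hashes of the raw transaction data.
    level = [sha256(tx) for tx in transactions]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last hash if the level is odd
        # Each parent node is the hash of its two concatenated children.
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

print(merkle_root([b"tx1", b"tx2", b"tx3", b"tx4"]).hex())
```

Changing any single transaction changes its leaf hash and, in turn, every hash on the path up to the root, which is what makes verification of large data structures efficient.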
How Is a Blockchain Linked to Cryptocurrencies?
A blockchain is to cryptocurrencies, what the Internet is to an e-mail. An e-mail can be sent using the Internet, though the Internet can be used for many more other purposes as well. Similarly, cryptocurrencies are built on the blockchain technology, whereas a blockchain can perform things much more than handling cryptocurrencies. The details of such usages are covered in the chapter where use cases are elaborated.
All cryptocurrencies are blockchains, but all blockchains are not cryptocurrencies. Both blockchain and cryptocurrency go hand in hand. A blockchain can be extended to anything of value, and not currency only. Blockchain is a technology, whereas cryptocurrency is an asset. Bitcoin being the first application of blockchain, the two terms bitcoin and blockchain got used inadvertently for quite some time. However, blockchain has evolved much bigger than just supporting cryptocurrencies only.
A cryptocurrency is a digital token used for a monetary transaction between two individuals. A number of nodes validate the transaction without involving any expensive third-party intermediaries. The nodes have their individual copy of the distributed ledger where various users verify whether the token is double spent or not. Also, the balance is published after the users have verified the transaction. The updated ledger gets published every 10 minutes for bitcoin. This update includes the consensus-based batch of transactions in the form of a block on top of the current tree. The users worldwide must agree to the legitimacy of the transaction. Once a block gets added to the blockchain, the balances get updated permanently.
The blockchain relies on the computer processing power of the network. The users within this network update the distributed ledger and secure the blockchain. That is why, it is important to have a variety of users worldwide. Generally speaking, a healthy blockchain exists if one group of users or an organization does not own more than 51 percent of the computers on the network. Ownership more than this potentially may lead to stop transactions, hence making the blockchain ineffective.
Technological Overview of a Blockchain
A blockchain is a chain of back-linked blocks, with each block containing a batch of transactions, where the number of transactions is set by the underlying protocol. A network of participating computers called nodes continues to add and store blocks in this blockchain. These nodes verify the transactions before adding them to the block. The nodes also solve the underlying complex mathematical problem. After these two activities, the block gets added to the blockchain with a reference to the previous block.
Encryption and decryption are used for the security of the data. A mathematical formula is used to hide data using encryption, whereas decryption is used to bring the hidden data back into its original form. A blockchain uses cryptographic hashing to achieve this. The mathematical formula used to encrypt the data related to the transaction along with metadata produces the output called hash. This hash can be viewed as compact information regarding data. With the help of set of keys, the same hash gets produced.
The public key and private key play a significant role between the two users of a transaction. The public key, as the name suggests, is available publicly, but the private key is not. The sending party uses the private key to send the data (transaction) in an encrypted form. The nodes use the public key to decrypt the sent data and ensure that there is no double spending. Double spending is especially relevant to digital currencies, as digital information can be reproduced relatively easily and may therefore be spent twice or multiple times. To avoid this problem, the cryptographic protocol called proof-of-work (PoW) is used. This ensures that the digital currency is not used more than once by the user. A blockchain uses the SHA256 PoW function, whose proof is hard to compute but easy to verify, to avoid the double-spending problem. On that note, there are many PoW systems.
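As a rough, hedged illustration of "hard to compute, easy to verify" (this is a toy sketch, not bitcoin's actual difficulty rules), a miner searches for a nonce whose hash starts with a required number of zero hex digits, while anyone can check the claimed nonce with a single hash:

```python
import hashlib
from itertools import count

def proof_of_work(block_data: bytes, difficulty: int = 4) -> int:
    # Try nonces until the hash starts with `difficulty` zero hex digits.
    for nonce in count():
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce

def verify(block_data: bytes, nonce: int, difficulty: int = 4) -> bool:
    digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = proof_of_work(b"batch of transactions")
print(nonce, verify(b"batch of transactions", nonce))
```

Finding the nonce takes many attempts on average, but verifying it takes exactly one hash, which is why honest nodes can cheaply reject invalid blocks.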
A cryptographic hash is a signature of the digital data, where the SHA256 function produces a 256-bit, that is, 32-byte, signature of the digital data. This fixed-length signature is almost unique and cannot be reversed back into the original data. Regardless of whether the data is small or big, the SHA256-produced signature always has the same fixed length. Based on the theory of probability, the chance of two different inputs producing the same signature or hash is extremely low: there are 2^256 possible hash values, and a collision is only expected after on the order of 2^128 attempts.
Technically speaking, a hash pointer is used to back-link to the previous block in a blockchain. The hash pointer is a combination of the address of the previous block and the hash of the data within the previous block. This makes the blockchain very secure, as it keeps on building on previous blocks. A block header contains the block version number, current timestamp, difficulty target of the computational problem, hash of the previous block, nonce, and hash of the Merkle root. A nonce is a 32-bit integer between 0 and 4,294,967,295.
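The back-linking idea can be sketched as follows; the field names are illustrative and simplified, not the exact bitcoin header layout:

```python
import hashlib, json, time

def block_hash(header: dict) -> str:
    # Hash a canonical JSON serialization of the header.
    return hashlib.sha256(json.dumps(header, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash: str, merkle_root: str, nonce: int = 0) -> dict:
    return {"version": 1, "timestamp": int(time.time()),
            "prev_hash": prev_hash, "merkle_root": merkle_root, "nonce": nonce}

genesis = make_block(prev_hash="0" * 64, merkle_root="root0")
block1 = make_block(prev_hash=block_hash(genesis), merkle_root="root1")

# Tampering with the genesis block breaks the hash pointer stored in block1.
genesis["merkle_root"] = "tampered"
print(block_hash(genesis) == block1["prev_hash"])  # False
```

Because every header embeds the previous header's hash, altering any historical block invalidates every block that follows it.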
Microsoft has presented the block structure as follows:
Why Are Users Validating the Transactions?
The users on the network get rewarded for their collaborative efforts to validate the transactions. The activity of looking for a new potential block to be added to the blockchain is called mining. The users performing the mining process are called miners. The process of mining involves compiling recent transactions into a block and solving a comparatively difficult mathematical problem. The miners verify that the new transactions are legitimate. When a transaction gets broadcast on the network, various miners around the world get on the task of mining. In a way, a competition starts to verify the new transactions for a potential new block and to find a solution to the computational problem. However, the winner is whoever is able to provide a PoW first, at which point the block gets added to the blockchain. The winning miner gets rewarded for this mining effort in the form of cryptocurrency coins. The difficulty of calculating hashes increases with every iteration. This makes the digital currency increasingly scarce, similar to printable currency. The underlying algorithm of the cryptocurrency poses a limit on the number of coins; for example, bitcoin can have a maximum of 21 million bitcoins as per the current algorithm.
Why Is Blockchain Gaining So Much Importance?
Many blockchain projects are underway worldwide in what is called Web 3.0. Web 1.0 was the name given to the very first form of the World Wide Web. Web 2.0 brought global sharing of information and social media. Web 3.0 has the decentralization of information at its heart. This is also called the human-centered Internet because the information is back in the hands of its rightful owners. With decentralization, middle parties are eliminated; parties that may have monopolized the related business domain for their own selfish motives. Overall, the end user, and not a third party (including government), has full control over their data and its security.
Considering the aforementioned benefits of Web 3.0, many applications have started emerging during the past couple of years that are taking away the monopoly of existing widespread applications from big corporations. Brave, Experty, Storj, and Status are some examples of Web 3.0 apps serving the same purposes that Web 2.0 apps serve: browsing, video or audio calls, storage, and messaging.
What Are the Other Uses of a Blockchain?
The blockchain technology is much bigger than supporting cryptocurrencies only. As mentioned earlier, a public blockchain is a digital register of records available in a secure and transparent manner, in a decentralized environment, without needing any expensive third-party intermediaries. A blockchain is expected to be of great use in a number of fields such as identity management, supply chain management, accounting, voting, stocks, and smart contracts. These usages are also referred to as use cases. The topic of other uses of a blockchain is dealt with in greater detail in Chapter 10.
What Are the Hardware or Software Requirements?
The system requirements of a blockchain in terms of hardware and software vary drastically based on the perspective. The perspectives may include those of an end user, an investor, a developer developing the blockchain, and a company investing in a blockchain project internally or externally. For an end user or investor, there is no special requirement, and a normal laptop in current use will do. Developers need access to the relevant programming language for development. The required computational power increases dramatically for the nodes performing mining to produce the PoW. This is because an increasingly difficult computational problem must be solved before the PoW-supported block can be added to the blockchain.
Why Do I Really Need to Know About It?
In the current digital era, with Web 3.0 in the making, blockchain and decentralization are focused on bringing control from the big corporations back to the end user. That is certainly of interest to anyone who would like to see reduced operating costs in a safe and secure environment, where transactions take place efficiently and quickly.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154126.73/warc/CC-MAIN-20210731203400-20210731233400-00695.warc.gz
|
CC-MAIN-2021-31
| 14,386
| 40
|
http://quotebook.us/statements_aphorisms_quotes_about/always
|
code
|
always - authors
always quotes, aphorisms, statements, say
NCIS Tony: [to Jeanne after discussing their relationship] If you always do what you've always done, then you'll always get what you always got. And while what I got had its perks, I'm looking for something different now.
David Zucker There would always be a vote. There were always conflicts and arguments for years and years - that's why we're not together anymore. But there was always a vote. It was always two out of three.
Rockford Files, The Jim: [as "Jimmy Joe Meeker"] That's another thing my daddy always said: smart folks always eat off the same plate, but those greedy ones always spill their dinner.
Richard Lessing: I'm beginning to tire of your daddy.
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704818711/warc/CC-MAIN-20130516114658-00050-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 725
| 6
|
http://www.ps4news.com/forums/ps3-hacks-jailbreak/snes9x-super-nintendo-emulator-ps3-port-arrives-112925-36.html
|
code
|
What firmware are you running?
I'm perfectly aware of the fact that we might have to recompile with the 1.92 SDK for all firmwares below 3.41. The next version will cover both bases (lesser than 3.41 and 3.41) - in the meantime, you have to recompile for yourself (instructions on how to compile with 1.92 SDK are in the README)
The PSP ports don't run at full speed anywhere and have to resort to frameskipping. And none of them are using SNES9x 1.52, so you don't get blargg's sound core, and therefore a good portion of the sound effects are totally wrong.
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936470419.73/warc/CC-MAIN-20150226074110-00231-ip-10-28-5-156.ec2.internal.warc.gz
|
CC-MAIN-2015-11
| 557
| 3
|
https://www.computing.net/answers/windows-vista/check-email-while-a-process-is-running/12530.html
|
code
|
In that case I would contact the vendor of the application and explain the problem to them. It may be that they can adjust the thread scheduling to take some pressure off the CPUs.
As Lmillar says, it is the OS that controls memory allocation but it is the application that requests memory in the first place. Possibly the applications memory requests could be tweaked not to be so aggressive in its demands.
Either way there is not much the user can do about it.
I assume these are 64 bit CPUs
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00250-ip-10-171-10-70.ec2.internal.warc.gz
|
CC-MAIN-2017-04
| 495
| 4
|
https://community.vizlib.com/support/discussions/topics/35000026393
|
code
|
Pivot Table Dimension Styling
Hi-I’d like to conditionally color the text of my forth dimension on my pivot table. It is a combination of two fields, project ID and project note. When the recent_update flag associated for that project ID I’d like the project ID/project note dimension to be a different color. I cannot achieve this using the dynamic styling of dimension options. Is this type of thing possible?
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950528.96/warc/CC-MAIN-20230402105054-20230402135054-00009.warc.gz
|
CC-MAIN-2023-14
| 415
| 2
|
https://westofbathurst.com/westofbathurst090710.html
|
code
|
Friday, July 10, 2009
Panel 1: At a reasonable approximation of the real-life CAMH, the Centre for Addiction and Mental Health, Rahim walks through a door and approaches Casey, who is standing in a corridor in his PJs, hugging himself.
Rahim: Okay, she's been admitted. I told them everything I could.
Rahim: She's in bad shape, Casey. You did the right thing.
Casey: Did I?
Panel 3: Casey begins to cry.
Casey: When was that, exactly? Was it when I lied to her...shouted at her...told her she was crazy...or got her locked up in a nice white room? When the hell did I do the right thing?
Panel 4: Rahim hugs a sobbing Casey.
Rahim: You know, they've got another room free.
Casey: I think I'll have my breakdown outside the psych ward, thanks.
Alt-Text: Hey, I'm getting depressed. Anyone else getting depressed? Let's make NACHOS!
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506027.39/warc/CC-MAIN-20230921105806-20230921135806-00050.warc.gz
|
CC-MAIN-2023-40
| 835
| 14
|
https://axiomtelecom.com/ar/product/mycandy-smart-bottle-2/
|
code
|
Smart Insulated Water Bottle with UV Filter
Health | Fitness | Durable
The MyCandy Smart Water Bottle is an insulated stainless steel bottle that maintains the water temperature for a long time and displays it on the LED lid. It has a UV light to filter the water inside within 1 minute. The bottle also gives reminders to drink water at regular intervals and maintain a healthy lifestyle.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943555.25/warc/CC-MAIN-20230320175948-20230320205948-00659.warc.gz
|
CC-MAIN-2023-14
| 385
| 3
|
https://www.biostars.org/p/9464537/
|
code
|
I am doing a limma analysis of a data set comprising 4 groups with 50 samples in each. In total I have 5 different comparisons: Group1 vs Group2; Group1 vs Group3 and so on... Limma gives me a set of differentially expressed genes for each comparison.
Next I want to do a leave-one-out cross-validation of the results for each group comparison, in total 5 different LOOCVs. In the LOOCV I am doing the feature selection with limma for each iteration. The problem I have is that I have to include only those groups I am comparing during the LOOCV, in total 100 samples for each LOOCV. The limma results will then be different when the dataset only has 100 samples, compared to 200 samples with the full dataset, because the normalisation and filtering steps will be affected differently.
Is it correct to do the LOOCV with feature selection on only the 100 samples?
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988753.91/warc/CC-MAIN-20210506083716-20210506113716-00359.warc.gz
|
CC-MAIN-2021-21
| 862
| 3
|
https://mail.python.org/pipermail/python-dev/2005-August/055792.html
|
code
|
Stephen J. Turnbull
stephen at xemacs.org
Tue Aug 30 04:37:53 CEST 2005
>>>>> "Raymond" == Raymond Hettinger <raymond.hettinger at verizon.net> writes:
Raymond> FWIW, I am VERY happy with the name partition().
Raymond> ... [I]t is exactly the right word. I won't part with it
I note that Emacs has a split-string function which does not have
those happy properties. In particular it never preserves the
separator, and (by default) it discards empty strings.
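(For context: Python's str.partition always returns a three-tuple and preserves the separator, which is exactly the property being contrasted with Emacs's split-string, e.g.:)

```python
>>> "key=value=more".partition("=")
('key', '=', 'value=more')
>>> "no separator here".partition("=")
('no separator here', '', '')
```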
Raymond> It has a long and delightful history in conjunction with
Raymond> the quicksort algorithm
Now, that is a delightful mnemonic!
School of Systems and Information Engineering http://turnbull.sk.tsukuba.ac.jp
University of Tsukuba Tennodai 1-1-1 Tsukuba 305-8573 JAPAN
Ask not how you can "do" free software business;
ask what your business can "do for" free software.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154320.56/warc/CC-MAIN-20210802110046-20210802140046-00182.warc.gz
|
CC-MAIN-2021-31
| 869
| 17
|
http://abtion.com/
|
code
|
Abtion is not another digital agency. We are a software company. We build software that transforms companies and makes people's lives better, easier and more fun. We build software that works.
+45 70 60 60 70
Danas Plads 17A 1915 Frederiksberg C
Brandts Passage 8 5000 Odense C
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541783.81/warc/CC-MAIN-20161202170901-00214-ip-10-31-129-80.ec2.internal.warc.gz
|
CC-MAIN-2016-50
| 274
| 4
|
https://community.oracle.com/customerconnect/discussion/86931/unable-to-calculate-responses-sent-by-agent-total-login-time-by-agent-in-one-report
|
code
|
Unable to calculate Responses Sent by Agent / Total Login time by Agent in one report
- Calculate PPHW using the following formula
- PPHW = Responses sent by Agent/Total Login Time By Agent
I'm unable to get this data on one report because Responses Sent by Agent is using the 'Transactions' table and Agent Total Login Time is captured in the 'User Trans' table. It won't allow me to join these two tables to get the data in one report. Any ideas on how I can achieve this?
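If both datasets can be exported, the PPHW formula itself is simple to compute outside the reporting tool; here is a minimal sketch using pandas (the column names are assumptions for illustration, not the actual report fields):

```python
import pandas as pd

# Assumed exports: responses per agent (from Transactions) and login time per agent (from User Trans).
responses = pd.DataFrame({"agent": ["A", "B"], "responses_sent": [120, 80]})
logins = pd.DataFrame({"agent": ["A", "B"], "login_hours": [6.0, 4.0]})

merged = responses.merge(logins, on="agent", how="inner")
merged["pphw"] = merged["responses_sent"] / merged["login_hours"]
print(merged)
```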
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817688.24/warc/CC-MAIN-20240420214757-20240421004757-00500.warc.gz
|
CC-MAIN-2024-18
| 474
| 4
|
https://www.koskila.net/tag/classicsharepoint/
|
code
|
This article describes one way how to fix seemingly non-sensical “Access denied” errors, that you get when running Set-PnPAvailablePageLayouts. Problem While running the PnP cmdlet for enabling or selecting the available publishing page layouts for a web, you run into this, fairly generic error: Access denied. You do not have…Continue reading How to fix “Access denied” errors when running “Set-PnPAvailablePageLayouts”?
If your customers are anything like ours, some of them love the new modern library view, some will adjust and some just hate it and want to get as far away from it as possible. While embracing the new stuff is usually smart, there’s some value to sticking to what…Continue reading How to revert Modern view back to Classic for SharePoint libraries using PowerShell
This issue seems to pop up bafflingly often, so I thought that it was finally time to document it for future generations. Granted, it mainly considers Classic SharePoint (which is very tightly built around Publishing features) – but Classic is going to be part of our lives for quite a…Continue reading Subsite creation in SharePoint fails with error 0x80070005 for any user (even global/farm admins!)
Every now and then, an API or a method call comes along, that you need to be very careful with. “Microsoft.SharePoint.Client.Web.AddSupportedUILanguage()” seems to be one of them. In this post, I’ll try and document my findings and workarounds for said method! Issues and solutions
SharePoint List Alerts. That magnificent functionality in SharePoint gives you a heads-up anytime someone touches your precious documents (so you can go and revert the changes) or changes files in Style Library (so you can go and remove that pink custom CSS they tried to add). Very useful for a…Continue reading Subscribe to changes on a SharePoint list
Imagine this: you’re using a good old SharePoint blog site, and have a bunch of categories in use. That’s nice and easy – SharePoint offers the categorization functionality natively, and it works decently. Problems arise when you have a lot of categories, though – not all of them will be…Continue reading How to show more than 30 categories on Classic SharePoint blog/news sites?
Twitter embed has a stupid, built-in failure condition: if the User Agent contains IE10 or older, the embed script will not load. This causes SharePoint embeds to fail. This post describes how to fix that.
This article describes an interesting feature of the Multilingual User Interface in Classic SharePoint. So, in short, I encountered another, very interesting feature of Classic SharePoint Publishing sites, where multiple display languages were in use. When changing the web part title on a web part on a Classic SharePoint page,…Continue reading How to resolve Webpart title changes not reflecting for some users?
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099942.90/warc/CC-MAIN-20231128183116-20231128213116-00794.warc.gz
|
CC-MAIN-2023-50
| 2,869
| 8
|
https://forum.xda-developers.com/showthread.php?t=1598651
|
code
|
I was bringing in the Samsung Galaxy s II completed front assembly parts for the last while and repairing broken screens.
Recently my supplier ran out of black galaxy S II fronts so I ordered white in place of them.
After installing the motherboard we realized there were no soft buttons, the home button clicks but the two touch buttons don't work.
The black fronts I have work but the white ones don't. Is there a difference between the two phones? Are they not identical, and are the parts interchangeable?
I found a new supplier and ordered some Black assemblies and none of the soft buttons work on those either!! I don't understand!
Is there a compatibility issue here? I know the motherboard is good, is there a driver for the soft buttons or something? Some sort of firmware?
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187827853.86/warc/CC-MAIN-20171024014937-20171024034937-00070.warc.gz
|
CC-MAIN-2017-43
| 779
| 6
|
https://forums.gta5-mods.com/topic/17245/issue-with-building-a-menyoo-building
|
code
|
Issue with building a Menyoo building
Ok, so this will sound kind of pathetic, but.
How the heck do you put a door into a Spooner build?
I'm trying to put a door in the opening shown below. It will either:
- Stay in place, but not work as a door
- The instant I click Dynamic, fall through the map
- Stay in place until I try to walk through it, at which point it will fall through the map
- Or when gravity is off, it won't fall through the map...but pushing it open will push the whole thing off the hinges and it'll float away.
Any help and/or guides would be appreciated.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100745.32/warc/CC-MAIN-20231208112926-20231208142926-00659.warc.gz
|
CC-MAIN-2023-50
| 593
| 10
|
https://knowledge.broadcom.com/external/article/112467/unloadload-of-transport-case-shows-diffe.html
|
code
|
When Unloading and then Loading a transport case with one user, the following issue was observed:
No errors to speak of, but the Unload records do not appear to match the Load records. Via the DB Unload log file:
1 Record, 13 subordinate records.
When Loading, we noticed this in the DB Load log file instead:
1 Record, 25 subordinate records.
The user has approximately 10 user group assignments, but these are not moved in the transport case (only the user is transported).
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224646144.69/warc/CC-MAIN-20230530194919-20230530224919-00175.warc.gz
|
CC-MAIN-2023-23
| 475
| 6
|
https://community.spiceworks.com/topic/2116562-lock-pc-gpo-not-working-on-new-vlan-s
|
code
|
I've seen this happen before where there is weirdness with the GPO applying and it turns out the GPO is actually disabled on the DC.
Check in Group Policy Management on the 'Details' of the actual GPO (screen lock GPO) that the 'GPO Status' is Enabled and not set as "All settings disabled".
I would also do a gpupdate /force to see if any changes are applied (if they are, it will usually require a couple of reboots to pick up the GPO - one to "see" the new GPO and another to install it).
Failing that, I would strip out the GPO cache and try a gpupdate /force to pick up a new set of data (a scripted sketch of these steps follows the list below):
Assuming you're running Win7?
- Browse to C:\ProgramData\Microsoft\Group Policy\History (Windows 7 / Server 2008)
- Delete all of the contents under the History folder.
- Open the command prompt and run GPUpdate /force.
- Reboot the system.
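A rough scripted version of those steps (assuming Python is available and the prompt is elevated; same History path as above, adjust per OS version):

```python
import os, shutil, subprocess

history = r"C:\ProgramData\Microsoft\Group Policy\History"

# Remove everything under the History folder, then force a policy refresh.
for name in os.listdir(history):
    path = os.path.join(history, name)
    shutil.rmtree(path) if os.path.isdir(path) else os.remove(path)

subprocess.run(["gpupdate", "/force"], check=True)
# Reboot afterwards so the freshly downloaded policies get applied.
```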
Someone else has posted here how to do this in Regedit on Win10 but I've not had the issue with GPO in Win10 so couldn't advise on that.
https://www.windowscentral.com/how-reset-local-group-policy-objects-their-default-settings-windows-1...
I'd then add the GPO's back in one by one to see if there is a particular one that's causing the others to fail.
Let us know how you get on.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948871.42/warc/CC-MAIN-20230328201715-20230328231715-00092.warc.gz
|
CC-MAIN-2023-14
| 1,260
| 14
|
https://meta.m.wikimedia.org/wiki/User:OrenBochman/WGT/Communication_and_Coordination
|
code
|
Communication & Coordination
Communication and coordination are deeply intertwined. However, we can use game theory to coax them apart.
As is intimated by the consensus flow chart, most wiki games include intermediate steps involving C2 (coordination and communication). Certain modes of C2 work like the ant-path meta-algorithm, quickly optimising resources and work. One such example is the responsible tagging protocol, which aims to coordinate the editorial problems within the article and so avoid any excess information. Other modes of C2 can significantly increase the workload required to complete some tasks - easily to the point of failure.
Communication & Communication Failures
In wiki game theory, communication is defined as "the process of reducing information asymmetries between agents". In most commonly known game-theoretic situations (strategic-form two-player games like the prisoner's dilemma) there is an assumption that agents' moves occur simultaneously with no communication or possibility of coordination. In the case of a wiki, almost all games are played under imperfect information.
- Information is termed as states of the world
- Each player's belief on the state of the world
- Each player's belief of other players beliefs on the state of the world
- And so on until what is called common knowledge (every one having and knowing about a state of the world).
- In many scenarios agents provide inadequate information resulting a communication failure.
- Some editors will tag content as problematic (in their point of view), but many tags cannot be resolved without the omitted context. However, the original action may simply be disingenuous and not a good-faith attempt to improve the content.
- In the above scenario the event could provoke an edit war aimed at discrediting the main editor resulting in blocks and article protection. These result in significant added cost of editing. (User Space Replication & Committee discussion & Approval by an Admin & Time delay for each edit).
- Communication failure can more commonly arise through the complexity and disjoint location of policy. Even when it does not contradict itself, policy without precedent and consistent, normative enforcement leaves a significant information asymmetry in favor of experienced users. A typical scenario is one where a newcomer is sent a message that is full of unlinked, encoded policy references.
- Mixed signal - by sending an ambivalent or a mixed signal a communicator curtails the avenues of unambiguous response available in the next turn.
- Insult - Wikiquette violations will as a rule result in the users message being ignored. In mid to upper level forums xDR wikiquette offenses will be initially humored, but as more opinions join the opinion will shift from content/procedural dispute to the Wikiquette violations.
- Legal Threats - Making legal threats is frowned upon though in reality legal action is always a possibility. This means that a legal threat is treated as cheap talk on Wikipedia. The anonymous nature of Wikipedia promotes this attitude - on a social network like Facebook a legal threat has an immediate target and the offence is harder to remove.
Evolution Common Knowledge
On the other hand, perfect rationality in game theory also allows us to specify what agents know, what they know that others know, and so forth up to "common knowledge", indicating perfect transparency. It is necessary to align agents' preference vectors. The cost is that of writing the messages required if they all occur in a single coordination space. However, if it is necessary to communicate using many disjoint coordination spaces, then many more messages are required (as described in the banning of a vandal).
Agent 1 bears a cost of creating messages (writing), while agent 2 bears a cost of reading them.
Lists of Agents should delist inactive agents (e.g. admins, check user etc.)
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991801.49/warc/CC-MAIN-20210515100825-20210515130825-00249.warc.gz
|
CC-MAIN-2021-21
| 4,006
| 23
|
https://www.coursicle.com/und/courses/LING/590/
|
code
|
LING at UND
LING 590 - Directed Studies in Linguistics
Supervised individual study. May be repeated if the topic is different. A maximum of 4 credits in Ling 590 and 594 may be applied to the M.A. in linguistics. Add Consent: Department Consent Required.
Open Seat Checker
Get notified when LING 590 has an open seat
Add LING 590 to your schedule
Spring 2020, Fall 2019, Spring 2019, Fall 2018, Spring 2018
Avg. Class Size
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251690095.81/warc/CC-MAIN-20200126165718-20200126195718-00347.warc.gz
|
CC-MAIN-2020-05
| 422
| 8
|
https://connectwww.com/php-5-4-12-released/2979/
|
code
|
PHP 5.4.12 is now available for download. What's new in this release?
Wrong TSRM usage in zend_register_class alias.
get_html_translation_table() output incomplete with HTML_ENTITIES and ISO-8859-1.
isset() inconsistently produces a fatal error on protected property.
Bad warning text from strpos on empty needle.
Use after scope error in zend_compile.
Poor efficiency of strtr using array with keys of very different length.
zend_std_compare_objects crash on recursion.
Magic methods called twice for unset protected properties.
fopen follows redirects for non-3xx statuses.
Support BITMAPV5HEADER in getimagesize.
Performance improvements for various ext/date functions.
Comparison of incomplete DateTime causes SIGSEGV.
php with fpm fails to build on Solaris 10 or 11.
-Werror=format-security error in lsapi code.
sqlite3::bindvalue and related PHP functions aren't using the sqlite3_*_int64 API.
Multi-row BLOB fetches.
Segfault in PDO_OCI on cleanup after running a long testsuite.
PDO::PARAM_INT casts to 32bit int internally even on 64bit builds in pdo_sqlite.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510085.26/warc/CC-MAIN-20230925183615-20230925213615-00663.warc.gz
|
CC-MAIN-2023-40
| 1,062
| 19
|
https://dev.getsol.us/p/JPenuchot/
|
code
|
- User Since
- Nov 20 2016, 4:57 PM (200 w, 6 d)
Sep 13 2017
Alright, you convinced me. Thanks for your time!
You didn't get it: adding a source package would mean building it at installation time, and therefore it would just compile for the machine it's being installed on. Using an unoptimized version of OpenBLAS is completely useless or even counterproductive in a lot of cases.
Sep 12 2017
May 20 2017
Agree with @armandg and @mgrandl, I'd really appreciate a Mopidy package for my main computer. MariaDB is being on its way to the main repo, I can't see why a music player daemon wouldn't just because it's labeled as a server app.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400274441.60/warc/CC-MAIN-20200927085848-20200927115848-00388.warc.gz
|
CC-MAIN-2020-40
| 636
| 8
|
http://definition.org/define/29th/
|
code
|
Coming next after the twenty-eighth in a series. (adjective)
GNU Collaborative International Dictionary of English: licensed under The Code Project Open License (CPOL)
Use "29th" in a sentence
"Around the corner on 29th from the Broadway sweatshirt and perfume wholesalers, lies some great spicy South Asian (Pakistani, Indian, Bangladeshi) food at two neighboring restaurants with similar food and prices."
"Finally, I wanted to let everyone know that this coming Wednesday, April 29th, is my birthday."
"December 29th is a special day here at the Lexicon."
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986684425.36/warc/CC-MAIN-20191018181458-20191018204958-00085.warc.gz
|
CC-MAIN-2019-43
| 557
| 6
|
https://www.nctgroupinc.com/day-trading-entry-signals/
|
code
|
The SharkAlgo Trading System (TS) is a powerful and advanced trading tool designed to help traders of any level unlock the full potential of the markets. The core element of the program is its own algorithm, which tracks all markets in real time and provides easy-to-follow buy and sell signals directly on your charts.
The SharkAlgo system's signals are divided into two types: smart signals and regular signals. Smart signals are designed to follow trends for longer holds, while regular signals catch the small movements that scalpers trade. This lets traders choose the signal type that is most suitable for their trading style and goals.
The SharkAlgo dashboard is a complete solution that offers traders a wealth of information at a glance. The dashboard shows market conditions, volume, current position, and price, allowing traders to quickly and efficiently make informed trading decisions.
In addition to the buy and sell signals, the SharkAlgo system also has take-profit and stop-loss indicators. When a signal is generated, the suggested take-profit and stop-loss levels are also displayed, making it simple for traders to open a trade, set the parameters, and let SharkAlgo take care of the rest. The stop-loss zone can be adjusted using the "Trailing Stop Loss" method as the trade moves into take-profit areas, making sure that traders are able to maximize their profits while reducing risk.
Overall, it's a great system to use. The SharkAlgo Trading System is a powerful and simple-to-use trading tool which can assist investors of every level in discovering the full potential of the markets. With its proprietary algorithm, easy-to-follow signals, and a sophisticated display, SharkAlgo provides traders with the tools required to succeed in their trades.
Crypto trading bots work by automating the process of buying and selling cryptocurrency on various exchanges. They use advanced algorithms to analyze market conditions, track price movements, and generate buy and sell signals.
Bots can be programmed to follow specific trading strategies and to perform trades based on certain conditions, for example, hitting a specific price level or a certain degree of volatility.
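Purely as a generic illustration of what condition-based signal generation looks like (this is a common moving-average crossover sketch, not SharkAlgo's proprietary algorithm):

```python
import pandas as pd

def crossover_signals(prices: pd.Series, fast: int = 10, slow: int = 30) -> pd.Series:
    fast_ma = prices.rolling(fast).mean()
    slow_ma = prices.rolling(slow).mean()
    state = (fast_ma > slow_ma).astype(int) - (fast_ma < slow_ma).astype(int)
    # +1 on the bar where the fast average crosses above the slow one (buy),
    # -1 on a cross below (sell), 0 otherwise (hold).
    return state.diff().fillna(0).clip(-1, 1)

prices = pd.Series([1, 2, 3, 4, 5, 4, 3, 2, 1, 2, 3, 4] * 5, dtype=float)
signals = crossover_signals(prices, fast=3, slow=6)
print(signals[signals != 0])
```

A real bot would layer risk controls (stop-loss, position sizing) and exchange connectivity on top of a signal like this.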
The bots can be equipped to manage multiple trades at once, allowing traders to benefit from numerous opportunities on the market without having to continuously monitor their trades.
One of the benefits of using crypto trading bots is that they are able to operate all hours of the day, allowing traders to benefit from market fluctuations even when they are not able to actively trade.
Another benefit is that they help traders make decisions quickly and accurately, as they are able to process massive quantities of data in real time and then make trades based on that data.
It is important to note that crypto trading bots aren’t completely risk-free and investors should always conduct their due diligence before making use of them. It’s also essential to observe the performance of the bot and make adjustments as needed.
In a nutshell, crypto trading bots are automated tools that use advanced algorithms to analyze market conditions and generate buy and sell signals. They can operate 24/7, helping traders make decisions quickly and accurately, but traders must conduct their due diligence and monitor the bot's effectiveness to ensure they're getting the best outcomes.
It is vital to understand that the information provided in this review is intended to provide information and education only and is not a recommendation to invest. Trading and investing in cryptocurrency are highly speculative and carry high risk. It is important to investigate your options and talk to a financial advisor before making any investment decisions.
It should also be noted that the SharkAlgo Trading System is not a registered securities broker-dealer or investment advisory. The company does not provide investment advice and is not registered as a securities broker-dealer or investment advisor. The company is not a proponent or advocate of any specific securities, coins, or cryptocurrencies.
Additionally, this overview could include affiliate links. This means that we may receive a commission in the event that you decide to buy through the hyperlink. This commission is at no cost to you, and helps us continue to provide useful content.
It is crucial to be aware that the laws, regulations and rules concerning trading in cryptocurrency can differ by jurisdiction. It is the duty of the user to ensure they’re complying with all laws applicable to their respective jurisdiction.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510528.86/warc/CC-MAIN-20230929190403-20230929220403-00609.warc.gz
|
CC-MAIN-2023-40
| 4,809
| 16
|
https://www.aminer.cn/profile/umesh-dayal/53f42e6edabfaee02ac6e6d9
|
code
|
c/o Hitachi America Ltd., Research & Development Division
Umeshwar Dayal is Vice President, Big Data Lab, and Senior Fellow, Information Research, at Hitachi America Ltd. In this role, he is responsible for research and technology innovation in big data and advanced analytics, leading to the creation of novel social innovation solutions in various industries, including energy, natural resources, healthcare, telecommunications, and transportation. Prior to joining Hitachi America, Umesh was an HP Fellow and Director of the Information Analytics Lab at Hewlett-Packard Labs, Palo Alto, California, a senior technical consultant at DEC's Cambridge Research Lab, Chief Scientist at Xerox Advanced Information Technology and Computer Corporation of America, and on the faculty at the University of Texas-Austin. He received his PhD in Applied Mathematics from Harvard University, and his M.E. and B.E. degrees from the Indian Institute of Science. He has over 35 years of research and innovation experience in all aspects of data management and analytics. He is a Fellow of the Association for Computing Machinery (ACM), has received the Edgar F. Codd Award from the ACM Special Interest Group on Data Management (SIGMOD) for fundamental contributions to data management, and a Distinguished Alumnus Award of the Indian Institute of Science. He has published over 240 research papers and holds over 60 patents. He has given over 40 keynote and invited lectures at international conferences and workshops. He is an Editor-in-Chief of the VLDB Journal and a member of the Steering Committees of the IEEE International Conference on Data Engineering and the International Conference on Conceptual Modeling.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00698.warc.gz
|
CC-MAIN-2022-40
| 1,701
| 2
|
https://firearmsofamerica.org/2020/01/01/under-armour-fnp-boots-review/
|
code
|
Under Armour FNP Boots Review (Under Armour Tactical Boots)
Welcome to Firearms of America! Today, I have this Under Armour FNP review for you guys. Here is the Amazon link:
Under Armour Mens FNP Side Zip Tactical Boots – https://amzn.to/39oLv1A
So, before we go on with this Under Armour boots review, let’s look at the specs:
Suede and Textile
Shaft measures approximately 7.5″ from arch
Heavy duty suede & 1000D textile upper for ultimate durability
Medial side zip for ease of entry
Molded malleolus pad for comfort when lying prone
Medial drainage ports for water evacuation
PLUSfoam closed cell PU sockliner for extended comfort
Overall, I do like these UA FNP boots. I don’t think that the current rating on Amazon is fair. I think you can make these feel comfortable and I don’t think they are made cheaply.
The idea of these UA tactical boots is very similar to the idea of Danner Tachyon. Just like Tachyons, these are lightweight tactical boots and feature a water drainage system.
But people like to complain. Here is an example with two other pairs of Under Armour tactical boots with zippers, the Valsetz and the Stellar Tac. I almost did not get them because of negative reviews, but they turned out to be some of the best tactical boots I have reviewed on this channel. Check them out.
So, I recommend that if you do like these boots, get them and try them. On Amazon, you can always return them in case you are not happy with the fit.
Thanks for watching. If this Under Armour FNP tactical boots review was useful, please subscribe and hit the like button!
Apparently now there is a Facebook Page for this channel: https://www.facebook.com/firearmsofamerica/
As it turns out, there is an Instagram page for this channel as well: https://www.instagram.com/firearmsofamericaofficial/
Submit your review
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487643354.47/warc/CC-MAIN-20210618230338-20210619020338-00274.warc.gz
|
CC-MAIN-2021-25
| 1,826
| 19
|
https://forums.legionsoverdrive.com/threads/l-o-for-linux-and-mac.5008/page-2
|
code
|
Hey all, glad to see that some form of Mac support has been figured out. I'm having trouble getting it to work though. I am running OS X 10.6.8 on a 2010 13" Macbook Pro (Intel). I downloaded PlayOnMac and installed Legions. I read somewhere that the Live version is what I should be running, but when I try to run it, it downloads an update (which seems to take at least a half hour each time), says "installing" with no progress bar, and the "Play" button goes from grayed-out to clickable. I've waited between 5-10 minutes to see if the "installing" message would go away; it hasn't. When I click "Play" it tells me "file not found." I seem to be able to run the public test just fine, but there are no games to join and the client crashes after a few minutes. Any idea what would cause this?
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488560777.97/warc/CC-MAIN-20210624233218-20210625023218-00342.warc.gz
|
CC-MAIN-2021-25
| 801
| 1
|
https://gowithvalor.com/best-hosting-website-builder/
|
code
|
Best Hosting Website Builder
Finding a quality, affordable web hosting provider isn't easy. Every website has different needs from a host, and you have to compare all the features of a hosting company, all while looking for the best deal possible.
This can be a lot to sort through, especially if this is your first time buying hosting or building a website.
Most hosts offer very cheap introductory pricing, only to raise those rates two or three times higher once your initial contract is up. Some hosts offer free bonuses when you sign up, such as a free domain name or a free SSL certificate.
Other hosts are able to offer better performance and higher levels of security. Best Hosting Website Builder
Below we dive deep into the best cheap web hosting plans out there. You'll learn which core hosting features matter in a host and how to assess your own hosting needs, so that you can choose from among the best affordable hosting providers listed below.
Disclosure: When you buy a hosting package through links on this page, we earn a commission. This helps us keep this site running. There is no additional cost to you at all when using our links. The list below covers the best affordable web hosting plans that I have personally used and tested.
What We Consider To Be Cheap Web Hosting
When we describe a web hosting plan as "cheap" or "budget", we mean hosting that falls into the price bracket between $0.80 and $4 per month. While researching cheap hosting providers for this guide, we looked at over 100 different hosts that fell into that price range. We then evaluated the quality of their cheapest hosting package, value for money, and customer service.
In this article, I'll be looking at this first-rate website hosting company and including as much relevant information as possible.
I'll go over the features, the pricing options, and anything else I can think of that might be of benefit if you're deciding whether to sign up with Bluehost and get your websites up and running.
So without further ado, let's check it out.
Bluehost is one of the biggest web hosting companies in the world, backed both by heavy marketing from the company itself and by the affiliate marketers who promote it.
It really is a huge company that has been around for a long time, has a strong reputation, and is definitely among the top choices when it comes to web hosting (certainly within the top three, at least in my book).
But what is it exactly, and should you buy its services?
Today, I will answer everything you need to know, assuming you are a blogger or a business owner who is looking for web hosting and doesn't know where to get started, since it's a great solution for that audience in general.
Let's imagine you want to host your websites and make them visible. Okay?
You already have your domain name (which is your website address or URL) and now you want to "turn the lights on". Best Hosting Website Builder
You need some hosting...
To accomplish all of this, and to make your website visible, you need what is called a "server". A server is a black box, or machine, that stores all your website data (files such as images, text, videos, links, plugins, and other information).
Now, this server needs to be on constantly and it needs to be connected to the internet 100% of the time (I'll mention something called "downtime" later).
It also requires (without getting too fancy or into detail) a file transfer protocol, commonly known as FTP, so it can show web browsers your website in its intended form.
All these things are either expensive or require a high level of technical skill (or both) to set up and maintain. You could absolutely go out there, learn these things yourself, and set them up... but instead of buying and maintaining your own, why not just "rent hosting" instead?
This is where Bluehost comes in. You rent their servers (called shared hosting) and you launch a website using those servers.
Because Bluehost stores all your files, the company also lets you install your content management system (CMS, for short), such as WordPress, for you. WordPress is an extremely popular CMS... so it just makes sense to have that option available (almost every hosting company now offers it as well).
In other words, you no longer need to set up a server and then separately integrate the software where you create your content. It is already rolled into one package.
Well... imagine if your server were in your house. If anything were to happen to it at all, all your files would be gone. If something went wrong with its internal processes, you would need a specialist to repair it. If something overheated, broke down, or got corrupted... that's no good!
Bluehost takes all these headaches away and takes care of everything technical: pay your server "rent", and they will take care of everything. And once you buy the service, you can start focusing on adding content to your site, or you can put your effort into your marketing campaigns.
What Services Do You Get From Bluehost?
Bluehost offers a myriad of different services, but the primary one is hosting, of course.
The hosting itself comes in different types, by the way. You can rent a shared server, have a dedicated server, or even a virtual private server.
For the purpose of this Bluehost review, we will focus on the hosting services and other services that a blogger or an online business owner would need, rather than go too deep down the rabbit hole and discuss the other services that are aimed at more experienced users.
- WordPress, WordPress PRO, and e-Commerce— these hosting services are the packages that allow you to host a website using WordPress and WooCommerce (the latter of which lets you do e-commerce). After purchasing any of these packages, you can start building your website with WordPress as your CMS.
- Domain Marketplace— you can also buy your domain name from Bluehost instead of other domain registrars. Doing so makes it easier to point your domain to your host's name servers, since you're using the same marketplace.
- Email— once you have purchased your domain name, it makes sense to also get an email address tied to it. As a blogger or online business owner, you should almost never use a free email service like Yahoo! or Gmail. An email address like that makes you look unprofessional. Thankfully, Bluehost gives you one for free with your domain name.
Bluehost also offers dedicated servers.
And you may be asking... "What is a dedicated server anyway?"
Well, the thing is, the standard web hosting packages from Bluehost can only handle so much traffic to your website, after which you'll need to upgrade your hosting. The reason is that the standard servers are shared.
What this means is that one server can be serving two or more websites at the same time, one of which can be yours.
What does this mean for you?
It means that the single server's resources are shared, and it is doing multiple jobs at any given time. Once your website starts to hit 100,000 visits per month, you are going to need a dedicated server, which you can also get from Bluehost for a minimum of $79.99 per month.
This is not something you need to worry about when you're starting out, but you should definitely keep it in mind.
Bluehost Pricing: How Much Does It Cost?
In this Bluehost review, I'll focus mostly on the Bluehost WordPress hosting packages, since they are the most popular and very likely what you're looking for and what will suit you best (unless you're a big brand, corporation, or site).
The three available plans are as follows:
- Basic Plan – $2.95 per month / $7.99 regular price
- Plus Plan – $5.45 per month / $10.99 regular price
- Choice Plus Plan – $5.45 per month / $14.99 regular price
The first price you see is the price you pay upon signup, and the second price is what the cost is after the first year of being with the company.
So basically, Bluehost is going to charge you on an annual basis. You can also choose the number of years you want to host your website with them. Best Hosting Website Builder
If you choose the Basic plan, you will pay $2.95 x 12 = $35.40 starting today, and by the time you enter your 13th month, you will then pay $7.99 per month, which is also billed annually. If that makes any sense.
If you are serious about your website, you should 100% get the three-year option. This means that for the Basic plan, you will pay $2.95 x 36 months = $106.20.
By the time you hit your fourth year, that is the only time you will pay $7.99 per month. If you think about it, this approach will save you roughly $120 over the course of three years. It's not much, but it's still something.
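As a quick back-of-the-envelope check of that saving, here is the arithmetic behind the figure quoted above; the prices come from the plan list, and the assumption is that the renewal rate kicks in from month 13 when paying year by year.

# Rough arithmetic behind the roughly $120 saving quoted above (assumes the
# renewal rate applies from month 13 when paying one year at a time).
INTRO_RATE, RENEWAL_RATE = 2.95, 7.99              # USD per month, Basic plan

pay_yearly = INTRO_RATE * 12 + RENEWAL_RATE * 24   # 1 intro year + 2 renewal years
pay_three_years_upfront = INTRO_RATE * 36          # intro rate locked for 36 months

print(pay_yearly)                                  # about 227.16
print(pay_three_years_upfront)                     # about 106.20
print(pay_yearly - pay_three_years_upfront)        # about 121 saved over three years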
If you want to host more than one website (which I highly recommend, and if you're serious, you'll probably be adding more at some point), you'll want to take advantage of the Choice Plus plan. It lets you host unlimited websites.
What Does Each Plan Offer?
So, in the case of the WordPress hosting plans (which are similar to the shared hosting plans, but are more geared towards WordPress, which is what we'll be focusing on), the features are as follows:
For the Basic plan, you get:
- One website only
- Secured website via SSL certificate
- Maximum of 50GB of storage
- Free domain name for a year
- $200 advertising credit
Keep in mind that domain names are purchased separately from the hosting. You can get a free domain name with Bluehost here.
For both the Bluehost Plus and Choice Plus hosting plans, you get the following:
- Unlimited number of websites
- Free SSL certificate. Best Hosting Website Builder
- No storage or bandwidth limit
- Free domain for one year
- $200 marketing credit
- 1 Office 365 mailbox that is free for 30 days
The Choice Plus plan has the added benefit of the CodeGuard Basic backup option, a backup system where your data is saved and replicated. If any kind of crash happens and your website data disappears, you can restore it to its original form with this feature.
Note that although both plans cost the same up front, the Choice Plus plan then defaults to $14.99 per month, regular price, after the set number of years you've chosen.
What Are The Benefits Of Using Bluehost?
So, why choose Bluehost over other web hosting services? There are hundreds of web hosts, many of which are resellers, but Bluehost is one of a select few that have stood the test of time, and it's probably the best known out there (and for good reason).
Here are the three main benefits of choosing Bluehost as your web hosting provider:
- Server uptime— your website will not be visible if your host is down; Bluehost has more than 99% uptime. This is extremely important when it comes to Google SEO and rankings. The higher the better.
- Bluehost speed— how fast your server responds determines how quickly your website shows in a browser; Bluehost is lightning fast, which means you will reduce your bounce rate. Albeit not the best when it comes to loading speed, it's still very important to have a fast server, to improve user experience and your ranking.
- Unlimited storage— if you get the Plus plan, you need not worry about how many files you store, such as videos; your storage capacity is unlimited. This is really important, because you'll probably run into storage problems later down the track, and you don't want this to ever be a hassle.
Lastly, customer support is 24/7, which means no matter where you are in the world, you can contact the support team to fix your website issues. Pretty standard nowadays, but we take it for granted... it's also very important. Best Hosting Website Builder
Also, if you received a free domain name with them, there will be a $15.99 fee deducted from the amount you originally paid when you ask for a refund (I imagine this is because it kind of takes the domain name "off the market"; I'm not sure about this, but there probably is a hard cost for registering it).
Lastly, any requests for a refund after 30 days... are void (although in all honesty... they probably have to be strict here).
So as you can see, this isn't necessarily a "no questions asked" policy like some of the other hosting options out there, so make sure you're okay with the refund policy before proceeding with the hosting.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488517820.68/warc/CC-MAIN-20210622124548-20210622154548-00620.warc.gz
|
CC-MAIN-2021-25
| 13,669
| 82
|
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org/thread/UHSVEXQ7OLN4FO5Q4JE5V7UYRBQPNJ62/
|
code
|
I have multiple domains in my scenario, and for that I have configured virtual
I require an open-source web console for user creation. 389-console is not
The requirement is that if I create any user for one domain, the same user should be created for the other domain as soon as I press the submit button.
Is there any solution?
*Keen & Able Comp. Pvt. ltd.*
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949573.84/warc/CC-MAIN-20230331051439-20230331081439-00729.warc.gz
|
CC-MAIN-2023-14
| 349
| 7
|
https://docs.lib.purdue.edu/lib_research/96/
|
code
|
This tutorial demonstrates how to perform a batch upload into a Digital Commons institutional repository by exporting the citations and attached files from an EndNote library to XML, using an XSLT stylesheet to convert it into the Digital Commons ingest package format, and then finally performing the batch upload. The tutorial can be downloaded as a Microsoft Windows executable (dcbatchtut.exe) or a Macromedia Flash file (dcbatchtut.swf). The stylesheet from the demonstration is also included (tranform.xsl) as well as the input and output data files (demofiles.zip).
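The tutorial ships the actual stylesheet (transform.xsl) and sample files; purely as a rough illustration of the conversion step, a script along these lines could apply such a stylesheet to an EndNote XML export. The file names and the use of the third-party lxml library are assumptions, not part of the tutorial's own materials.

# Illustrative only: applies an XSLT stylesheet to an EndNote XML export.
# File names are assumptions; the real stylesheet is the transform.xsl
# distributed with this tutorial. Requires the lxml package.
from lxml import etree

def transform(endnote_xml="endnote_export.xml",
              stylesheet="transform.xsl",
              output="digital_commons_ingest.xml"):
    source = etree.parse(endnote_xml)              # EndNote's XML export
    xslt = etree.XSLT(etree.parse(stylesheet))     # compile the stylesheet
    result = xslt(source)                          # run the transformation
    with open(output, "wb") as fh:
        fh.write(etree.tostring(result, pretty_print=True,
                                xml_declaration=True, encoding="UTF-8"))

if __name__ == "__main__":
    transform()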
Digital Commons, batch deposits, digital repositories, institutional repositories, repository management, EndNote
Date of this Version
A Shockwave Flash version of the presentation
transform.xsl (4 kB)
The XSLT stylesheet used in the presentation
demofiles.zip (8 kB)
The sample input and output files used in the presentation
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945333.53/warc/CC-MAIN-20230325130029-20230325160029-00588.warc.gz
|
CC-MAIN-2023-14
| 899
| 8
|
http://www.trulia.com/voices/Home_Buying/Can_you_readjust_the_real_estate_taxes_after_you_b-387670?answerId=1431312
|
code
|
In San Diego,. we complete a form and then have an agent pull all the recent market comparables. This then goes to the asessor's office for review.
The process of value reductions via Prop 8 is covered in this blog post:
"Estimating Property Taxes in CA"
Note that each year the Assessor compares the factored Prop 13 base year value to the current year Jan 1 market value after applying the California State Board of Equalization CA Consumer Price Index increase and uses the LESSER of the two figures."
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701166739.77/warc/CC-MAIN-20160205193926-00253-ip-10-236-182-209.ec2.internal.warc.gz
|
CC-MAIN-2016-07
| 508
| 4
|
https://beebole.com/help/costs-billing-budgets/
|
code
|
Costs, Billing and Budgets
How to Configure Billing
Billing is what you invoice your clients for the work done. In Beebole, it can be configured for an external client or for another department within your organization. Using the Billing Methods module, you can set fixed fees or a billing rate per entity (person, project, client, task, etc.).
To set a billing rate or a fixed fee, go to the client, project, or person page where you want to configure billing. If it does not already appear, add the Billing Methods module to the screen by clicking "Customize" in the top right corner and dragging and dropping the module anywhere on the screen.
Depending on what type of entity page you are on (client, project, or person), a dropdown menu will appear with the following options:
- A rate per person: note that you can only set a billing rate per person on a project if they have been added to the project as an exclusive member.
- A rate per subproject: if you want to set a different billing rate for each subproject within a project. Note that this option is only available when using the Billing Methods module on a project page. Alternatively, you can use the module directly on a subproject page.
- A rate per task: if you want to set a different billing rate per task. Remember that tasks are defined by administrators via the Tasks and Specific Tasks modules.
- A single rate per hour: if everyone has the same billing rate for that project, customer, etc.
- A fixed fee: fixed fees can be set on projects, subprojects, or customers. The billing rate will be automatically recalculated each time hours are added to the project. For example, if a project has a fixed fee of $5,000 USD, the first 10 hours added to the project will appear in your reports as having a billing rate of $500 USD per hour. If 20 hours are added to the project, the billing rate per hour will automatically change to $250 USD (see the sketch after this list).
- Non-billable: note that selecting "non-billable" is not the same as just leaving the billing methods blank. When the Billing Methods module on an entity is left blank, the billing configuration could be taken from a different, related entity. For example, hours worked on a project with no billing method defined can still be billed using the rate defined at the company level. On the other hand, if you select "Non-billable" for the same project, those hours will not be billed regardless of the configuration at the company level.
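As a quick illustration of the fixed-fee recalculation described above (just the arithmetic, not Beebole's actual implementation; the function name is made up):

# Effective hourly billing rate for a fixed-fee project: the fee stays
# constant, so the per-hour rate falls as more hours are logged.
def effective_hourly_rate(fixed_fee, hours_logged):
    if hours_logged <= 0:
        return 0.0
    return fixed_fee / hours_logged

print(effective_hourly_rate(5000, 10))   # 500.0 USD per hour
print(effective_hourly_rate(5000, 20))   # 250.0 USD per hour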
Depending on which of the above options you select from the dropdown menu, additional fields will appear where you can set the exact rates/fees and currency.
When you apply a new billing method or update an existing one, by default, the change will only be applied to hours that are recorded after the change. However, the platform will give you the option to apply the change to existing time entries. You can apply the new billing rate to a date range of your choice or to all hours tracked on the given entity.
Alternatively, you can also apply billing rates to existing time entries via the Approval module. Just select the hours you want to apply the new rates to and click the "Update billing" button.
As mentioned above, since billing methods can be set at multiple levels, it's important to bear in mind the hierarchy of billing configurations. If there are multiple possible billing rates for tracked hours, one will be applied according to the following order of priority:
- The project's company
- The company of the employee filling out the timesheet
- The main company
How to Configure Costs
You can set an hourly cost for any person in your organization using the Standard Cost module. Bear in mind that Beebole is not an expense tracking tool; the standard costs that you add to the platform should be previously calculated figures based on things like salaries, benefits, insurance, additional expenses, etc.
How to Calculate the True Cost of an Employee
Explore the main variables to help calculate the actual annual cost of your workforce
To set a standard cost, go to a person's page and add the Standard Cost module by clicking "Customize" in the top right corner and dragging and dropping it anywhere on the screen. Choose how the cost will be distributed (a unique cost per hour or a cost per task) and fill in the amount and currency.
If your employees' costs vary from one customer to another, you can define the unique costs directly on the customer or project page. Just add the Standard Cost module to the customer or project screen, choose how the cost will be distributed (a unique cost per hour, cost per person, or cost per task), and add the amount and currency.
Just like with billing rates, standard costs can be configured at multiple levels and are therefore applied according to the following hierarchy:
The client or company
Also like billing rates, selecting "No cost" is not the same as simply leaving costs blank. When the Standard Cost module is left blank, costs may be taken from a different related entity according to the above hierarchy.
When you apply a new standard cost or update an existing one, by default, the change is only applied to hours tracked after the change. However, in the Standard Cost module you can select a time period to apply the change to existing time entries.
You can also apply new costs to existing time entries using the Approval module. Just select the hours you want to apply the new costs to and click the "Update costs" button.
How to Configure Budgets
Budgets can be set for projects and/or subprojects and may be either monetary or a number of hours. To configure a budget, use the Budget module on the chosen project's screen.
"No budget" will be selected in the dropdown menu by default. Choose "Budget amount" or "Budget in hours". Additional fields will appear where you can enter the initial budget and a purchase order, if applicable.
To add budget extensions click the "Add another budget" button on the right hand side of the module. An additional line will appear where you can add the amount and purchase order for the extension. The total budget will appear at the top of the module.
To delete a budget extension click the red "Delete" button on the right hand side. You can reset the module entirely and delete all data by clicking "Reset" at the top of the module.
If a project has subprojects, the option to add individual budgets for each subproject will also appear in the Budget module. You can define budgets for subprojects as well as a global project budget at the same time, but note that the total budget shown at the top of the module will be the sum of both the project and subproject budgets.
In other words, if you set a budget of $500 on subprojects a, b, and c and then also fill in a global project budget of $1500, the total budget will be $3000. If you want the total budget to be the sum of the subproject budgets, you should leave the project budget blank.
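The same total-budget rule can be expressed as simple arithmetic; the figures below are the ones from the example above, and the variable names are purely illustrative.

# Total project budget = project-level budget + sum of subproject budgets,
# using the example figures from the paragraph above.
subproject_budgets = {"a": 500, "b": 500, "c": 500}
project_budget = 1500

total = project_budget + sum(subproject_budgets.values())
print(total)   # 3000; leave project_budget at 0 if you want only the subproject sum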
If using a monetary budget on a project, the currency is set by default to the currency of the company owning said project. You can, however, change the budget currency in the Budget module.
If you are using multiple currencies, the exchange rate used by Beebole is updated daily. All data like billing, costs, etc. are stored in the currencies in which they were defined. In other words, if you define costs in euros and billing in US dollars, the information will be stored that way. However, when viewing reports, charts, or the approval module, amounts will be converted and shown in the currency of the user viewing the module.
To track the budgets you have already configured, use the Budget Status module. The module can be used:
- On your home screen, showing all active projects that have a budget set.
- On a company screen, showing all projects for that company that have a budget set.
- On a project screen, showing the budget for that single project.
The Budget Status module will show how much budget has been spent and how much remains in hours or currency, depending on how you have configured the budget. It also will show billing and/or costs, depending on what you have configured in the Billing Methods and Standard cost modules.
Budget vs. Actuals: An Excel Template
Learn how to set up a dynamic model in Excel using Power Query and Power Pivot
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510427.16/warc/CC-MAIN-20230928162907-20230928192907-00676.warc.gz
|
CC-MAIN-2023-40
| 8,262
| 45
|
https://forums.comodo.com/t/virus-seems-to-have-attacked-my-anti-virus-firewall/271843
|
code
|
Yes, I know it's a weird thing to say, but I got a Windows error and I had to restart my computer. When I did, my antivirus (Avira) kept trying to update itself. I kept trying to turn it off and it wouldn't stop. I also noticed Comodo didn't show in my systray, but it was shown as running in the task manager.
I couldn't seem to stop it, so I ran my computer in safe mode and was able to uninstall Avira, but I couldn't uninstall Comodo. I installed a new AV (Avast) and was able to run a scan. I also ran Malwarebytes Anti-Malware, found some things, and was able to get it back to normal. However, I still can't get Comodo to show in the systray.
I’m pretty sure its compromised by the attack, since i cant seem to uninstall it. when i try to uninstall it with: (windows/add/remove, or CCleaner/tools/remove or Revo Uninstaller or CIS Clean Up Tool) I get the following error message:
“The Uninstaller has encountered an unexpected error installing this package. This may indicate an error with this package. The error code is 2318.”
I'm not sure what to do. I still want Comodo on my system, but I'm not sure how to get it back to what it was. Do I just delete the folder and reinstall afterwards, or do I perform some other procedure?
I'm on Windows XP. Thanks in advance.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224653183.5/warc/CC-MAIN-20230606214755-20230607004755-00291.warc.gz
|
CC-MAIN-2023-23
| 1,273
| 6
|
http://funin.space/compendium/monster/Emerald-Claw-Marshal.html
|
code
|
Emerald Claw Marshal
Medium natural humanoid, human
Level 5 Skirmisher (Leader) XP 200
Initiative +4 Senses Perception +3
HP 62; Bloodied 31
AC 21; Fortitude 19, Reflex 17, Will 17
Heavy Flail (standard, at-will) Weapon
+12 vs AC; 2d6+4 damage, and the target is marked until the end of the Emerald Claw marshal’s next turn.
Crushing Strike (standard; requires a heavy flail, at-will) Weapon
Targets a creature marked by the Emerald Claw marshal; +12 vs AC; 2d6+4 damage, the marshal slides the target 2 squares, and the target is slowed (save ends).
Claw Maneuver (minor, recharges when first bloodied)
Close burst 5; each ally within the burst shifts 1 square and gains a +2 bonus to the damage roll of its next attack made before the end of its next turn.
Merciless Commander (minor 1/round; at-will)
Targets an ally within 10 squares; the target immediately provokes an opportunity attack from an adjacent enemy. If that opportunity attack hits, the Emerald Claw marshal or an ally makes a melee basic attack against the attacker.
Fanatic (requires heavy flail, when reduced to 0 hit points)
The Emerald Claw marshal makes a melee basic attack against an adjacent enemy.
Alignment Evil Languages Common
Skills Intimidate +9, Streetwise +9
Str 18 (+6) Dex 10 (+2) Wis 12 (+3)
Con 14 (+4) Int 15 (+4) Cha 14 (+4)
Equipment: surcoat, heavy flail, plate armor.
Published in Eberron Campaign Setting, page(s) 88.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655902496.52/warc/CC-MAIN-20200710015901-20200710045901-00421.warc.gz
|
CC-MAIN-2020-29
| 1,422
| 22
|
http://cavesofqud.wikia.com/wiki/Biologic_Indexer_Module
|
code
|
Lets you see the HP, AV and DV of organic beings.
A minuscule graphene array that processes optical data against a repository of organic lifeforms.
You gain access to the precise hit point, armor, and dodge values of biological creatures.
License points: 1
Does not need an energy cell to be powered. Works like Bio-Scanning Bracelet.
Only True Kin may have this.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818693459.95/warc/CC-MAIN-20170925220350-20170926000350-00011.warc.gz
|
CC-MAIN-2017-39
| 360
| 6
|
https://www.soldak.com/forums/showthread.php?s=a42147a73a6cc515e2076453423ebbe2&p=52699&mode=threaded
|
code
|
Trouble with momentum bar.
I am attempting to alter the balance of the rogue so that they can regenerate their momentum bar. My issue is that after I set,
the momentum bar keeps regenerating past 100. It jumps back down to 100 whenever I use any skill; however, the bar shooting out to the right side of the screen gets a little bothersome. MaxPower is set to 100, but it doesn't seem to be affecting it. I understand the game wasn't intended to be set up this way, but I am curious if there is any way to fix this?
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948932.75/warc/CC-MAIN-20230329023546-20230329053546-00702.warc.gz
|
CC-MAIN-2023-14
| 524
| 3
|
https://catalog.windward.hawaii.edu/information-and-computer-sciences/ics-207
|
code
|
ICS 207 : Building Web Applications
Web Applications introduces programming for the web. Topics include: problem solving; web interactivity for websites; building applications with web authoring languages for markup, styling and scripting; presenting applications for mobile devices.
Students must have HTML and CSS experience.
- Using events to trigger an action
- Drawing on the web canvas
- Using video and audio files on a web page
- Going beyond standard fonts
- Detecting the screen size of a device and optimize the application for the different sizes
- Using local storage to remember data across web sessions.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103922377.50/warc/CC-MAIN-20220701064920-20220701094920-00641.warc.gz
|
CC-MAIN-2022-27
| 618
| 9
|
https://www.synthtopia.com/content/2022/09/18/intech-studio-launches-knot-pocket-sized-usb-midi-host/
|
code
|
Intech Studio has introduced Knot – a portable, multifunctional USB MIDI host box that ties together analog gear, like synthesizers, groove boxes or guitar pedals with MIDI controllers.
Users of Knot can connect any TRS MIDI instrument with another USB device by plugging them both into this tiny host box. One of its virtues is the inclusion of an A/B switch that allows convenient switching between different TRS MIDI wiring modes and makes Knot compatible with all manufacturers' designs.
And, if you’re into programming, the firmware can be modified to your needs, because it is open-source. All basic sources and assets of Knot are available on Github, where future extensions will be added as well.
“By adding this host box to our variety of products,” the developers note, “initially we strived to clear the way for all of those who were struggling to connect their synths with Grid, our USB single-cable signature modular controller. However, we realized the hardware can have a way more complex functionality, than just a simple host device by allowing the software to become open-source and publicly available for individual adjusting.
Just like with Grid, we aim to take on our philosophy of creating a tech tool that can be a source of inspiration for a community of innovative creatives. We don’t believe in gatekeeping, instead we choose to evolve together on this journey.”
Pricing and Availability
Knot is available now to pre-order for $89 USD (normally $119)
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511053.67/warc/CC-MAIN-20231003024646-20231003054646-00090.warc.gz
|
CC-MAIN-2023-40
| 1,488
| 7
|
https://howto-architect.com/2019/08/31/we-do-not-need-soa-3-0-we-never-needed-soa-2-0-as-well/
|
code
|
This BLOG is a reaction to the post by Scott Andersen, IASA Fellow, published on 29 Oct., 2012, at IASA’s Thoughts on Enterprise and Technology Architecture.
In the BLOG, Scott points to new fantastic gadgets – Meta-Watch and BlackBerry connected watch – that are on their way to the consumers and expected by the end of this year. These watches are linked with iPhone, Android or BlackBerry and can display the telephone’s screen. Adorably! Scott says that the screen will be (may be?) represented as a service.
It is good news, but what is in this service that a regular Service-Oriented Architecture, a SOA, cannot do, such that something named "SOA 3.0" is needed? Actually, I never understood what SOA 2.0 could do that plain SOA could not (besides raising a new wave of buzz and collecting some money from naive customers). Scott warns that his post is only his "humble opinion and in all cases should be taken as such". Well, if this is the opinion of such a professional as Scott Andersen, I am interested in understanding whether it brings something that I missed and could learn from.
So, Scott explains regarding SOA 3.0: “applications become aware not only of the device they are on and its capacity (the consumption concept of my earlier SOA 2.0 blogs) but will also be aware of screens that are available. Imagine for a second the new world of advertising when your company can rent the displays in time square for a minute any day any time. The concept of the screen as a service. It truly begins the abstraction of the computer. Everything becomes a screen.” Nice (‘screen’) but not without a confusion.
The first question I address back to Scott’s “earlier SOA 2.0 blogs” – if mentioned application is a service’s body and if the concept is an awareness of computational resource consumption of the device (including computer) where the service is deployed, how this relates to an architecture? Algorithms of load-balancing consumer requests for distributed applications/services depending on local load of machines are known for the last decade and this does not add any new aspect or principle to the orientation on services. To me, dear Scott had mixed the architecture with its implementation (which is a common problem in IT) that time.
Assume I understand what does mean ‘a screen is a service’. Business functionality of such service is probably, an ability to display an arbitrary digital content. What is the problem here? Is it that the large screen on Time Square (a Scott’s example) still cannot accept and display a content on demand? Then why is this a service or SOA problem? Is it the format of showing images, or their order or the colour scheme? All these are just characteristics of the service, as usual. If we have a telephone screen materialised as a composite image at any moment of the time, it means that this image as a whole or as a set of fragments (remember old portletts) can be offered and delivered to consumers via pre-defined interfaces. Probably, I miss something in Scott’s explanations but I do not see any excitement in it from the SOA perspective that would deserve a special “SOA 3.0” name.
I became even more confused when I read that SOA 3.0 will "actually lead us to a true reality the device as a service". The device as a service has existed and been widely used since before printers became network devices in Novell's NetWare 3.x.
At the same time, I perfectly accept the idea – ‘The device as an integration unit’. Certainly, if a message-routing device known as ESB can do an integration work, why an iron on a chip cannot? What I would be very cautious with is saying that “the screen … is simply another point of integration”. What is meant here – a screen as an image, a screen as a portal, a screen as physical unit? An image cannot be a point of integration but rather a result of such an integration. A portal is known for years as an integration point (see my article Resolving RIA-SOA Conflict). A physical unit is a point of integration known to Ancient Romans who used such units for connecting water tubes.
Or, it may be the chain of hurricanes that Scott is struggling with. I am sitting in calm London (after the Jubilee and the Olympic Games) and do not feel the beauty of the mystic SOA 3.0. Poor me.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224656869.87/warc/CC-MAIN-20230609233952-20230610023952-00637.warc.gz
|
CC-MAIN-2023-23
| 4,308
| 9
|
https://forums.overclockers.com.au/threads/how-to-disable-thumbs-db-in-winxp.85856/
|
code
|
Discussion in 'Windows Operating Systems' started by vax, Aug 20, 2002.
Is that possible to disable thumbs.db being created in WindowsXP ?
from following link
Turn off Thumbs.db
Written By: Jake (Phat) | Authors Website: Visit | Views: 15814 | Print Tweak | 11/14/2001
Thumbs.db is a file created in a folder containing movies or pictures so that you can view a piece of their content without actually opening them (you can't see the Thumbs.db file itself unless you have the option to view system files turned on). Thumbs.db is there so that you don't need to reload a thumbnail every time you browse that folder. The sad fact is that Thumbs.db takes up about 2 KB per file, and if you edit a lot of stuff it's annoying to keep seeing them popping up all over your computer. You can remove Thumbs.db quite easily by following these steps:
1. Go to Run in the Start menu
2. Type gpedit.msc
3. Click OK and the Group Policy editor will open
4. Go to User Configuration/Administrative Templates/Windows Components/Windows Explorer
5. Scroll down to the bottom of the long list that now shows up in the menu on the right. Double-click on "Turn off caching of thumbnail pictures".
6. Click Enabled, then Apply, OK. And now you no longer have this annoying problem.
You can also turn off thumbnail caching by going to Control Panel > Folder Options > View tab and ticking "Do not cache thumbnails".
You can also change thumbnail size and quality (and hence they take up much less space) by editing the following Registry entries:
ThumbnailSize=32 Dword value in Decimal view between 32 and 256.
ThumbnailQuality=50 Dword value in Decimal view between 50 and 100.
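If you would rather script those two registry tweaks than edit them by hand, a sketch along these lines should work. Note that the key path used below (HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer) is my assumption for Windows XP and worth double-checking on your machine before running anything.

# Hedged sketch: set the ThumbnailSize / ThumbnailQuality DWORDs from Python.
# The key path below is an assumption for Windows XP - verify before running.
import winreg

KEY_PATH = r"Software\Microsoft\Windows\CurrentVersion\Explorer"

def set_thumbnail_tweaks(size=96, quality=50):
    key = winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY_PATH)
    try:
        winreg.SetValueEx(key, "ThumbnailSize", 0, winreg.REG_DWORD, size)        # 32-256
        winreg.SetValueEx(key, "ThumbnailQuality", 0, winreg.REG_DWORD, quality)  # 50-100
    finally:
        winreg.CloseKey(key)

if __name__ == "__main__":
    set_thumbnail_tweaks()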
Thanks for the info fredstar and PersianImmortal.
I usually do a Thumbs.db search every couple of days, but it leaves my HDD with a lot of holes. Thus, longer defrag times for me.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487598213.5/warc/CC-MAIN-20210613012009-20210613042009-00346.warc.gz
|
CC-MAIN-2021-25
| 1,793
| 13
|
https://www.ccm3s.com/product/monitoring/?lang=en
|
code
|
- The monitoring system can take in information from our production equipment, such as molding machines and screening machines, or collect whatever data you need through the data collection box and store it in the database. It can display real-time information according to your needs, and various reports (SPC, error and abnormality analysis, etc.) can be produced according to the customer's needs.
Real-time monitoring of sorting machines, molding machines, and process information monitoring boxes: the monitoring system reads the database every 5-10 seconds. Taking the sifter as an example, it can present its real-time number of good products, number of defective products, yield rate, speed, specification, work order, and so on. (Figure 1)
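As a rough sketch of how such a polling display can work, something like the loop below would re-read the latest machine counters every few seconds and recompute the yield rate. The table and column names, and the use of SQLite, are assumptions for illustration only, not the vendor's actual schema.

# Illustrative polling loop: re-read machine counters from a database every
# 5-10 seconds and recompute the yield rate. Table and column names are made up.
import sqlite3
import time

def poll(db_path="production.db", interval_seconds=5):
    while True:
        conn = sqlite3.connect(db_path)
        row = conn.execute(
            "SELECT good_count, defect_count, speed FROM sifter_status "
            "ORDER BY recorded_at DESC LIMIT 1").fetchone()
        conn.close()
        if row:
            good, defect, speed = row
            total = good + defect
            yield_rate = good / total if total else 0.0
            print(f"good={good} defect={defect} yield={yield_rate:.2%} speed={speed}")
        time.sleep(interval_seconds)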
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100327.70/warc/CC-MAIN-20231202042052-20231202072052-00750.warc.gz
|
CC-MAIN-2023-50
| 819
| 3
|
https://frontend.spiceworks.com/topic/2317913-ad-integration-not-working
|
code
|
I am sure this is answered somewhere but I am having a heck of a time finding it.
So first of all, I was installing the AD integration on my server and for the life of me could not get it to go until I ran the installer through an elevated command line... So the installation seems to have worked. However, when I test from an incognito browser I get "my.internal.site.com/SWAuth" and "This site can't be reached"?
Any suggestions would be greatly appreciated...
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046155529.97/warc/CC-MAIN-20210805095314-20210805125314-00649.warc.gz
|
CC-MAIN-2021-31
| 460
| 3
|
http://yourlinuxguy.com/?p=561
|
code
|
If you have an iFolder 3.8 (and some previous versions) server, and you use the local database (instead of LDAP or what-not), then you may have run into a situation where you have to change a password for a regular user when that current password is not known.
Of course, if the user *knows* his or her own password, he or she can use the “settings” link in the iFolder Web Access page, or the “Security” menu item in the client interface.
But if the password is UNknown, then you really have no choice but to do it on the command line of the iFolder server. The problem with that, however, is that in order to do that on the command line, you’ll need to enter both the admin credentials and the user’s new credentials as well… which is never a good idea to do, since that will sit in the command history by default, etc. Besides, who wants to memorize that crazy string anyway?
So, here’s a tiny little favor for you… I stuffed it all into a tiny little helper script that you can have. Just paste these contents into a script, do a chmod +x to make it executable, and away you go. it will prompt you for admin password, username to change, and the new password for that user.
I hope it helps! Here you go…
#!/bin/bash
clear
echo ""
echo "This is the user password change tool for iF3..."
echo ""
echo "Please enter the admin password: "
echo ""
read ADMINPW
clear
echo ""
echo "Please enter the username for which you are changing the password: "
echo ""
read USERNAME
clear
echo ""
echo "Please enter the new password for $USERNAME (careful with crazy special characters): "
echo ""
read USERPW
clear
echo ""
echo "Processing..."
echo ""
# For 32 bit...
/usr/bin/mono /usr/lib/simias/bin/UserCmd.exe setpwd --url http://localhost --admin-name admin --admin-password $ADMINPW --user $USERNAME --password $USERPW
# For 64 bit...
#/usr/bin/mono /usr/lib64/simias/bin/UserCmd.exe setpwd --url http://localhost --admin-name admin --admin-password $ADMINPW --user $USERNAME --password $USERPW
echo ""
# I know the exit codes from mono are of no real value, but oh well...
if [ "$?" -eq "0" ]; then
    echo "If the exit message reads: \"Failed - Invalid admin credentials\", the password for $USERNAME was not changed. "
    echo "If the exit message reads: \"SetPassord for $USERNAME - False\", then $USERNAME might not exist in the system. "
    echo "If the exit message reads: \"SetPassord for $USERNAME - True\", then the password for $USERNAME is now changed!"
else
    echo "...The script encountered a problem! Exiting..."
    exit 0
fi
echo ""
echo "...Done!"
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572221.38/warc/CC-MAIN-20220816060335-20220816090335-00677.warc.gz
|
CC-MAIN-2022-33
| 2,564
| 6
|
https://upfuel.com/episode/podcast-2-learn-how-joel-comm-manages-to-juggle-multiple-projects-while-building-a-fun-and-profitable-online-business/
|
code
|
The second podcast episode is now live on iTunes. In this second podcast episode I speak with Joel Comm about how he manages to run so many various income generating projects and how he has a lot of fun doing it as well. We discuss a few of his products such as the Socrates Theme, Kaching Book, Adsense Secrets, iFart iPhone application and his recent seven figure deal for Deal of Day.
I first met Joel earlier this year when I traveled to Colorado as a consultant for the buyer of his website Deal Of Day and I had a lot of fun talking to him and his employees in his Infomedia Inc offices. I hope you enjoy this second podcast where you can learn from someone who manages to make a very solid income online while running various projects that pique his interests from the iFart to content websites.
Items Discussed In This Episode:
- The Daily Show With Jon Stewart (Mon – Thurs 11p / 10c)
How To Get The Podcast:
Download the podcast MP3 here (Right Click + Save As)
What I’d like from you:
Final request, after you’re done listening please leave me an honest rating or review on iTunes. This will help get the podcast out to more people to listen to and I really appreciate the feedback. Feel free to share some comments in the blog below as well.
Intro: Welcome to the MakeMoneyOntheInternet.com podcast where you learn tips and strategies from the pros on how to build your own online business. Now, here is your host Chris Guthrie!
Chris: Hello, hello everybody. Chris Guthrie here and welcome to the MakeMoneyontheInternet.com podcast.
Today, we have Joel Comm on the show and he's done a lot of cool stuff in the past. I'm just going to name a few things: in 1998 he sold ClassicGames.com to Yahoo!, in 2006 he was a New York Times best-seller with the AdSense Code, in 2008 he released iFart, a paid iPhone application that was the No. 1 best seller for three weeks, and most recently, and this is where I met him, he sold DealofDay.com for over seven figures, where I was a consultant for the company that bought your website.
Now we'll come back to some of your past successes, but what are you working on these days?
Joel: Yes, so I've got a number of things, actually, that we're working on. At this very moment I'm setting up an Amazon seller account so that we can get our new novelty toy, the KaChing Button, for sale on Amazon. We're working on the follow-up to our successful WordPress theme, the Socrates theme, which is going to be called the Plato theme. It's going to be really groundbreaking for those that want to build niche WordPress sites and be able to have all kinds of monetization strategies in them. And I've got a couple of other tricks up my sleeve as well.
Chris: All right. So how was the... have you sold something before through Amazon, or is this the first thing you've done, you know, beyond books?
Joel: Right, not beyond books. You know, the only thing I've sold myself that Amazon, you know, fulfills is our digital books. So, I've got a digital publisher account and I've got, I don't know, something like 15 or so titles that are out there, so people download them for their Kindle readers.
Joel: But this is the first physical product. I actually went and just got a UPC code for these things and thought this will be interesting; let's see if Amazon can move some product and expose more people to the illustrious KaChing Button.
Chris: Hope to see you again in the future to see how that project has worked out selling through Amazon. So, it sounds like, just from your past experience and the things you're working on now, that you've done a lot of different things. Do you just come up with a lot of random ideas and go forth and build them, or what kind of criteria do you have to decide what to work on and what to pass on?
Joel: You know, ideas are random. I'm not sure, but you know, sometimes they are, but just because I have an idea doesn't mean that I move forward with it. You know, most ideas get better with a little time to run. And what sounds really exciting one moment, after sleeping on it for a night, or maybe after doing it and then walking away, doesn't always turn out to be a great idea. But usually there's going to be an interest of some kind to start with. It can't just be all about the money. Sure, it's great doing business and making money, but I've got to be excited about the prospect of what I'm entering into, otherwise I know I'm not going to be able to maintain a passion about it that's going to keep it going. So, I've got to think it's cool and interesting to begin with.
Chris: Okay, so I mean, is that kind of... that's basically in line with why you thought of the iFart app, right? I mean, you were getting in there early in the app market and it was also kind of a novelty item that people hadn't really seen before?
Joel: Yeah. Yeah, that's fair enough. You know, we all loved the iPhone, and when Apple announced they were making the software development kit available we thought, well, we've got to get in there. In fact, our first app, iVote Mobile, was among the first five hundred apps released in the App Store. So we were there at the very beginning.
Chris: Okay, and so it was pretty much as soon as you heard that the App Store was going to be released, you thought, okay, we need to get in on this because it's going to be the next type of gold rush. I mean, did you think it would be as huge as it's become, or...?
Joel: You know, we wouldn't have developed it if we didn't think that the App Store was going to be big, but you never know. It's like, if my crystal ball wasn't really working that day, it wouldn't necessarily work really well, and you know, if you do enough unique projects then you only have to hit one home run in a game in order to be the star of that game. So, you can have some products that fail, you can have others that just perform decently. But when you hit the home runs, they are the ones that carry the rest of the business.
Chris: Yeah, so, is that kind of why you like to do so many different projects? I mean, I know I've talked to a lot of other internet entrepreneurs, and some people share the advice, you know, just pick one specific niche or one specific topic or area and do really well at that. And then others say, you know, it's kind of nice to have a lot of different things going on so you can have money coming from multiple different sources. I know which camp you fit into, but I mean, have you considered just trying to pick one specific thing and really going after it, or is this spread based on maybe even getting bored of doing the same thing all the time?
Joel: Yeah, I think I would get bored doing the same thing all the time. I need to have some variety. And so, I'm always looking to see what else is available out there.
Chris: Okay. And I know you have... I met you back in your office in Colorado. When did you take up an office space? Was that soon after the sale of ClassicGames.com to Yahoo!? Or when did you decide, you know, okay, let's go ahead and try to make this more of, I guess, a business? The reason I ask is because, you know, I personally just work from home and a lot of listeners do that as well. But I'm wondering, you know, when did you decide, okay, let's rent an office space, because maybe that helped you out in terms of the mindset?
Joel: Well, I think the office space mindset came for another reason, and I'd been working at home for, well, you know, a long time. It was 1995 when I built my first website. And so it was back when we were in Oklahoma, and I think my wife said to me, you know, I love having you around and everything, but I can't miss you if you don't ever leave. And so, you know, at that time I thought, you know, you're right. I need to go get a space, and I found a space there in town. And I was the only person at that time. So I had a space by myself for a couple of years, and it wasn't until 2005, when I came back from a seminar, that I realized I needed to hire my first employee to start picking up some of the slack on the things that I didn't need to be doing, so I could focus on the things that I did need to be doing. And, you know, one employee turned to two, turned to three, and when I moved to Colorado we began staffing up some more.
Chris: Okay. Yeah, that's good advice. I mean, I know we actually have to be moving here, and so I'm kind of thinking about potentially paying less in rent for the house that we're renting. And I'm looking at maybe getting a small office somewhere, just to try to be out of the warm-ups and sweatpants and all those other types of things that go along with working from home. So that's good advice.
So, I know you've done a lot of different things: what would you say is your favorite? I mean, everything from writing books, iPhone apps, websites... is there anything that you found to be your favorite? Or is it kind of...
Joel: You know, I've enjoyed most of it. I just love creating sites that people use. I think, you know, content is my core. Content is where I started back in 1995 with my first site, worldvillage.com, and I enjoy spinning up content sites that are quality, that people will visit again and again and bookmark, and you know, I've spun up a number of them that I've sold. Content's always been really good to me and one of my next plans is to move into some more content again.
Chris: Okay. So, are you going to be doing those from scratch, or are you going to be buying existing sites and looking for ways to improve them?
Joel: Probably both. Probably both.
Chris: Okay, yeah. That's kind of a combination of what I do as well. But it's always tough finding good sites to buy because there's just so much trash out there.
Joel: Yeah, there is.
Chris: I was also curious... because content is kind of the main thing that's been driving you. Would you say that's what you would suggest to people that are looking to get started online, to just build websites with unique content and go from there, or is there another type of approach?
Joel: Well, there's a lot of different approaches. My latest book that came out last year, KaChing, talks about a number of ways that you can make money online, assuming that you're willing to do the work. The first model is the content model, where you create original content that is then monetized with advertising such as Google AdSense, Chitika, Kontera. Another model is information products, another is affiliate programs where you're selling other people's stuff, or membership sites, or coaching. There's a lot of different ways that you can monetize information online, but content is the first.
Chris: Okay. So, that's kind of what you would suggest, and then branching out into other areas as you get more experience and success with content websites.
Joel: Yeah, I mean, content's a great place to start because you can set up a blog so easily. I mean, with WordPress and a good theme such as the Socrates theme, shameless self-plug, you know, it's really easy to put up a site in no time and get busy creating your content.
Chris: And I will link to the KaChing book along with the Socrates theme here on the blog post. But, I know too, one of your passions and really kind of what started you out in terms of information products was AdSense and then the AdSense Secrets book. How has that experience been, becoming a New York Times best-seller and teaching a lot of different people how to make money with AdSense?
Joel: Well, it's been rewarding in that it's Google's program. You know, it wasn't like I came up with this new way to make money online. You know, Google is a reputable company and they are taking in billions of dollars each year in advertising, for AdWords, and the AdSense program that they've had since 2004. I figured out how to make money with it before the majority of people even knew it existed. And so writing the book on it was fun because all of a sudden people were doing what I was doing and they were monetizing their websites. You know, we were just coming out of the dotcom bust back there and I'm making $500 a day with AdSense and showing other people how to do it, and because it was based on Google's program and my strategies, people started making a lot of money with their existing content. And you know, now there's others that will start a blog fresh and within a few months they're making, you know, some decent part-time income, and I hear from people that, you know, after following what I taught for months or years they've replaced their full-time income and they make all their money off their websites.
Chris: Definitely. I remember the first time I heard about AdSense. It actually wasn't until about 2007 or so, and I heard about it because I was... I'm a gamer, just like you. And, someone had a little ad on the bottom of their Nintendo... it was a GameCube website, which kind of gives you an idea of how long ago that was, but he told me he was making $2 a day from it, and I know it wasn't a lot of money, but to me at the time I was like, wow! $2 a day just from making a website about your passion. And, that's actually why I kind of started trying to do websites and then what ultimately led towards this line of employment, so. It's exciting.
You know, I wanted to actually talk about just some of the strategies people can employ on their websites today. I mean, in particular, what ad units and where they're placed have you found to be consistently the best performing? I know that there's some indication that it can vary by website, but I'm not a huge AdSense earner myself; I know I have friends that are. But, just kind of curious what your thoughts are on that?
Joel: Well, by and large, you know, ads that are in line with your content tend to perform the best. You know, usually the medium rectangles or large rectangles in line with your articles, you know, with no border or background, tend to give the highest click-through rate. You know, when people come to a site to read an article, they're looking for information, and maybe their eyes will run through the article, maybe their eyes are caught by the ad, and maybe that ad is the solution to whatever it is they're looking for in the first place. And boom, they click it. So, those are good, and leaderboards are also good; the wider 728 pixels wide x 90 pixels high ads work really well on forums and member sites. Put them in between posts and such. And you know, I don't really recommend messing around with too many of the other sizes. Those two seem to do the trick pretty well. Especially, I really don't like the skyscrapers because they are really off to the side and they're not where people are looking. I mean, you've got people that use these horrible color schemes that repel people from looking at the ads and they wonder why their click-through rates are low.
Chris: That was the other question I was going to ask. When I first got started doing AdSense I was trying all different types of combinations. It wasn't until later that I discovered that the niche that I was in just was, kind of, difficult to make money with unless I had a lot of traffic. But I mean, what have you found to be universally one of the better things? I know there are some differences, but in terms of the colors, what do you generally suggest?
Joel: Well, there's a reason that newspapers have been printed black on white paper for all these years. They are the easiest to read, and so I recommend websites that have a white background and black text on them. And, if you're going to do that, then for your AdSense blocks you should use a white border, which makes the border transparent, and a white background so it blends with the background of your page, black text for the actual ads themselves, and for the clickable links you make them blue. Because we're all conditioned that blue is hypertext and it's where you're supposed to click. So, intuitively that's what people are looking for.
Chris: Okay. Yeah, it definitely makes sense. Have you done... and this is all kind of based on your own testing over the years on various websites, that you found that these color schemes and ad units and placements make the most money, basically, right?
Joel: Oh yeah, absolutely. That is pretty much standard now. It's kind of funny because I wrote my first book back in 2005 on the topic and it wasn't until about 2008 that Google started finally posting what they believe works, and of course, it's the things we've been teaching all along. But it took them, you know, over two years to begin giving people tips.
Chris: Oh geez! Yeah, that's definitely a long time for them to actually try and help out the publishers. And I know they used to have the referral program for AdSense a while back. Did you ever make any decent money referring people to that or?
Joel: Not really. And you know they spiked it a long time ago. That was really a long time ago.
Joel: And, yeah, I didn't really find... I wish they still had it, because I'm sure I'd have sent them, you know, a lot of people over the years from having read my books.
Chris: Yeah, okay. So, I know the ad placements and the colors and all that are some of the main basics. Are there any secrets that you'd like to share from the book? Really solid tips that people might not know, that they could implement on their sites right now?
Joel: Well, you know, people should be testing. I think a lot of people, they put their AdSense on their sites and then they just leave the blocks, and you miss out on increasing your revenue, so you can always tweak it to kick it up a little bit. You know, so you might test one block, you might test two or three blocks on the page. You might try moving it left or right on your site. You know, you might put one at the top of your article and one at the bottom. And, you know, measure it. Set up channels for each ad block and measure your click-through rate and your CPM and see where you're making the most money, kind of split-test it, and then once you see what works better, go with that. And then, take it to the next level. Test something else. Because it seems like you can always squeeze a little higher CPM out of your sites.
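(Editor's aside: here is a minimal Python sketch of the channel-by-channel comparison Joel describes. The channel names and numbers are made up purely for illustration, not real figures.)

```python
# Toy split-test comparison for two AdSense channels (hypothetical numbers).
# CTR = clicks / impressions; eCPM = revenue per 1,000 impressions.

channels = {
    "top-of-article-336x280":    {"impressions": 48_000, "clicks": 620, "revenue": 212.40},
    "bottom-of-article-336x280": {"impressions": 47_500, "clicks": 310, "revenue": 101.10},
}

def summarize(stats):
    ctr = stats["clicks"] / stats["impressions"]           # click-through rate
    ecpm = stats["revenue"] / stats["impressions"] * 1000  # effective CPM in dollars
    return ctr, ecpm

for name, stats in channels.items():
    ctr, ecpm = summarize(stats)
    print(f"{name}: CTR {ctr:.2%}, eCPM ${ecpm:.2f}")

# Keep whichever placement wins on eCPM, then test the next variable
# (position, size, colors) against the new baseline.
best = max(channels, key=lambda n: summarize(channels[n])[1])
print("Winner:", best)
```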
Chris: Okay. I've used OIO Publisher, a WordPress plugin, on most of my blogs to do that type of testing. Is there something else that you've used before to do the testing, like Google Ad Manager or a similar type of service?
Joel: You know, back in my day, when we walked barefoot through the snow to build a website, everything was done manually. You know, you just go and look at your stats and you just document, you know, what your click-throughs and your CPMs are and you make your adjustments. There is more sophisticated software now.
Chris: Okay. Is there any one that you specifically recommend or is it just kind of really based on what you're looking at? I mean, if it's a forum, then there's probably a plugin for vBulletin that you could use, and I'll actually look for some more... yes, I can link to them in the blog post as well.
Joel: Yeah. You're probably more up-to-date on the latest plugins than I am. You know, having had my content sites set up and tweaked, you know, we feel like we've squeezed about as much from our clicks and CPMs as we can. Now, when I start setting up some new content sites, we'll be entering that process all over again and doing some testing.
Chris: Okay. I know Demand Media, and this is one that you mentioned in the AdSense Secrets ebook, they are obviously a huge player. If you aren't familiar with Demand Media, they own eHow (this is for the people who are listening; I know that you are familiar, Joel), eHow.com and a bunch of other major, huge information-based websites, and they make a lot of money targeting content that's searched frequently but also has a lot of low competition. How do you suggest people can compete against this huge publisher that almost seems to be getting additional benefit from Google's algorithm in terms of the rankings?
Joel: I'm actually surprised that eHow is maintaining the traction that it has, especially with Google's latest updates, because it's so broad. You know, how to do everything, right. And that's interesting given the latest Farmer update. Anyhow, you know, I think drilling down into a micro niche, right, where your site is all about a very specific topic rather than a broad topic, is really your best chance to get a leg up on some of these larger sites. You know, you're never going to catch an eHow with a niched site, but you can take a segment of that market, and by creating valuable content and having some sound SEO practices, both on and off-site, and linking structures, you can begin to steer some of their traffic your way.
Chris: Definitely makes sense. I know that's kind of the experience I had with building my own site, Netbookreviews.com. I focused on building up just the best site in that one specific niche, and although I wasn't using AdSense as much (I was more of an Amazon guy), it's definitely good advice. I was curious too... I know that Google introduced, and this was a while ago, they introduced smart targeting, I believe is what it's called? Have you seen people, or maybe yourself, focusing less on trying to research and target high-profit niches, such as insurance, autos or anything else like that, because people are generally going to be smart-targeted from all the other websites they are visiting online or...?
Joel: Yeah, I think so, Chris. I think the days of targeting high-CPM niches, it's really, you know, it's chasing after the wind to do that, because so many have competed and so many are being smart-targeted in that area. Because those clicks are expensive, you know. The advertisers, they don't want those clicks to show up on sites that are spun up just to display their ads. And, neither does Google. Google wants those ads showing up on high quality, relevant sites where the people who click are going to be targeted customers. And I think it's better to create sites that fall in line with your passion, your knowledge, your skill set. You've got a much better chance going after something that you know something about and creating content in that realm than going after a high-dollar niche that you really know nothing about and just outsourcing articles to somebody who is also likely to not know that much about it.
Chris: Yeah, so that's kind of speaking to the process that you've probably heard a lot about, people building hundreds or thousands of small niche sites and going after the long tail. You know, aggressively going after the long tail as opposed to just building up a few more quality websites.
What would your advice be then if you build some of these sites, you're already making decent money, you want to kind of try and take your business to the next level: is that when you would say it's time to either hire employees, or is it really just kind of a matter of, you know, what you can accomplish with your own workload when deciding, you know, when to build the next site?
Joel: In other words, it's: how far do you want to go and how much of a workload do you want to carry? You know, part of living the internet lifestyle, or let's say working at home in our pajamas, is we decide when we get up and sit in front of the computer and play with our kids or walk the dog or take a day off, and it's all up to you. How much do you want to work? And so, if you're a workhorse, and I know some people, they love to work around the clock and it's their hobby and it's their life, they live and breathe it, then great. Work as much as you can until you can't do anymore. But, if you want to have a life outside of that, then by hiring somebody to help you, you might find that not only do you become more effective at what you do, but you become more profitable as well.
Chris: Alright, alright. I'm just thinking, as I'm asking these questions, I'm trying to think of things that not only would help myself in my own business but also help give some other people here some direction as well. So thanks for that.
One of the questions too I had is, when you're getting to the end of, you know, you've built a site up and you're making decent money, I mean, when do you decide, you know, okay, it's time to exit? I know that you sold Deal of Day. What made you think, okay, it's time to sell this?
Joel: You know, there's a lot of money in content, and certainly Deal of Day, it's been really good to me for 12 years, and I'm just at the point in my life, you know, I just turned 47 and I kind of want to scale back the number of projects I'm involved in, and Deal of Day took a certain amount of maintenance and I had kind of taken it as far as I felt it was going to go because of other projects I was working on. And, you know, it deserves more. It could be bigger and it needed some love from somebody who is ready to adopt it and take it on as their own. And that's what we found in our buyer, so we're really excited about that and I'm anticipating some really cool things happening on the site as a result.
Chris: Are there any other big sales you're working on right now? Or any that you can share or...?
Joel: We currently have listed our iFart application for the iPhone, which just the other day came out as the No. 12 most popular application out of the half a million apps that have been released for the iPhone, and so somewhere out there, there's a good fit for that. We've got a mobile marketing platform called TextCastLive that we're fielding offers on for anybody that wants to get into the mobile space, and we've got some other websites that we might be looking at putting on the market.
Chris: Okay. And actually I'll be linking to the Comedy Central video that you had. And actually this is something I'm just curious about. I mean, how did that actually come about? Did people just... did someone, some producer from Comedy Central, contact you to...?
Joel: Yeah, they did. But it was based on a little legal matter, a legal stink as I like to say, between us and the competing application that was named Pull My Finger. Basically, they weren't happy with the fact that we bumped them off of the top of the charts and stole their thunder. And, they accused us of using their phrase, pull my finger, in our app, which we did. It's, you know, it's not a trademarked term, and they threatened to sue us if we didn't send them $50,000, I believe the number was. And we just filed with Federal Court in Colorado, asking the judge to tell them to go away. And we sent out a press release, made some hay out of it, and Comedy Central heard the story and they thought this would make a funny segment. And, it did.
Chris: I've seen it before and it's funny. And again I'll link to it. But I was curious, you know, after that happened, how much in additional sales did you see? Like, were there a lot of people buying that day? I'm not sure if at the time it was the No. 1 seller when that aired. Did it bring it back up?
Joel: It wasn't, and yeah, we did definitely see a bump back up the charts. Not to the top, but we sold several thousand more units as a result of that. It's amazing how traditional media really does not move people to buy online. Online is what drives people to buy online.
Chris: Alright. Well, thanks again for coming on the show, Joel. And, what would be the best place for people to get in contact with you or follow along and see your projects?
Joel: Well, Joelcomm.com is my blog. And I'm @joelcomm on Twitter. And, Joel Comm Fan is my page that people can like on Facebook.
Chris: Alright, and I'll go ahead and link to the rest of the things we talked about as well. Thanks again for coming on, I had a blast talking to you. And, I hope everyone's learned a lot as well.
Joel: Thanks, Chris. I appreciate you having me.
Alright, that was the show. Joel is a really cool guy and he didn't ask me to do this, but if you'd like to buy his AdSense ebook you can actually get it from Chrisloves.com/AdSense. I read the ebook and it's a great beginners' course going through AdSense, but there's also a ton of really advanced tips in there as well that I actually didn't know and I've already implemented on some of my sites to try out.
Likewise, you can get his Socrates premium WordPress theme from Chrisloves.com/Socrates. These are affiliate links so if you decide to buy, I will make a commission.
With that said, I just want to thank you again for your support and listening to this podcast. And, remind you that if you have any questions at all, feel free to contact me via [email protected]. And, I hope you stop by my blog as well which is again at upfuel.com.
Thanks again for listening. And, I'll see you on the next podcast.
Podcast transcription by TranscriptionistForBloggers.com
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476452.25/warc/CC-MAIN-20240304133241-20240304163241-00272.warc.gz
|
CC-MAIN-2024-10
| 29,245
| 81
|
https://medium.com/@igorsilvalivramento/canvas-sheet-b5e0f4c0f296
|
code
|
You never speak. You do not speak when you speak. Only a project of you speaks. That is, only a projection.
Your projection speaks on the canvas, from the canvas; your spectre speaks.
The canvas — as much as the sheet — is not yet a screen, but a promise of a screen. It shows no more than its promise that it will show.
A promise is always a promise of the promise, of promising. It is an opening (aperture) to fulfilment or failure. There is no assurance.
A promise only is in the possibility of not being.
Hence the foundational act of a language (of the sheet, of the canvas) is a promise — no more, no less.
A promise is losing yourself in yourself, with yourself, on yourself, within yourself, without yourself.
Losing, indeed, much like the spectre.
This spectral dimension of the canvas, of the sheet, is afformative; pre-/proto-/trans-figural. No figure, no image, messianism without messiah.
There are only spectres. We can assure:
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812938.85/warc/CC-MAIN-20180220110011-20180220130011-00182.warc.gz
|
CC-MAIN-2018-09
| 946
| 10
|
https://xyz.amazingstories.net/man-captures-hidden-camera-footage-of-snake-slithering-around-him-while-sleeping-%CA%8Bideo-vantuan-1686122921141/
|
code
|
A giant snake is threatening a sleeping man.
Go catch snakes, meet snakes and act like they're about to die. Just make a fuss.
Watch the full development of the incident in the video below:
As the footage came to an end, the viewer was left with a sense of unease. How had the snakes gotten into the man's room in the first place? Were they venomous? And most importantly, how could the man ensure that this never happened again?
The answer to these questions remained a mystery, but one thing was certain: the man had been lucky to escape unharmed. From that day on, he would always be on guard, knowing that danger could be lurking in the most unexpected places.
Upon reviewing the camera footage, I was surprised and horrified to witness a terrifying scene of several snakes slithering around a sleeping man. The video captured the snakes coiling around his body, their tongues flicking out as they sniffed the air. The man was completely unaware of the danger that was lurking around him, snoring softly in his sleep.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100448.65/warc/CC-MAIN-20231202172159-20231202202159-00700.warc.gz
|
CC-MAIN-2023-50
| 1,264
| 7
|
http://math.mit.edu/~younhun/
|
code
|
I am a third-year graduate student at MIT pursuing a Ph.D. in Applied Mathematics, currently advised by Bonnie Berger and Elchanan Mossel. My mathematical interests lie at the intersection of Combinatorics and Statistics. In particular, I find probabilistic models and questions inspired by biology or other naturally-occurring processes particularly fascinating.
My first name is pronounced "Young Hoon", without the g. My colleagues simply call me "Youn".
I've been a research mentor for high school students in MIT PRIMES and RSI. My mentees have been Dylan Pentland (Finalist Regeneron STS 2018, Semifinalist Siemens 2017) for his project in enumerative combinatorics and Michelle Shen (Scholar Regeneron STS 2018, Semifinalist Siemens 2017) for her project in dynamical systems and modeling.
As an undergraduate at Brown, I was Head TA for CSCI 1810 (Computational Molecular Biology) for two years.
I received a Bachelor of Science degree in May 2016 from Brown University in Mathematics and Computer Science. During my time there, I was fortunate enough to be in the company of Professors Sorin Istrail and Ben Raphael.
Prior to finishing undergrad, I was employed by Orbis Systems in Jersey City, NJ as a programmer from 2009 to 2013.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583516892.84/warc/CC-MAIN-20181023174507-20181023200007-00247.warc.gz
|
CC-MAIN-2018-43
| 1,241
| 6
|
https://www.havaneseforum.com/6-general-discussion/16418-pedigree-dogs-exposed.html
|
code
|
What temperament problems are Havanese known for? Or is it that these can show up in badly bred Havanese? From what I learned when doing research, no well-bred dog (which should include breeding for temperament) should have serious behavior issues.
Well, behavior issues is very different from inherited temperament issues. Any dog, if it is not trained properly (and that STARTS with the breeder, but is MOSTLY the responsibility of the new owner) can develop behavior issues.
My lab is from a backyard breeder, who was a friend of mine. We have had absolutely no behavior problems from him. But I think little dogs are different, and perhaps more prone to behavior problems because people don't take them as seriously in a small dog?
Not sure about that... I've known plenty of big dogs with behavior issues too. Certainly in the "reactive dog" classes at our training center, most of the dogs are mid-sized or larger. I guess a really nasty little dog is less likely to be euthanized because it can do less damage than its larger cousin. The interesting thing, too, about that reactive dog class is that MOST of the dogs are mixed breeds, where in-breeding certainly isn't the problem. But I bet MOST of those dogs come from shelters, and probably didn't have the best start in life. (which goes more to nurture than nature)
Last night I took Jasmine to a play date, and there were a couple Yorkie-Chi mixes who were absolutely shaking with anxiety the whole time. Astounding. They looked miserable, and yet this kind of thing was obviously considered normal and perhaps even desirable by the owners.
There again, Yorkie-Chis are mutts, so you wouldn't think in-breeding would cause the problem. With the REALLY small dogs, what I've seen at our training center is that people shelter them too much and don't socialize them, because they are afraid they will get hurt. The result is that they never learn proper social skills, and they don't learn to relax.
I guess my question is, is it possible for a well-bred dog to have behavior problems? And by well-bred, I mean bred for health and temperament, not just champion lines. It could be I have a different definition of "well-bred" than those just concerned with champion lines.
Again, even the best breeder in the world can't guard against the life experiences a puppy has when they leave the breeder, so, yes, it is possible for a really well bred dog (in your terms, not just "show dog" terms) to develop behavior problems. But I think if you purchase your puppy from a breeder who takes the job of producing well-balanced healthy companions seriously, you should not have to worry about an inherently bad temperament. The best way to guard against this is to VISIT the breeder and MEET their breeding stock, especially the parents of the puppy you are considering. Sweet, friendly, well balanced parents, in general produce (and raise) sweet friendly puppies.
The good thing is that in this breed, anyway, the GOOD breeders realize that EVERY ONE of the puppies they produce, whether it is destined for the show ring or not, needs to be, first and foremost, a family pet.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540481076.11/warc/CC-MAIN-20191205141605-20191205165605-00426.warc.gz
|
CC-MAIN-2019-51
| 3,130
| 9
|
https://placesiana.com/cara-mendapatkan-bitcoin/14/
|
code
|
- A Beginner's Guide to Earning Bitcoin in 2018
- I. How to Earn Bitcoin for Free and Fast
- Bitcoin Faucets
- Bitcoin Ad Networks
- Playing Games
- Viewing Websites or Videos
- Reading Books
- Providing Services / Freelancing
- Answering Questions
- II. How to Earn Bitcoin with Capital or Investment
- Buying Bitcoin
- Mining (Bitcoin Mining)
- Trading Bitcoin
- Stacking (Hoarding Bitcoin)
- Lending (Lending Out / Investing Bitcoin)
- ICO (Initial Coin Offerings)
- Bitcoin Loans
- Selling Products & Services
- III. Updated Information on How to Earn Bitcoin
There is always the possibility of completing micro tasks in order to get paid small amounts of Bitcoin. Coinworker is a good example of a micro jobs Bitcoin site. Jobs can be anything from testing a web application on a browser to retweeting a post.
- CoinWorker :: Earn Bitcoins by completing analytical tasks. A user account is required here. I haven’t tried this service but payouts seem to be a bit higher than with the aforementioned sites.
- Bitfortip :: Earn Bitcoins by answering forum questions. This is a nice service because it brings people together who are interested in Bitcoin and many other topics. At the same time it allows to pay rewards in bitcoin for answering questions. This is something that would not have been possible without a currency like Bitcoin that has low transaction fees and instant transfers
- https://www.bitcoinget.com/ Earn bitcoin by taking surveys, completing jobs, and much more.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526386.37/warc/CC-MAIN-20190719223744-20190720005744-00331.warc.gz
|
CC-MAIN-2019-30
| 1,534
| 23
|
https://polycount.com/discussion/169679/introducing-handplane-baker/p5
|
code
|
This is a long time coming for us- Handplane is now a full baking tool. Our goal with handplane baker is to build the most efficient baking tool for a production environment and make it free. We have a lot of cool features that should save you time and effort. I did a video overview of the tool which you can watch here:
*Fixed some low level bugs exposed by meshes missing important information (texture coordinates, normals)
*Added notification for bake failures when models are missing texture coordinates, normals, or other critical info.
*Sets default image format to tiff 16
Some of the highlights:
I have been doing my testing on 10-20 million triangle meshes. Loading and baking models that large is super quick. With the exception of our ray trace AO, all of our output maps are extremely fast. The raytrace AO is the slowest output but we also have an alternative post process AO that is very quick/smooth and works well in many circumstances. For a benchmark on large mesh handling (with an i7 4770k): a 20 million triangle mesh takes about 6 seconds to load into memory, building the projection structure takes an additional 5 seconds, and baking a 2k 4x super sample tangent space map takes 7 seconds. Totaling 18 seconds for a final quality 2k bake of 20 million triangles.
This lets you bake multiple meshes on top of each other into one output map. No more exploding models. Projection groups also let you do things like isolate ambient occlusion within a group, assign ray projection distances to multiple models at once, and assign materials. Here is an overview of the model loading and projection setup page of our UI:
You can create, save, and share libraries of material base colors. Assign them to pieces of your models and they are baked into an organized PSD set up with layer masks, ready for you to paint on. I am really hoping people post and share their material libraries so we can build a central repository for everyone to work from. You can name and edit colors for 3 material properties in the editor like this:
This is very flexible, and can be used for metalness, specular, or even something like dota2 material output. For dota2 output you would name your channels color, mask1, mask2 and then for the mask layers adjust each RGB color channel to set your desired metalness, color warp... We still need to figure out a clean way to handle the alpha channel properties for specular exponent and self illumination. You can use the matID swatch color to get an additional 3 color channels but they aren't masked nicely like the others. Suggestions are welcome. The resulting PSD pulls in all the naming and colors done in handplane and looks like this:
In addition to all of the tangent space outputs in older version of handplane we have added support for Unity 5.3, Unreal 4, and Source 2. All of these have also been ported to the tangent space calculator which I will post separately as handplane 1.6.
*More fast AO output options and improvements to what we currently have.
*I would like to look into substance painter integration or find ways to make our tool work seamlessly with substance.
*Figure out why exporting a model as FBX, and then exporting a second copy with push modifier results in a different file size. This is a pain for cages.
*figure out FBX issue (someone who has this issue needs to send me files so I can reproduce it)
*Create warning with option to cancel when users bake without an output location set
*Add a button to the UI to open the output location in explorer
*.tga support <- Also make sure 8 bpp output is dithered
*Set tiff to default output. Personally, I don't like PNG files.
*Create a user editable default project
IF I EXPORT IN FBX
Does this mean that I have to export my low poly from Blender to Handplane without tangent space for the baking? If I didn't follow this step, would I have bad results?
Should the stats for the export be like the image below?
Another question: am I the only one with a normal map (16-bit tif) of 24 MB in size??? I baked a smoothed cube and the normal map it created was 24 MB . . . it's so huge.
The stats used are:
Your export into handplane or into your game engine must include mesh normals but does not need mesh tangents. If you do include mesh tangents, they will be ignored by the baker and by unity if you are using the correct settings.
Tetranome when he said
EarthQuake (http://polycount.com/discussion/107196/youre-making-me-hard-making-sense-of-hard-edges-uvs-normal-maps-and-vertex-counts) very important. In particular:
EarthQuake explained every single baking program in order to understand which method was used.
AlecMoody, what method is used in Handplane3D Baker? Is the situation completely different or similar?
AlecMoody, is it possible to invert the green channel directly in Handplane? For Unity 5.3 should I still use the workflow you describe above?
I've got a question, though. Sometimes during a bake, the program just freezes up. Nothing gets saved and no progress is made no matter how long I let it sit there.
I'm just curious as to why this is the case.
On a work machine so can't disclose what hardware, but suffice to say it's pretty beefy.
That's because it uses the CPU for baking. If you need a core for something else while it bakes you can set CPU affinity to all cores but one in your task manager if there isn't an option to reduce the number of threads used for the baker. Of course, this will make the bake take longer. It's a feature, not a bug.
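(As an aside, if you would rather script that than click through Task Manager, here is a minimal Python sketch using psutil. The process name is an assumption, not the baker's actual executable name, and on Windows you may need to run it elevated.)

```python
# Pin a running bake to all cores but one so the rest of the machine stays responsive.
# Assumes the baker shows up as "handplane.exe" in the process list (hypothetical name).
import psutil

TARGET = "handplane.exe"

all_cores = list(range(psutil.cpu_count(logical=True)))
keep_free = all_cores[-1:]                 # leave the last logical core alone
allowed = [c for c in all_cores if c not in keep_free]

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] and proc.info["name"].lower() == TARGET:
        proc.cpu_affinity(allowed)         # restrict the baker to the allowed cores
        print(f"Pinned PID {proc.pid} to cores {allowed}")
```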
I still have yet to compare it to 3ds max on my new setup but I doubt it's faster than this. The only major downside I have with Handplane is that it's quite annoying to ensure a clean bake at times (I get a lot of normal specks and missed spots, also skewing on flat surfaces unless some tweaking is done), but once I get the clean bake everything else goes smoothly.
But I think I missed something here; this is a really noob question (please don't laugh, or at least only a little): if I have many meshes and want separate maps for each one, is that possible to do? I didn't see anything in the output settings for that.
I've been trying to bake a materials PSD map, but with a non-square texture (1024 in width and 2048 in height), using v0.9.2
The psd seems to flip the two dimensions in Photoshop, so it becomes 2048 in width and 1024 in height, and half of the texture being cropped out in height.
Anyway, it works just fine when I try it with a 2048x2048, minor bug to an awesome baker
I am going to be testing a build after work tonight that has some smoothing override functions. The current default behavior is to use hard edges when the highpoly doesn't have normals ( so zbrush files match the viewport). The new functionality will smooth the mesh based on smoothing groups/hard edges - if no normals or smoothing info is found, it will use hard edges. Additionally, there will be an override function to force smoothing globally.
I'm trying to run Handplane Baker but I get an error.
"The program can't start because api-ms-win-crt-string-I1-1-0.dll is missing from your computer. Try reinstalling the program to fix this problem"
I re-installed, changed, and repaired the installation/setup and nothing worked so far. Could I get a link to a previous version?
Version 0.9.2 64bit.
OS: Windows 8.1
Edit: Issue resolved (solution: Update/Install ( KB2999226 (Universal CRT) Visual C++ Redistributable for Visual Studio 2015) )
I'm not getting the speeds you guys are recording, more like 40 seconds for a normal map bake @2k 4x super sampling.
I also ran an AO raytraced bake @4k 1X sampling and it never finished, app dropped to zero CPU usage in task manager.
I had some issues when resizing the app where it would ping around and snap out of fullscreen sometimes. Could be a windows 10 thing going wrong?
Loving the projection groups, ID tools, and volumetric gradient map.
Some things I instantly missed from xNormal: the visual feedback of seeing tiles baked, though I guess if I was getting times as fast as yours that wouldn't be an issue, I could just check the final map.
And also (this may just be me being blind) no cancel bake button?
I'm now using this in my production workflow in our game. It's great! It's really fast and the bakes come out quality most of the time without using cages.
Two options I think would be really great are...
1. Transparent background around UV islands (no fill). I guess everything inside a UV island and its padding would be opaque and everything beyond transparent.
2. Export bake groups to separate image files, or layers in a single .psd. Each file would be output with the bake group's name as its file name, and the same would happen for the Photoshop layers (if possible).
Having the bakes come out as one flattened image makes it hard to swap parts out if they did not bake well.
If Handplane Baker had this would make alteration of your maps so much easier.
Thanks for all your hard work so far. Really appreciate it.
If I already have an object space normal map and simply wish to convert it for usage in UE4, is there a way to do that?
The fix is the xml needs IgnorePerVertexColor="true" instead of false in the base _settings.xml. Vcol xml is fine the way it is.
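If you have several presets to patch, a small script can flip that attribute for you. A sketch, assuming the attribute lives on the root element of _settings.xml; adjust the path and element lookup if yours is nested differently:

```python
# Flip IgnorePerVertexColor to "true" in a Handplane settings XML.
# Assumes the attribute sits on the document's root element; adjust if it is nested.
import xml.etree.ElementTree as ET

path = "_settings.xml"
tree = ET.parse(path)
root = tree.getroot()

if root.get("IgnorePerVertexColor") != "true":
    root.set("IgnorePerVertexColor", "true")
    tree.write(path, xml_declaration=True, encoding="utf-8")
    print("Updated", path)
else:
    print("Already set, nothing to do")
```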
I am not able to get any information in the output maps; they come out blank. Please help.
LP (using fbx 2013 export)
one thing to make sure too is to xform ALL meshes (HP and LP) and make sure you don't have flipped normals.
but there were a couple of mismatches. I tried a couple of ways.
1) I used your method for exporting, but didn't use XForm. Handplane worked but gave a lot of intersecting issues.
2) Used Reset XForm on all the meshes (including the cage mesh), then exported them through your method. Handplane gave a blank normal map.
3) Used XForm on the two High & Low meshes, then on the low made a projection, gave it a .5 push, and exported the cage.
Those results came out great! But Handplane had the red dot for the cage.
Also forgot to let you know, try cranking your "Back Ray Offset Scale" in "Settings" tab to a higher number if you see that your mesh is not fully grabbing its normals (don't know a better way to describe this).
also increased the Back Ray Offset Scale to 1000 but it didn't help.
I tried a multiple-object bake but it gave out some intersecting issues, not a lot but some. I might have to explode the meshes, but that goes against the point of Handplane.
Overall in my use, it isn't too big a deal for me, but if you want perfectly matching cages you have to not tweak the cage in any manual way. I have cages where sometimes I use the push modifier and it still controls its direction + distance, and other times it doesn't, so I have not figured out 100% exactly how it works.
You would have to ask @AlecMoody on that one as my knowledge on this is merely quoting his video tutorial.
Handplane Baker Alpha 1
Also, do you explode your meshes and then bake? Because I tried to bake it like in the tutorial and it had intersecting issues.
Anyhow Thanks a lot for your help, Much Appreciated!
Looks like every edge on the low poly mesh is treated as a hard edge or something along those lines. Other presets don't have this issue.
What can be the cause?
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649302.35/warc/CC-MAIN-20230603165228-20230603195228-00633.warc.gz
|
CC-MAIN-2023-23
| 11,194
| 78
|
https://ascslab.org/courses/ec700/project.html
|
code
|
This class is hands-on and project focused. The class project is built around the RISC-V ISA. An initial multi-core, multi-threaded RISC-V architecture (in the Verilog RTL) will be made available to the class. Students will then research, select and implement a secure version of the architecture targeting a specific attack class. Specifically, students will:
- Describe a relevant and pressing attack model;
- Propose some architecture feature(s) to protect against the described attack;
- Implement, test, and validate the proposed security safeguard.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574532.44/warc/CC-MAIN-20190921145904-20190921171904-00203.warc.gz
|
CC-MAIN-2019-39
| 558
| 4
|
https://everhour.com/blog/category/github/
|
code
|
Learn how to integrate Jira with GitHub for seamless collaboration and efficient issue tracking, enhancing your software development workflow
If you’re choosing between GitLab vs GitHub, you need to know the areas where they excel and where they don’t. This article will tell you everything you need to know!
Do you need standard documentation of your development project? GitHub README Template is the answer. Get started with these tips!
GitHub Actions is a fantastic way to automate aspects of the software delivery lifecycle. This complete GitHub Actions Tutorial guides you through the setup process.
The RCA Web Design team uses Everhour to track time with GitHub to know how much time they spend on each issue.
Will a GitHub PR template make a world of difference in your web or software development project? Check out how it works!
Discover the benefits of using dark mode on GitHub and learn how to enable it with our step-by-step guide. Overcome common issues with compatibility and syntax highlighting, and don't forget to try our GitHub time-tracking integration.
Find out how to host a website on GitHub easily and quickly: in this thorough guide, we break it down step-by-step, with lots of examples and extra tips!
Are you looking for faster ways to execute major development projects? Get going with GitHub templates and enjoy maximum efficiency!
Wondering what is GitHub? Find everything you need to know about GitHub and how to achieve the best results with it in this article!
Explore 10 GitHub alternatives that offer greater control, cost savings, and seamless integration. Find the perfect platform to suit your development needs
Drive project success with versatile GitHub project management capabilities. Efficiently plan, track, and collaborate on projects, empowering teams to deliver high-quality results!
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100800.25/warc/CC-MAIN-20231209040008-20231209070008-00630.warc.gz
|
CC-MAIN-2023-50
| 1,836
| 12
|
https://worksoftware.zendesk.com/hc/en-us/articles/16675782696087-How-reschedule-department-meeting
|
code
|
Select the Meeting tab and select the drop-down, which will show two categories: Recurring Meeting Series and Individual Meeting Instances. If you would like to change all of the recurring meetings, select the meeting under Recurring Meeting Series. If you would like to change only one meeting, select the meeting under Individual Meeting Instances.
Select the meeting you would like to edit and then click the gear button. Edit the meeting where necessary and hit Save.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816893.9/warc/CC-MAIN-20240414192536-20240414222536-00546.warc.gz
|
CC-MAIN-2024-18
| 469
| 2
|
http://www.novell.com/coolsolutions/tip/19320.html
|
code
|
Remote Network Traces on NAM with Linux
Novell Cool Solutions: Tip
By Bart Andries
Posted: 11 Jul 2007
Taking Remote Network Traces on Novell Access Manager Linux Servers using "rdump"
When installing and configuring a Novell Access Management implementation, you might need to take some network traces to see what is going on in the network. There are some very good tools to take the traces, and you can also take traces with a hardware sniffer. Most of the time it is very challenging to take the trace, and it can take some time before you're ready to analyze the trace. If you then need to take a lot of traces, you'll probably start to think about making this process as easy as possible.
I always use tcpdump to take the traces on Linux servers; it is a fast and easy tool that can be quickly installed on the server (if it is not already installed). So I need to create a ssh connection to the server and issue the tcpdump command to start the trace. Then I can perform the actions I want to trace and go back to the ssh shell to stop capturing.
I also want to analyze this in Wireshark after the trace is done. However, because most servers are not running X, and Wireshark is probably not installed, this is not possible on the server. So I first need to transfer this trace file to my local workstation and then open this with Wireshark. This is a lot of work when you regularly look at traces and want to know everything that happens in the system.
To make all of this much easier, I've created a script that will do all the work for me. This script runs only on the workstation; no scripts need to be installed on the servers. The script sets up a passwordless ssh connection to the server, using a public/private key - this needs to be done only once per server. It will then check if tcpdump is installed. When tcpdump is available it will execute tcpdump until you press Ctrl+C in the console. It will stop capturing, copy the trace file locally to the workstation's home directory, and open this trace in Wireshark.
The script has only one parameter, and that is the server you want the trace from. If you don't specify this parameter, it will use a default server defined in the script.
You can download the script here. Suggestions and comments are very welcome.
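For readers who prefer rolling their own, here is a rough Python sketch of the same workflow. It is not the author's script; it assumes key-based ssh is already configured for the target host, that tcpdump is installed there (and runnable with your remote privileges), and that Wireshark is on the local workstation. The default hostname is a placeholder.

```python
#!/usr/bin/env python3
# Rough sketch of the remote-capture workflow described above:
# start tcpdump on the server over ssh, stop it with Ctrl+C, pull the
# capture back with scp, then open it in Wireshark locally.
import subprocess
import sys
import time

host = sys.argv[1] if len(sys.argv) > 1 else "lag1.example.com"   # default server (placeholder)
remote_file = "/tmp/rdump.pcap"
local_file = f"{host}-trace.pcap"

# Start the remote capture (all interfaces, full packets).
capture = subprocess.Popen(["ssh", host, f"tcpdump -i any -s 0 -w {remote_file}"])
print("Capturing... press Ctrl+C to stop.")
try:
    while capture.poll() is None:
        time.sleep(1)
except KeyboardInterrupt:
    subprocess.run(["ssh", host, "pkill -INT tcpdump"])   # stop the remote capture cleanly
    capture.wait()

# Copy the trace to the workstation and open it for analysis.
subprocess.run(["scp", f"{host}:{remote_file}", local_file], check=True)
subprocess.run(["wireshark", local_file])
```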
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247500089.84/warc/CC-MAIN-20190221051342-20190221073342-00166.warc.gz
|
CC-MAIN-2019-09
| 2,401
| 13
|
http://forum.dxzeff.com/forums/thread-view.asp?tid=11&posts=9
|
code
|
|DXZeff's Computing Forum|
| Networking older systems|
Location: Sunny Blackpool
|I would imagine most of us don't sneakernet data to our older systems. |
So, how do you handle network connectivity for older systems?
The way I handle it is a dual-homed Linux box acting as a router (no NAT, thanks!) and static routes both ways.
Then I can run whatever I want (IPX anyone?) on its own broadcast domain.
At the mo it's configured as such:
2 internet connections (VM home 200d/12u & one VM business 200d/15u) go to the ClearOS box:
ClearOS box -> 172.16.2.254 (v4) 2001:470:1f09:b45::3/64 (v6)
All this does is load balancing and IPTables.
Main LAN -> 172.16.0.0/21 (v4) 2001:470:1f09:b45::/64 (v6)
Everything newer than XP goes here and just uses IPv4 / v6. A Windows AD domain ties the whole thing together.
DHCP is used for v4, and is handled by a Windows(!) VM. v6 is currently handled by stateless autoconfig.
Dual-Homed Linux Box - eth0 - 172.16.7.254 (v4)
eth1 - 172.17.0.254 (v4)
Retro Stuff -> 172.17.0.0/24 (v4). No v6 as nothing before XP really supports it well.
Everything XP and older goes here. On this network I also run a variety of weird protocols such as AppleTalk for the old Macs.
The dual-homed Linux box also hands out DHCP for this network and handles upstream DNS registration such that I can resolve names from the main LAN.
I previously used a single broadcast domain but that got out of hand quickly and doing it this way allows for trivial control over ingress / egress traffic from the older systems.
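(For anyone wanting to replicate the routed two-subnet layout above, a minimal sketch of the plumbing involved, using the addresses from the post. This is an illustration, not the poster's actual config; run as root, and note that hosts or the main-LAN router also need a return route.)

```python
# Sketch of the routed setup described above (addresses from the post).
# ROLE "dual-homed-box": enable IPv4 forwarding between eth0/eth1 (both subnets attached).
# ROLE "main-lan-router": add a return route so main-LAN traffic reaches the retro subnet.
import subprocess

def sh(cmd):
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

ROLE = "dual-homed-box"   # or "main-lan-router"

if ROLE == "dual-homed-box":
    sh("sysctl -w net.ipv4.ip_forward=1")
elif ROLE == "main-lan-router":
    # Send retro-LAN traffic to the dual-homed box's main-LAN leg.
    sh("ip route add 172.17.0.0/24 via 172.16.7.254")
```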
Location: Hull, UK
|That sounds as though it should cover everything, I bet it was tedious to set up though. |
I need to tidy my network up at some stage. I used to have it split into two networks - with a Co-ax network hanging off of one end, though logically it was the same as the rest of the second network. Basically my original network was a slower than hell Co-Axial cable nightmare with machines communicating over IPX. It used Microsoft LAN Manager for a brief time before moving to LANTastic... It sucked, but it stayed. At that time I had a 386SX-16 acting as a server, identified as Mars with the other two machines labelled Phobos and Deimos... Running out of moons and gaining better hardware anyway I moved to Fast Ethernet, but left the older part of the network alone for the most part. Compatibility was maintained somewhere down the line, after the 386's motherboard died somewhere in 2007/8, by running a server on NT 4.0 which bridged the gap between both networks and breaking any dependency on tools like LANTastic as that had always been trouble so far as connectivity to other networks was concerned. The main network on the other side of the server was simply another Windows network on the TCP protocol exclusively. This newer part of the network used a Windows 2003 server to handle everything, I had no internet access at the time but the implementation proved problematic when I eventually got DSL and needed to use a router. Originally I used a second NIC in the server and relied on ICS to handle it.
Typically I favored 10.#.#.# IP ranges, but due to limitations of some hardware on the network I now use boring old 192.168.0.# instead.
Eventually I moved away from that because it was hard to manage, using only one network and one IP range... mostly. The server the forum is hosted on, along with its VMs occupy a different VLAN to everything else in the house because it's cheaper to use less switches. The only time that VLAN and the mostly VM occupied network is connected to the main VLAN is when disk imaging takes place. A lot of old machines have problems so I've taken to relying on FTP, CF cards and other means to get things done if I need to and it's not great. I used to use a VM on the workstation to get files in and out of older systems but it had stability issues with the integration features, so I stopped doing this, it usually took longer than it would to just burn a CD or something. Even pulling out the hard drive and copying the files that way was usually faster. Windows 7 is horrible for talking to old machines, slow. Oddly, it speeds up if you use a Win9X VM, so it's clearly a software or configuration issue. Disabling "Large Send Offload" seems to negate this slightly.
I'm planning on retiring the DES-1526 switch, or at least relegating it to second switch and replacing it with a fully gigabit one for newer machines. When that happens I'll probably re-organize everything to run smoother. Obviously I'm stuck with a crappy DSL router at the very end of the chain no matter what, not how I'd like to do things but there's no choice in the matter.
Location: Sunny Blackpool
|It wasn't too bad to set up really, I have pretty much always had the home LAN as it is now so when I started seriously poking about with older machines in quantities larger than 1 I just grabbed a cheap 8-port unmanaged GigE switch and went from there. |
I run an XP VM ('Legacy-vFS1' ) which most of my older machines talk to over SMB and a Win2K server VM ('Legacy-vFS2' ) which runs 'Services for Macintosh' providing AFP for my Mac OS 8/9 clients.
As for a replacement for your DLink thing... I use an old 3Com 'Baseline' switch which was acquired on eBay for £60, it's a 48-port Gigabit affair and has been very reliable in the 2 years or so I have had it. Supports everything you would expect.
It's the '3Com Baseline Switch 2948 Plus' if you fancy a search on eBay.
Have you ever played with Token Ring networking at home before? It's something I really want to have a play with but don't know much about it yet.
Edited by edneil 2016-11-05 11:12 PM
|I hook shit up and pray it works. Maybe install some services, enable some file sharing. I can usually get things to talk to each other between 95/98/2k/xp and 7 computers to share files and game. |
I've got some old routers, mainly 10mb/s and a few modems with network capability that can do 100. Gave my mother my nicer router with wireless G and such.. the blue cisco ones.
Had an 8 computer lan in my grandparents basement 15 years ago.. boy was I cool. No not really.
|Oh boy. I'm a lazy SOB. Everything I usually network uses standard 10Base Ethernet, with MS TCP/IP installed. And yes, Windows 95 / 98 machines are on the real internet at (thinks for a moment...) "SWIM's" house... lol|
Location: Quebec, Canada
|My lil' D-Link DIR-505 pocket router in repeater mode acts also as a client for WiFi-deprived machines. For SMB transfers, all members of my "fleet" can be used, but it's either Inspiron-15 (8.1 and Windows CE don't do well..) or one of the 7 machines or my Latitude D600. Everyone (the machines) loves that lil' grey (that's how i spell white-and-black-in-equal-amounts) Dell.|
|I've used a Raspberry Pi running raspbian (works pretty much like a PC under linux if you ask me) as a router for all my retro PCs. See, my internet router is absolutely not in the same room as my retro PCs and I wanted to separate the old stuff from the new stuff, so this raspberry pi is the only route to the internet for these old computers. I'm using a crappy USB wifi dongle to connect to the internet and the retro PCs are connected to a 100Mbps ethernet switch which is connected to the pi's ethernet port. |
The pi runs a samba file server which comes in really handy for many things (I can put drivers, games, etc. on there without bothering with dying floppy discs or dozens of CD-ROMs that some CD drive won't read and that you may lose somewhere). It also runs a PXE server which allows me to boot small floppy images that I can use next to connect to the file server and start a Windows installation, for example. Very useful too!
All of my computers are using 3Com NICs because they just work, and the PCI version gets installed right out of the box under Windows 98, so after installing this OS through the network I can install all the drivers I need. This way, I can completely set up a Pentium 1, 2 or 3 in under an hour (or even less, I never really measured that).
Under DOS, I just use microsoft network client 3.0 to connect to the samba share.
There's only one thing I wish I had managed to do: network boot on ISA cards. See, PXE is 1998/1999 tech and it only works on PCI cards (apparently the PXE program won't fit in a ROM on an ISA card), so on ISA cards I have to use an older technique, which is RPL ... and I never ever managed to make it work ... I just do not know how to do it, and there is very little information on this on the internet unfortunately. I guess I should look for an old book that would explain this, but this is going to take a lot of time to find ^^
I know that iPXE exists, but first: I'd still need one floppy disk (and my goal is to have zero floppy disks needed) and secondly, it does not work on 486s unfortunately, so there's zero interest for me, as most things newer than 486s have PCI slots so I'd just put in a PCI network card that supports PXE booting.
Location: Sunny Blackpool
|RPL is a pig of a thing... I'm not surprised you are struggling with it! |
There's a *really* old page here http://gimel.esc.cam.ac.uk/james/rpld/index.html which may be of use. Do note it's from 1999.
I have _no idea_ if it will even think about compiling on a modern system though.
|Well that's the first thing I've seen; the documentation is really, really poor ... but it's still available in the repositories. I did try many things with that thing, but never managed to make it work|
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945282.33/warc/CC-MAIN-20230324113500-20230324143500-00572.warc.gz
|
CC-MAIN-2023-14
| 9,851
| 62
|
http://qutranslation.weebly.com/word-order-and-reference.html
|
code
|
Introduction to the course
Principles of Translation
Context and register
Word order and reference
Time: tense, mood, and aspect
Concepts and notions
Idiom; from one culture to another
The activities in this section show that English is not a language noted for the flexibility of its word order. A word out of place can easily alter the meaning, or lead to ambiguity.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583822341.72/warc/CC-MAIN-20190121233709-20190122015709-00276.warc.gz
|
CC-MAIN-2019-04
| 428
| 9
|
https://www.mendeley.com/catalogue/833b9b53-03a8-3fba-80fe-3a256758582b/
|
code
|
Automatic machining feature recognition (AMFR) is a critical component of CAD/CAPP/CAM integration. The intersection of multiple features is a major problem in this research field, and an automated machining feature recognition method is presented to overcome it. The research aims to group the data of symmetrical faces and to sort the faces efficiently according to their Cartesian values. The machining feature (MF) recognition algorithm can differentiate between toroidal, prismatic, cylindrical and conical hole segments with varied groove blinds and feature attributes. Four distinct case studies were conducted, covering extraction of geometrical and topological feature data from the part, sorting of the faces throughout the part, and recognition of holes and groove blinds. The feature extraction and recognition techniques are implemented in Python for rotatable components, detecting rotatable parts with prismatic-shaped holes and groove blinds from the STEP file, and are assessed using the different case studies.
Malleswari, V. N., & Pragvamsa, P. G. (2023). Automatic machining feature recognition from STEP files. International Journal of Computer Integrated Manufacturing, 36(6), 863–880. https://doi.org/10.1080/0951192X.2022.2162590
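The paper's code isn't reproduced here, but the sort-and-group idea described in the abstract is easy to picture. The following Python is a toy sketch only (not the authors' implementation, and the face data is invented rather than read from a real STEP file): it sorts faces by their Cartesian centroids and groups faces that are symmetric about the z = 0 plane.

# Toy illustration of sorting faces by Cartesian values and grouping symmetrical ones.
# Not the paper's algorithm; the face records are made up instead of parsed from STEP.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Face:
    face_id: int
    surface_type: str   # e.g. "planar", "cylindrical", "conical"
    centroid: tuple     # (x, y, z) in model coordinates

faces = [
    Face(1, "planar",      (0.0, 0.0,  0.0)),
    Face(2, "cylindrical", (5.0, 5.0,  2.0)),
    Face(3, "cylindrical", (5.0, 5.0, -2.0)),   # mirror image of face 2 about z = 0
    Face(4, "conical",     (0.0, 8.0,  1.0)),
]

# Sort faces by their Cartesian centroid (x, then y, then z).
ordered = sorted(faces, key=lambda f: f.centroid)

# Group faces that are symmetric about z = 0: same surface type, same (x, y), opposite z.
groups = defaultdict(list)
for f in ordered:
    x, y, z = f.centroid
    groups[(f.surface_type, round(x, 6), round(y, 6), round(abs(z), 6))].append(f.face_id)

symmetric_pairs = [ids for ids in groups.values() if len(ids) == 2]
print("faces in Cartesian order:", [f.face_id for f in ordered])
print("symmetric pairs about z = 0:", symmetric_pairs)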
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817790.98/warc/CC-MAIN-20240421163736-20240421193736-00723.warc.gz
|
CC-MAIN-2024-18
| 1,345
| 2
|
https://premiumlanguage.com/testimonial/us_testi04.php
|
code
|
Although Intrax isn't very big, there are already plenty of students and the courses are quite varied! The internship class in particular is probably the most popular one.
In the internship class, my supervisor helps me a lot; she even went to the interview with me! If I have any questions, they are all really nice and willing to help me (but sometimes I have to wait for a long time).
I would prefer to stay here longer, at least 2-3 months. Since I have no experience with other ESL classes, I couldn't say whether it's better or not, but I really want to recommend the internship class. In that class, I've learned how to prepare a cover letter and resume, and also improved my interview skills.
I have become more open-minded and more independent. Before I came here, I didn't know any Europeans. But now I have met people from Germany, Spain, France, Italy, Switzerland, Belgium... etc. I think I will go to visit them someday! It's really nice!
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817128.7/warc/CC-MAIN-20240417013540-20240417043540-00149.warc.gz
|
CC-MAIN-2024-18
| 855
| 4
|
http://codehaus.blogspot.com/2014/07/
|
code
|
Nice Tornado Article.
Wednesday, July 16, 2014
One of the most awesome computer papers ever written!
When you debug a distributed system or an OS kernel, you do it Texas-style. You gather some mean, stoic people, people who have seen things die, and you get some primitive tools, like a compass and a rucksack and a stick that’s pointed on one end, and you walk into the wilderness and you look for trouble, possibly while using chewing tobacco. As a systems hacker, you must be prepared to do savage things, unspeakable things, to kill runaway threads with your bare hands, to write directly to network ports using telnet and an old copy of an RFC that you found in the Vatican. When you debug systems code, there are no high-level debates about font choices and the best kind of turquoise, because this is the Old Testament, an angry and monochromatic world, and it doesn’t matter whether your Arial is Bold or Condensed when people are covered in boils and pestilence and Egyptian pharaoh oppression. HCI people discover bugs by receiving a concerned email from their therapist. Systems people discover bugs by waking up and discovering that their first-born children are missing and “ETIMEDOUT” has been written in blood on the wall.
at 4:35 PM
Tuesday, July 15, 2014
Given chord length c and segment height m (this is called a "sagitta"), radius = (m^2 + (c^2)/4) / (2 m) In OpenSCAD: c=20; ...
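That relation is easy to sanity-check; here is a quick Python version (rather than the OpenSCAD the post uses), with arbitrary example values:

# Radius of a circular arc from chord length c and sagitta (segment height) m.
def radius_from_chord_and_sagitta(c, m):
    return (m ** 2 + (c ** 2) / 4) / (2 * m)

# Example: chord 20, sagitta 2  ->  (4 + 100) / 4 = 26
print(radius_from_chord_and_sagitta(20, 2))   # 26.0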
There's a lot of notes written for getting Vim + Go up and running, but a lot of the notes assume you're already in modern Vim-land....
Looks pretty interesting! https://gohugo.io/
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912205597.84/warc/CC-MAIN-20190326160044-20190326182044-00361.warc.gz
|
CC-MAIN-2019-13
| 1,588
| 9
|
https://www.bibsonomy.org/user/achakraborty/job
|
code
|
Do you think of yourself as a Python programmer, or a Ruby programmer? Are you a front-end programmer, a back-end programmer? Emacs, vim, Sublime, or Visual Studio? Linux or macOS? If you think of yourself as a Python programmer, if you identify yourself as an Emacs user, if you know you’re better than those vim-loving Ruby programmers: you’re doing yourself a disservice. You’re a worse programmer for it, and you’re harming your career. Why? Because you are not your tools, and your tools shouldn’t define your skillset.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247512461.73/warc/CC-MAIN-20190222013546-20190222035546-00577.warc.gz
|
CC-MAIN-2019-09
| 534
| 1
|
https://askubuntu.com/questions/474842/stuck-at-gnu-grub
|
code
|
I am using Ubuntu 14.04 with no other OS alongside.
Normally, when I start the computer, it shows the Asus logo first, then goes to the Ubuntu loading screen, and finally to the Ubuntu home screen.
Now it shows the Asus logo first and then a purple GNU GRUB screen asking me to choose an OS (as I mentioned, Ubuntu is the only OS installed on this laptop).
So I choose Ubuntu, hit Enter, and it goes back to the Asus logo and then to the GNU GRUB screen again.
It happens this way again and again; in other words, it's looping.
GNU GRUB version is 2.02 beta2-9
GNU GRUB version 2.02~beta2-9ubuntu1
Advanced options for Ubuntu
Memory test (memtest86+)
Memory test (memtest86+, serial console 115200)
Above is what the purple screen looks like.
I tried the third option, Memory test, and it found no errors up to pass 3.
There are two options in "Advanced options for Ubuntu"
Ubuntu, with linux 3.13.0-27-generic
Ubuntu, with linux 3.13.0-27-generic (recovery mode)
When I select either of them, it shows a black terminal-like screen for a second or two and then goes back to the Asus logo. It loops the same way.
Some say to fix this problem using Boot-Repair, and some say Boot-Repair slows down the boot process.
Sorry for the long, detailed question, and thank you all...
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987828425.99/warc/CC-MAIN-20191023015841-20191023043341-00308.warc.gz
|
CC-MAIN-2019-43
| 1,241
| 18
|
http://wiki.archivesportaleurope.net/index.php/APE_API_documentation
|
code
|
APE API documentation
The Archives Portal Europe helps people to search for and retrieve archival descriptions harvested from archival institutions throughout Europe. These services are also available as an Application Programming Interface (API). Request and response parameters are provided and delivered in JSON. This enables every programmer to create an (online) user interface for searching in and retrieving archival descriptions.
We expect readers of this documentation to:
- know how to develop with a RESTful API and JSON
- understand the hierarchical data structures in archival descriptions
- have a notion about the XML-data standard Encoded Archival Description (EAD) 2002
- have a notion about the way data is copied ('harvested') in EAD-format from Collection Management Systems at archival institutions
In the current version (v3.0.0), various services are available. The endpoint of the API is https://api.archivesportaleurope.net/services. You must use an API key and a special value for accept in the HTTP header.
Via https://www.archivesportaleurope.net/ApeApi/ you can access the ApeApi Explorer, where you can try the services and see the responses.
search (POST): Services for searching with a term in the content and filtering on facets
content (GET): Services for getting detailed information about a particular result
hierarchy (GET): Services for getting detailed information about the position of a result in the hierarchy of a finding aid
download (GET): Services for downloading XML files
institute (POST / GET): Services for getting full lists
2016-11-01: the production server currently holds version 3.0.0 of the API; compared to version 2.0.0, these services/improvements have been added:
added request or response parameters:
- /search/ead -> added request parameter sortFields
- /content/ead/clevel -> added response parameter fondsUnitTitle
To enable the Archives Portal Europe team to monitor use of the API, you need to request an API key before you can use it. The API key can be requested via the API option on the homepage of the portal, or directly via this URL: https://www.archivesportaleurope.net/get-api-key.
Note: you need to be a registered user of the Archives Portal Europe to be able to get an API key, so if you don't have an account for the Archives Portal Europe's My Pages functionality yet, please create one first. You can read more on the My Pages functionality over here.
When using the API, you need to add an "APIkey" parameter to the header of your request:
'APIkey' : 'put_your_personal_API-key_here'
Depending on the version of the API that you want to use, the accept parameter in the header must be set to a specific Content Type. For version 3.0.0 this is: application/vnd.ape-v3+json
'accept' : 'application/vnd.ape-v3+json'
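As a quick illustration, a call to the search service with those headers could look like the sketch below (Python with the requests library). The body fields here are placeholders for illustration only; use the parameter names documented for the search (POST) service.

# Sketch of calling the Archives Portal Europe API with the required headers.
# The request body below is an assumption; consult the search (POST) documentation
# for the real parameter names and structure.
import requests

API_KEY = "put_your_personal_API-key_here"
BASE_URL = "https://api.archivesportaleurope.net/services"

headers = {
    "APIkey": API_KEY,
    "accept": "application/vnd.ape-v3+json",   # selects version 3.0.0 of the API
    "Content-Type": "application/json",
}

body = {"query": "charter"}   # placeholder field name, for illustration only

response = requests.post(f"{BASE_URL}/search/ead", headers=headers, json=body, timeout=30)
response.raise_for_status()
print(response.json())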
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123632.58/warc/CC-MAIN-20170423031203-00145-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 2,804
| 24
|
https://community.home-assistant.io/t/cant-add-esphome-integration/524303
|
code
|
After I flashed the YAML config to my esp32dev board, I then went to add the ESPHome integration.
I filled in the host details and pressed ‘Submit’, then I got this error:
Can’t connect to ESP. Please make sure your YAML file contains an ‘api:’ line.
On that page to add the esp32dev board, it also asks me to ‘Please enter the connection settings’. My board is an ESP32-WROOM-32, so what should I enter for the board name?
The esp32dev board is connected to my laptop by USB cable.
The file does have an ‘api:’ line, unless there is something wrong with it:
esphome:
  name: home
  platform: ESP32
  board: esp32dev

wifi:
  ssid: "xxxxxxxxx"
  password: "xxxxxxxxx"

# Enable Home Assistant API
api:
  port: 6053
  password: ''
  reboot_timeout: 15min

ota:
  safe_mode: true
  port: 3232
  password: ''

i2c:
  sda: 21
  scl: 22
  scan: True
  id: bus_a

sensor:
  - platform: bme280
    temperature:
      name: "Lounge Temperature"
      oversampling: 16x
    pressure:
      name: "Lounge Pressure"
    humidity:
      name: "Lounge Humidity"
    address: 0x76
    update_interval: 60s
  - platform: bh1750
    name: "BH1750 Illuminance"
    address: 0x23
    update_interval: 60s

# Enable fallback hotspot (captive portal) in case wifi connection fails
#ap:
#  ssid: "Home Fallback Hotspot"
#  password: "ZfMkVWyPrmBn"

captive_portal:

# Enable logging
logger:
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499695.59/warc/CC-MAIN-20230128220716-20230129010716-00464.warc.gz
|
CC-MAIN-2023-06
| 1,279
| 6
|
https://community.spiceworks.com/topic/1392272-spiceworks-down
|
code
|
Our Spiceworks stopped responding. I logged into the server and tried restarting the service, but it got stuck on "stopping". After 10 minutes, I restarted the server. The server has been back up for 5 minutes or so, but I'm getting the following. I can see spiceworks.exe still growing in memory size (currently 250,000K).
The server is currently too busy processing requests.
Please wait a moment and try your request again.
We apologize for any inconvenience this may have caused you.
Seeing this message too often? Take a look at this doc to improve Spiceworks' performance.
This is on a Windows Server 2008 R2 machine with 16 GB RAM.
I guess never mind, it finally kicked in when the memory was over 300,000K and I got back in. I'm not sure what happened in the first place though.
We aren't. We are behind, as any application upgrades (even helpdesk) require change control/testing before we can do them in production, and we have other projects that have delayed this.
We are currently on 7.3.00112
Be aware that upgrades from older systems may be more likely to cause corruption, and then you need to get support to fix the DB for you.
Not to mention that when the system becomes that much older you get a 90-day expiration notice and then the upgrade is scheduled (although I don't know if this stops your current system or does an in-place upgrade for you).
Just something to be aware of
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057347.80/warc/CC-MAIN-20210922102402-20210922132402-00205.warc.gz
|
CC-MAIN-2021-39
| 1,371
| 12
|
https://www.meetup.com/clojure-pdx/events/245270782/
|
code
|
We've had a few presentations on using re-frame and continue to get many questions on this fantastic tool. There is a lot to learn but you will be rewarded when building single page apps. So this month our own Matthew Lyon will present:
Constructing Interfaces with re-frame
Matthew Lyon has been building the front- and back-ends for web applications for over twelve years, and with Clojure for three years. He enjoys creating interfaces for helping people work through complex interactions.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572980.56/warc/CC-MAIN-20190917000820-20190917022820-00373.warc.gz
|
CC-MAIN-2019-39
| 520
| 4
|