url
stringlengths
13
4.35k
tag
stringclasses
1 value
text
stringlengths
109
628k
file_path
stringlengths
109
155
dump
stringclasses
96 values
file_size_in_byte
int64
112
630k
line_count
int64
1
3.76k
http://trerust.xyz/archives/2136
code
Novel–Versatile Mage–Versatile Mage Chapter 2061 – The Older, the Wiser “You were definitely the 3rd particular person I explored,” Bola educated him blandly. Fielding blushed in humiliation after listening to the phrase. Fielding was finding it difficult to breathe effectively. The minute Bola still left his retailer, he acquired immediately up to date the Eagle Demoness he acquired ties with about Bola’s program. The Eagle Demoness relayed the details with their queen, Euryale. “If your home is for long enough, you might recognize that also the most potent palships will disappear over time. Most people maintain phoning my label and recalling the youngsters we propagated, but few years after, it would deteriorate. My initial pal easily explained to some others my identity and location so he could generate a situation inside the Magical Association. My subsequent friend, when I instructed him I had been planning to pass away, dug up my severe and stole my dearest funerary physical objects so that he could purchase some magical Products for his daughter. My sight had been shut as I heard him whining on how selfish I used to be. Basically If I could drop tears, he may have found that I had been still lively. I had been just in deeply slumber…” Bola explained for instance a grieving bard. Even more importantly, including the Holy Court Mage whom these folks were relying upon was phony. She was Euryale’s disguise! Fielding blushed in shame after ability to hear the text. “So the Sacred Court Mages won’t be on this page. My fantastic-excellent-grandchild has also set up a Domain name of Mirroring space close to us. It will guard every non-life thing in our atmosphere which will help prevent any detrimental magic from seeping out,” Bola knowledgeable Mo Admirer. “Fielding was my best friend. Our buddieship survived for 20 decades. Which had been already quite amazing. People are incomprehensive. 
Several years is enough for someone to vary entirely. Perhaps the biggest relationship is no go with for petty gains to them,” Bola went on. “If your home is long enough, you are going to be aware that even best palships will disappear over time. Lots of people continue to keep calling my name and recalling the youth we propagated, but 10 years later on, it may well weaken. My first friend easily told other people my label and location simply so he could acquire a job from the Miracle Association. My secondly buddy, as I instructed him I was about to pass away, dug up my serious and stole my beloved funerary things simply so he could acquire some wonder Equipment for his son. My eyes were actually closed down as I heard him whining precisely how self-centered I became. Generally If I managed to lose tears, he can have seen that I found myself still lively. I found myself just in profound slumber…” Bola said just like a grieving bard. What astonished Mo Lover was the belief that Bola’s helper obtained already colluded with Euryale! “I still don’t fully understand!” Mo Enthusiast rolled his sight. “I lied to you personally. I didn’t really just fulfill my old close friends. The simple truth is, I needed already got youngsters back whenever I was still constructing the existing Sacred Location. My young children been working extremely difficult. Their loved ones is already a professional clan during the Sacred Metropolis. One of my fantastic-good-grand kids is an excellent Judicator, as well as a righteous guy. He arrived at the Parthenon Temple and gone on his knees to talk to me, thus I presented him some beneficial advice. He guaranteed me that he would a single thing in my situation, so long as it wasn’t versus the Sacred Constitution,” Bola continued calmly. Nonetheless, Bola obtained said it all. Remembrances ended up only perfect for recollection, but it really did not necessarily mean he was required to undertake a single thing. 
His only matter ended up being to stay a number of years longer and sustain his youngsters, like Euryale! “So the Holy Judge Mages won’t be on this page. My terrific-great-grandchild in addition has established a Website of Mirroring space close to us. It could protect every non-residing thing in our surroundings preventing any destructive secret from dripping out,” Bola knowledgeable Mo Enthusiast. “You were actually your third guy I traveled to,” Bola educated him blandly. “I still don’t recognize!” Mo Fanatic rolled his eye. Euryale was extremely efficient at deceiving people today. She could declare a person’s sound, visual appearance, and manner, to make sure that including the Eyesight in the Great Dragon could not see through her conceal! It turned out indeed quite difficult to get a used demon being like her. People were not able to notify who was the important Euryale and who was her disguise. Mo Enthusiast frowned when he saw Maggie’s transformation. “Fielding was my pal. Our associateship lasted for twenty decades. Which was already quite spectacular. Men and women are incomprehensive. Several years will do for a person to alter totally. Even best associationship is not any suit for petty profits directly to them,” Bola continued. “My grasp, I already stated that dwelling a long-term living have their merits,” Bola responded. Maggie was using three-inch high heels. Her toned thighs and legs had undertaken up two-thirds of her system proportion. The very sharp shoes ended up caught firmly to the ground. “It’s far too late to regret it now. In just two minutes or so, the Holy Courtroom Mages will surround this position. Sadly, a small girl taken place to pass away in the creating there after her blood was pulled free of moisture. I think the Holy Opinion Court can look in it,” Maggie announced smugly. Euryale got acquired from Fielding along with the serious Maggie. 
She employed her skill to consider Maggie’s visual appeal and pretended to take part in the procedure to discover out who was seeking to have her out. She was setting up to remove them all at one time, but she did not expect to have Bola to always be so cunning! Mo Admirer frowned when he spotted Maggie’s transformation. Section 2061: The Old, the More intelligent “Stop becoming psychological. This isn’t an anime where you could recall your recent for a couple events, yet still just one or two a few moments have passed in fact. The Holy Judge Mages shall be in thirty just a few seconds!” Mo Fanatic claimed.
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335190.45/warc/CC-MAIN-20220928082743-20220928112743-00790.warc.gz
CC-MAIN-2022-40
6,610
29
https://www.megamaxservices.com/jobs/full-stack-developer/
code
- Document technical designs using object-oriented, n-tier design technique standards and guidelines to develop .NET-based software applications.
- Deploy the application to the cloud utilizing technologies like Octopus, Azure, microservices, and Kubernetes.
- Interact with users to understand requirements and provide support in fixing issues during user acceptance testing.
- Collaborate effectively with users, project managers, business analysts, testers, and other team members.
- Develop functionality using the ASP.NET Core MVC Framework (C#.NET).
- Create complex SQL views and stored procedures to implement business logic.
- Participate in and facilitate agile and scrum meetings, including sprint planning, daily scrums or standups, sprint check-ins, sprint reviews, and sprint retrospectives.

Megamax Services is a complete IT solution provider company primarily focused on IT infrastructure setup and support and Service Management (IMS), headquartered in Noida, with a global presence in the USA and UK. As a long-term strategic business partner, Megamax Services delivers IMS excellence through Managed IT Services, Consulting & Training Services, Infrastructure & Technology Solutions, and Service Management Consulting Solutions to its customers globally. Its customers are medium to large businesses where the IT function plays a vital role as a business driver.
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358786.67/warc/CC-MAIN-20211129164711-20211129194711-00258.warc.gz
CC-MAIN-2021-49
1,375
8
http://pycdep.sourceforge.net/Getting%20Started.html
code
You’ve installed pycdep and obviously now you want to use it. The way pycdep works is as follows. Run the following command:

python pycdep.py --sandbox mysandbox -R /path/to/code --common-root-folder code --strip-common-root-folder

This will run pycdep.py. The --sandbox option causes pycdep to use folder mysandbox as a sandbox folder, i.e. a folder where intermediary files (should they be needed) will be saved. Also, if you don’t specify a path to a log file, the log file will end up in the sandbox folder. The -R option adds /path/to/code as a directory that has to be recursively scanned and processed. The --common-root-folder option specifies that path names in the output will be truncated so that they start with the folder specified as --common-root-folder. The --strip-common-root-folder option causes the common root folder to be stripped from the pathnames. To clarify the meaning of the --common-root-folder and --strip-common-root-folder options, here’s an example. Suppose you have the following file structure:

/path/to/code/a.cpp
/path/to/code/a.h
/path/to/code/lib/b.cpp
/path/to/code/lib/b.h

If you specify --common-root-folder code, the path names in the resulting prolog and graphviz files will be truncated as follows:

code/a.cpp
code/a.h
code/lib/b.cpp
code/lib/b.h

If you additionally specify --strip-common-root-folder, the path names in the resulting prolog and graphviz files will be truncated as follows:

a.cpp
a.h
lib/b.cpp
lib/b.h

Load the generated prolog program in swi-prolog (type the following on the command line/shell):

swipl -f dependencies.pl

You arrive in an interactive prompt, and now you can already perform prolog queries. The generated prolog file contains many examples of predefined queries. A useful one is the full_report query. It will write all kinds of interesting facts about your source code in a text report. Use its source. 
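The effect of the two path options can be sketched in a few lines of Python. This is a hypothetical re-implementation for illustration, not pycdep's actual code; the function name `truncate_path` is invented here:

```python
def truncate_path(path, common_root, strip=False):
    """Mimic --common-root-folder / --strip-common-root-folder:
    drop everything before the common root folder; with strip=True,
    drop the root folder itself as well."""
    parts = path.split('/')
    idx = parts.index(common_root)              # locate the common root folder
    kept = parts[idx + 1:] if strip else parts[idx:]
    return '/'.join(kept)

paths = ["/path/to/code/a.cpp", "/path/to/code/lib/b.h"]
print([truncate_path(p, "code") for p in paths])
# → ['code/a.cpp', 'code/lib/b.h']
print([truncate_path(p, "code", strip=True) for p in paths])
# → ['a.cpp', 'lib/b.h']
```

The two print statements reproduce the truncated and stripped listings shown above.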
Here’s an example prolog query that finds all the files that include file ‘lib/b.h’:

?- findall(F, F depends on 'lib/b.h', Result), write(Result).

Here’s how to dump all the include dependencies in a .dot file for visualization using graphviz (inside prolog):

?- findall([F1,F2], F1 includes F2, Result), to_graphviz('test.dot', Result).

(See the documentation inside the generated prolog file for more examples.) To generate a .png from the .dot file, you can do (on a command line):

dot -Tpng test.dot -o test.png

If you are not too keen on learning prolog to get information about the system, you can try out the very experimental natural language front-end. To be able to use this front-end you need to load two additional prolog files: intuitivequery.pl and categories.pl. The file intuitivequery.pl implements an interpreter for a dialect of a superset of a subset of the artificial-intelligence markup language (AIML). The file categories.pl defines the natural language queries using an AIML-like syntax. Unlike AIML, the syntax used in categories.pl is not XML based, which makes it less cumbersome to edit. Here’s how to load the generated prolog file together with the natural language interface, and to start up the natural language input loop:

swipl -g "[dependencies, intuitivequery, categories], loop."

You can now type queries like:

?- Show me the header files in project 'lib' ?
?- Which header files are included by noone ?

Please see categories.pl and tests.pl for some inspiration about possible queries. 
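The shape of the .dot file that the to_graphviz step produces can be approximated in Python. This is a hypothetical sketch of turning (includer, included) pairs into graphviz DOT text, not pycdep's own implementation:

```python
def to_dot(pairs):
    """Render (includer, included) pairs as a graphviz digraph in DOT syntax."""
    lines = ["digraph includes {"]
    for src, dst in pairs:
        lines.append('    "%s" -> "%s";' % (src, dst))   # one edge per dependency
    lines.append("}")
    return "\n".join(lines)

pairs = [("a.cpp", "a.h"), ("lib/b.cpp", "lib/b.h"), ("a.cpp", "lib/b.h")]
print(to_dot(pairs))
```

Writing the returned string to test.dot gives a file you can render with the `dot -Tpng test.dot -o test.png` command shown above.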
To get an overview of the command line options, type:

python pycdep.py --help

As a result you get:

usage: pycdep.py [-h] [-s SANDBOX] [-l LOGFILE] [-V VERBOSITY] [-r REPORTNAME] [-v] [-p PREFIXLENGTH] [-i] [-I INCLUDEDIR] [-R RECURSIVEINCLUDEDIR] [-X EXCLUDEDIR] [-Y RECURSIVEEXCLUDEDIR] [-C CPPSUFFIX] [-H HEADERSUFFIX] [-S SEPARATOR] [-P PROLOGDATABASENAME] [-d HIERARCHYDEFINITION] [-f COMMONROOTFOLDER] [-c]

The available command line options are:

-h, --help: show this help message and exit
-s SANDBOX, --sandbox SANDBOX: generate intermediate files in the SANDBOX folder (if any)
-l LOGFILE, --logfile LOGFILE: choose how to name the logfile
-V VERBOSITY, --verbosity VERBOSITY: choose log level (50 = CRITICAL, 40 = ERROR, 30 = WARNING, 20 = INFO, 10 = DEBUG, 0 = NOT SET)
-r REPORTNAME, --report-name REPORTNAME: generate a report with name REPORTNAME
-v, --version: show version and quit
-p PREFIXLENGTH, --prefix-length PREFIXLENGTH: max number of path prefixes to keep (to avoid duplicate names)
-i, --case-insensitive: consider file names to be case insensitive (win32)
-I INCLUDEDIR, --include-dir INCLUDEDIR: add a directory to examine non-recursively
-R RECURSIVEINCLUDEDIR, --recursive-include-dir RECURSIVEINCLUDEDIR: add a directory to examine recursively
-X EXCLUDEDIR, --exclude-dir EXCLUDEDIR: exclude a directory from analysis
-Y RECURSIVEEXCLUDEDIR, --recursive-exclude-dir RECURSIVEEXCLUDEDIR: exclude a directory and all its subdirectories from analysis
-C CPPSUFFIX, --cppfile-suffix CPPSUFFIX: define file extension of c/c++ files (default: cpp)
-H HEADERSUFFIX, --headerfile-suffix HEADERSUFFIX: define file extension of header files (default: h)
-S SEPARATOR, --path-separator SEPARATOR: define an additional path separator (default: '/')
-P PROLOGDATABASENAME, --prolog-name PROLOGDATABASENAME: define the name of the prolog database that will be saved
-d HIERARCHYDEFINITION, --hierarchy-definition HIERARCHYDEFINITION: location of hierarchy.txt (include constraint specification)
-f COMMONROOTFOLDER, --common-root-folder COMMONROOTFOLDER: when displaying a file path, always start from the common root folder if possible
-c, --strip-common-root-folder: when displaying a file path starting from a common root folder, omit the common root folder from the displayed path
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118851.8/warc/CC-MAIN-20170423031158-00367-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
5,695
32
https://www.freelancer.com/projects/shopify-site/lay-out-shopify-store-migrate/
code
We have about 80 products made by Jenvey Dynamics Ltd on our store at [url removed, login to view], hosted on Magento. We want to migrate these products to a new store on Shopify that has a similar look to, and the branding of, the manufacturer's website ([url removed, login to view]). We are the official US reseller for this company. For the migration, we have a data dump of the products in a CSV file. The Magento Attribute Set should be used as the category (collection) in Shopify. For the store design, we would like to have a couple of collections to navigate to directly from the header, and a contact-us page where an email can be sent to us with questions. Besides that, the products should be front and center. We have a number of images we can provide for use as the hero. In general, the desire is to keep the store simple and functional. We're not looking to have a custom template built, but are open to suggestions of templates to buy. In your bid, please state the free or paid template you suggest using for this site.
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825363.58/warc/CC-MAIN-20181214044833-20181214070333-00061.warc.gz
CC-MAIN-2018-51
1,034
4
https://www.sqlservercentral.com/Forums/Topic1456217-1549-1.aspx
code
I have an alert set up to email me whenever one of my mirrors moves into state 6 ("connection with mirror lost"). Most times it loses the connection for a very brief time and doesn't reach the threshold for a failover (a network issue). This of course triggers an alert, when really I don't want to be alerted until the connection has been lost for 5 seconds. Will the "Delay Between Responses" option on the alert properties dialog box (set to 5 seconds, of course) get me where I want to be, or will I have to employ some more complicated solution? Or should I simply not monitor for state 6 and only monitor for when failovers have occurred (states 7 & 8)?
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887535.40/warc/CC-MAIN-20180118171050-20180118191050-00447.warc.gz
CC-MAIN-2018-05
657
2
http://news.sys-con.com/node/2671217
code
By Jason Bloomberg | May 21, 2013 03:30 PM EDT

When ZapThink last wrote about Business Process Management (BPM) in the Cloud in March 2012, we challenged both vendors and BPM customers to rethink their approach to BPM software, eschewing a heavyweight middleware approach for the lightweight, hypermedia-oriented approach that Representational State Transfer (REST) encourages. And while we did generate some short-lived buzz, most of the response – or lack thereof – was little more than a resounding silence. True, work on Cloud-friendly, REST-based BPM continues in certain dusty corners of academia, most notably in the research of Cesare Pautasso, a professor at the University of Lugano in Switzerland. But in spite of his notable contributions to Thomas Erl’s SOA with REST book, the enterprise software and Cloud marketplaces have largely either ignored or misunderstood his research as well as ZapThink’s on this topic. While it’s amusing to theorize a vast vendor conspiracy, positing middleware dinosaurs actively working to distract their customer base from lighter weight, Cloud-friendly approaches, the reality is likely to be far more mundane. People just don’t get it. Or to be precise, our audience doesn’t get how all the pieces—BPM, REST, Cloud, and even a bit of SOA—fit together. To help resolve this confusion, let’s resort to an age-old technique: let’s draw some pictures.

Framing the Cloud-Friendly BPM Problem

Let’s start this discussion with an illustration of an admittedly simplistic business process involving one person and some back-end system, as shown in Figure 1 below. Note that in the figure above, the user might tackle a few tasks, and then the server takes over, executing a few tasks on its own. While the server is busy doing its thing, the user might query the server as to the current status of the process. So far so good, but we don’t want our server to serve only one user at a time. 
After all, the whole point of the client/server pattern is that it is many to one. As a result, we need to introduce the notion of a process instance. For the sake of simplicity let’s assume that we don’t have more than one person participating in a particular instance at the same time. But we might have multiple people each running their own instance of a process, for example, completing a purchase on a Web site, as shown in Figure 2 below. In the figure above, the BPM engine running on the server spawns a process instance to deal with the interactions with the user. If multiple users initiate the same process, the server can instantiate as many process instances as necessary, and the engine keeps track of where every user is in their instance—in other words, the instance state. How to keep track of all this state information in a scalable, robust manner is at the core of numerous distributed computing challenges. Today’s BPM engines generally run on Enterprise Service Buses (ESBs), which maintain state by spawning threads—short-lived, specialized object instances that run in the execution environment of the ESB. But while threads are short-lived, process instances might take days or weeks to complete, and furthermore, threads are specific to the execution environment, making cross-ESB processes difficult to implement. For these reasons, we call state management the Achilles Heel of traditional, heavyweight (Web Services-based) SOA. If such ESB-centric issues weren’t bad enough, the Cloud introduces a new wrinkle. Because we want to run our server in the Cloud, we don’t want to use it to maintain any state information, because we expect virtual machine (VM) instances to fail. In the Cloud, we provide automated recovery from failure rather than avoiding failure. However, if we store all the state information in the underlying persistence tier (not shown), then we limit our scalability, since every time anyone clicks a link, we must update a database somewhere. 
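The instance-per-user bookkeeping described above can be sketched minimally in Python. The class and method names here are hypothetical, a toy for illustration rather than any real BPM engine's API:

```python
import itertools

class BpmEngine:
    """Toy BPM engine: one process definition, many per-user instances."""
    def __init__(self, steps):
        self.steps = steps
        self._ids = itertools.count(1)
        self.instances = {}             # instance id -> current step index

    def start(self):
        iid = next(self._ids)
        self.instances[iid] = 0         # every instance begins at the first step
        return iid

    def advance(self, iid):
        self.instances[iid] += 1        # move this instance to its next step

    def status(self, iid):
        """The 'query the server for current status' interaction."""
        return self.steps[self.instances[iid]]

engine = BpmEngine(["cart", "payment", "confirmation"])
alice, bob = engine.start(), engine.start()
engine.advance(alice)
print(engine.status(alice))  # → payment
print(engine.status(bob))    # → cart
```

The dict of instance states is exactly the information that becomes hard to manage once instances live for weeks and must survive server failures, which is the problem the article goes on to address.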
What we need is a better way of dealing with state information that both allows our BPM engines to be Cloud friendly, and also frees us from the limitations of our ESBs. Or perhaps we must reinvent our ESBs to work in the Cloud. However you slice the problem, Hypermedia-Oriented Architecture (HOA) has the answer.

HOA to the Rescue

As ZapThink has discussed before, many people misconstrue REST as an API style that features a uniform interface, when in reality it’s a style of software architecture for building hypermedia systems. Why is the latter definition a better one? Because Roy Fielding, its creator, says so. That being said, work continues on the architectural context of REST, perhaps extending Fielding’s original thinking, as well as beyond the API style that most techies think of when they think about REST. We call this extension of the REST architectural style Hypermedia-Oriented Architecture, or HOA. The central principle of HOA is the HATEOAS REST constraint: hypermedia is the engine of application state. In essence, HOA separates two different types of state information: application state and resource state. Application state corresponds to the user’s place in the runtime workflow consisting of hyperlinked representations, while resource state remains on the server, keeping track of persisted state information and state information that multiple users share. On the one hand, HATEOAS requires hypermedia to manage all state information specific to individual clients, and on the other hand, delegates all other state information to the server. REST also specifies a set of verbs for querying and changing state information: GET for querying resource state without changing it, and three verbs that change the resource state: POST for initializing a resource, PUT for updating a resource, and DELETE for deleting a resource (assuming we’re using HTTP as our transport protocol). 
Note, therefore, that all verbs other than GET change the resource state, while all verbs, including GET, change the application state. Furthermore, all state information appears in the messages between client and server: the requests from client to resource, and the representations from resource to client. By extension, HATEOAS requires us to only use POST, PUT, or DELETE when—and only when—we must update resource state. With this principle in mind, we have a real problem with the process in Figure 2. Note that the server is maintaining application state, which HOA forbids. But we can’t solve this problem simply by picking up the process instance from the server and sticking it in the client and expecting it to work properly, because sometimes we really do want to update the resource state. We somehow need to separate the process instance into two (or more) pieces so that hypermedia on the client can be the engine of application state while the BPM engine remains the engine of resource state. Figure 3 below illustrates this principle. The client sends a POST to the server, which initializes a resource. In this case, that new resource sends a hypermedia representation to a stateless intermediary which caches the representation. This hypermedia representation is essentially an abstraction of a dynamic set of hyperlinked representations, for example, one or more php scripts that can generate a set of hyperlinked Web pages. Once the intermediary has the hypermedia representation, it returns the initial representation (for instance, a Web page) to the client. From that point on, as long as the client is navigating the application via hypermedia, changing only the application state as the user moves from one step in the process to the next, there is no need to change the resource state—and thus, no further POSTs, PUTs, or DELETEs are allowed. The client may perform a GET, because GETs change only the application state. 
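The verb semantics above can be illustrated with a toy in Python (a hypothetical sketch, not a real framework): GET leaves resource state untouched, PUT mutates it, and application state, the user's place in the workflow, lives entirely on the client.

```python
class Resource:
    """Server-side resource state: shared and persisted."""
    def __init__(self):
        self.state = {}

    def get(self, key):
        """GET: query resource state without changing it."""
        return self.state.get(key)

    def put(self, key, value):
        """PUT: the only call here that updates resource state."""
        self.state[key] = value

class Client:
    """Client-side application state: this user's place in the process,
    carried by the hypermedia the client is navigating."""
    def __init__(self):
        self.current_step = "browse"

resource = Resource()
client = Client()
client.current_step = "checkout"       # application state changes; no server call needed
resource.put("widgets_in_stock", 41)   # only now does resource state change
print(resource.get("widgets_in_stock"))
```

Per HATEOAS, the client advances through steps without touching `resource` at all; `put` is reserved for the moments the process genuinely must update shared state.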
The intermediary may be able to handle the GET on its own (if the necessary information is resident in the cache) or can turn around and perform a GET on an underlying resource, if necessary. Furthermore, the application state may change without any interactions with the intermediary or the server by leveraging programmatic capabilities on the client. If the client is an arbitrary piece of software then this capability is trivial. But even if the client is a browser, it’s possible to change the state of an application without fetching anything from the server. In fact, there are many ways to accomplish this feat. Sometimes, of course, a hypermedia application, which we might also call a HOA process, must update resource state, for example, when it’s time to process the user’s credit card or change the number of widgets in inventory. Then—and only then—do we perform a PUT. The most important characteristic of the process in Figure 3 is the fact that the intermediary is entirely stateless. If for some reason the VM that is hosting the hypermedia representation that is serving the client crashes, the Cloud environment must simply spawn a replacement and reload the same hypermedia representation as before. The client won’t lose its place because the hypermedia on the client are maintaining the application state. Similarly, we can horizontally scale the middle tier however and whenever we like. Instead of one VM hosting a particular hypermedia representation, we could have two or a hundred, and it doesn’t matter which one responds to a particular GET from the client.

Combining HOA Processes and Traditional BPM

The problem with the example in Figure 3, of course, is that every client’s process is separate from every other client’s process. However, most business processes in today’s organizations involve multiple parties—either multiple people or multiple enterprise applications or some combination. 
On first glance, HOA doesn’t address such complex processes, since HATEOAS only deals with application state, not resource state. Fortunately, HOA works perfectly fine in this broader context as well, because it calls for a separation of application and resource state while providing for multiple ways to update resource state. After all, POST, PUT, and DELETE all update resource state, and any user can execute these verbs for a particular resource. Figure 4 below illustrates this more complex process. In the figure above, a POST from a client instructs the BPM engine to instantiate a process instance on the server as in Figure 2. The first step in this process creates a hypermedia representation for the client to interact with as in Figure 3. Meanwhile, the resource state may change via any event, including a server-generated event or the action of a different user. If a user executes a PUT on the client to the hypermedia representation on the intermediary, then that representation turns around and PUTs to the appropriate underlying resource. Or perhaps the client PUTs to an underlying resource directly. Either way, the PUT goes to a hyperlink the client obtained from a previous representation at an earlier step in the process. We might call the process running on the server a Composite RESTful Service, because the intermediary may abstract the entire server-based process via one or more RESTful URIs. A simple example of a Composite RESTful Service is a chat window application. Multiple users share the same chat session, so clearly the chat session state is part of the resource state. There are a few essential points to keep in mind about the illustration in Figure 4. First, the intermediary remains stateless and therefore Cloud-friendly. We must maintain resource state in the persistence tier, but since we’ve offloaded the maintenance of application state to the client, we won’t be overburdening our database. 
We may also interact with our Composite RESTful Service via RESTful interactions, an essential benefit that Prof. Pautasso emphasizes in his research. And finally, not only is the middle tier horizontally scalable and elastic, so is the client tier—because every user brings their own client to the process.

The ZapThink Take

With the addition of an appropriate approach to building a RESTful Service abstraction, Figure 4 also serves as an illustration of how to implement RESTful SOA, what ZapThink refers to as “next generation” SOA in our Licensed ZapThink Architect (LZA) course as well as in my new book, The Agile Architecture Revolution. We therefore have a single, simple diagram bringing together the worlds of SOA, BPM, Cloud, REST, and HOA. The secret to getting all these architectural trends to work well together centers on how we deal with state information. We must first separate application state from resource state, and then subsequently take the conceptual leap to understanding that the best way to implement our business processes is by combining HOA processes with Composite RESTful Services. Once we make this leap, however, the pieces of this complicated puzzle finally fall into place. Image credit: Bruce Guenter
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188891.62/warc/CC-MAIN-20170322212948-00337-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
20,789
64
http://ux.stackexchange.com/questions/tagged/information+design-principles
code
Advice on layout/presentation of information to user in web page

I'm developing a website that helps connect nightlife promotion companies with consumers. Each nightlife company hosts events at various venues and on various nights. I want to give users/consumers ... Aug 12 '11 at 19:56
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010115284/warc/CC-MAIN-20140305090155-00009-ip-10-183-142-35.ec2.internal.warc.gz
CC-MAIN-2014-10
2,297
53
https://forums.wincustomize.com/143931/page/1/#1266311
code
If you have a new gaming PC, believe me installing XP is HEEEEELLLLLLLL... XP SP1 official genuine (I know you can upgrade with Windows Update; I'm talking here about the SETUP):
- Doesn't like SATA HDDs
- Hmmm, dual core?!?! It sees my Intel Core 2 Duo and AMD Athlon 64 X2 as a single 800MHz CPU
- PCI-Express Ethernet card: everything (including Windows updates) succeeds once every 5 times. The rest of the time, the file is corrupted, or the download stalls at 99%. Of course, the millisecond you upgrade to SP2, everything works perfectly.
This can take up to a full day of work. Vista: 20 min and you're done. The printer works too, without putting a disk in or downloading anything from the support site. FAST startup, programs start faster. The second you log in to your account, even while your startup programs are starting up, you can start using your computer right away; the start menu won't disappear in front of you. And the trial-software system fails in most programs (that means it considers that you purchased it and that a serial was inserted. Heck, you even get a "Thank you for purchasing xyz program" message). Many never-before-fixed bugs since Win2000 are finally fixed:
- Programs don't lose focus
- Tooltip on the start menu does not appear behind the menu
- Improved network system
- FIXED: plug/unplug a USB device multiple times on the same USB port and the port stops working until you reboot the computer
- MUCH better resource management
- A REAL firewall system (quite good actually)
BTW: Windows Vista runs quite smoothly with:
- 512MB of RAM @ 333MHz
- GeForce FX 5200 AGP
- AMD Athlon XP 2500+
- Nforce 2
- Aero GUI with transparency
The reason why it takes so much RAM is SuperFetch. Disable that service, and see your programs take more time to load, yet see your used RAM amount drop. The only reason why it's slow on games is that there are no optimized drivers for the sound card, motherboard and video card.
Remember, Windows XP had been out since 2001 and was based on Win2000, so companies were very comfortable programming drivers for it. Oh, talking about games, I got all my old 1999-and-older games WORKING under the Vista RC2 public beta. Yes, they work, without any hack!
- WarCraft II (original for Windows)
- Emperor: Battle for Dune
- SimCity 1
- Prince of Persia DOS version
- TestDrive for DOS
- Need For Speed High Stakes
Anyway, in my case, I have no choice but to upgrade...
========================================================================
A bit more interesting question: since Vista came out, we see many people change to Mac or Linux... Do you think releasing Vista was bad for Microsoft? What do you think caused Vista to make people change to a different OS? When Windows XP came out, it added fewer features over Win2000 than Vista has over XP. Windows XP's main new features:
- New ugly (for my taste) skin
- Dog in the search
- Side column
- Removed the ability to make your own custom folder style
- Activation system
- Welcome screen
- Fast User Switching
Heck, I had been using Win2000 since Jan. 2006, when I built my new computer. I had a Windows XP SP1 CD that I won in a contest, and I never bothered to look at it.
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302715.38/warc/CC-MAIN-20220121010736-20220121040736-00398.warc.gz
CC-MAIN-2022-05
3,216
45
https://docs.complycube.com/api-reference/
code
Overview of the ComplyCube API Reference. ComplyCube is a turnkey cloud and API platform for automating Identity Verification (ID&V), Anti-Money Laundering (AML), and Know Your Customer (KYC) compliance workflows. The ComplyCube API follows REST principles - it has predictable, resource-oriented URLs and uses HTTP response codes. All API responses, including errors, return JSON. We’ve designed our documentation to be easy to follow. It is grouped by resource type and offers practical examples. As a general guideline, if a property has an empty or null value, we drop it from the JSON unless there’s a strong semantic reason for its existence.
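As a sketch of the serialization convention described above (the field names below are hypothetical, not taken from the actual ComplyCube schema), a response builder that drops empty or null properties could look like this:

```python
import json

def to_response_json(resource: dict) -> str:
    """Serialize a resource to JSON, dropping properties whose value is
    None or empty, mirroring the convention described in the docs."""
    cleaned = {k: v for k, v in resource.items() if v not in (None, "", [], {})}
    return json.dumps(cleaned, sort_keys=True)

# Hypothetical resource: "mobile" and "metadata" are dropped on output.
client = {"id": "c-123", "email": "jane@example.com", "mobile": None, "metadata": {}}
print(to_response_json(client))  # {"email": "jane@example.com", "id": "c-123"}
```

The same filter is what a client can assume when parsing responses: a missing key and a null value are equivalent.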
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648850.88/warc/CC-MAIN-20230602172755-20230602202755-00223.warc.gz
CC-MAIN-2023-23
652
5
http://www-01.ibm.com/software/globalization/topics/charsets/collection.html
code
Collection or character set

The character set or collection is either the entire set or a subset of characters that are assigned code points in a coded character set. It is typically a collection of all the characters needed for a given language or application. Knowledge of the character set contained in a coded character set is needed in some situations. See Figure 4. Terms like the ASCII repertoire, the Latin-1 or Latin-5 character set, the portable character set in the UNIX and Linux worlds, or the EBCDIC invariant or syntactic character set, are references to character sets, without any specific encodings being associated with them. These character sets have been encoded using different encoding schemes to create different compatible or interchangeable coded character sets. Some of the character sets are specified for portability or programming language syntactic reasons and must be contained in any coded character set used for that environment. Character set identification assists in the differentiation of one version of a coded character set from the next, especially when the set expands in size over time. If a coded character set has reached its maximum possible size (per the encoding scheme definition), its maximum character set will be fixed. If there is still room, a coded character set can grow in size. Often, the same code page identifier is retained for the new coded character set, and the previous assignments of characters remain unchanged. In such cases, one cannot distinguish the old from the new using the code page identifier alone. The character set identification will help with this. IBM standards call for using both character set and code page identifiers (resulting in different CCSIDs), and for retaining the same code page identifier for the new coded character set.
Knowledge of the character set defined in the coded character set helps in managing the flow of characters from a system that supports an expanded set to a system that is still back level. It helps in differentiating conversion resources created from the old and the new definitions. The identification of character sets has been dealt with to different degrees of precision in the industry -- from loosely identifying closely related sets to distinguishing even a single-character difference. IBM standards permit, and have registered, character sets with their identifiers. In the ISO/IEC 10646 collection, numbers identify open as well as fixed subsets. Collection numbers have also been assigned to the fixed repertoire of each major edition or amendment to the standard (the equivalent of a selected version identifier for both ISO/IEC 10646 and Unicode) as the standard grows in size. While the term character set is usually applied to the set of graphic characters such as the letters A to Z, control characters such as Horizontal Tab or Carriage Return that are also used in plain text are included in coded character set definitions as well. Typically a set of control characters is also included in a coded character set definition. The set of control characters is associated with the encoding scheme definition in most cases. ISO/IEC 2022 provides a mechanism for invoking and using different control character sets from the ISO Registry. They can be found in some terminal data stream specifications and their emulations. The character set identification is also useful in knowing the set of characters that are generated from a particular keyboard layout or supported by a font resource in a printer or display. These are examples of use of character sets outside the world of coded character sets.
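The distinction between a character set and the coded character sets that encode it can be seen concretely in Python: the character é belongs to the Latin-1 repertoire, and different encoding schemes assign it different byte sequences.

```python
# U+00E9 LATIN SMALL LETTER E WITH ACUTE: one character, several encodings.
ch = "\u00e9"  # é
print(ch.encode("latin-1"))  # b'\xe9'      single byte in ISO 8859-1
print(ch.encode("cp1252"))   # b'\xe9'      Windows-1252 agrees for this code point
print(ch.encode("utf-8"))    # b'\xc3\xa9'  same character, different scheme
```

Identifying the character set alone (the repertoire containing é) is therefore not enough to interpret bytes; the coded character set, i.e. repertoire plus encoding, must be known.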
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123560.51/warc/CC-MAIN-20170423031203-00174-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
3,615
9
https://labnario.com/page/4/
code
Huawei AR routers have an easy and effective CPU usage monitoring tool. They generate an alarm when CPU usage reaches 80%. When CPU usage falls back to 75%, a recovery alarm (clear alarm) is generated. This is the default behaviour, but these values can be easily changed in order to help optimize system performance and ensure system stability. Let's configure the CPU usage alarm threshold as 85% and the recovery alarm threshold as 80%. The following command can be used for that:

<labnario>system-view
[labnario]set cpu-usage threshold 85 restore 80
Info: Succeeding in setting task cpu usage threshold 85 restore 80.
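The raise/clear behaviour described above is a simple hysteresis: the alarm fires when usage crosses the threshold and clears only once usage falls back to the restore value. A minimal Python sketch (my own illustration, not Huawei code) using the configured 85%/80% values:

```python
def alarm_events(samples, threshold=85, restore=80):
    """Return (event, usage) pairs for a sequence of CPU usage samples,
    raising an alarm at >= threshold and clearing it at <= restore."""
    events, alarmed = [], False
    for usage in samples:
        if not alarmed and usage >= threshold:
            events.append(("alarm", usage))
            alarmed = True
        elif alarmed and usage <= restore:
            events.append(("clear", usage))
            alarmed = False
    return events

print(alarm_events([70, 86, 84, 79, 90]))
# [('alarm', 86), ('clear', 79), ('alarm', 90)]
```

Note the 84% sample raises nothing: once alarmed, only a drop to the restore value clears the state, which is exactly what prevents alarm flapping around the threshold.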
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649343.34/warc/CC-MAIN-20230603201228-20230603231228-00583.warc.gz
CC-MAIN-2023-23
619
3
https://bbs.archlinux.org/viewtopic.php?pid=1247526
code
Here is a screenshot of where the machine hangs when starting (this is within VirtualBox, Win7 host and an Arch guest)--> http://i.imgur.com/4KPLmY8.png Not sure how to troubleshoot this, the `Fallback initramfs` has the same issue. Last edited by raj (2013-03-22 21:21:19) how did you install Arch ? Did you follow the Beginner's guide to the letter? Did you install grub or syslinux ? There's no such thing as a stupid question, but there sure are a lot of inquisitive idiots ! Sorry, I should have stated this was not a new install. It was due to a vbox guest-utils update I got off -Syu a few days ago. Yesterday's update resolved the issue of Xorg not finding an available screen. Case closed, thanks for the reply. Please edit your first post to tag the subject [solved]. Thanks.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123491.68/warc/CC-MAIN-20170423031203-00257-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
785
8
https://vivapinkfloyd.blogspot.com/2008/08/how-to-make-ut2004-use-oss-on-debian.html
code
Some of us playing Unreal Tournament 2004 may be familiar with the error open /dev/[sound/]dsp: Resource temporarily unavailable. This is usually caused because the sound system is busy (for example, you have an audio player open, or a paused movie). The simplest way to get over this problem is to install the alsa-oss package, which contains the aoss wrapper allowing the use of the ALSA OSS library. In Debian, type as root: apt-get install alsa-oss Then run UT2004 like this: Depending on where your game is installed. The good thing about this is that you can now listen to music in a player separate from the one included in the game and also hear the UT2004 sounds.
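The actual launch command did not survive in the post; assuming a typical install location (the path below is hypothetical, adjust it to where your game lives), it would look something like:

```shell
# Run UT2004 through the aoss wrapper so its OSS sound calls
# go through ALSA (path is hypothetical; adjust to your install).
aoss /usr/local/games/ut2004/ut2004
```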
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301309.22/warc/CC-MAIN-20220119094810-20220119124810-00708.warc.gz
CC-MAIN-2022-05
670
5
https://kidakaka.com/blog/2014/04/10/ubuntu-one-closes/
code
With Ubuntu came a new feature … Ubuntu One. I chanced on it at pretty much the same time as Dropbox, and ended up opting to use Dropbox instead. However, Ubuntu One was instantly integrated with all of my Ubuntu installations … there was some merit in this system. Did I purchase it? No. I did not pay for this service since for the same price (free), there were many other services providing more space (Dropbox can give you up to 20GB, and so can Copy). Suffice to say that I am aware of Ubuntu One, but never really depended on it for my file sharing/saving needs. Today, I saw this mail in my inbox –

We are writing to you to notify you that we will be shutting down the Ubuntu One file services, effective 1 June 2014. This email gives information about the closure and what you should expect during the

As of today, it will no longer be possible to purchase storage or music from the Ubuntu One store. The Ubuntu One file services apps in the Ubuntu, Google, and Apple stores will be updated appropriately. As always, your content belongs to you. You can simply download your files onto your PC or an external hard drive. While the service will stop as of 1 June, you will have an additional two months (until 31 July 2014) to collect all of your content. After that date, all remaining content will

If you have an active annual subscription, the unused portion of your fees will be refunded. The refund amount will be calculated from today’s

We know you have come to rely on Ubuntu One, and we apologise for the inconvenience this closure may cause. We’ve always been inspired by the support, feedback and enthusiasm of our users and want to thank you for the support you’ve shown for Ubuntu One. We hope that you’ll continue to support us as together we bring a revolutionary experience to new devices. The Ubuntu One team

Lesson to be learned here – even if you are closing down, do so with grace and always inform your customers about when you are closing shop.
I will not say I am sad to see this, that would be lying … in fact this is the nature of things, either you consolidate into a bigger service or you go down.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476180.67/warc/CC-MAIN-20240303011622-20240303041622-00082.warc.gz
CC-MAIN-2024-10
2,154
24
https://gregives.co.uk/
code
How I changed my font loading strategy and reduced the first stage fonts by over 100 kB Why I decided to use a CMS for my website and why I chose to use Forestry.io Learning how to write regular expressions has been a great investment of my time, you should learn regex too! A computer vision system for a chess-playing robot. Tracks the state of a chessboard over time, using Python, scikit-learn and OpenCV. Progressive web app designed to share photos at festivals around the world, created for a university module on modern web development.
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178367183.21/warc/CC-MAIN-20210303165500-20210303195500-00461.warc.gz
CC-MAIN-2021-10
544
5
https://boardgamegeek.com/thread/1149950/dual-zendo
code
I need a healer! We've started playing a variant of Zendo that we call dual Zendo. Instead of making one rule, the master makes two (each of which should be a bit simpler than whatever your current level is) and builds the usual 2 starting koans. The first koan should conform to both rules and is marked white. The second should fail both rules and be marked black. The game now proceeds as normal, except that each koan the students build is checked against both rules and marked white or black if it conforms to/fails both rules. However, if it fails one of the rules and fits the other, it is instead marked green. During a mondo a student may guess that a koan will be marked green by putting both his white and his black guessing stones into his hand. When making a guess, students must make a guess as to what the 2 rules are, and in case of a wrong guess the master must build a counterexample as usual (the counterexample may be a "green" koan). My last rules were: A koan must consist of exactly 2 colors. A koan must have fewer small pieces than other (medium & large) pieces. This is a neat idea which makes the most sense (I think) for a group that's played a lot of binary, one-rule Zendo. Logically speaking, of course, two rules are equivalent to one ambiguous rule assessed using a supervaluationist semantics: that is, it unambiguously applies only if it applies on all readings, it is unambiguously false only if it fails to apply on all readings, and it is indeterminate otherwise. Once you have that formal structure, three-rule and n-rule generalizations are obvious. Of course, n-rule Zendo would be an unplayable mess for some n.
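The marking procedure can be sketched in a few lines; the koan representation and rule names below are my own illustration, not from the post:

```python
# A koan is a list of (size, color) pieces.

def mark(koan, rules):
    """'white' if the koan satisfies both rules, 'black' if it fails
    both, 'green' if it satisfies exactly one."""
    results = [rule(koan) for rule in rules]
    if all(results):
        return "white"
    if not any(results):
        return "black"
    return "green"

# The two example rules from the post:
def exactly_two_colors(koan):
    return len({color for _, color in koan}) == 2

def fewer_small_than_others(koan):
    small = sum(1 for size, _ in koan if size == "small")
    return small < len(koan) - small

koan = [("small", "red"), ("large", "red"), ("medium", "blue")]
print(mark(koan, [exactly_two_colors, fewer_small_than_others]))  # white
```

The same `mark` function generalizes to the n-rule variant mentioned above by passing more rules and refining the marking of partial matches.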
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221216051.80/warc/CC-MAIN-20180820082010-20180820102010-00674.warc.gz
CC-MAIN-2018-34
1,632
10
https://dumbseoquestions.com/q/canonical_on_site_search
code
My client has an ecommerce site. I'm trying to fix canonicals, among other things. I discovered Google indexes the URLs that come from searches performed inside the site. Should I put a canonical pointing to the homepage on every URL that is produced after someone performs an on-site search for a product or brand? Is this the correct practice?
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999163.73/warc/CC-MAIN-20190620065141-20190620091141-00559.warc.gz
CC-MAIN-2019-26
330
1
https://alvinalexander.com/java/jwarehouse/akka-2.3/akka-docs/rst/scala/persistence.rst.shtml
code
Akka/Scala example source code file (persistence.rst) .. _persistence-scala: ########### Persistence ########### Akka persistence enables stateful actors to persist their internal state so that it can be recovered when an actor is started, restarted after a JVM crash or by a supervisor, or migrated in a cluster. The key concept behind Akka persistence is that only changes to an actor's internal state are persisted but never its current state directly (except for optional snapshots). These changes are only ever appended to storage, nothing is ever mutated, which allows for very high transaction rates and efficient replication. Stateful actors are recovered by replaying stored changes to these actors from which they can rebuild internal state. This can be either the full history of changes or starting from a snapshot which can dramatically reduce recovery times. Akka persistence also provides point-to-point communication with at-least-once message delivery semantics. .. warning:: This module is marked as **“experimental”** as of its introduction in Akka 2.3.0. We will continue to improve this API based on our users’ feedback, which implies that while we try to keep incompatible changes to a minimum the binary compatibility guarantee for maintenance releases does not apply to the contents of the ``akka.persistence`` package. Akka persistence is inspired by and the official replacement of the `eventsourced`_ library. It follows the same concepts and architecture of `eventsourced`_ but significantly differs on API and implementation level. See also :ref:`migration-eventsourced-2.3` .. _eventsourced: https://github.com/eligosource/eventsourced Changes in Akka 2.3.4 ===================== In Akka 2.3.4 several of the concepts of the earlier versions were collapsed and simplified. In essence; ``Processor`` and ``EventsourcedProcessor`` are replaced by ``PersistentActor``.
``Channel`` and ``PersistentChannel`` are replaced by ``AtLeastOnceDelivery``. ``View`` is replaced by ``PersistentView``. See full details of the changes in the :ref:`migration-guide-persistence-experimental-2.3.x-2.4.x`. The old classes are still included, and deprecated, for a while to make the transition smooth. In case you need the old documentation it is located `here <http://doc.akka.io/docs/akka/2.3.3/scala/persistence.html>`_. Dependencies ============ Akka persistence is a separate jar file. Make sure that you have the following dependency in your project:: "com.typesafe.akka" %% "akka-persistence-experimental" % "@version@" @crossString@ Architecture ============ * *PersistentActor*: Is a persistent, stateful actor. It is able to persist events to a journal and can react to them in a thread-safe manner. It can be used to implement both *command* as well as *event sourced* actors. When a persistent actor is started or restarted, journaled messages are replayed to that actor, so that it can recover internal state from these messages. * *PersistentView*: A view is a persistent, stateful actor that receives journaled messages that have been written by another persistent actor. A view itself does not journal new messages, instead, it updates internal state only from a persistent actor's replicated message stream. * *AtLeastOnceDelivery*: To send messages with at-least-once delivery semantics to destinations, also in case of sender and receiver JVM crashes. * *Journal*: A journal stores the sequence of messages sent to a persistent actor. An application can control which messages are journaled and which are received by the persistent actor without being journaled. The storage backend of a journal is pluggable. The default journal storage plugin writes to the local filesystem, replicated journals are available as `Community plugins`_. * *Snapshot store*: A snapshot store persists snapshots of a persistent actor's or a view's internal state. 
Snapshots are used for optimizing recovery times. The storage backend of a snapshot store is pluggable. The default snapshot storage plugin writes to the local filesystem. .. _Community plugins: http://akka.io/community/ .. _event-sourcing: Event sourcing ============== The basic idea behind `Event Sourcing`_ is quite simple. A persistent actor receives a (non-persistent) command which is first validated if it can be applied to the current state. Here, validation can mean anything, from simple inspection of a command message's fields up to a conversation with several external services, for example. If validation succeeds, events are generated from the command, representing the effect of the command. These events are then persisted and, after successful persistence, used to change the actor's state. When the persistent actor needs to be recovered, only the persisted events are replayed of which we know that they can be successfully applied. In other words, events cannot fail when being replayed to a persistent actor, in contrast to commands. Event sourced actors may of course also process commands that do not change application state, such as query commands, for example. .. _Event Sourcing: http://martinfowler.com/eaaDev/EventSourcing.html Akka persistence supports event sourcing with the ``PersistentActor`` trait. An actor that extends this trait uses the ``persist`` method to persist and handle events. The behavior of a ``PersistentActor`` is defined by implementing ``receiveRecover`` and ``receiveCommand``. This is demonstrated in the following example. .. includecode:: ../../../akka-samples/akka-sample-persistence-scala/src/main/scala/sample/persistence/PersistentActorExample.scala#persistent-actor-example The example defines two data types, ``Cmd`` and ``Evt`` to represent commands and events, respectively. The ``state`` of the ``ExamplePersistentActor`` is a list of persisted event data contained in ``ExampleState``. 
The persistent actor's ``receiveRecover`` method defines how ``state`` is updated during recovery by handling ``Evt`` and ``SnapshotOffer`` messages. The persistent actor's ``receiveCommand`` method is a command handler. In this example, a command is handled by generating two events which are then persisted and handled. Events are persisted by calling ``persist`` with an event (or a sequence of events) as first argument and an event handler as second argument. The ``persist`` method persists events asynchronously and the event handler is executed for successfully persisted events. Successfully persisted events are internally sent back to the persistent actor as individual messages that trigger event handler executions. An event handler may close over persistent actor state and mutate it. The sender of a persisted event is the sender of the corresponding command. This allows event handlers to reply to the sender of a command (not shown). The main responsibility of an event handler is changing persistent actor state using event data and notifying others about successful state changes by publishing events. When persisting events with ``persist`` it is guaranteed that the persistent actor will not receive further commands between the ``persist`` call and the execution(s) of the associated event handler. This also holds for multiple ``persist`` calls in context of a single command. The easiest way to run this example yourself is to download `Typesafe Activator <http://www.typesafe.com/platform/getstarted>`_ and open the tutorial named `Akka Persistence Samples with Scala <http://www.typesafe.com/activator/template/akka-sample-persistence-scala>`_. It contains instructions on how to run the ``PersistentActorExample``. .. note:: It's also possible to switch between different command handlers during normal processing and recovery with ``context.become()`` and ``context.unbecome()``. 
To get the actor into the same state after recovery, you need to take special care to perform the same state transitions with ``become`` and ``unbecome`` in the ``receiveRecover`` method as you would have done in the command handler.

Identifiers
-----------

A persistent actor must have an identifier that doesn't change across different actor incarnations. The identifier must be defined with the ``persistenceId`` method.

.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#persistence-id-override

.. _recovery:

Recovery
--------

By default, a persistent actor is automatically recovered on start and on restart by replaying journaled messages. New messages sent to a persistent actor during recovery do not interfere with replayed messages. New messages will only be received by a persistent actor after recovery completes.

Recovery customization
^^^^^^^^^^^^^^^^^^^^^^

Automated recovery on start can be disabled by overriding ``preStart`` with an empty implementation.

.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#recover-on-start-disabled

In this case, a persistent actor must be recovered explicitly by sending it a ``Recover()`` message.

.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#recover-explicit

If not overridden, ``preStart`` sends a ``Recover()`` message to ``self``. Applications may also override ``preStart`` to define further ``Recover()`` parameters such as an upper sequence number bound, for example.

.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#recover-on-start-custom

Upper sequence number bounds can be used to recover a persistent actor to past state instead of current state. Automated recovery on restart can be disabled by overriding ``preRestart`` with an empty implementation.

.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#recover-on-restart-disabled

Recovery status
^^^^^^^^^^^^^^^

A persistent actor can query its own recovery status via the methods

.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#recovery-status

Sometimes there is a need for performing additional initialization when the recovery has completed, before processing any other message sent to the persistent actor. The persistent actor will receive a special :class:`RecoveryCompleted` message right after recovery and before any other received messages.

If there is a problem with recovering the state of the actor from the journal, the actor will be sent a :class:`RecoveryFailure` message that it can choose to handle in ``receiveRecover``. If the actor doesn't handle the :class:`RecoveryFailure` message it will be stopped.

.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#recovery-completed

.. _persist-async-scala:

Relaxed local consistency requirements and high throughput use-cases
--------------------------------------------------------------------

If faced with relaxed local consistency requirements and high throughput demands, ``PersistentActor`` and its ``persist`` may sometimes not be enough for consuming incoming commands at a high rate, because the actor has to wait until all events related to a given command are processed before it can start processing the next command. While this abstraction is very useful for most cases, sometimes you may be faced with relaxed requirements about consistency; for example, you may want to process commands as fast as you can, assuming that the events will eventually be persisted and handled properly in the background, retroactively reacting to persistence failures if needed.

The ``persistAsync`` method provides a tool for implementing high-throughput persistent actors. It will *not* stash incoming commands while the journal is still working on persisting and/or user code is executing event callbacks.

In the example below, the event callbacks may be called "at any time", even after the next command has been processed.
The ordering between events is still guaranteed ("evt-b-1" will be sent after "evt-a-2", which will be sent after "evt-a-1", and so on).

.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#persist-async

.. note::

   In order to implement the pattern known as "*command sourcing*", simply call ``persistAsync(cmd)(...)`` right away on all incoming messages and handle them in the callback.

.. warning::

   The callback will not be invoked if the actor is restarted (or stopped) between the call to ``persistAsync`` and the journal's confirmation of the write.

.. _defer-scala:

Deferring actions until preceding persist handlers have executed
----------------------------------------------------------------

Sometimes when working with ``persistAsync`` you may find that it would be nice to define some actions in terms of *happens-after the previous ``persistAsync`` handlers have been invoked*. ``PersistentActor`` provides a utility method called ``defer``, which works similarly to ``persistAsync`` yet does not persist the passed-in event. It is recommended to use it for *read* operations and for actions which do not have corresponding events in your domain model.

Using this method is very similar to using the persist family of methods, yet it does **not** persist the passed-in event. The event will be kept in memory and used when invoking the handler.

.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#defer

Notice that ``sender()`` is **safe** to access in the handler callback and will point to the original sender of the command for which this ``defer`` handler was called.

The calling side will get the responses in this (guaranteed) order:

.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#defer-caller

.. warning::

   The callback will not be invoked if the actor is restarted (or stopped) between the call to ``defer`` and the point at which the journal has processed and confirmed all preceding writes.

.. _batch-writes:

Batch writes
------------

To optimize throughput, a persistent actor internally batches events to be stored under high load before writing them to the journal (as a single batch). The batch size dynamically grows from 1 under low and moderate loads to a configurable maximum size (default is ``200``) under high load. When using ``persistAsync`` this increases the maximum throughput dramatically.

.. includecode:: code/docs/persistence/PersistencePluginDocSpec.scala#max-message-batch-size

A new batch write is triggered by a persistent actor as soon as a batch reaches the maximum size or as soon as the journal has completed writing the previous batch. Batch writes are never timer-based, which keeps latencies at a minimum.

The batches are also used internally to ensure atomic writes of events. All events that are persisted in the context of a single command are written as a single batch to the journal (even if ``persist`` is called multiple times per command). The recovery of a ``PersistentActor`` will therefore never be done partially (with only a subset of events persisted by a single command).

Message deletion
----------------

To delete all messages (journaled by a single persistent actor) up to a specified sequence number, persistent actors may call the ``deleteMessages`` method. An optional ``permanent`` parameter specifies whether the message shall be permanently deleted from the journal or only marked as deleted. In both cases, the message won't be replayed. Later extensions to Akka persistence will allow replaying messages that have been marked as deleted, which can be useful for debugging purposes, for example.

.. _persistent-views:

Persistent Views
================

Persistent views can be implemented by extending the ``PersistentView`` trait and implementing the ``receive`` and the ``persistenceId`` methods.

.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#view

The ``persistenceId`` identifies the persistent actor from which the view receives journaled messages. It is not necessary that the referenced persistent actor is actually running. Views read messages from a persistent actor's journal directly. When a persistent actor is started later and begins to write new messages, the corresponding view is updated automatically, by default.

It is possible to determine whether a message was sent from the journal or from another actor in user-land by calling the ``isPersistent`` method. That said, very often you don't need this information at all and can simply apply the same logic to both cases (i.e. skip the ``if isPersistent`` check).

Updates
-------

The default update interval of all views of an actor system is configurable:

.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#auto-update-interval

``PersistentView`` implementation classes may also override the ``autoUpdateInterval`` method to return a custom update interval for a specific view class or view instance. Applications may also trigger additional updates at any time by sending a view an ``Update`` message.

.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#view-update

If the ``await`` parameter is set to ``true``, messages that follow the ``Update`` request are processed when the incremental message replay, triggered by that update request, has completed. If set to ``false`` (the default), messages following the update request may interleave with the replayed message stream. Automated updates always run with ``await = false``.

Automated updates of all persistent views of an actor system can be turned off by configuration:

.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#auto-update

Implementation classes may override the configured default value by overriding the ``autoUpdate`` method.
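A minimal sketch of these view update settings in ``application.conf`` follows; the keys are the ones named in this section, but the values shown are illustrative rather than the shipped defaults (consult the reference configuration for those)::

  akka.persistence.view {
    # interval between automated incremental updates of all views
    auto-update-interval = 5s
    # set to "off" to disable automated updates for all views
    auto-update = on
  }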
To limit the number of replayed messages per update request, applications can configure a custom ``akka.persistence.view.auto-update-replay-max`` value or override the ``autoUpdateReplayMax`` method. The number of replayed messages for manual updates can be limited with the ``replayMax`` parameter of the ``Update`` message.

Recovery
--------

Initial recovery of persistent views works in the very same way as for a persistent actor (i.e. by sending a ``Recover`` message to self). The maximum number of replayed messages during initial recovery is determined by ``autoUpdateReplayMax``. Further possibilities to customize initial recovery are explained in section :ref:`recovery`.

.. _persistence-identifiers:

Identifiers
-----------

A persistent view must have an identifier that doesn't change across different actor incarnations. The identifier must be defined with the ``viewId`` method. The ``viewId`` must differ from the referenced ``persistenceId``, unless :ref:`snapshots` of a view and its persistent actor shall be shared (which is usually not what applications want).

.. _snapshots:

Snapshots
=========

Snapshots can dramatically reduce the recovery times of persistent actors and views. The following discusses snapshots in the context of persistent actors, but this is also applicable to persistent views.

Persistent actors can save snapshots of internal state by calling the ``saveSnapshot`` method. If saving of a snapshot succeeds, the persistent actor receives a ``SaveSnapshotSuccess`` message, otherwise a ``SaveSnapshotFailure`` message.

.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#save-snapshot

where ``metadata`` is of type ``SnapshotMetadata``:

.. includecode:: ../../../akka-persistence/src/main/scala/akka/persistence/Snapshot.scala#snapshot-metadata

During recovery, the persistent actor is offered a previously saved snapshot via a ``SnapshotOffer`` message from which it can initialize internal state.

.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#snapshot-offer

The replayed messages that follow the ``SnapshotOffer`` message, if any, are younger than the offered snapshot. They finally recover the persistent actor to its current (i.e. latest) state.

In general, a persistent actor is only offered a snapshot if that persistent actor has previously saved one or more snapshots and at least one of these snapshots matches the ``SnapshotSelectionCriteria`` that can be specified for recovery.

.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#snapshot-criteria

If not specified, they default to ``SnapshotSelectionCriteria.Latest`` which selects the latest (i.e. youngest) snapshot. To disable snapshot-based recovery, applications should use ``SnapshotSelectionCriteria.None``. A recovery where no saved snapshot matches the specified ``SnapshotSelectionCriteria`` will replay all journaled messages.

Snapshot deletion
-----------------

A persistent actor can delete individual snapshots by calling the ``deleteSnapshot`` method with the sequence number and the timestamp of a snapshot as arguments. To bulk-delete snapshots matching ``SnapshotSelectionCriteria``, persistent actors should use the ``deleteSnapshots`` method.

.. _at-least-once-delivery:

At-Least-Once Delivery
======================

To send messages with at-least-once delivery semantics to destinations, you can mix the ``AtLeastOnceDelivery`` trait into your ``PersistentActor`` on the sending side. It takes care of re-sending messages when they have not been confirmed within a configurable timeout.

.. note::

   At-least-once delivery implies that original message send order is not always preserved and the destination may receive duplicate messages.
That means that the semantics do not match those of a normal :class:`ActorRef` send operation:

* it is not at-most-once delivery
* message order for the same sender–receiver pair is not preserved due to possible resends
* after a crash and restart of the destination, messages are still delivered (to the new actor incarnation)

These semantics are similar to what an :class:`ActorPath` represents (see :ref:`actor-lifecycle-scala`); therefore you need to supply a path and not a reference when delivering messages. The messages are sent to the path with an actor selection.

Use the ``deliver`` method to send a message to a destination. Call the ``confirmDelivery`` method when the destination has replied with a confirmation message.

.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#at-least-once-example

Correlation between ``deliver`` and ``confirmDelivery`` is performed with the ``deliveryId`` that is provided as a parameter to the ``deliveryIdToMessage`` function. The ``deliveryId`` is typically passed in the message to the destination, which replies with a message containing the same ``deliveryId``.

The ``deliveryId`` is a strictly monotonically increasing sequence number without gaps. The same sequence is used for all destinations of the actor, i.e. when sending to multiple destinations the destinations will see gaps in the sequence if no translation is performed.

The ``AtLeastOnceDelivery`` trait has a state consisting of unconfirmed messages and a sequence number. It does not store this state itself. You must persist events corresponding to the ``deliver`` and ``confirmDelivery`` invocations from your ``PersistentActor`` so that the state can be restored by calling the same methods during the recovery phase of the ``PersistentActor``. Sometimes these events can be derived from other business-level events, and sometimes you must create separate events.
During recovery, calls to ``deliver`` will not send out the message, but it will be sent later if no matching ``confirmDelivery`` was performed.

Support for snapshots is provided by ``getDeliverySnapshot`` and ``setDeliverySnapshot``. The ``AtLeastOnceDeliverySnapshot`` contains the full delivery state, including unconfirmed messages. If you need a custom snapshot for other parts of the actor state, you must also include the ``AtLeastOnceDeliverySnapshot``. It is serialized using protobuf with the ordinary Akka serialization mechanism. It is easiest to include the bytes of the ``AtLeastOnceDeliverySnapshot`` as a blob in your custom snapshot.

The interval between redelivery attempts is defined by the ``redeliverInterval`` method. The default value can be configured with the ``akka.persistence.at-least-once-delivery.redeliver-interval`` configuration key. The method can be overridden by implementation classes to return non-default values.

After a number of delivery attempts, an ``AtLeastOnceDelivery.UnconfirmedWarning`` message will be sent to ``self``. The re-sending will still continue, but you can choose to call ``confirmDelivery`` to cancel the re-sending. The number of delivery attempts before emitting the warning is defined by the ``warnAfterNumberOfUnconfirmedAttempts`` method. The default value can be configured with the ``akka.persistence.at-least-once-delivery.warn-after-number-of-unconfirmed-attempts`` configuration key. The method can be overridden by implementation classes to return non-default values.

The ``AtLeastOnceDelivery`` trait holds messages in memory until their successful delivery has been confirmed. The maximum number of unconfirmed messages that the actor is allowed to hold in memory is defined by the ``maxUnconfirmedMessages`` method. If this limit is exceeded, the ``deliver`` method will not accept more messages and will throw ``AtLeastOnceDelivery.MaxUnconfirmedMessagesExceededException``.
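Taken together, the delivery settings discussed in this section could be sketched in ``application.conf`` as follows; the keys are the ones named above, but the values are illustrative examples, not necessarily the shipped defaults::

  akka.persistence.at-least-once-delivery {
    # interval between redelivery attempts
    redeliver-interval = 5s
    # attempts before an UnconfirmedWarning is sent to self
    warn-after-number-of-unconfirmed-attempts = 5
    # maximum unconfirmed messages held in memory
    max-unconfirmed-messages = 100000
  }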
The default value can be configured with the ``akka.persistence.at-least-once-delivery.max-unconfirmed-messages`` configuration key. The method can be overridden by implementation classes to return non-default values.

.. _storage-plugins:

Storage plugins
===============

Storage backends for journals and snapshot stores are pluggable in Akka persistence. The default journal plugin writes messages to LevelDB (see :ref:`local-leveldb-journal`). The default snapshot store plugin writes snapshots as individual files to the local filesystem (see :ref:`local-snapshot-store`). Applications can provide their own plugins by implementing a plugin API and activating them by configuration. Plugin development requires the following imports:

.. includecode:: code/docs/persistence/PersistencePluginDocSpec.scala#plugin-imports

.. _journal-plugin-api:

Journal plugin API
------------------

A journal plugin extends either ``SyncWriteJournal`` or ``AsyncWriteJournal``. ``SyncWriteJournal`` is an actor that should be extended when the storage backend API only supports synchronous, blocking writes. In this case, the methods to be implemented are:

.. includecode:: ../../../akka-persistence/src/main/scala/akka/persistence/journal/SyncWriteJournal.scala#journal-plugin-api

``AsyncWriteJournal`` is an actor that should be extended if the storage backend API supports asynchronous, non-blocking writes. In this case, the methods to be implemented are:

.. includecode:: ../../../akka-persistence/src/main/scala/akka/persistence/journal/AsyncWriteJournal.scala#journal-plugin-api

Message replays and sequence number recovery are always asynchronous; therefore, any journal plugin must implement:

.. includecode:: ../../../akka-persistence/src/main/scala/akka/persistence/journal/AsyncRecovery.scala#journal-plugin-api

A journal plugin can be activated with the following minimal configuration:

.. includecode:: code/docs/persistence/PersistencePluginDocSpec.scala#journal-plugin-config

The specified plugin ``class`` must have a no-arg constructor. The ``plugin-dispatcher`` is the dispatcher used for the plugin actor. If not specified, it defaults to ``akka.persistence.dispatchers.default-plugin-dispatcher`` for ``SyncWriteJournal`` plugins and ``akka.actor.default-dispatcher`` for ``AsyncWriteJournal`` plugins.

Snapshot store plugin API
-------------------------

A snapshot store plugin must extend the ``SnapshotStore`` actor and implement the following methods:

.. includecode:: ../../../akka-persistence/src/main/scala/akka/persistence/snapshot/SnapshotStore.scala#snapshot-store-plugin-api

A snapshot store plugin can be activated with the following minimal configuration:

.. includecode:: code/docs/persistence/PersistencePluginDocSpec.scala#snapshot-store-plugin-config

The specified plugin ``class`` must have a no-arg constructor. The ``plugin-dispatcher`` is the dispatcher used for the plugin actor. If not specified, it defaults to ``akka.persistence.dispatchers.default-plugin-dispatcher``.

Plugin TCK
----------

In order to help developers build correct and high-quality storage plugins, we provide a Technology Compatibility Kit (`TCK <http://en.wikipedia.org/wiki/Technology_Compatibility_Kit>`_ for short).

The TCK is usable from Java as well as Scala projects. For Scala you need to include the akka-persistence-tck-experimental dependency::

  "com.typesafe.akka" %% "akka-persistence-tck-experimental" % "2.3.5" % "test"

To include the Journal TCK tests in your test suite, simply extend the provided ``JournalSpec``:

.. includecode:: ./code/docs/persistence/PersistencePluginDocSpec.scala#journal-tck-scala

We also provide a simple benchmarking class ``JournalPerfSpec`` which includes all the tests that ``JournalSpec`` has, and also performs some longer operations on the journal while printing its performance stats.
While it is NOT aimed at providing a proper benchmarking environment, it can be used to get a rough feel for your journal's performance in the most typical scenarios.

In order to include the ``SnapshotStore`` TCK tests in your test suite, simply extend the ``SnapshotStoreSpec``:

.. includecode:: ./code/docs/persistence/PersistencePluginDocSpec.scala#snapshot-store-tck-scala

In case your plugin requires some setting up (starting a mock database, removing temporary files, etc.) you can override the ``beforeAll`` and ``afterAll`` methods to hook into the tests' lifecycle:

.. includecode:: ./code/docs/persistence/PersistencePluginDocSpec.scala#journal-tck-before-after-scala

We *highly recommend* including these specifications in your test suite, as they cover a broad range of cases you might have otherwise forgotten to test for when writing a plugin from scratch.

.. _pre-packaged-plugins:

Pre-packaged plugins
====================

.. _local-leveldb-journal:

Local LevelDB journal
---------------------

The default journal plugin is ``akka.persistence.journal.leveldb`` which writes messages to a local LevelDB instance. The default location of the LevelDB files is a directory named ``journal`` in the current working directory. This location can be changed by configuration, where the specified path can be relative or absolute:

.. includecode:: code/docs/persistence/PersistencePluginDocSpec.scala#journal-config

With this plugin, each actor system runs its own private LevelDB instance.

.. _shared-leveldb-journal:

Shared LevelDB journal
----------------------

A LevelDB instance can also be shared by multiple actor systems (on the same or on different nodes). This, for example, allows persistent actors to fail over to a backup node and continue using the shared journal instance from the backup node.

.. warning::

   A shared LevelDB instance is a single point of failure and should therefore only be used for testing purposes.
Highly-available, replicated journals are available as `Community plugins`_.

A shared LevelDB instance is started by instantiating the ``SharedLeveldbStore`` actor.

.. includecode:: code/docs/persistence/PersistencePluginDocSpec.scala#shared-store-creation

By default, the shared instance writes journaled messages to a local directory named ``journal`` in the current working directory. The storage location can be changed by configuration:

.. includecode:: code/docs/persistence/PersistencePluginDocSpec.scala#shared-store-config

Actor systems that use a shared LevelDB store must activate the ``akka.persistence.journal.leveldb-shared`` plugin.

.. includecode:: code/docs/persistence/PersistencePluginDocSpec.scala#shared-journal-config

This plugin must be initialized by injecting the (remote) ``SharedLeveldbStore`` actor reference. Injection is done by calling the ``SharedLeveldbJournal.setStore`` method with the actor reference as argument.

.. includecode:: code/docs/persistence/PersistencePluginDocSpec.scala#shared-store-usage

Internal journal commands (sent by persistent actors) are buffered until injection completes. Injection is idempotent, i.e. only the first injection is used.

.. _local-snapshot-store:

Local snapshot store
--------------------

The default snapshot store plugin is ``akka.persistence.snapshot-store.local``. It writes snapshot files to the local filesystem. The default storage location is a directory named ``snapshots`` in the current working directory. This can be changed by configuration, where the specified path can be relative or absolute:

.. includecode:: code/docs/persistence/PersistencePluginDocSpec.scala#snapshot-config

.. _custom-serialization:

Custom serialization
====================

Serialization of snapshots and payloads of ``Persistent`` messages is configurable with Akka's :ref:`serialization-scala` infrastructure.
For example, if an application wants to serialize

* payloads of type ``MyPayload`` with a custom ``MyPayloadSerializer`` and
* snapshots of type ``MySnapshot`` with a custom ``MySnapshotSerializer``

it must add

.. includecode:: code/docs/persistence/PersistenceSerializerDocSpec.scala#custom-serializer-config

to the application configuration. If not specified, a default serializer is used.

Testing
=======

When running tests with LevelDB default settings in ``sbt``, make sure to set ``fork := true`` in your sbt project; otherwise, you'll see an ``UnsatisfiedLinkError``. Alternatively, you can switch to a LevelDB Java port by setting

.. includecode:: code/docs/persistence/PersistencePluginDocSpec.scala#native-config

or

.. includecode:: code/docs/persistence/PersistencePluginDocSpec.scala#shared-store-native-config

in your Akka configuration. The LevelDB Java port is for testing purposes only.

Miscellaneous
=============

State machines
--------------

State machines can be persisted by mixing the ``FSM`` trait into persistent actors.

.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#fsm-example

Configuration
=============

There are several configuration properties for the persistence module; please refer to the :ref:`reference configuration <config-akka-persistence>`.
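As an illustration, a few of the storage paths and plugin selections described in the sections above might be overridden together in ``application.conf``. The keys sketched here follow this document's sections, but the values are examples rather than recommendations::

  akka.persistence {
    # select the (default) LevelDB journal plugin explicitly
    journal.plugin = "akka.persistence.journal.leveldb"
    # move the LevelDB files out of the current working directory
    journal.leveldb.dir = "target/example/journal"
    # move local snapshot files likewise
    snapshot-store.local.dir = "target/example/snapshots"
  }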
http://www.aeriagames.com/forums/en/viewtopic.php?t=1865549
Taken from my old post:

1.1 classes that fill different roles, like ranged DPS, and a class that uses resources like money (the dollmaster?)
1.2 hybrid classes, i.e. splashing into a different class
1.3 racial bonuses/traits/skills
1.4 branching class upgrades (a swordsman chooses to be a knight or a crusader)

2.1 collision-based, optimal AoE, optimal range... all of which are still possible in a point-and-click game.
2.2 elements: fire > earth > wind > water > fire, holy > dark/undead, ghost > neutral, etc.
2.3 status ailments and corresponding status resistances/immunities/counters
2.4 attack speed, lucky dodge, cast speed, block rate, condition resistance, and other such secondary combat values.
2.5 weapon classes that differ in performance, like fast dagger vs strong axe vs stunning hammer
2.6 move speed modification as a basic mechanic (as in not available to everyone and accessible for balance reasons, like melee units having an ability to close the gap to ranged fighters)
2.7 monster elemental attunement with corresponding skills (not the "attacks with X element" type, more like using a real skill that may or may not be available to the players, on top of basic attacks)

3.1 Cards with varied effects reaching further than stats: could be abilities, elemental effects, status boons/banes, increases to secondary combat values, could be ANYTHING as long as it's not str+1 iterations.
3.2 Set combos of items and cards.

Also, very interested in seeing a randomized loot stat table, which was somewhat confirmed to be included.
http://www.ubuntuvibes.com/2013/01/valve-suggesting-windows-users-to-try.html?m=1
After the launch of the Steam Linux beta, you may have seen a big Tux logo on the Steam client download page. When visiting the Steam download page from a web browser running on a Linux-based OS, you are pointed to the Linux version of Steam and shown a promo ad about the Steam Linux beta. However, Valve is now showing the same page to Windows users as well, encouraging them to download Ubuntu and try out the latest Steam Linux beta. Here is a screenshot (thanks to doorknob60). Not only is Valve promoting its newly launched product, it is also actively advertising Linux with lines like "Not running on Linux yet? Grab Ubuntu 12.04 LTS".
https://www.mdh.se/en/malardalen-university/education/courses?kod=ITE246
Digital Interaction Design in a Spatial Context

In this course, we go beyond the screen as the surface for interaction. We explore multimodal ways of interacting with digital technology, where the focus on elements and materials in the physical space is given great importance.

Occasions for this course: Autumn semester 2021, 2021-08-30 - 2021-10-03 (full time 100%). Course only for programme students.

Course syllabus & literature: See course plan and literature list (ITE246).

Completed courses of 30 credits within the programme Interaction Design.
https://members.loria.fr/KFort/publications/videos/
- Interview in a documentary on microworking (fr), 2020: https://www.caracteres.media/micro-travail-economie-du-clic-au-service-de-intelligence-artificielle/

Video captures of presentations

- Ethics and NLP at DiLCO on Oct. 8th, 2021. Video.
- Ethics and AI (a view from NLP) at the noon webinar of SAILS, in Leiden, on Sept. 27, 2021. Video.
- Crowdsourcing: (a bit of) theory and ((quite) some) practice. enetCollect plenary meeting. Bolzano, Italy, Sept. 2017: http://videolectures.net/1stmeeting2017_fort_crowdsourcing/
- Yes, We Care! Results of the Ethics and Natural Language Processing Surveys (LREC 2016), Portorož, Slovenia, May 2016: http://videolectures.net/lrec2016_fort_processing_surveys/
- Crowdsourcing and human annotation: going beyond the legends to create quality data. Biomedical Linked Annotation Hackathon 1 Symposium. Tokyo, Japan, Feb. 2015:
https://community.bt.com/t5/Archive-Staging/Poor-Broadband-Download-speed-1mps-instead-of-6-5mps/m-p/1427666
Hi everyone, I have been with BT for over 2 years and am not quite happy with the download speed over that time. Recently I got into graphic design, which requires downloading files to work with at home. Yes, surfing the web and watching YouTube videos and Netflix work fine, and even when movies buffer at times I don't mind, I can overlook that; but downloading files for work is what I am frustrated and more concerned about. My download rate can't go beyond or above 1 Mbps; it looks like it's locked or restricted to 1 Mbps. It's either 1 Mbps or below, which I find unfair and surprising. My package states and estimates my download speed will be between 6.5 Mbps and 8.5 Mbps, but I am not anywhere close to 6.5 Mbps. I have only got 1 laptop and a smartphone connected to the BT Hub 4. I have called customer service and done line tests, exchange tests, etc.; no faults were found and nothing has been done about my download speed. I have done a speed test; the results were:

Download speed (Mbps): 8.47
Upload speed (Mbps): 0.91
Ping latency (ms): 48.13

Can anyone please assist on how I can increase my download speed from 1 Mbps to 6.5 Mbps? Experts in the forum, kindly assist, thank you.

Solved! Go to Solution.

From the btspeedtester results you have posted, you have a download speed of 8.4 Mbps. Speed tests use Mbps (megabits per second); when downloading a file, the speed is displayed in MB/s (megabytes per second), so it should be 8 times less, as there are 8 bits to a byte. I think the problem is you are confusing your bits and bytes. http://www.checkyourmath.com/convert/data_rates/per_second/megabits_megabytes_per_second.php

Thanks for your quick response. Here are the test results:

Download speed: 8.44
Upload speed: 0.90
Ping latency: 41.00

Here is the further diagnostic test: download speed 8.44 Mbps, upload speed 0.9 Mbps. I am just wondering why I am not getting this download speed when I actually download a file?
Okay, I have gone through the event log; here is mine:

|14:37:52, 28 Dec.||( 5715.530000) DSL noise margin: 6.00 dB upstream, 6.00 dB downstream|
|14:37:51, 28 Dec.||( 5714.410000) DSL line rate: 1051 Kbps upstream, 9464 Kbps downstream|

@imjolly absolutely not, I am quite sure of what I am saying: the speed test says 8.4 Mbps. When downloading a file, the download speed is 1.0 Mbps or less, shown in kilobytes for that matter. Look at the screenshot I attached for an example; the download speed is in kilobytes (KB). The point is the speed test says 8.4 Mbps, so why am I getting 1 Mbps or less? That's my point.

As I already posted, your connection speed, profile and download speed are consistent. Your download speed from the website is consistent with your btspeedtester results; btspeedtester is just showing it in bits and the website in bytes. I posted an explanation and a link to follow. Your down connection speed is 9464 Kbps (bits). You are downloading at 960 KB/s (bytes), which is the same as 960 x 8 = 7680 Kbps (8 bits to a byte); this is not much lower than your profile of 8347 Kbps (9464 x 0.882).

Thanks for the explanation, I got Mb (megabit) and MB (megabyte) mixed up.
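The bits-versus-bytes arithmetic in this thread can be sketched in a few lines of Python (the numbers are the ones quoted in the posts above):

```python
def mbps_to_mb_per_sec(mbps):
    """Convert a line speed in megabits/s to megabytes/s (8 bits per byte)."""
    return mbps / 8.0

def kb_per_sec_to_kbps(kb_per_sec):
    """Convert a download rate in kilobytes/s back to kilobits/s."""
    return kb_per_sec * 8

# An 8.44 Mbps sync speed corresponds to about 1.06 MB/s of file download,
# so a download meter showing roughly "1 MB/s" is exactly what to expect.
print(mbps_to_mb_per_sec(8.44))

# A 960 KB/s download equals 7680 kbps, close to the 8347 kbps profile:
print(kb_per_sec_to_kbps(960))
```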
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991648.40/warc/CC-MAIN-20210514060536-20210514090536-00497.warc.gz
CC-MAIN-2021-21
3,050
31
http://vjdbc.sourceforge.net/documentation/overview.htm
code
VJDBC ("Virtual JDBC") is a JDBC type 3 driver which provides a client-server model for remote access of JDBC datasources over different network protocols. Accessing JDBC datasources over networks is sometimes difficult. The JDBC specification defines four types of JDBC drivers. A type 3 driver is a net-protocol, full-Java driver; it converts JDBC calls to a database-independent net protocol, which is then translated into the database protocol by the server. It depends heavily on the database vendor or third-party vendors whether a type 3 driver is available for a specific database. Furthermore, these drivers mostly need an additional daemon process on the server which can serve calls from the net protocol. The configuration of this daemon process is also vendor-specific. Most of them require direct socket connections on custom ports, which is a problem in protected environments behind firewalls. Finally, if you want to change the database, you have to start over from the beginning. VJDBC is a vendor-agnostic type 3 JDBC driver with which you can remotely access any JDBC database in an efficient manner. Due to its command-oriented design, different networking protocols can be supported quite easily. There is a similar open source project called RmiJdbc. The main difference between VJDBC and RmiJdbc is that RmiJdbc exposes the complete interface of the JDBC objects via RMI, so every call on a JDBC object will go over the network. This can be a major performance killer. VJDBC uses a different approach with command objects and a very thin remote interface.
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891277.94/warc/CC-MAIN-20180122093724-20180122113724-00291.warc.gz
CC-MAIN-2018-05
1,575
5
https://netassured.co.uk/aws-certified-solution-architect-associate/
code
This post details my study strategy for the AWS Certified Solutions Architect – Associate exam. I'm frequently asked how much I know about migrating or building out services into AWS. Until recently I'd not even created an account or logged into the AWS console. Some downtime between contracts over the last few weeks has provided the time I needed to set about fixing my knowledge gap by taking a training course. Almost everyone I asked highly recommended A Cloud Guru, specifically Ryan Kroonenburg. Signing up for their AWS CSA course is simple, and all of it is delivered online. I originally purchased through Udemy but switched over to using the A Cloud Guru site to consume the content. If you've already purchased the course through Udemy I recommend making the switch: the A Cloud Guru UI is much nicer, easier to use and enables you to easily track your progress. It's an excellent, well-presented course, possibly the best online course I've taken. The course flows nicely with plenty of hands-on labs and end-of-section quizzes. All you need once signed up for the course is to create an AWS account, which is used for the labs. Other study materials I used: Amazon's study guide is well worth purchasing. A Cloud Guru's iOS app is another essential purchase IMHO. The app has practice exams allowing 60 randomized questions in 80 minutes. There is also a mini-exam mode and a quick 15-minute practice exam. It's been good fun studying for the AWS CSA exam and well worth the time commitment. One of my customers needs some assistance standing up some Palo Alto firewalls in AWS. Despite my initial lack of commercial experience with AWS I feel suitably equipped to assist them.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100135.11/warc/CC-MAIN-20231129173017-20231129203017-00466.warc.gz
CC-MAIN-2023-50
1,707
6
http://jnarvey.com/2007/10/02/near-shoring-to-vancouver-ending-before-it-begins/
code
Oct 02 2007 Business columnists are all abuzz with hype over the potential shift from US tech companies outsourcing overseas to near-shoring right here in Vancouver. Somehow, the column seems a little dated. Just a few months after Microsoft’s much-talked-about expansion into our Pacific Rim metropolis, the Canada-US exchange rate has moved up to par. One leg of the near-shoring rationale has already collapsed. The other – similarity of culture, quality of work and shorter travel times – seemed pretty wobbly to begin with. Surely there are at least as many qualified American professional geeks as Vancouver-based ones. At this point, our one saving grace – and it’s a big one – is that Vancouver is quite possibly the nicest place in the world to live, and will likely remain so in future. Hopefully, that’s enough to ensure our high tech industry clusters have a chance to thrive.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917127681.84/warc/CC-MAIN-20170423031207-00421-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
902
5
https://www.wwoz.org/news?page=204
code
At 'OZ, one of our main missions is to bring New Orleans music to the world. Props to Satchmo, who was always far ahead of us. Now, over four decades after his death, it's easy to underestimate the size and scope of his mid-century fame. But the newsreel clip about his 1960 ...
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608870.18/warc/CC-MAIN-20170527055922-20170527075922-00237.warc.gz
CC-MAIN-2017-22
278
4
https://versompchatwho.web.app/1015.html
code
Download and install custom ROMs for the Redmi K20 Pro / Mi 9T Pro. Press Enter to run Android-x86 without installation. After trying a few popular emulators, I decided to try the VirtualBox software. It's actually pretty easy to set up, and will offer you the full Android experience in a matter of a few minutes. The success of the Android OS cannot be overemphasized, and anyone who has used an Android device will testify to its uniqueness and usefulness. Select the Android ISO that you previously downloaded to boot the machine off of. Android-x86 virtual machine images for VMware and VirtualBox. How to install Android OS on a PC using VirtualBox. Follow these steps and you will be done in a few seconds. If you have not already created a VirtualBox virtual machine for Android-x86 yet, do so first. If you select the Android Oreo ISO, start the virtual machine by clicking on Start. Once Android starts running, select the fourth line, which is "Install Android-x86 to hard disk". Though there is no official method to install Phoenix OS on VirtualBox, I have found a method through which you can. If you have not already created a VirtualBox virtual machine for Android-x86 yet, do so first. After installing BOINC on your computer, you can connect it to as many of these projects as you like. Download and install the VMware Player and the Android Oreo disk ISO file; for personal use choose the free version, while advanced users will need the paid version that has more features to offer. VirtualBox is a general-purpose full virtualizer for x86 hardware, targeted at server, desktop and embedded use. Download the CloudReady image below to your Downloads folder (not to the USB stick) and then follow the appropriate link below to our install guide for detailed instructions on how to proceed. If you are using Android-x86 for debugging purposes, some binaries (gdb for example) are built for 32-bit architectures and will not support debugging 64-bit binaries such as the Android app host.
VirtualBox guide: how to install CyanogenMod on VirtualBox. Next, select the installation option if you wish to install Android on the VM. Download and install Genymotion Desktop for Windows, Mac or Linux and enjoy the latest Android versions on your machine. Click Start in the VirtualBox main menu, and you should get options to run Android-x86 without installation, or to install it, alongside a couple of other options. If you're itching to give Android a try but don't necessarily want to use your whole computer for the task, the best option is to run it in a virtual machine using VirtualBox. Play Android games on a PC with VirtualBox in less than 10 minutes. The Android-x86 developers announced the latest ported version, Android-x86 6.0. In the new window select the ISO for the Android image you just downloaded, click Open, then select it from the storage tree list and click OK. I have searched a lot, but what I have found is that one has to compile the Guest Additions with Android-x86 kernel headers. Or if you're too lazy to hit that popular Enter key again, just hang loose for 60 seconds. How to install Phoenix OS on VirtualBox: this video is created for those who want to install Phoenix OS on VirtualBox. VirtualBox is a multi-platform and open-source virtualization tool for Windows, Linux, Mac OS X, and other operating systems, that lets you create virtual disc units in which you can install a guest operating system within the one you normally use on your computer, using it in the same way as if it were actually installed. Get project updates, sponsored content from our select partners, and more. Once Android starts running, select the fourth line, which is "Install Android-x86 to hard disk". Powerful control over your VirtualBox infrastructure from anywhere. VirtualBoxes: free VirtualBox images. Browse Android. The configuration settings of virtual machines are stored entirely in XML and are independent of the local machines. Android 9 for VirtualBox installation guide (2019), YouTube.
How to install Phoenix OS on VirtualBox in Windows 10/7/8. Install Android Oreo on VirtualBox on Windows 10 (Techspite). To install Android Oreo on a PC or in VirtualBox you need to download two setups to your PC. Is there any way to install Apple iOS in VirtualBox? This is very easy, with prompts on the screen guiding you through the entire process. Install Android on VirtualBox: a step-by-step guide. Click the green Start arrow to power on your virtual machine. A simple step-by-step guide to installing the x86 version of Android 8. Some people love Android devices but cannot afford to buy one, so the best alternative is installing it on their PC. Download and install VirtualBox for Windows hosts, or download VirtualBox for Mac hosts if you wish to install Android-x86 on a Mac; download the most recent Android-x86 emulator build for the Asus Eee PC 4. VirtualBox also comes with a full software development kit. Complete guide to running Android in VirtualBox. Download and install VirtualBox if you don't already have it; it's available for Windows, macOS, and Linux. The Android-x86 team created their own code base to provide support on different x86 platforms, and set up a git server to host it. Below are the settings I use and the VDI file that I used for. If you are using Android-x86 for debugging purposes, some binaries (gdb for example) are built for 32-bit architectures and will not support debugging 64-bit binaries such as the Android app host. Android-x86 is a project to port the Android Open Source Project to the x86 platform, formerly known as "patch hosting for Android x86 support". Hello, yesterday I was trying to install Android emulators on my PC. I am running Android-x86 on VirtualBox, and I want pointer integration enabled, which needs the VirtualBox Guest Additions to be installed on the guest OS. Download Android-x86 virtual machine VDI and VMDK images for VirtualBox and VMware. Select what you want to download and click on the Download Now button.
How to install Android Oreo on VMware or VirtualBox. A utility for Android to mount VirtualBox shared folders. Either way, you will need to head over to the VMware Player download page. Run Android inside your Windows, Linux, or Mac OS X. The vboxsf and vboxguest drivers from the Guest Additions utility mount. In this video I go over how to install Android 8 and other versions on VirtualBox. The Android-x86 team created their own code base to provide support on different x86 platforms, and set up a git server to host it. This is the first release candidate for Android-x86 6.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476137.72/warc/CC-MAIN-20240302215752-20240303005752-00768.warc.gz
CC-MAIN-2024-10
6,679
7
http://lists.openshift.redhat.com/openshift-archives/users/2015-August/msg00039.html
code
I'm trying to expose a UI that will allow developers to see application-specific logs in the containers. kubectl logs <pod> <container> or oc build-logs might be able to get the build logs, but I'm trying to provide a way for the user to look inside the container. Gotty (or similar services) was what I was trying to use, which exposes the terminal as a web application. If we could package gotty along with the container images and expose them, then the end users might be able to get a view into the container. The problem I have is that if I have multiple replicas fronted by a service, then getting to a particular pod running gotty becomes tricky, as we don't know which pod is being serviced. If we have a replica count of 1, that solves the problem because there is only one pod, but I'm trying to see what happens if we have multiple replicas set up. Is there any way to expose a Pod directly instead of going through the service? I could potentially label each of the pods differently and have Services that target each of the pods, but that means I have to define Services every time I increase/decrease the Pods. Are there any better ways than this? I'm a few days into learning OpenShift, so my terminology might be incorrect, but please let me know if it's unclear.
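The per-pod-Service workaround the poster describes (label each pod, then point a Service at that label) would look roughly like the fragment below. All names here are hypothetical, and as the poster notes this must be repeated whenever the replica count changes; a headless Service (clusterIP: None) is the more scalable way to get per-pod addressing.

```yaml
# Hypothetical example: one Service per pod, selecting on a label added
# manually beforehand, e.g.:  oc label pod mypod-1-abcde podid=pod-1
apiVersion: v1
kind: Service
metadata:
  name: gotty-pod-1
spec:
  selector:
    app: myapp        # the deployment's existing label (assumed)
    podid: pod-1      # the per-pod label that disambiguates replicas
  ports:
    - port: 8080      # gotty's listen port (assumed)
      targetPort: 8080
```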
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585561.4/warc/CC-MAIN-20211023033857-20211023063857-00709.warc.gz
CC-MAIN-2021-43
1,272
5
https://bugs.freedesktop.org/show_bug.cgi?id=92150
code
On the BYT platform, the data written with 'write_imageui' to an RGBA32 image is all 0xFF and does not match what I want to write. Sample code: git://bee.sh.intel.com/git/smartcamera/cltest.git branch=16bitread_8bitwrite. Mengmeng, could you take a look at this bug? I think the conformance test should cover this case on the BYT platform, right? It turns out to be a bug in the test code. write_imageui works on BYT, so closing this bug.
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662562106.58/warc/CC-MAIN-20220523224456-20220524014456-00383.warc.gz
CC-MAIN-2022-21
401
5
https://scrippsnews.com/stories/how-different-demographics-could-shape-the-2016-election/
code
Lots of people are talking about what the 2016 election might look like if only particular demographics voted. That's not how it works. But it's an interesting conversation. We compared polling data to find out. Polling data for this video was pulled from an Oct. 19 Quinnipiac University national poll.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510358.68/warc/CC-MAIN-20230928031105-20230928061105-00441.warc.gz
CC-MAIN-2023-40
303
2
https://dev.parley.be/PPBe/parley_mainpage_ynh/src/branch/stable/conf
code
A YunoHost package containing the main page for https://parley.be. The website is done in English. There are some placeholder things for nl and fr as well, but the language header on the en page is commented out. There are no links in the README.md to the actual repo, only mentions of it. I wanted everything set up and done before changing those things.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945182.12/warc/CC-MAIN-20230323163125-20230323193125-00207.warc.gz
CC-MAIN-2023-14
533
5
https://remoteok.io/remote-jobs/17423-remote-computer-vision-engineer-density
code
Computer Vision Engineer (closed): 9 applications (1%). This job post is closed and the position is probably filled. Please do not apply. People count through a door might sound simple. Solved, even. In reality, there are significant and unexpected challenges. Our sensor’s analysis must account for millions of environmental variates; handle odd and unpredictable human behavior; and, to be ubiquitous, must amass its data anonymously. The problem of people count is unsolved and complex; but we’re convinced it’s fundamental, fascinating, and global. At the core of our method is an image processing algorithm. It understands the direction of objects as they collide and move through a threshold, it’s capable of discerning a door swing from a plant, and it knows when a line has formed. We’re looking for an experienced computer vision engineer to help us turn large amounts of data into real-time, always-on people count. Ideally, you have a deep interest in computer vision, especially in segmentation and tracking, and experience architecting vision-based products. If you get excited about predictive movement analysis, background segmentation, and optimizing code to run on an embedded device, we’d love to talk.
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585382.32/warc/CC-MAIN-20211021071407-20211021101407-00247.warc.gz
CC-MAIN-2021-43
2,094
16
https://technet.microsoft.com/en-us/library/aa547352(v=bts.10).aspx
code
How to Enable SSL for SSO Use this command to enable Secure Sockets Layer (SSL) between all the Enterprise Single Sign-On (SSO) servers and the SSO database. To enable SSL for Enterprise Single Sign-On Click Start, click Run, and then type cmd. At the command line, go to the Enterprise Single Sign-On installation directory. The default installation directory is <drive>:\Program Files\Common Files\Enterprise Single Sign-On. Type ssoconfig -setSSL <yes/no>, where <yes/no> indicates whether you want to enable SSL in the SSO system. Note: On a system that supports User Account Control (UAC), you may need to run the tool with administrative privileges.
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189377.63/warc/CC-MAIN-20170322212949-00294-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
656
7
http://www.theinquirer.net/inquirer/news/1041509/microsofts-banned-words-in-msn-messenger
code
One of the words is "remember", which the Korean site attempts to spin into a story of unrequited love. Microsoft's take on it is far simpler. "Remember" has the word "member" inside it, and that's a banned word, not because of the physical meaning of the word, but because people might masquerade as Microsoft staff. The site extracted a list of other words banned as user names, and those include Microsoft, MSN, admin, message, messenger, engineer, executive, help and info. Four letter words of a naughty nature are also banned, as well as contentious areas of the world like Scunthorpe. Here's more. µ
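The behaviour described here, banning "remember" because "member" is inside it, is the classic pitfall of substring filtering, often nicknamed the Scunthorpe problem after the town the article mentions. A minimal sketch of the difference between substring and whole-word matching (word list abbreviated from the article):

```python
import re

# A naive banned-word filter: reject a name if a banned word
# appears ANYWHERE inside it, so "remember" contains "member".
BANNED = {"member", "admin", "msn", "microsoft"}

def naive_filter(username):
    """True if the username is allowed under substring matching."""
    name = username.lower()
    return not any(bad in name for bad in BANNED)

def word_filter(username):
    """True if the username is allowed; bans only whole-word matches."""
    name = username.lower()
    return not any(re.search(rf"\b{re.escape(bad)}\b", name) for bad in BANNED)

print(naive_filter("remember"))   # False: rejected, the Scunthorpe problem
print(word_filter("remember"))    # True: whole-word matching allows it
print(word_filter("msn"))         # False: the banned word itself still fails
```

Whole-word matching avoids the "remember" false positive but is weaker against deliberate masquerading ("msn-support"), which is presumably why Microsoft chose the blunter substring rule.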
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393463.1/warc/CC-MAIN-20160624154953-00132-ip-10-164-35-72.ec2.internal.warc.gz
CC-MAIN-2016-26
693
7
https://staventabs.com/hc/guides/ios/basics/working-with-a-score/create-a-new-score-element/
code
The musical event and the bar are the main building blocks of the score. Creating new elements of these types is the basis for writing music in Stave'n'Tabs. When Input mode is activated you typically use the Forward button to go to the next musical event. Due to the context-sensitivity of the application's user interface, the same button can add a new musical event or a new bar for you automatically. Here is a non-exhaustive list of examples of applying the Forward button in different situations. You can create bars and musical events explicitly using the Create Before or Create After buttons. They are available in the Fragment selection mode. The type of a new element is defined by the kind of currently selected fragment. For example, to insert a new empty musical event before the selected one: Switch the selection to the Fragment mode if you have not yet done so. Tap . A new musical event is created and becomes selected. Explicitly adding a new bar right after the current position is just as easy. Make sure that the selected fragment is a bar, not its musical events. See Bars fragment to learn how to carry it out quickly. Tap . A new bar with an empty musical event is created and becomes selected.
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514576345.90/warc/CC-MAIN-20190923084859-20190923110859-00441.warc.gz
CC-MAIN-2019-39
1,211
10
https://codedump.io/share/sv5OtkXmkdD8/1/what-container-to-store-unique-values-with-no-operatorlt-defined
code
I need to store unique objects in a container. The object provides an equality comparison but no operator<. There's indeed no standard container for that, and that's because it would be inefficient. O(N), to be precise: exactly the brute-force search you imagine. std::set<T> and std::unordered_set<T> avoid a brute-force search by taking advantage of a non-trivial property of T (an ordering or a hash, respectively). Lacking either property, any of the existing N members of a container could be equal to a potential new value V, and you must therefore compare all N members using operator==.
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863834.46/warc/CC-MAIN-20180620182802-20180620202802-00064.warc.gz
CC-MAIN-2018-26
481
4
https://www.trafiklab.se/api/trafiklab-apis/sl/
code
SL’s APIs are a collection of 7 APIs which provide information about planned and realtime traffic. |Well suited for||Not so suited for| |Quickly getting the next departures from a stop||Analysing public transport information| |Quickly calculating a route from A to B||Applications requiring huge amounts of API calls| |Digital Signage||Data visualisation| |Mobile applications||Building your own route-planner algorithm| SL’s APIs only offer data for SL’s traffic. ResRobot offers the same data for the whole of Sweden. If you want to do analytics, or if your service will have more than a million active users, we recommend using GTFS data directly, or hosting your own API based on GTFS data.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100686.78/warc/CC-MAIN-20231207185656-20231207215656-00662.warc.gz
CC-MAIN-2023-50
693
8
https://forum.beta.tribalwars.net/index.php?threads/tribal-forum-search-bb-codes-shown-as-bb-code.3746/
code
- Summary of the issue (title of the post): tribal forum search - BB codes shown as BB code - Overview of the bug (description): The search results show the posts with their original BB codes, not with the resulting styles, e.g. - Steps to reproduce: create a forum post using BB codes and search for it (display results as posts). - Reproduction rate (Every time? Sometimes?): - Browser and Version: - Visual Reference if available (Screenshot), please put them in a spoiler: - Player name and market for rewards:
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500368.7/warc/CC-MAIN-20230207004322-20230207034322-00097.warc.gz
CC-MAIN-2023-06
506
10
https://paragraph.xyz/@loraofflowergarden/big-news
code
BIG NEWS!!! 📢📢📢 We will take a snapshot on February 1st with the following BENEFITS: 1. Airdrop 1 MAYC to whoever holds the most Lora Of Flower Garden. 2. All holders will get a whitelist for our next artwork; it's FreeMint for the whitelist!!! Collect this post to permanently own it. Subscribe to Lora Of Flower Garden and never miss a post.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100184.3/warc/CC-MAIN-20231130094531-20231130124531-00224.warc.gz
CC-MAIN-2023-50
372
7
https://devpost.com/software/gps-health-ohvwai
code
Existing emergency response processes rely on relatively old technologies and, in the case of locating cell phones, depend on data from cell towers that are often imprecise and difficult for 911 dispatchers to interpret. The caller's location is the critical piece of information that will enable help to reach the caller. The goal of GPS Health is to first channel widespread cellular GPS tools to report accurate location information to the 911 dispatcher and emergency responders. Location is visualized via map features on a web application that is updated with medical information and updates from the caller via a mobile app. Seamless integration and real-time application can expedite emergency services, advance public safety and save costs. What it does The mobile app was built for iOS and hosts a number of features including recording user data, Wi-Fi calling, location detection and data updates. When a user calls 911 through the app, a Wi-Fi call is made using the Twilio platform and the 911 dispatcher is given the URL of a web app that displays the location of the caller (Google Maps widget). Medical information and live updates about the caller's situation are also posted to the web app for emergency responders to view. How we built it In order to determine the position and transmit additional medical information about the caller, we implemented an iOS app using Swift and the iOS location services. While dialing 911 the caller can easily enter additional information via the app about what type of emergency it is and detailed information about the location (floor and room number). As a backend we use Firebase to exchange the information between the caller and EMT/dispatcher, and Twilio as a communication platform. The web app which displays the caller's information and location is built with React using the Google Maps API.
Challenges we ran into The challenges we faced when developing our product included: 1) Coordinating APIs including Google Maps, Twilio, and Firebase with npm libraries, 2) coordinating multiple actors when getting website off the ground, and 3) merge conflicts. Accomplishments that we are proud of We are proud of our early prototype and the positive feedback we received when exploring our product with a professional 911 dispatcher. What we learned During our need-finding process we came to understand some of the various inefficiencies and needs of the emergency response process. During the building process we learned how to integrate Twilio calling as well as fundamental What's next for GPS Health Beyond testing and developing the features further, we seek to develop partnerships with public safety departments in order to train
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00447.warc.gz
CC-MAIN-2022-40
2,688
13
http://www.woodmann.com/collaborative/tools/index.php/LordPE
code
From Collaborative RCE Tool Library |Current version:||1.41 (Deluxe b)| |Last updated:||September 30, 2009| |Direct D/L link:||Locally archived copy| |Description:||LordPE is a tool, e.g. for system programmers, which is able to edit/view many parts of PE (Portable Executable) files, dump them from memory, optimize them, validate, analyze, edit,... * Task viewer/dumper * Huge PE editor (with big ImportTable viewer, ...) * Break'n'Enter (break at the EntryPoint of dll or exe files) * PE Rebuilder * The first GUI PE editor in the world supporting the new PE32+ (64-bit) format ?! (only editing support - no rebuilding, dumping, comparing etc.) * New plugin interface added! You can develop LordPE Dump Engines (LDE) now. Look at \Docs\LDE.tXt for more information. * Added LDE: IntelliDump, which can dump .NET CLR processes * Added structure lister for SectionHeaderTable, PE headers and DataDirectories (the "L" buttons) * Added hex edit buttons (the "H" buttons) in the DataDirectoryTable viewer * Added PE.OptionalHeader.Magic and PE.OptionalHeader.NumberOfRvaAndSizes to the PE editor * TLSTable DataDirectory is now editable * Possibility to increment/decrement the number of DataDirectories added * Etc etc etc... |Related URLs:||No related URLs have been submitted for this tool yet| Here below you will find useful notes about this tool, left by other users. You are welcome to add your own useful notes here, or edit any existing notes to improve or extend them. Trojan horse in a file! AVG found something: LordPE 1.41 Deluxe b.zip:\LPE-DLX_1.4.zip:\LDS_Clients\CoolDump1.4\Genoep.dll";"Trojan horse Generic15.ABQO"; No, there is no trojan in it. Seriously, if only generic malware is found, and on top of that by only a fraction of AV engines, you can't say it's malware. Errrr..., you can, but hardly anybody will agree. (Please also edit it if you think it fits well in some additional category, since this can also be controlled.)
s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049275981.56/warc/CC-MAIN-20160524002115-00067-ip-10-185-217-139.ec2.internal.warc.gz
CC-MAIN-2016-22
1,942
26
https://www.thejournal.club/c/paper/23832/
code
Currently, short signatures are receiving significant attention since they are particularly useful in low-bandwidth communication environments. However, most short signature schemes are based on only one intractability assumption. Recently, Su presented an identity-based short signature scheme based on the knapsack problem and bilinear pairings. He claimed that the signature scheme is secure in the random oracle model. Unfortunately, in this paper, we show that his scheme is insecure. Concretely, an adversary can forge a valid signature on any message with respect to any identity in Su's scheme.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500456.61/warc/CC-MAIN-20230207102930-20230207132930-00400.warc.gz
CC-MAIN-2023-06
588
1
http://www.softpile.com/Development/Source_Code/Review_06301_index.html
code
Easily integrate 3D content into your application or develop top-level arcade games from scratch. DirectX, OpenGL and Glide programmers: use this package to dramatically shorten development time. A very fast engine with the highest standard in graphics. This is the SDK for Borland Builder; SDKs are also available for VC++, VB, Delphi and Borland C++, along with converters from 3DS and Max, WorldBuilder, TerrainBuilder, and the Morfit 3D Programming Book. !3D Morfit 3D Engine SDK 3.0 (for Borland Builder) keywords: This program is no longer available for download from our website. Please contact the author of !3D Morfit 3D Engine SDK 3.0 (for Borland Builder) at email@example.com for any additional information.
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719215.16/warc/CC-MAIN-20161020183839-00506-ip-10-171-6-4.ec2.internal.warc.gz
CC-MAIN-2016-44
686
4
https://wiki.flightgear.org/User:Zakalawe
code
Real name: James Turner, 'james' on FlightGear IRC. Best to get in touch via email (or the developer mailing list). Working on various areas, most especially GPS/FMS support in the core code, and support for digital/glass cockpit displays in the near future. I've also worked on the navradio code, overhauling some of the core data models (especially relating to spatial indexing and searches), and many general code cleanups. For details, see Plan-zakalawe.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474541.96/warc/CC-MAIN-20240224144416-20240224174416-00835.warc.gz
CC-MAIN-2024-10
458
3
https://www.freelancer.com.jm/projects/android/social-application/
code
Very good day to all of you. I'm looking to have a social application made that uses its users' location (Google Maps overview) and where they can make or join events (after an account is created). These events should give the creator of the event the possibility to accept or decline people who want to join. Furthermore there should be an option for payments via PayPal and credit card. Users should be able to create an account and get points for joining or hosting events. These points can be used for discounts with companies we work with. In events there should be a chat function with other people who join the event. There should be an option to add your Facebook friends and see who uses the app. There should be an open API towards Uber Eats or food home delivery apps. There should be a possibility to share events via Facebook. There should be push messages when an event comes close or someone wants to join an event. There should be a possibility to add events to your phone's agenda. Events: users can add a time, date, location (on the map), put a price on joining the event, and add a picture. User accounts: name, profile picture, email and phone number. A website has to be made for the application itself as promotion, and there has to be an admin login to see the number of users, block users and have an overview of discount codes (add, remove and times used). A logo has to be created as the app icon and for the website. More information can be given at a later stage. Many thanks for all your replies. The due date for an initial version that can be used and tried is 3 months from start.
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146176.73/warc/CC-MAIN-20200225233214-20200226023214-00277.warc.gz
CC-MAIN-2020-10
1,583
15
https://www.mochisprite.com/news/half-body-colored-sketches-now-25-limited
code
I have recently gotten into a bit of an economic pinch, thanks to some expenses that I had to take care of. In light of this, I am opening commission slots EXCLUSIVELY for half-body colored sketches! These have also been reduced in price to £25! This sale ends on Friday, November 9th! Please see the above images for examples of what to expect. Any commission requests are to be submitted via my commission form! Please remember to choose "half body" as the tier option and "colored sketch" as the coloring option, so I know that I can accept your submission!
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541319511.97/warc/CC-MAIN-20191216093448-20191216121448-00478.warc.gz
CC-MAIN-2019-51
559
5
https://www.gammon.com.au/forum/bbshowpost.php?bbsubject_id=1529
code
Here are a few other things I've previously thought about on this matter. Text sent from the mud requires multiple TCP packets due to the large amount of text sent: - I've noticed, from a single source, that when it's trying to transmit more data than it can fit in a single TCP packet, the maximum size of the packet is always a consistent size. - This could possibly be used to cause MUSHclient to 'learn' when there is a big likelihood of more text coming from the mud. - If you give a limit of 1 second (the mud should send both packets with little to no delay) for additional data, then you can remove the chance of a packet just happening to be the right size but still having a valid prompt at the end (with minimal delay). This is a bad option, I believe, as it uses a bit of intelligent guessing, but still has small chances of delay in processing with a specific packet length. Prompts don't supply a carriage return, so MUSHclient won't attempt to compare a trigger to it. Well, how about giving an option for specific triggers that don't require a carriage return? - Insert an option into your trigger input GUI to 'Match on prompt.' - Internally, if the final line of data you've received from the server doesn't end in a carriage return, then try to match it only against triggers that use that option. - Allow for gagging/altering partial lines if this option is selected - if you find any lines with this prompt, just delete the data that would appear on the screen (leaving all/final color changes?). Data inserted into 'output to world' should not have an extra carriage return if this option is on, and I believe world.tell "" may work appropriately to cause prompt-like behaviour if you wanted to insert data through scripting. - Potentially, another string of data may be added when processing each line of data (internally). If a verifiable prompt is found, store it in its own section to note it as a prompt.
Then when you process your next 'true' line, it can process the entire 'true' line without having to worry about the prompt in your triggers ('true' line in this case refers to any data received without the prompt prepended to it). When you get a final carriage return, join the two strings correctly for the display and storage into the history buffer. - Insert a global/world option 'Match trigger prompts on all lines.' This could cause MUSHclient to attempt to match any trigger marked 'Match on prompt' at the beginning of every line that comes from the mud. If this global/world option is checked, then MUSHclient can check every line for a prompt; if it is found, store it internally as a prompt (after performing necessary actions to it as noted in the trigger setup), and the remainder of the line is then processed through standard triggers. (If this is used, any prompt that ends on * or (.*) or (.*)? etc. would cause problems with other triggers, as MUSHclient would think the whole line is a prompt, even if it is not. This would only really work for prompts that had a specific character that ended the prompt and still didn't supply a carriage return.) I prefer this option. Although this seems very possible in theory, not knowing the internal workings of MUSHclient, I don't know how plausible it is to add. I believe only the new global/world option would cause much additional processing overhead, depending on how many different prompt triggers you have (why more than 1 active at a time anyway?). I would almost say put this in its own mini-section under world options, since you should only need 1, but since the triggers section has a lot of other things that it could use, it seems appropriate to insert it into that section. Okay, there's my 3 cents. ...hmm, anyone got change for a nickel?
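The "final line doesn't end in a carriage return" check described above can be sketched outside MUSHclient. Here is a rough Python illustration; the prompt pattern and the function name are made up for the example, not anything MUSHclient actually exposes, and a real client would keep the leftover text to prepend to the next network chunk:

```python
import re

# Hypothetical mud prompt, e.g. "100h 50m>" -- adjust to your mud's format.
PROMPT_RE = re.compile(r"^\d+h \d+m>\s*$")

def split_prompt(buffer):
    """Split a received chunk into complete lines and a trailing partial line.

    A partial line (no final newline) that matches the prompt pattern is
    treated as a prompt; any other partial line is returned as leftover to
    be prepended to the next chunk from the mud.
    """
    lines = buffer.split("\n")
    partial = lines.pop()  # text after the last newline; may be ""
    prompt = partial if PROMPT_RE.match(partial) else None
    leftover = "" if prompt else partial
    return lines, prompt, leftover
```

For example, `split_prompt("You attack.\n100h 50m>")` yields one complete line, a recognized prompt, and no leftover, while a chunk that merely ends mid-sentence is held back rather than matched against prompt triggers.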
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865679.51/warc/CC-MAIN-20180523141759-20180523161759-00144.warc.gz
CC-MAIN-2018-22
3,738
15
https://wso2.com/library/conference/2015/06/wso2con-eu-2015-towards-a-winning-api-strategy/
code
[WSO2Con EU 2015] Towards a Winning API Strategy - WSO2Con Europe 2015. Director – API Architecture, WSO2. Sumedha is the Director of API Architecture at WSO2. Sumedha has contributed to the successful implementation of data, SAP and repository-based integration projects, as well as many WSO2 QuickStart development consulting engagements. He is an active committer on the Apache Axis2 project. Sumedha’s article, 'Carbon: Towards a Server Building Framework for SOA Platform', was featured at the fifth International Workshop on Middleware for Service Oriented Computing in New York. He holds a B.Sc. in MIS from the National University of Ireland and is currently reading for an M.Sc. in computer science at the University of Moratuwa, Sri Lanka.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707948235171.95/warc/CC-MAIN-20240305124045-20240305154045-00457.warc.gz
CC-MAIN-2024-10
769
5
http://meta.programmers.stackexchange.com/users/76601/brett-bonner
code
Top Network Posts - 7 Simple Characteristic Function or Step Function in pgfplots - 5 git: what is the correct merging or rebasing workflow to modify a maintenance branch and apply those patches to another branch? - 5 Illegal access Error java - 5 What can a company do to restrict offsite contract developers from redistributing GPLv2-licensed code modifications? Keeping a low profile. This user hasn't posted... yet.
s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257826908.63/warc/CC-MAIN-20160723071026-00281-ip-10-185-27-174.ec2.internal.warc.gz
CC-MAIN-2016-30
445
8
https://wiki.metacentrum.cz/wiki/IMP
code
IMP's broad goal is to contribute to a comprehensive structural characterization of biomolecules ranging in size and complexity from small peptides to large macromolecular assemblies, by integrating data from diverse biochemical and biophysical experiments. IMP provides an open source C++ and Python toolbox for solving complex modeling problems, and a number of applications for tackling some common problems in a user-friendly way. IMP can also be used from the Chimera molecular modeling system, or via one of several web applications. The following modulefiles are present in MetaCentrum: module add imp-2.7.0 Notice: This application supports parallel computing (MPI, OpenMP), which can have weird consequences. For more details about parallel computing visit the page How to compute/Parallelization.
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578531994.14/warc/CC-MAIN-20190421160020-20190421182020-00470.warc.gz
CC-MAIN-2019-18
800
4
https://devs.x-cart.com/how-to_articles/how_to_set_expiration_date_for_static_files_being_served_directly_by_nginx.html
code
How to set expiration date for static files being served directly by Nginx page last edited on 28 February 2016 In case all static files are served directly by Nginx on your site, you may want to set an expiration date for these static files. To do this, specify additional directives in your nginx configuration file, within the “http” section, for example: Help make this document better This guide, as well as the rest of our docs, are open-source and available on GitHub.
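The directives the article alludes to might look something like this; the file extensions and the 30-day lifetime are illustrative choices, not X-Cart's recommended values:

```nginx
# Inside the "http" { ... } section of nginx.conf (illustrative values)
server {
    location ~* \.(css|js|gif|jpe?g|png|ico|svg|woff2?)$ {
        expires 30d;                        # sets Expires and Cache-Control max-age
        add_header Cache-Control "public";
        access_log off;                     # optional: skip logging static hits
    }
}
```

The `expires` directive makes Nginx emit both an `Expires` header and a `Cache-Control: max-age` value, so browsers can cache matching files without re-requesting them for the configured period.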
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103324665.17/warc/CC-MAIN-20220627012807-20220627042807-00358.warc.gz
CC-MAIN-2022-27
474
6
http://www.linuxquestions.org/questions/mepis-64/kwrite-refuses-to-print-478954-print/
code
KWrite refuses to print I just added the BJC 800 (I think) driver for my Canon i850, based on what someone else here had success with. KWrite failed to print a document using this printer and driver. The failure consisted only of nothing happening. It appears to be a KWrite problem, because the test page did print during driver setup, and I just tested it elsewhere, successfully printing a Wikipedia article. Has anyone else had KWrite fail to print? "KWrite failed to print a document using this printer and driver. The failure consisted only of nothing happening." I suggest that you use kjobviewer to see if the document made it onto the print queue or not. If the document did not make it onto the print queue then the problem is in kwrite. If the document is on the print queue but will not print then the problem is in the printer driver setup. If you do not have kjobviewer handy then look at the files in /var/spool/cups and see if you can figure out what is going on.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122167.63/warc/CC-MAIN-20170423031202-00372-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
1,029
6
https://blog.jprosevear.org/2005/03/27/time-warp/
code
Crazy. Child born – boom 4 months later and I haven’t blogged. Finally got the Evolution 2.2.0 and 2.2.1 release notes out. 2.2.2 for GNOME 2.10.1. Hopefully get head versioned up and rolling again for GNOME 2.12 this week. Finally got around to some gnome-pilot patch application, need to send out a “state of the pilot” email to the gnome-pilot list. Checked in some fixes to the bonobo-daemon branch to clean up the configuration. Checked out and built gnome-chess for the first time in probably a year. Need to resurrect this either as the same C or a C#/Mono project, but first I have to think about the UI, I changed it long ago but it feels overly complicated for the usual use cases. Richard Hoelscher contacted me a while back about SVG icons which would be greatly welcomed. As for Emily (the aforementioned daughter), she has recently started rolling over. Nice dinners with parents on both sides over Easter. Finally started installing the countertop in the kitchen. Getting better with the caulking gun.
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221211935.42/warc/CC-MAIN-20180817084620-20180817104620-00409.warc.gz
CC-MAIN-2018-34
1,024
5
https://edu.opencampus.sh/courses/158
code
The opencampus.sh Machine Learning Degree is a program that comprises four mandatory elements. To successfully obtain the degree, you must complete these elements within four semesters. If you completed one of the elements before the program for the opencampus.sh Machine Learning Degree started, we will of course still accept it as part of the degree. By applying for the degree and getting accepted, you get preferred access to all machine learning courses from opencampus.sh (for organizational reasons you still have to go through the application process for each course) and a special certificate showing your experience in the field of machine learning. With the application to the program, you will be registered for the information event that is held before each semester, about four weeks before the semester opening at opencampus.sh. At this event you get additional information on the different elements of the program, and shortly after the event we send out the confirmations to the accepted applicants. You will get an invitation to the event as soon as the date is fixed. The four mandatory elements to obtain the degree are: Completion of one of the following courses at opencampus.sh: - Einführung in Data Science und maschinelles Lernen mit RStudio - Advanced Machine Learning Sessions Participation in one of the following events: - Prototyping Week from opencampus.sh (participating with an AI project or startup) - Coding.Waterkant hackathon You can get a more detailed view of the degree's content and courses from our course book. This semester it is still in development, but you can find the first content here.
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178357935.29/warc/CC-MAIN-20210226175238-20210226205238-00480.warc.gz
CC-MAIN-2021-10
1,643
11
https://issues.apache.org/jira/browse/DIRMINA-723
code
Affects Version/s: 2.0.0-M6 Fix Version/s: None Environment: Ubuntu Linux, kernel 2.6.x The problem was discussed with Emmanuel Lecharny in the mailing lists: If you compare OrderedThreadPoolExecutor and the standard ThreadPoolExecutor, you can see that ThreadPoolExecutor has useful params: - core pool size - maximum pool size - work queue size If you use unbounded thread pools and queues with a MINA Acceptor or Connector, you may get an OutOfMemoryError under critical load because Java creates too many threads. With ThreadPoolExecutor you may limit the number of threads (maximumPoolSize) and use a bounded queue (e.g. a LinkedBlockingQueue of limited capacity). Unfortunately, this does not work with OrderedThreadPoolExecutor: both "waitingSessions" and "sessionTasksQueue" allow neither their size to be configured nor a different queue implementation to be passed in. Even though OrderedThreadPoolExecutor extends ThreadPoolExecutor, it overrides the behavior significantly; it seems that its meaning of "corePoolSize" and "maximumPoolSize" is different.
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153531.10/warc/CC-MAIN-20210728060744-20210728090744-00492.warc.gz
CC-MAIN-2021-31
1,027
12
https://www.elastic.co/guide/en/security/current/prebuilt-rule-1-0-2-peripheral-device-discovery.html
code
Identifies use of the Windows file system utility (fsutil.exe) to gather information about attached peripheral devices and components connected to a computer system. Rule type: eql Risk score: 21 Runs every: 5m Searches indices from: now-9m (Date Math format, see also Additional look-back time) Maximum alerts per execution: 100 - Threat Detection Rule license: Elastic License v2 Config: If enabling an EQL rule on a non-elastic-agent index (such as beats) for versions <8.2, events will not define `event.ingested` and default fallback for EQL rules was not added until 8.2, so you will need to add a custom pipeline to populate `event.ingested` to @timestamp for this rule to work. process where event.type in ("start", "process_started") and (process.name : "fsutil.exe" or process.pe.original_file_name == "fsutil.exe") and process.args : "fsinfo" and process.args : "drives" Framework: MITRE ATT&CK™ - Name: Discovery - ID: TA0007 - Reference URL: https://attack.mitre.org/tactics/TA0007/ - Name: Peripheral Device Discovery - ID: T1120 - Reference URL: https://attack.mitre.org/techniques/T1120/
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224643388.45/warc/CC-MAIN-20230527223515-20230528013515-00759.warc.gz
CC-MAIN-2023-23
1,146
20
http://blog.galaxyzoo.org/page/2/
code
It was discussed within the science team once the nature of Hanny’s Voorwerp was becoming clear, since the color of that giant loop suggested similar emission-line properties at a larger redshift. Kevin gave it the name “Teacup” in honor of this loop. Then in March 2009, Georgia State University colleague Mike Crenshaw was here on my campus for a thesis defense. I showed him this object, and he mentioned that one of their graduate students was doing spectroscopy of active galaxies at the Lowell Observatory 1.8m telescope that week. Two nights later, Stephen Rafter from GSU obtained a long-slit spectrum crossing the loop and showed that it was, indeed, gas photoionized by an AGN. Later this object featured in the Voorwerpje hunt, as one of the 8 cases showing an energy deficit from the nucleus so it must have faded. Indeed, this example was a major factor in showing that the Hunt project would be worthwhile. Today’s post is also from Dr Enno Middelberg and is the second part of two explaining in more detail about radio interferometry and the techniques used in producing the radio images in Radio Galaxy Zoo. In a previous post I have explained how the similarity of the electric field at two antennas’ locations is related to the Fourier transform of the sky brightness. Unfortunately, we’re not quite there (yet). You may have heard about sine and cosine functions and know that they are one-dimensional. Images, and the sky brightness distribution, however, are two-dimensional. So how can we imagine a two-dimensional Fourier transform? In this case, we have to combine 2D waves with various frequencies, amplitudes, and orientations into one image. We can make a comparison with waves on a lake. Just like a sine or cosine wave, a water wave has an amplitude and a frequency, but in addition it also has an orientation, or a direction in which it travels. Now let us think of a few people sitting around a pond or lake. 
Everyone kneels down to generate waves which then propagate through the water. Let us further assume that the waves are not curved, but that the crests and valleys are parallel lines. Now all these waves, with properly chosen frequencies, amplitudes, and directions will propagate into the center of the pond, where the waves interfere. With just the right parameters, the interference pattern can be made to look like a 2D image. In a radio interferometer, every two telescopes make a measurement which represents the properties of such a wave, and all waves combined then can be turned into an image. Let me point out that the analogy with the lake is taking things a little bit too far: since the water waves keep moving across the lake, a potential image formed by their interference will disappear quite quickly, but I hope you get the point about interfering 2D waves. To illustrate this further I have made a little movie. Let us assume that the radio sky looks just like Jean Baptiste Joseph Fourier (top left panel in the movie). I have taken this image from Wikipedia, cropped it to 128×128 pixels, and calculated its Fourier transform. The Fourier transform is an image with the same dimensions, but the pixels indicate the amplitude, phase and frequency of 2D waves which, when combined, result in an image. Then I have taken an increasing number of pixels from this Fourier transform (which ones is indicated at the top right), calculated which 2D waves they represent (bottom right), and incrementally added them into an image (bottom left). At the beginning of the movie, when only a few Fourier transform pixels are used, the reconstructed Mr. Fourier is barely recognizable; with 50 Fourier pixels added, one begins to identify a person, and with an increasing number of waves added, the image more and more resembles the input image. You should play it frame by frame, in particular at the beginning, when the changes in the reconstructed image are large. In radio interferometry, Mr.
Fourier’s image is what we want (what does the sky look like?), but what we get is only the pixels shown in the upper right image. Each of these pixels, all by itself, provides information as illustrated in the bottom right, but all together, they yield an image such as in the bottom left image. And the more pixels we measure, the more accurate the image becomes. So in summary: a radio interferometer makes measurements of the similarity of the electric field at two locations, and the degree of similarity represents the Fourier transform of the sky radio brightness for the two antennas in that instant. Astronomers then reconstruct the sky brightness from all these measurements taken together – that’s also why the technique is called “synthesis imaging”, or “aperture synthesis”. And if you’ve kept reading until here without having your brain turn to mush – congratulations! This is typically the subject of lectures for advanced physics students. I’ve been learning about radio interferometry now for more than 15 years and am still discovering new and interesting bits. I’ve used some statistical tools to analyze the spatial distribution of Galaxy Zoo galaxies and to see whether we find galaxies with particular classifications in more dense environments or less dense ones. By “environment” I’m referring to the kinds of regions that these galaxies tend to be found in: for example, galaxies in dense environments are usually strongly clustered in groups and clusters of many galaxies. In particular, I’ve used what we call “marked correlation functions,” which I’ve found are very sensitive statistics for identifying and quantifying trends between objects and their environments. This is also important from the perspective of models, since we think that massive clumps of dark matter are in the same regions as massive galaxy groups.
We’ve mainly used them in two papers, where we analyzed the environmental dependence of morphology and color and where we analyzed the environmental dependence of barred galaxies. These papers have been described a bit in this post and this post. We’ve also had other Galaxy Zoo papers about similar subjects, especially this paper by Steven Bamford and this one by Kevin Casteels. What I loved about these projects is that we obtained impressive results that nobody else had seen before, and it’s all thanks to the many many classifications that the citizen scientists have contributed. These statistics are useful only when one has large catalogs, and that’s exactly what we had in Galaxy Zoo 1 and 2. We have catalogs with visual classifications and type likelihoods that are ten times as large as ones other astronomers have used. What are these “marked correlation functions”, you ask? Traditional correlation functions tell us about how objects are clustered relative to random clustering, and we usually write this as 1+ξ. But we have lots of information about these galaxies, more than just their spatial positions. So we can weight the galaxies by a particular property, such as the elliptical galaxy likelihood, and then measure the clustering signal. We usually write this as 1+W. Then the ratio of (1+W)/(1+ξ), which is the marked correlation function M(r), tells us whether galaxies with high values of the weight are in more dense or less dense environments on average. And if 1+W=1+ξ, or in other words M=1, then the weight is not correlated with the environment at all. First, I’ll show you one of our main results from that paper using Galaxy Zoo 1 data. The upper panel shows the clustering of galaxies in the sample we selected, and it’s a function of projected galaxy separation (rp). This is something other people have measured before, and we already knew that galaxies are clustered more than random clustering.
But then we weighted the galaxies by the GZ elliptical likelihood (based on the fraction of classifiers identifying the galaxies as ellipticals) and then took the (1+W)/(1+ξ) ratio, which is M(rp), and that’s shown by the red squares in the lower panel. When we use the spiral likelihoods, the blue squares are the result. This means that elliptical galaxies tend to be found in dense environments, since they have an M(rp) ratio that’s greater than 1, and spiral galaxies are in less dense environments than average. When I first ran these measurements, I expected kind of noisy results, but the measurements are very precise and they far exceeded my expectations. Without many visual classifications of every galaxy, this wouldn’t be possible. Second, using Galaxy Zoo 2 data, we measured the clustering of disc galaxies, and that’s shown in the upper panel of the plot above. Then we weighted the galaxies by their bar likelihoods (based on the fractions of people who classified them as having a stellar bar) and measured the same statistic as before. The result is shown in the lower panel, and it shows that barred disc galaxies tend to be found in denser environments than average disc galaxies! This is a completely new result and had never been seen before. Astronomers had not detected this signal before mainly because their samples were too small, but we were able to do better with the classifications provided by Zooites. We argued that barred galaxies often reside in galaxy groups and that a minor merger or interaction with a neighboring galaxy can trigger disc instabilities that produce bars. What kinds of science shall we use these great datasets and statistics for next? My next priority with Galaxy Zoo is to develop dark matter halo models of the environmental dependence of galaxy morphology.
Our measurements are definitely good enough to tell us how spiral and elliptical morphologies are related to the masses of the dark matter haloes that host the galaxies, and these relations would be an excellent and new way to test models and simulations of galaxy formation. And I’m sure there are many other exciting things we can do too. Today’s post comes from Dr Enno Middelberg and is the first part of two explaining in more detail about radio interferometry and the techniques used in producing the radio images in Radio Galaxy Zoo. I have written in an earlier post about the basic idea of how to increase the resolution of a radio telescope: use many telescopes, separated by kilometers, and observe the same object with all. Here is a little more information about how this works. At the very heart of an interferometer is the van Cittert-Zernike theorem: it essentially states that the degree of similarity of the electric field at two locations is a measure of the Fourier transform of the sky brightness distribution. Now that’s a big bite to swallow, but let me explain it in less confusing words: the electric field is all we can measure – radio waves are electromagnetic waves, and radio telescopes are sensitive to the electric field. Now we can build a radio telescope in a way that it produces as its output a voltage which is proportional to the electric field which the antenna receives from, e.g., a galaxy. Much of the signal will be noise from our own Milky Way, the atmosphere and the electronics which amplify the feeble signals, but a tiny little bit of the signal will be caused by radio waves from space, and both antennas will receive a little bit of these. Now suppose we have two telescopes separated by 1 km or so, and both telescopes produce such voltages which contain a little bit of this signal. The voltages are digitised and the two data streams are fed into a correlator. 
The correlator is a computer which takes the two data streams and calculates their correlation coefficient, which is an indicator of their similarity. If the two data streams have nothing in common (for example, because an inexperienced PhD student pointed the two antennas in different directions :-) ) then the correlation coefficient will be zero, which is to say that they are not similar at all. However, if the two telescopes point at the same source, the data streams will have a few bits in common, and the correlator spits out a correlation coefficient which is not zero. This is our measurement! Now that we have that out of the way, we need to talk about Fourier transforms. The van Cittert-Zernike theorem states that the correlation coefficient is a measure of the Fourier transform of the sky brightness. Now what is a Fourier transform? The Fourier transform is an ingenious way of representing a mathematical function with a sum of sine and cosine functions. That is, if I take a large number of sine and cosine functions with various (but carefully selected!) frequencies and amplitudes, then their sum will be an accurate representation of another function, for example a square wave or a sawtooth. Check out the Wikipedia page on Fourier series (which are related to Fourier transforms, but easier to understand), which has a number of nice animations to illustrate this, such as this one: You can also play with Paul Falstad’s Java applet to see how to construct functions using sine and cosine waves interactively – very instructive! In part 2 of this post I will explain how astronomers use 2D Fourier transforms to assemble images of the radio sky. We’re pleased to announce that Radio Galaxy Zoo has been translated to traditional character Chinese.
Many thanks to the Zooniverse’s Chris Snyder for getting all the technical things set up for the translation to go live and Mei-Yin Chou at Academia Sinica’s Institute of Astronomy & Astrophysics (ASIAA) for helping verify the translation. What follows is an announcement describing Radio Galaxy Zoo and the translation in traditional character Chinese and then in English: 電波星系動物園[中文版]歡迎你的加入!在此我們欣然宣佈本計畫中文版開始啟用。感謝中央研究院天文及天文物理研究所 Dr. Meg Schwamb (Meg是參與Planet Hunters 和Planet Four計畫的科學家)以及天文推廣團隊成員黃珞文協助,再將它們和紅外圖像進行比較及匹配,這麼一來,在你的協助下,噴流和宿主星系間本來付之闕如的關聯性,未來將可建立成形。 http://radio.galaxyzoo.org/?lang=zh_tw Welcome to Radio Galaxy Zoo (Chinese)! It is with great pleasure that we announce the launch of the Chinese version of our project. We are very grateful to Dr Meg Schwamb (from Planet Hunters and Planet Four) and Lauren Huang from Academia Sinica’s Institute of Astronomy & Astrophysics (ASIAA) for their help with translating our project from English to traditional Chinese characters. Supermassive black holes (~several hundred million times the mass of our Sun) lie deep in the cores of many galaxies. And though we cannot directly see these black holes, we do sometimes see the huge radio jets originating from the galaxy cores. Galaxies in the radio sky can look quite different from the ones seen in optical wavelengths by instruments such as our own eyes. Some galaxies do not have any central radio emission but only radio jet(s) emanating outwards. Sometimes these jets are straight but at other times, they can be blobby, one-sided or bent. With very large all-sky radio surveys, we have many hundreds of thousands of radio jets and blobs that need to be matched to their host galaxies. Therefore we invite you to see the Universe as you have never seen it before and help us map the radio sky by matching the radio jets and filaments to the galaxies (as seen in the infrared images) from whence they came.
http://radio.galaxyzoo.org/?lang=zh_tw Radio Galaxy Zoo participants have been swamping the Science Team with an incredible number of interesting objects via Talk. Many of these are challenging our understanding of how radio galaxies work, both at their launching sites in supermassive black holes, and in the ways that the ejected jets of radio plasma interact with their environment. We’ll be highlighting some of these curious discoveries in subsequent blogs, but here’s a recently found one that’s just “too good to be true.” Today, we know that a galaxy’s redshift (the measure of how fast it is moving away from us — we use z = velocity/speed of light, approximately) is an excellent indicator of distance. This is due to the overall expansion of the Universe. So a galaxy with z=0.049 is moving away at 14,700 km/s, and is located about 650 million light years away, while a galaxy with z=0.26 is moving at about 89,000 km/s and is 3 billion light years away. How, then, could two such galaxies each be a source of radio emission which appear to be connected with a thin radio filament? That’s exactly what the following picture shows, where the optical picture, in green, is from the Sloan Digital Sky Survey (SDSS) , and the purple structure, outlined in white contours, is radio emission from the Faint Images of the Radio Sky at Twenty cm (FIRST, from the Very Large Array). Radio Galaxy Zoo participants have looked at approximately 40,000 systems so far, so in such a large collection, this unusual object is likely just a coincidence, rather than some failure in our understanding of cosmic expansion. However, it would be nice to get some higher resolution radio images to see what the structure really looks like. If you haven’t given Radio Galaxy Zoo a try yet – please join us at http://radio.galaxyzoo.org. 
We’re finding all kinds of fascinating new structures, while simultaneously creating a large database matching up radio emission with the supermassive black holes from which it was born.

I’m happy to announce that, thanks to the hard work of more than 80,000 volunteers, we’ve recently completed classifying the infrared images of galaxies taken from the UKIDSS survey! There were more than 70,000 images of galaxies on the site that you helped to classify; once the data are reduced, one of our main goals is to compare your classifications to those from the Galaxy Zoo 2 project and study how morphology changes as a function of the wavelength in which those galaxies are observed. Melanie Galloway, a PhD student at the University of Minnesota, will be focusing on these this summer as part of her thesis work.

Some early results have shown that, as we predicted, features like galactic bars are often more prominent in the infrared. Below is a nice example of this phenomenon: both images are of the same galaxy (SDSS J115244.84+054059.1). In the optical image (from GZ2), you can see a spiral galaxy with lots of star formation, but the clumpy morphology of the gas clouds can hide the shape of the bar in the galaxy. In the UKIDSS image, the blue gas clouds from star formation aren’t picked up in the infrared, and the stellar bar is much more clearly visible. This is supported by your classifications: the probability of a bar jumps from just 25% in GZ2 to 67% in GZ:UKIDSS.

This marks the third set of galaxy images we’ve completed since the relaunch of Galaxy Zoo in 2012 (following the high-redshift CANDELS images from Hubble and the artificially redshifted images from FERENGI). There are still tens of thousands of galaxies from the SDSS left to classify in Galaxy Zoo, though, and we’ll be adding new sets of images in the coming months. Thanks again for your help, and we’ll report on the results of the UKIDSS data as they come in!
Last October, Galaxy Zoo began including new images from the UKIDSS survey on the main site. These are many of the same galaxies that were classified in GZ2, but the images come from a completely different telescope and a different wavelength: the infrared. While there’s a lot of science we’ll be able to do comparing galaxy morphologies at different wavelengths, many volunteers have noticed artifacts (features that aren’t real astronomical objects) in the UKIDSS images that can look very different from what you’re used to seeing in the SDSS or Hubble images:

- green squares
- rings and ghosts
- grid patterns and speckles

These appear in only a small percentage of the images we’re looking at, but it’s important to identify them and try to separate them cleanly from the galaxies we’re classifying. So here’s our “spotter’s guide” to UKIDSS image artifacts.

All of the UKIDSS images you see in Galaxy Zoo are what we call “artificial-color”: we use images captured by the telescope’s infrared detector, and then combine the different infrared wavelengths into a single color image. For our images, we use data from the Y-band filter (1.03 microns) for the red channel, the J-band filter (1.25 microns) for green, and the K-band filter (2.20 microns) for the blue channel. The images in Y, J, and K were taken at separate times and with different detectors and filters, so changes in either the camera or the sky will often show up in only one color in the GZ images.

Some users have identified a persistent pattern in the images that looks like four little green pixels arranged in a square (it looks a little like the UKIDSS logo!). This comes from the J-band images. The origin of the squares lies in the way that UKIRT processes data: each patch of the sky is imaged in multiple exposures, and these exposures are then combined to get the final, deeper image. So each pixel in the image comes from four different locations on the detector.
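The artificial-color mapping described above (Y-band into the red channel, J into green, K into blue) is just a per-pixel stacking of three single-band images. Here is a minimal pure-Python sketch, with tiny 2×2 grids standing in for real band data:

```python
# Stack three single-band images into one artificial-color RGB image.
# Channel assignment follows the text: Y -> red, J -> green, K -> blue.
# The 2x2 grids of floats (0..1) are toy stand-ins for real band images.

y_band = [[0.9, 0.1], [0.2, 0.8]]   # 1.03 microns -> red channel
j_band = [[0.1, 0.9], [0.3, 0.7]]   # 1.25 microns -> green channel
k_band = [[0.0, 0.5], [0.6, 0.4]]   # 2.20 microns -> blue channel

def compose_rgb(y, j, k):
    """Combine per-band pixel values into (r, g, b) tuples."""
    return [
        [(y[r][c], j[r][c], k[r][c]) for c in range(len(y[0]))]
        for r in range(len(y))
    ]

rgb = compose_rgb(y_band, j_band, k_band)
```

A feature present in only one band's exposure lights up only that band's channel in the composite, which is why an artifact that exists only in the J-band data shows up pure green.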
In the case of the J-band images, the telescope actually took 8 different exposures during the dither pattern. For a few of the observing runs, the telescope lost the guide star which keeps it positioned at the correct location; that means that the expected number of counts at the position of a bright star is lower, due to the bad frame in the interlaced data. Normally, the software algorithms in UKIDSS drop the bad frames and correct for this effect; as GZ volunteers have identified, though, there are some cases where this didn’t work perfectly. (Many thanks to UKIDSS Survey Scientist Steve Warren at Imperial College London for his help in explaining this phenomenon.) Since the exposure pattern is in a square, the bad pixels will show up where there’s a bright star and one of the four frames is bad (meaning counts are lower than they should be). That’s the origin of the pattern showing up in some images.

As mentioned above, the telescope takes multiple exposures for each part of the sky that it images. For some of the bands, it then images the same part of the sky for a second round, but offsets the pointing by either an integer or half-integer number of pixels. The reason for this is to improve the angular resolution of the telescope; that is, to distinguish small features in the galaxy that are normally blurred out by either the Earth’s atmosphere or the limiting power of the telescope itself. In the final data products, images from these offset frames are combined onto a fixed pixel scale in a process called interleaving. In some sources (bright ones especially), the gridding isn’t perfect and you can see some of this pixel-scale pattern in the images.

Another feature people have spotted are what have been called “ghosts”: these can be regular or irregularly shaped objects appearing in a couple of specific colors.
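Before moving on to the ghosts: the interleaving step described above (weaving half-pixel-offset exposures onto a finer output grid) can be sketched in one dimension. This is an illustrative toy, not the UKIDSS pipeline, which works on 2-D frames with proper weighting:

```python
# 1-D toy of interleaving: two exposures of the same scene, the second
# offset by half a pixel, are woven onto a grid with twice the sampling.
# Illustrative only; the real pipeline handles 2-D frames and weighting.

def interleave(frame_a, frame_b):
    """Merge two half-pixel-offset frames into one finer-sampled array."""
    out = []
    for a, b in zip(frame_a, frame_b):
        out.extend([a, b])
    return out

frame_a = [10, 20, 30]              # samples at pixel centers 0, 1, 2
frame_b = [15, 25, 35]              # samples offset by half a pixel
fine = interleave(frame_a, frame_b)
```

If one input frame is bad (say, from a lost guide star), every other output pixel is affected, which is one way a regular grid pattern can end up imprinted on bright sources.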
There might be multiple causes for these, but one of the most common is the presence of an actual contaminant (a speck of dust, for example) that got into the optics of the telescope. Since the telescope isn’t designed to focus on nearby objects, the point source is distorted, usually into a ring-like shape. The color of these images, like that of the green squares, depends on which band they were imaged in: red for Y-band, green for J-band, or blue for K-band. Here’s one example: you can see the green and blue ring to the right of the galaxy in the color GZ image. The raw data (in black and white) show the same ring in multiple locations, which tells us that it remained in the same position on the detector, but appears several times as the telescope moves over the sky.

We hope this has been useful, but please continue to discuss these in Talk and on the forums, particularly if there are any artifacts that impede your ability to make a good galaxy classification. Happy hunting, and thanks for continuing to participate with us on Galaxy Zoo.

Several of the Galaxy Zoo science team are together in Taipei this week for the Citizen Science in Astronomy workshop. If we’ve been a bit quiet, it’s because we’re all working hard to turn some of the more recent Galaxy Zoo classifications from all of your clicks into information about galaxies we can make publicly available for science. But we thought we’d take this opportunity of all being in the same place to run a live Hangout. We might end up talking a bit about the process of combining multiple clicks into classifications, as well as some of the recent Galaxy Zoo science results. And we’re of course happy to take questions, either as comments below, as Tweets to @galaxyzoo, or via the Google+ interface. We plan to do this during our lunch break, probably about 12.00pm Taipei Standard Time tomorrow (which is, if I can do my sums, 4.00am UK time, or Wednesday 5th March at 11.00pm EST, 8.00pm PST).
As usual the video will also be available to watch later.

You know those odd features in some SDSS images that look like intergalactic traffic lights? They aren’t intergalactic at all: they’re asteroids on the move in our own solar system. They move slowly compared to satellite trails (which look more like #spacelasers), but they often move quickly enough that they’ve shifted noticeably between the red, green, and blue exposures that make up the images in SDSS/Galaxy Zoo. When the images from each filter are aligned and combined, the moving asteroid dots its way colorfully across part of the image.

These objects are a source of intense study for some astronomers and planetary scientists, and the SDSS Moving Object Catalog gives the properties of over 100,000 of them. Planetary astronomer Alex Parker, who studies asteroids, has made a video showing their orbits. I find their orbits mesmerizing, and there’s quite a lot of science in there too, with the relative sizes illustrated by the point sizes, and colors representing different asteroid compositions and families. There’s more information at the Vimeo page (and thanks to Amanda Bauer for posting the video on her awesome blog).

One of the most common questions we receive about asteroids from Galaxy Zoo volunteers is whether there will ever be a citizen science project to find them. So far, as the catalog linked above shows, the answer has been that computers are pretty good at finding asteroids, so there hasn’t been quite the need for your clicks… yet. There are some asteroids that are a little more difficult to spot, and those we’d really like to find are quite rare, so stay tuned for a different answer to the question in the future. In the meantime, enjoy the very cool show provided by all those little traffic lights traversing their way around our solar system.
https://mail.python.org/pipermail/ipython-dev/2008-April/002675.html
[IPython-dev] Ipython1 architecture...
Glenn H Tarbox, PhD  glenn at tarbox.org
Fri Apr 11 02:48:43 EDT 2008

I'm starting to dig through the Ipython1 code... first thoughts are there's been some serious thinking done here and I'm impressed.

I have a lot of ideas driven by past-life experiences but tempered by recent-life reality, and am interested in beginning a dialog on very high level architectural approaches. In particular, as I'm sure is very clear to all, it's very important that we keep things as orthogonal as possible. For example, I'm very interested in the general needs of parallel computing but also require elements of more conventional peer-to-peer and client-server architecture. From what I see, there shouldn't be any problems, although I'm guessing that there are principles relating to non-generic resources (e.g. live data sources or heterogeneous compute resources) which might need to be addressed... this might be a simple matter of labeling convention and a couple of data structures... but some discussion might be warranted.

But, the first thing I came across in digging through the code was reference to threads on the client side. My concern is basically that virtually all the code I write avoids threads. Threads are fine, of course, but are often the single biggest source of difficulty in a project for reasons I don't need to go into.

So, I'm interested in how and why threading is used in the client side of Ipython1. Is it simply to provide support for non-blocking gui activity (like using -q4thread on the ipython command line), or is there some other reason? Given twisted, I'd posit that it's not really necessary to use threads to support non-blocking command line behavior or anything else on the client. The reactor (or select in general) can get around this... but that might require a rewrite of pyreadline, which is planned but would take time... which is why the threading is used...
I'd probably go further and say that threading might really only have a proper place in the architecture for compute-bound problems. Given that this is a parallel computing activity, that would make sense, but it would really need to be thought through (and it looks like it might very well have been very carefully thought through). Fortunately, twisted provides very nice integration between the "main" thread and compute threads, so I'm sure that all is well... but...

On the client side, it appears as though the reactor itself is spawned in a thread. Given that my gui code will be entirely reactor driven, I'm probably fine... but how is the thread coordination planned? From previous posts it's clear that the new twisted blocking thread stuff is heavily used (can't recall the proper names right now). Again, great... but it would be good to know what the intent is and, more importantly, if I'm gonna get bit.

From a more researchy / advanced / cool stuff perspective, I'd be interested in any thinking that's been done WRT coroutines and the wild stuff that can go on in that space. Erlang (and I'm no Erlang expert) has a very thorough architecture which provides all kinds of fanciness, including the persisting / migration of tasklets, and levels of architectural constructs answering questions I haven't yet asked (not unlike twisted). There's been some sniffing around by the stackless folks to see what the right approach to nailing stackless and twisted might be. I see some of the concepts proposed for Ipython1 potentially fitting nicely with things like tasklet migration. Of course, there are limits, and it's in the above that orthogonality need be maintained. I could digress into domain specific nastiness... but will spare the larger group... I guess that I hope for some increased radiation on the

Glenn H. Tarbox, PhD           | Don't worry about people stealing your ideas. If your ideas
206-494-0819                   | are any good, you'll have to ram them down people's throats
glenn at tarbox.org (gtalk) + ghtdak on aim/freenode

More information about the IPython-dev mailing list
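The thread-coordination pattern being asked about above (an event loop running in a background thread, with other threads submitting work and blocking on the result, which is what Twisted's blockingCallFromThread provides) can be illustrated with the standard library alone. This is a hedged sketch of the general pattern, not IPython1's or Twisted's actual implementation:

```python
# Sketch of the "reactor in a thread" pattern: a loop thread consumes
# callables from a queue; callers block on an Event until the result is
# ready. Illustrative only -- not IPython1 or Twisted code.
import queue
import threading

_tasks = queue.Queue()

def _loop():
    """The 'reactor': run queued callables, signalling completion."""
    while True:
        func, args, result_box, done = _tasks.get()
        if func is None:            # sentinel: shut the loop down
            break
        result_box.append(func(*args))
        done.set()

def blocking_call_from_thread(func, *args):
    """Submit work to the loop thread and block until it completes."""
    result_box, done = [], threading.Event()
    _tasks.put((func, args, result_box, done))
    done.wait()
    return result_box[0]

loop_thread = threading.Thread(target=_loop, daemon=True)
loop_thread.start()
answer = blocking_call_from_thread(lambda a, b: a + b, 2, 3)
_tasks.put((None, (), [], None))    # stop the loop thread
```

The design question for a GUI client is which thread owns the loop; in this sketch the caller blocks until the loop thread finishes the work, which is the behavior Twisted's blockingCallFromThread layers on top of a reactor running in another thread.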
http://freecode.com/tags/graphics-conversion?page=1&sort=name&with=172&without=
BRL-CAD is a powerful constructive solid geometry solid modeling system that includes an interactive geometry editor, ray-tracing support for rendering and geometric analysis, path-tracing for realistic image synthesis, network-distributed framebuffer support, and image and signal-processing tools.

DeuTex is a wad composer for Doom, Heretic, Hexen and Strife. It can be used to extract the lumps of a wad and save them as individual files. Conversely, it can also build a wad from separate files. When extracting a lump to a file, it does not just copy the raw data; it converts it to an appropriate format (such as PPM for graphics, .au for samples, etc.). When it reads files for inclusion in pwads, it does the necessary reciprocal conversions. In addition, DeuTex has functions such as merging wads.

Enscribe creates digital audio watermark images. The raster lines of an image are converted into frequency components via an inverse fast Fourier transform. The watermarks sound like soft hissing, but when viewed in a frequency-versus-time display, the image is clearly visible. Manipulation of the audio file, such as by MP3/Ogg compression, reverb, or flange, muddies the image, but it remains visible.

GL2PS is a C library providing high-quality vector output for any OpenGL application. It uses sorting algorithms capable of handling intersecting and stretched polygons, as well as non-manifold objects. It provides advanced smooth shading and text rendering, culling of invisible primitives, mixed vector/bitmap output, and much more. It can currently create PostScript (PS), Encapsulated PostScript (EPS), Portable Document Format (PDF), and Scalable Vector Graphics (SVG) files, as well as LaTeX files for the text fragments.

Ghostscript is a processor for PostScript and PDF files. It can rasterize these files to a wide variety of printers, devices for screen preview, and image file formats. Since applications tend to prepare pages for printing in a high-level format such as PostScript, most Unix users with low-level bitmap printers, such as inkjets, use Ghostscript as part of the printing process. In addition, Ghostscript is capable of converting PostScript files to PDF, functionality comparable to Adobe Acrobat Distiller, but on the command line. Ghostscript is also used for file import and viewing by a great many other applications, including xv, ImageMagick, gimp, and xdvi. Several GUI wrappers for viewing PostScript and PDF files exist, including GSview, ghostview, gv, ggv, and kghostview. This is far from a comprehensive list.

GraphicsMagick is a robust collection of tools and libraries which support reading, writing, and manipulating images in over 90 major formats, including popular formats like DPX, DICOM, BMP, GIF, JPEG, JPEG-2000, PDF, PNG, PNM, SVG, and TIFF. A high-quality 2D renderer is included, which provides a subset of SVG capabilities. C, C++, Perl, Tcl, and Ruby are supported. Originally based on ImageMagick, GraphicsMagick focuses on performance, minimizing bugs, and providing stable APIs and ABIs. It runs on all modern variants of Unix, Windows, and Mac OS X.
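As a small aside on scripting such converters: the sketch below builds a GraphicsMagick invocation in its basic `gm convert input output` form (format inferred from file extension) and runs it only if the binary and the input file are actually present. The file names are placeholder assumptions:

```python
# Build (and conditionally run) a GraphicsMagick conversion command.
# "gm convert src dst" converts between formats based on file extension.
# The file names are placeholders; the call is skipped when gm or the
# input file is missing, so this sketch is safe to run anywhere.
import os
import shutil
import subprocess

def convert_cmd(src, dst):
    """Return the gm invocation converting src to dst (by extension)."""
    return ["gm", "convert", src, dst]

cmd = convert_cmd("photo.jpg", "photo.png")
if shutil.which("gm") and os.path.exists("photo.jpg"):
    subprocess.run(cmd, check=True)
```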
http://community.wizards.com/content/forum-topic/2358786
I noticed the card Bloodchief Ascension yesterday and immediately saw its potential in multiplayer play. Ideally, with a number of opponents, you can get the 3 counters on it fairly quickly, then use destruction / mill cards to do massive or lethal damage to your opponents.

The deck would have to be partially black, since the main card is black. I was then figuring it could also have blue elements to prevent me from being killed right away (Counterspell, Propaganda), as well as a number of effective mill cards (Archive Trap) that would immediately deal enough damage to finish an opponent. Red then has spells that can do respectable damage to get counters (Lightning Bolt) as well as cards to cause destruction for damage and to help save me from being the primary target (Volcanic Fallout). Black itself has a number of useful cards in both of these categories that are especially useful in multiplayer situations: Syphon Soul for damage to get counters, and Syphon Mind for card advantage and damage once Ascension has 3 counters. That said, I am unsure whether it would be wiser to go UB, BR, or UBR.

Overall, the deck is going to need a number of things, which I need possible suggestions for.
1. It needs ways to make sure I get Bloodchief Ascension out fairly early. If all else fails, Diabolical Tutor is useful.
2. It will need ways to protect the card in question from being destroyed, and to stop the other players from killing me before I can kill them. This is what things like Volcanic Fallout, Counterspell, and Propaganda are intended for, though I'm sure there are more effective potential cards.
3. The deck needs to be able to make sure 2 damage is done to an opponent at least 3 times. This is where Lightning Bolt, Syphon Soul, and a few others come in.
4. Finally, I need to be able to inflict massive damage once Ascension has its counters. This is where mass destruction cards as well as mill cards will be useful. My favorite is Archive Trap, since it's likely that a player will search, and this would then be a free instant kill on ANY opponent.

So I need help: what colors should I run; what cards would be useful in getting this to run smoothly; etc. I will start working on a potential deck list.

...Still don't understand how to card tag :\
https://connectingart.ch/product/stilllife/
Availability: In stock

In this series of plant still lifes, I refer to concepts such as the change of the world and the distortion of reality, or simply how a person’s vision of everyday life changes. In this series there is a reference to aphorisms and folk sayings that vividly describe us. Each work is a reflection of my subjective feelings, which I explore through tales and folk wisdom.

50cm by 60cm
https://www.memecenter.com/tauzoo/likes
Obama first 100 days: Bin Laden still at large Trump first 100 days: Bin Laden DEAD What does someone who wears glasses and someone who is sexually abused from an early age have in common? They think how they see the world is normal. whats up queers. I'm still alive. anyone still follow me?
https://www.eventbrite.it/e/biglietti-react-essentials-for-designers-33171981289
React Essentials for Designers
Fri 26 May 2017, 10:00 – 18:00 CEST

Take a break from your daily routine and dive into a day-long experience to code and learn the React essentials. Crafted for designers, project managers, and creatives, you're going to learn how to code user interfaces and think in React. At the end of the day we're going to apply the knowledge by building a cool web app.

Who we are
We are building a community for people outside of the tech hubs who are interested in coding and building products. For designers, project managers, and creatives. This audience ranges from young professionals to mature professionals within the creative industries. You strive to learn and master your skills to make great products and experiences.

Who will teach you?
Henrique Alves is an experienced Software Engineer consultant. He has focused his career on front-end engineering and user experience design. He brings his experience building React prototypes and products for startups in Europe and South America.

Why learn with us?
We're not a school. We have vast experience building digital products and we are still "playing the game". We’re entrepreneurs, creatives and makers. The lessons are practical, focused on real-world problems.

Why take this workshop?
React became a popular library for building user interfaces because of its component-driven approach. It powers small, medium and big sites like Facebook, Netflix, Airbnb and many others. This workshop will change the way you see web applications. It will help you design better products by understanding how things work under the hood. Not to mention the freedom you’ll experience when you can prototype your own UI components.

Is one day enough?
We have carefully compiled the right dose of information to give you a good foundation. From there you decide what you want to learn and what to build upon this foundation.

- Building components, not pages
- React components
- Styling components
- Fetching data from an API
- Chrome DevTools
- Coding good practices

Previous coding experience is not required. Basic HTML and CSS experience is helpful. A good level of English is required. Bring along a laptop, a power charger and an open mind. Please install the following free software before class:
- Google Chrome

09:15 — 10:00 • Doors open. Registration and light breakfast
10:00 — 10:05 • Quick introduction
10:05 — 10:15 • Software check-list
11:00 — 11:15 • Coffee break
11:15 — 12:00 • Building components and Thinking in React
12:00 — 12:30 • Diving into React
12:30 — 13:00 • Styling components
13:00 — 14:15 • Lunch break
14:15 — 14:45 • Fetching data and using Chrome DevTools
15:00 — 17:10 • Hands on! Let’s build some cool
https://africacomplaints.com/business-finance/211538-regent-insurance-company-how-the-.html
How do people whose job is to help you with insurance not specify things like "you must pay an admin fee"? On top of it all, clearly no one listened to what I requested, and money was deducted a month before it was supposed to be. I asked for one simple thing: since no one explained this to me, to please not deduct at the end of September, to make up for deducting earlier, seeing that they can't reverse the money. Now I'm paying for a car that's not even on the road yet. This doesn't make any sense to me, and it's messed up my whole day and finances. Your service is bad so far!!! Not impressed, one bit.
https://docs.tenable.com/integrations/CyberArk/security-center/Content/CyberArkSSHAuto.htm
Note: The Address field in the CyberArk Account Details for an account/host must contain a valid IP/FQDN and must be resolvable on your network. This value is vetted during the collection and discovery process. Address values that are null, or unresolvable, are not added to the scan.

To configure SSH auto-discovery:
- Log in to Tenable Security Center. The My Scans page appears.
- Click + New Scan. The Scan Templates page appears.
- Select a Scan Template. The scan configuration page appears.
- In the Name box, type a name for the scan.
- In the Targets box, type an IP address, hostname, or range of IP addresses.
- (Optional) Add a description, folder location, scanner location, and specify target groups.
- Click the Credentials tab. The Credentials pane appears.
- In the Select a Credential menu, select the Host drop-down.
- From the Authentication Method drop-down, select CyberArk SSH Auto-Discovery. The CyberArk SSH Auto-Discovery field options appear.
- Configure each field for the SSH authentication. The options and their descriptions are as follows.

The IP address or FQDN name for the user’s CyberArk instance (required).

The port on which the CyberArk API communicates. By default, Tenable uses 443.

The Application ID associated with the CyberArk API connection.

Safe: users may optionally specify a Safe to gather account information and request passwords (not required).

AIM Web Service Authentication Type: there are two authentication methods available, IIS Basic Authentication and Certificate Authentication. Certificate Authentication can be either encrypted or unencrypted.

CyberArk PVWA Web UI Login Name: the username used to log in to the CyberArk web console. This is used to authenticate to the PVWA REST API and gather bulk account information.

CyberArk PVWA Web UI Login Password: the password for that username. This is used to authenticate to the PVWA REST API and gather bulk account information.

CyberArk Platform Search String: the string used in the PVWA REST API query parameters to gather bulk account information. For example, the user can enter "UnixSSH Admin TestSafe" to gather all UnixSSH platform accounts containing the username Admin in a Safe called TestSafe. Note: this is a non-exact keyword search; a best practice is to create a custom platform name in CyberArk and enter that value in this field to improve accuracy.

Elevate Privileges with: users can only select Nothing or sudo at this time.

If enabled, the scanner uses SSL through IIS for secure communications. Enable this option if CyberArk is configured to support SSL through IIS.

Verify SSL Certificate: if enabled, the scanner validates the SSL certificate. Enable this option if CyberArk is configured to support SSL through IIS and you want to validate the certificate.

Caution: Tenable strongly recommends encrypting communication between your on-site scanner and the CyberArk AIM gateway using HTTPS and/or client certificates. For information on securing the connection, refer to the Tenable Security Center User Guide and the Central Credential Provider Implementation Guide located at cyberark.com (login required).

- Click Save.
https://git.sr.ht/~brocellous/wlrctl
wlrctl is a command line utility for miscellaneous wlroots Wayland extensions. At this time, wlrctl supports the foreign-toplevel-management (window/toplevel command), virtual-keyboard (keyboard command), and virtual-pointer (pointer command) protocols.

Requires wlroots 0.13+

There is an AUR package for wlrctl here. And an openSUSE package here. Otherwise, build with meson/ninja, e.g.

$ meson setup --prefix=/usr/local build
$ ninja -C build install

wlrctl is still experimental, and has just a few basic features. Check the man page wlrctl(1) for full details. Some example uses are:

$ wlrctl keyboard type 'Hello, world!'
... to type some text using a virtual keyboard.

$ wlrctl pointer move 50 -70
... to move the cursor 50 pixels right and 70 pixels up.

$ wlrctl window focus firefox || swaymsg exec firefox
... to focus firefox if it is running, otherwise start firefox.

$ wlrctl toplevel waitfor mpv state:fullscreen && makoctl dismiss
... to dismiss desktop notifications when mpv becomes fullscreen.

You can send patches to the mailing list or submit an issue on the issue tracker.
https://forum.bricksbuilder.io/t/product-image-gallery/867
When using WooCommerce you need to add a product image gallery on the product page. When there are more than 4 images, it shows them in an extra row, but it should be only 1 row with arrows. This should be made available in the element settings.

This post is about 1 year old. Do you have any new solution for it? I found Ivan Nugraha’s solution on YouTube, but the problem with that is I have to implement an extra JS pack.
https://www.thewindowsclub.com/commands-to-manage-files-and-folders-through-cmd
This article lists down various commands that you can use to manage files and folders through Command-Line in Windows 11/10. Although a lot of users prefer using a graphical user interface to manage files for a hassle-free experience, some also use the command-line interface to perform file management tasks. In any case, it is always better to know alternative solutions to execute a task. In this guide, I will be creating a list of useful commands that you can use for file or folder management on your Windows 10 PC. To perform a specific task on files or folders, there is a dedicated command that you need to enter in CMD. Let’s check out these commands! Commands to Manage Files and Folders through CMD Here are the commands that you should know to manage files and folders using Command Prompt in Windows 11/10: 1] Create a File or Folder in CMD To create a folder, type the folder name with the location where you want to create the folder. Here is the command: mkdir <folder name with path> To create a file of a specific size (in bytes), use the below command: fsutil file createnew file.txt 4000 In place of file.txt, enter the filename with its extension and full path. And, 4000 is the file size in bytes. Related: How to Create Multiple Folders using Command Prompt and PowerShell. 
2] Delete Files or Folder in CMD You can remove a folder using the below command: rmdir <folder name with path> In order to delete a file, the command is: del "<filename with path>" If you want to delete all files from the current folder, enter the command: To delete files with a specific extension only, say png, use the command: If you want to delete files with a particular string in their filename, e.g., xyz, you can use the below command: 3] Find Files in a Particular Folder To find files inside a folder based on different parameters, you first need to navigate to the folder using the command: cd "<folder name with location>" Now, you can find files older than n days in a specific folder using the below command: forfiles /s /m *.* /d -n /c "cmd /c echo @file -n with the number of days. Like if you want to find files older than 2 days, type To find files larger than a specific size, use the command: forfiles /S /M * /C "cmd /c if @fsize GEQ 3741824 echo @path" In the above command, 3741824 is the file size to search files greater than this size. Read: Managing Files and Folders in Windows 11 – Tips & Tricks 4] Rename all file extensions present in a folder at once You can also batch rename file extensions in CMD. Suppose, you want to rename the file extension of all images to JPG, you can use the below command: ren *.* *.jpg 5] Get File Creation Time and Date To check the creation time and date of a specific file, use the command: dir /T:C filename 6] Check for a string inside a file To find all lines containing a particular string in a file, you can use the command: findstr string file-name For example, to display all lines with “twc” in a text file, you need to enter the command: findstr twc twc.txt Do remember that the above command is case-sensitive. To find sentences with any specified string, use a command like: findstr /C:"string1 string2 string3..." 
filename 7] Check for all Hidden Files in a Folder Use the below command to get a list of hidden files in a directory: dir /A:H /B 8] Compress a File in CMD The command to compress a file in a folder is: compact /c filename 9] Hide/ Unhide a file through CMD To hide a file, the command used is: attrib +h filename You can unhide the file again using the command: attrib -h filename 10] Set/ Unset Read-Only attribute to a file To make a file read-only, the command is: attrib +R filename If you want to remove the read-only attribute from a file, the command is: attrib -R filename 11] Command to Rename a File/Folder rename oldfilename.pdf newfilename.pdf 12] Read File Content in CMD You can read text file content in CMD using the below command: type filename 13] Open a File in Default Application You can open a file in its default application by entering a simple command: start filename 14] Move File / Folder to different Location Suppose you want to move the TWC12.pdf file to the TWC folder in G drive; use the below command: move TWC12.pdf G:\TWC\ Command to move all files with a specific extension: move *.png G:\TWC\ To move files starting with a particular letter, say A, the command is: move A* G:\TWC\ Similarly, you can move a folder using a command like below: move foldername <new location> move TWC1 G:\TWC\ 15] Command to Copy Files You can copy files from one location to another using the command: copy Sourcefolder DestinationFolder Hope this article helps you learn some useful commands to manage files and folders through the command line in Windows 11/10.
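For scripted use outside of CMD, the same kinds of searches can be sketched cross-platform in Python. This is an illustrative analog of the two forfiles examples above (the function names are my own, not part of any Windows tool):

```python
import time
from pathlib import Path

def files_older_than(folder, days):
    """Yield files under `folder` last modified more than `days` days ago,
    mirroring: forfiles /s /m *.* /d -n /c "cmd /c echo @file" """
    cutoff = time.time() - days * 86400
    for path in Path(folder).rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            yield path

def files_larger_than(folder, size_bytes):
    """Yield files under `folder` of at least `size_bytes` bytes,
    mirroring: forfiles /S /M * /C "cmd /c if @fsize GEQ n echo @path" """
    for path in Path(folder).rglob("*"):
        if path.is_file() and path.stat().st_size >= size_bytes:
            yield path
```

Unlike the CMD one-liners, these are easy to combine (e.g. files both old and large) inside a larger script.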
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224650620.66/warc/CC-MAIN-20230605021141-20230605051141-00688.warc.gz
CC-MAIN-2023-23
4,734
82
http://fontstruct.com/gallery/tag/Decorative
code
The Gallery: Browse and find FontStructions We found 411 FontStructions tagged with “Decorative”. [gallery item thumbnails; last-edit dates ranging from 24.09.2014 to 21.10.2014] Tags: Font, Monty Python, Monty, Python, Fonty, Experimental, Different, Display, Decorative, New, Abstract, Geometric, Sans Serif, Unique
s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507444774.49/warc/CC-MAIN-20141017005724-00370-ip-10-16-133-185.ec2.internal.warc.gz
CC-MAIN-2014-42
350
8
http://divide-studio.com/blogs/ionutcava/trac-is-up-milestones-are-set-framework-crashes-all-the-time-great-d/
code
What this means: - Timeline: every change to the code is archived and documented on a line-by-line basis. - Roadmap: the various milestones we’ve set for the development process. - SVN hosting: everyone can download the code via an SVN client. - Ticket and Bug report systems: Submit bugs, ideas or changes for us to review. - And lots more … Commits have been slow lately due to the following reasons: - Resource management in the code was a complete mess. No real relation between elements in the code. Thus, a SceneGraph was developed to help clear the mess. It was not committed to the main code base due to memory leaks and improper handling of FlyWeight object creation and shading assignment. - The ResourceManager class was rewritten to create objects based on a ResourceDescriptor that contains the future object’s name, physical location on the disk (i.e. model files, textures etc. if they exist), flags for various conditions on create (i.e. flipped textures, no-material geometry). Not committed due to the same FlyWeight problem. - The Material system needs a rewrite. Switching between shaders all the time isn’t a good idea. Also, creation/destruction is a complete mess. - Other minor bugs/improvements. These changes are massive. They will reflect object management in scenes and proper loading/unloading of resources. Rendering large batches of geometry should be faster (forests?) and the whole scene setup easier to understand (SceneGraph defined/dumped in XML format?). Once this commit is in the main codebase, a lot of smaller ones will follow adding features: Shader/Material based batch rendering, Advanced lighting based on material properties, Deferred Lighting, Speed improvements etc. If you look at the videos available on the main site in the media gallery, the time difference between the “OBJ importer” video and the “Grass generation” one is about 6 months. From the last video until now (as of writing this post), it’s been well over a year.
The changes ARE that massive. Hope future videos will prove that 😉 Until then … check for changes, and feel free to study the code and give feedback if you feel like it.
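The ResourceDescriptor/flyweight scheme described above can be sketched in a few lines of Python. The class and field names here are illustrative only, not the project's actual C++ API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ResourceDescriptor:
    # Illustrative fields: the post mentions a name, an on-disk
    # location, and flags for creation-time conditions.
    name: str
    location: str = ""
    flags: tuple = ()

class ResourceManager:
    """Flyweight-style cache: identical descriptors share one loaded object."""
    def __init__(self):
        self._cache = {}

    def load(self, desc: ResourceDescriptor):
        if desc not in self._cache:
            self._cache[desc] = object()  # stand-in for the real loading work
        return self._cache[desc]
```

The point of the design is that requesting the same descriptor twice returns the same shared object instead of loading the resource again.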
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525133.20/warc/CC-MAIN-20190717081450-20190717103450-00426.warc.gz
CC-MAIN-2019-30
2,167
13
https://sourceforge.net/p/bristol/discussion/529491/thread/9400a7db/
code
Just wondering, before I submit a feature request: would any of you think it'd be a good idea for some of the Bristol emulations (perhaps starting with the mini moog) to automatically detune while running? I.e., just like the analog equivalent? I'd like to know your thoughts on this. [Cross posted to KVRaudio.com] They already do a bit of this: the -detune flag will let you define the discrepancy with the target frequency, and whenever a note is pressed a random amount of this value is added/subtracted from the true frequency. As per the original, this will thus be different by voice, and some will vary up, others down, depending on the accuracy of their components. I did consider doing this by variance over time, however I don't really see the point of that: if you like a bit of detune to bring some movement into the sound then this does a reasonable job. If I introduced more and more variance over time then eventually you would just have to retune the synth, which everybody agreed was a royal pain and was eventually done automatically. I am open to suggestions on how it could be improved, but if it does not include a reasonable degree of randomness between voices then one might as well just detune the synth to start and have done. Ok, forget I even mentioned it. While it would be a humourous novelty, I think it would annoy a lot of people. We could perhaps start a more general thread: what options should be included that are now missing? Some of the stuff I would consider are 1. finishing up some of the loose end emulators 2. OSC support although I am not convinced this is interesting 3. Yamaha ym2151 (dx9/dx27/fb01) soundchip 4. Yamaha ym2128/2129 (dx7/dx1/tz816) reversed engineered combo chip 5. ELKA Synthex 6. LASH support (API is in flux at the moment though) 7.
DSSI/LV2 support (not my favourite but has been requested) A guy called David Horvath is working on a mother GUI for the emulators, details can be found in This might also be worth working on with him, I was considering doing one of these but I am very agreeable to this one. Other points are DSP improvements such as the one you raised 9. Aliasing noise reduction 10. More filter optimisation It is worth saying that the next release at least will probably just consist of another bigger SID synth. Anyway, I think I might post this as a separate thread this afternoon and see where the discussion goes. I'm particularly liking that MonoBristol mother GUI, simple and to the point! I have a few concerns though about DSSI/LV2 support; while I'm not opposed to it, what use would it be? With JACK, audio can pass between different applications, so would we need a plugin for doing such things? I'll talk to you about the bigger SID softsynth in a different thread. I kind of see DSSI and JACK as separate audio servers; the request came in from somebody using it, and I also have reservations about it. It's good to have it on the list though; things will no doubt get more priority, which means other features will get less. Can't have a list where everything is crucial. Listen, I will put this in a different thread and take it up there. We could also start a separate one, both on Open discussion, if you have specific questions about -sid2 (you can see it now with the -libtest option). Kind regards, Nick.
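The per-note randomness Nick describes (-detune defining a discrepancy, with a random amount of it added or subtracted from the true frequency on each key press) can be sketched in Python. Treating the discrepancy as cents is my assumption for illustration, not necessarily bristol's actual unit:

```python
import random

def detuned_frequency(true_freq, detune_cents, rng=random):
    """Return a per-note detuned frequency: a random offset within
    +/- detune_cents is applied, so each voice drifts up or down
    independently, as on an analog instrument."""
    offset = rng.uniform(-detune_cents, detune_cents)
    return true_freq * 2 ** (offset / 1200.0)
```

With detune_cents set to 0 the note is exact; with a small value, repeated presses of the same key land slightly above or below the target pitch.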
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120844.10/warc/CC-MAIN-20170423031200-00126-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
3,435
33
https://blogs.msdn.microsoft.com/girishr/2008/12/06/pdc2008-sets-benchmark-for-wireless-connectivity/
code
Many of you who were there at the PDC keynote room might have missed the UFO style devices (in the picture below) from Xirrus. But several of these devices were silently (perhaps a little flashy) and dutifully doing their job in the keynote room. Pardon my lazy blogging for not saying enough about it at the event as I promised earlier. But this press release from Xirrus will fascinate you. An industry record was made at PDC as we connected 2,890 concurrent users in a 100,000 square foot keynote hall through Wi-Fi. You can read more on this and the other records that were set by Xirrus here: We had over 2,000 connections pretty much during every keynote. With such large number of connections, attendees might have noticed how much bandwidth was eaten up. We didn’t use up the entire bandwidth but were over 90% of the capacity most of the times throughout the week. Hopefully that explains why you were getting a slower connection at times, as it was not just you, but thousands of other people on the same network around you. Our priority was to make sure nothing catastrophic happens as we also had a backup wireless solution if the high-density solution didn’t work out. Thankfully we didn’t ever have to go to our backup Wi-Fi. Next stop, we’ll think about scaling out High Density devices throughout the conference center and more bandwidth (of course) to make your conference experience better. Now you have one more reason to attend PDC 2009 and other future Microsoft events. So what are you waiting for?
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818694719.89/warc/CC-MAIN-20170926032806-20170926052806-00215.warc.gz
CC-MAIN-2017-39
1,528
6
https://betanews.com/2014/09/27/iobit-uninstaller-4-fully-removes-windows-8-apps/
code
IObit Uninstaller 4 fully removes Windows 8 apps IObit has announced the release of IObit Uninstaller 4, an interesting extension of its freeware program uninstaller. The program now has the ability to completely remove Windows 8 apps. A "batch uninstall" option ditches as many as you like in a single operation, and the "Powerful Scan" tool ensures there's no hard drive or Registry junk left behind. System Restore integration will optionally prompt you to create a restore point before you begin. A new "Quick Uninstall" button runs the original uninstaller silently, when possible, so you're not hassled by any messages or alerts. Perhaps most surprising, Uninstaller 4 now automatically detects uninstalls, even if you’re carrying them out from the standard Windows applet. Once these are complete it runs a background scan, displays a warning of any junk, and removes it on demand. There’s a simplified interface, smarter removal of browser toolbars (IE, Firefox, Chrome and Opera are supported), and better detection of leftovers in general. This worked well for us, but arguably it’s also too intrusive, and it also left an extra process running in the background on our test PC. But if you think this is a step too far then it’s easily disabled (click Settings, clear "Call Powerful Scan when uninstallation operation detected"). IObit Uninstaller 4 is available now for Windows 2000 and later.
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823817.62/warc/CC-MAIN-20181212091014-20181212112514-00471.warc.gz
CC-MAIN-2018-51
1,412
8
https://blog.yudiz.com/grpc-remote-procedure-call/
code
gRPC is a remote procedure call framework. Through this blog we will cover what gRPC is, why it is popular, and some of the keywords that come up around gRPC, like stubs and microservices. So let’s start with a first quick intro to gRPC. What is gRPC? gRPC is a new open-source, high-performance remote procedure call framework that can run in any environment. It allows us to call code as if it were running on the local computer, even though it may be executed on another computer. Now the question that arises in our mind is: what are APIs and how do they work? Overview of API API stands for Application Program Interface. As its name suggests, it is an interface between different services; it works as a communicator between them. To take the example of a hotel: we are one service and the hotel’s kitchen is the second service. If we want to order some food, it would be hard for us to communicate directly with the cooks, so we use a communicator, the waiter, as our API: we give the order to the waiter, who takes the food from the kitchen and serves it to us. This is the basic concept behind APIs. There are several protocols that we can use to communicate between different services, like: - RPC (Remote Procedure Call) - SOAP (Simple Object Access Protocol) - REST (Representational State Transfer) gRPC is an updated protocol by Google with higher performance, interoperability, security, productivity, etc. Some people think gRPC is a totally new concept developed by Google, and that no earlier protocol contains this kind of feature. This is wrong, because gRPC is not a new concept. gRPC was developed in 2015, but before that developers were already using an older version of this idea: RPC. RPC is a form of client-server communication that uses a function call rather than an HTTP call.
See this image: gRPC replicates this architectural style of client-server communication via function calls. So you can see that the architecture is the same as RPC; gRPC is not a new concept, but an adaptation of this old technique that improves performance, security, and productivity, which has made it more popular in the last few years. So now the question is: what are the strengths that make gRPC so popular? What is the strength that makes gRPC so popular? There are several reasons that make gRPC popular. First, compared to other protocols, the abstraction is easy: a remote call is just a function call. Second, it supports a lot of languages, like Go, C++, Java, Python, C#, etc. Third is code generation: gRPC uses the proto compiler on a .proto file to auto-generate server-side skeletons (request and response handling) and client-side network stubs, which saves significant development time in applications with various services. These are the strengths that make gRPC popular, and besides all this, gRPC is popular because microservices are very popular. Microservices are very popular today, but still, let me give just a short intro to what microservices are. What is microservices It is an architecture that divides an application into small services, as its name suggests: micro-services. Several services, written in different programming languages, can run on a single server, and the services interact with one another. Here gRPC helps the most, by providing the capability and support to solve the typical issues that arise when we want to connect different kinds of services. Now let’s see the gRPC architecture… The working mechanism of gRPC contains 5 steps. As we discussed, this is client-server communication, so two of the steps are obvious: the first step of the mechanism will be the Client and the last will be the Server.
Afterwards, gRPC contains stubs on both sides to pack and unpack the parameters: one is the client stub and the second is the server stub. The one remaining step is the RPC runtime, the middle communicator between them. So the middle steps are: - Client Stub - RPC Runtime - Server Stub In many blogs you can find more than 5 steps; that is because, as we said before, gRPC communicates with the local kernel first and then goes to the remote server’s kernel, which might increase the count, but these are the common steps. What is a stub? It is a type of interface between RPC clients. The stub handles connecting, sending/receiving streaming or non-streaming requests, cancelling calls, and so on. Maybe some of you have a question: in SOAP we pass data in XML format and in REST we pass it in JSON format, so what do we pass in gRPC that makes it more secure compared to the others? As we know, in SOAP and REST we pass our data as XML and JSON, which is readable data that a human can read, while in gRPC we pass the data through Protocol Buffers (the proto file we discussed before). We pass the same data, but in encoded form: if I am passing int a and int b, these two parameters are sent in encoded form, and at the server end the stub decodes the package and returns the data as per the requirements. Whatever we have discussed today is the surface of what gRPC is and how it works. Maybe some of you have doubts about which particular situations call for which protocol; it totally depends on the project requirements. If the project deals with small amounts of data, we can go for REST; but if the project deals with a large amount of data, then security is one of the important concerns, so at that time we have to go with gRPC, as it has encoding and decoding built in. So this is all about gRPC. Happy Learning.
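The client stub / RPC runtime / server stub flow described above can be sketched in plain Python. This toy uses JSON for packing where real gRPC uses Protocol Buffers, and it collapses the network hop into a single function call, so it only illustrates the shape of the mechanism:

```python
import json

# Server side: the actual procedure, plus a stub that unpacks requests.
def add(a, b):
    return a + b

def server_stub(payload: bytes) -> bytes:
    request = json.loads(payload.decode())          # unpack the parameters
    result = add(*request["args"])                  # the real local call
    return json.dumps({"result": result}).encode()  # pack the response

# Stand-in for the RPC runtime: in reality this crosses the network.
def rpc_runtime(payload: bytes) -> bytes:
    return server_stub(payload)

# Client side: the stub makes the remote call look like a local function.
def add_remote(a, b):
    payload = json.dumps({"method": "add", "args": [a, b]}).encode()
    response = rpc_runtime(payload)
    return json.loads(response.decode())["result"]
```

Calling add_remote(2, 3) behaves exactly like add(2, 3), which is the whole point of the stub abstraction: the caller never sees the packing, transport, or unpacking.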
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817128.7/warc/CC-MAIN-20240417013540-20240417043540-00584.warc.gz
CC-MAIN-2024-18
5,828
35
https://powerusers.microsoft.com/t5/Building-Flows/Posting-a-form-response-to-a-Group-Conversation-mailbox/td-p/120018
code
I am trying to set up a GDPR compliant contact form on a site and have been considering the use of an embedded Office 365 form, combined with an Office 365 Group and associated mailbox open to external senders. The group is private and visible only to our enquiry handler team. The contact form will simply ask for the enquirer’s email, name, subject and question. I would like Flow to then send this securely to the group’s conversation/mailbox. We can then reply to the individual’s enquiry from within Outlook Online or Outlook Desktop - every member of the group having 'send as' permissions for the Group so responses are sent from the group's email address, and not their individual work emails. I am aware that Flow allows me to send a notification email upon receipt of a form submission, and that I can include any of the form's fields within this email. I could therefore send a similar email to the Group email address. However, I do not believe that would be GDPR compliant, given the issues with security of data transit over email. What I envisage is Flow being able to achieve this securely through the Office 365 Group REST API. However, I don't know if an action exists that does this? It also seems silly, if such functionality is possible, to then be storing the form submissions within an Excel sheet. Is it therefore possible for Flow to delete the record after adding it to the Group mailbox, or simply tell Forms to stop storing a copy of responses in an Excel spreadsheet (GDPR being all about how you store sensitive info)? I am aware there are other tools out there that are set up for enquiry handling. However, we only handle about 5 enquiries a week for a small project and it would be good if we can leverage the Office 365 tools we're paying for. Hi @eddie89, Do you want to use the Office 365 Group REST API to achieve your security requirement?
You could refer to the link below to learn more about the custom connectors in Microsoft Flow: Please make a test on your side to see if your requirement could be achieved. Have you had a chance to apply @v-yuazh-msft‘s recommendation to adapt your Flow? If yes, and you find that solution to be satisfactory, please go ahead and click “Accept as Solution” so that this thread will be marked for other users to easily identify! Thank you for being an active member of the Flow Community! Flow Community Manager
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057091.31/warc/CC-MAIN-20210920191528-20210920221528-00163.warc.gz
CC-MAIN-2021-39
2,764
14
http://www.tomshardware.com/forum/246758-30-abit-motherboard-major-error
code
Don't remember if I had the pro, but I had the same problem with a KV8; had to flash the BIOS as it didn't support the newer processors. Of course, without an older CPU, you will have to get the BIOS chip from Abit and update it that way. Hi, I am having the same problem. Computer starts then shuts down. Get the codes 8.b., 9.C., 9.d., 9.E., 9.F. Was using a Hercules 400 watt power supply when the problem developed. Changed to a new Chiefmax 650 watt power supply. Had the same problem when I first built the system, but I was using a 300 watt power supply. Went to the 400 watt, worked for a while; now no joy with either the 400 or 650 watt. Am running Windows XP Pro, have an AMD 3000+ socket 754 processor, ATI Radeon X1300 Pro Sapphire graphics card, OEM CD-ROM, Sony 8x/4x/32x CD-burner, Western Digital 40 GB hard drive. Can anyone help with this problem? thanks
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663718.7/warc/CC-MAIN-20140930004103-00247-ip-10-234-18-248.ec2.internal.warc.gz
CC-MAIN-2014-41
852
2
https://discuss.erpnext.com/t/how-load-javascript-code-in-all-doctype/46377
code
Do you have a custom app available? If so, you can add your script there and have it linked into any document by adding it to app_include_js: place your script file in yourapp/public/js/yourscript.js and add it to yourapp/public/build.json. If you are looking for a reference, you can look into https://github.com/libracore/erpnextswiss/blob/master/erpnextswiss/public/build.json The base code of your script is executed when the document is loaded. You can make use of the jQuery document-ready function to be sure the form is available and then apply your filter. Hope this helps…
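As a minimal sketch of the suggestion above, following Frappe's hooks convention (the app name and paths here are placeholders to adapt to your own app):

```python
# yourapp/hooks.py
# Include this script on every page/doctype when the app is installed.
app_include_js = [
    "/assets/yourapp/js/yourscript.js"
]
```

The script itself would live at yourapp/public/js/yourscript.js and also be listed in yourapp/public/build.json so the build bundles it, as in the erpnextswiss example linked above.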
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347432237.67/warc/CC-MAIN-20200603050448-20200603080448-00243.warc.gz
CC-MAIN-2020-24
573
7
https://podcasts.google.com/feed/aHR0cHM6Ly9mcmVuY2hibGFibGEubGlic3luLmNvbS9yc3M/episode/ZGQ4NTZiOTYtYmFmMS00ZWM4LWFjYTktYWI3MDAzZWNlNDBi?sa=X&ved=2ahUKEwiUjJ_zmcrzAhWwGFkFHSYlCSgQkfYCegQIARAF
code
Welcome to the twenty-seventh episode of the French Blabla podcast where we will cover tips to increase your fluency while boosting your way of learning. This episode is the second part of our journey through the many nuances of the word "anyway". The first part can be found here (click to go to the episode). Stay tuned! In this Episode Music by bensound.com
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588282.80/warc/CC-MAIN-20211028065732-20211028095732-00101.warc.gz
CC-MAIN-2021-43
360
3
https://www.tfir.io/instana-introduces-new-profiling-functionality-for-node-js-go/
code
Instana has introduced a set of enhanced troubleshooting and monitoring capabilities for developers with new always-on production profiling solutions for Node.js and Go applications, as well as automatic tracing of Python Serverless functions running on AWS Lambda. Instana’s new profiling functionality for Node.js and Go, available for free to all Instana APM users, automatically captures profiles for every process in production without impacting resource overhead or application performance. Like other performance entities inside Instana’s solutions, profiles are available to Instana’s Unbounded Analytics engine, allowing patterns across multiple profiles to be discovered and analyzed collaboratively. Moreover, Instana said that it has also extended its AutoTrace features to automatically trace Python functions on Lambda AWS. The new native tracing capability collects a trace for every request independent of X-Ray, all without code modifications. Unlike traditional APM tools, Instana’s automated Application Performance Monitoring (APM) solution is said to discover all application service components and application infrastructure, including Cloud infrastructure such as AWS and Lambda, orchestration infrastructure like Kubernetes and Docker, application services and DevOps processes. Instana’s always-on profiling capabilities automatically attaches to Node.js, Go, and Java processes, with no restarts or manual configuration. This allows all application stakeholders across development organizations to get the exact data they need, when they need it, to understand how their code impacts overall service performance and where the opportunities to optimize performance and scalability exist.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499949.24/warc/CC-MAIN-20230201180036-20230201210036-00247.warc.gz
CC-MAIN-2023-06
1,722
5
http://alligatr.co.uk/
code
I'm Reece Selwood aka alligator. I'm based in Kent in the UK and I do weird programming things. I've been running an unofficial viewers/donations tracker for the biannual Games Done Quick marathons since 2013. A discord bot, designed to be simple to understand and easy to extend. My entry into the Awful Winter Jam 2018. It won "Best Use of Theme". A driving game about a dad mostly yelling. My entry into the GitHub Game Jam 2017. It came 15th overall out of 206 entries.
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304883.8/warc/CC-MAIN-20220129092458-20220129122458-00468.warc.gz
CC-MAIN-2022-05
473
5
https://lovecityblog.com/2012/12/04/diy-love-ornament-garland/
code
This little Christmas mantle decoration didn’t really turn out the way I had envisioned. I was planning to string the ornaments along the string, each little ball spaced perfectly apart. As I quickly strung the ornaments on the string, they began to bunch up. I held up the string of ball ornaments to start the process of spreading them out… and I fell in love with the bunchiness. I love easy, unexpected decor, don’t you?
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948512584.10/warc/CC-MAIN-20171211071340-20171211091340-00355.warc.gz
CC-MAIN-2017-51
430
6
https://vividcode.io/
code
I was trying to maintain an old React app built using Webpack 1.13. When starting the app, I couldn’t add breakpoints in Chrome’s developer tools. It turned out it’s a Webpack issue with the latest version of Chrome, see this GitHub issue. When using a thread to run a task, it’s common to have the task wait for a certain condition before it can continue. This is usually done by busy-waiting with a volatile variable as the flag. This flag is set by another thread to stop the waiting. Before Java 9, we use an empty loop to wait for the condition. The new static method onSpinWait() of class Thread in Java 9 can make the waiting more efficient. Java 9 is about to be released soon. Frameworks and tools continue to add support for this new Java 9 release. Spring Framework already provides support for Java 9. In this post, we are going to run Spring projects using Java 9 and Docker. Java 9 is scheduled to release on September 21. If you want to try Java 9 early-access builds now, you can download them from jdk.java.net and install them locally. However, installing the Java 9 builds makes them the default JDK in your machine, which may affect other applications that depend on the JDK. A better choice is to use Docker. If you don’t have Docker installed, download and install it from docker.com. The Docker image we are going to use is the official Apache Maven image which supports Java 9. We use the tag 3.5.0-jdk-9 to get Maven 3.5.0 with JDK 9. I just want to share some private NodeJS modules for reuse in my other projects. A simple private registry is a good choice. sinopia is what I used. The installation is very easy. Now I’m going to add Disqus comments to my AMP site. Disqus already supports this and has an official guide about it. The whole process is smooth with only a few exceptions. Jackson is a popular library to handle JSON in Java. It has built-in serializers and deserializers to handle common data types.
If you want to serialize and deserialize custom types, you can add custom serializers and deserializers. The code in this post is tested using Jackson 2.7.2 and should work with Jackson after 1.7. The class we want to serialize and deserialize is com.github.zafarkhaja.semver.Version from the jsemver library. We want objects of this class to be serialized as simple strings, e.g. When talking about the new Java 9 release, many people focus on the Java Platform Module System (JPMS). There are many small changes that can make developer’s life easier in Java 9.
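The volatile-flag busy-wait pattern described in the Java 9 post above can be illustrated in Python. Python has no direct equivalent of Thread.onSpinWait() (a spin-loop hint to the CPU), so this sketch shows only the pre-Java-9 empty-loop idea: one thread spins on a shared flag until another thread sets it:

```python
import threading

class SpinFlag:
    """Minimal analog of the volatile boolean flag used for busy-waiting.
    In the Java 9 version, the loop body would call Thread.onSpinWait()."""
    def __init__(self):
        self.ready = False

def worker(flag, out):
    while not flag.ready:   # busy-wait until another thread sets the flag
        pass
    out.append("done")

flag = SpinFlag()
out = []
t = threading.Thread(target=worker, args=(flag, out))
t.start()
flag.ready = True           # the other thread releases the waiter
t.join()
```

In real Python code a threading.Event with wait() would be preferred over spinning; the empty loop is shown only to mirror the pattern the post discusses.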
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687484.46/warc/CC-MAIN-20170920213425-20170920233425-00166.warc.gz
CC-MAIN-2017-39
2,520
16
https://www.duplichecker.com/binary-to-hex.php
code
Binary to Hex Binary to hexadecimal converter allows you to convert binary to hex format with one click. Just type your binary digits and press convert to get an accurate conversion. You can convert your complex binary codes into hexadecimal format by following the steps below. Step-1: Enter the binary code that you want to convert to hex digits in the input box. Step-2: Click the “Convert” button for conversion. Step-3: The results will appear in the right box immediately. Step-4: Copy the results to your clipboard or save the file by clicking on the “Download” button. The main functionalities of our free binary to hex converter are as follows: You can find many online binary to hexadecimal converters, but most are paid and require registration. However, Duplichecker provides a free binary to hex converter that does not require any registration from users. You can use the utility as many times as you desire without any restriction. Easy to Use Interface The binary to hex converter has a user-friendly interface that lets you use it without any complex procedure. Even if you are a beginner, you can easily use the utility and perform conversions straightaway. Unlike most online binary to hex converters, our bin to hex conversion tool doesn’t make you wait long for conversion. Once you enter the binary code and press the convert button, the utility will quickly convert binary to hexadecimal digits. The binary to hex converter provides you with accurate results in just a few seconds. If you are not sure about the accuracy, you can further examine the results manually. The online binary to hex converter is a web-based utility that can be accessed from all devices, including Android, Mac, tablets, and personal computers. The conventional ways of converting binary to hex values often require extensive time and strong mathematical skills. Moreover, you have to make lengthy calculations and remember the conversion table values to accomplish this task.
However, you can take help from the below methods to convert binary to hex format. Let’s have a look. Method 1: Convert Binary to Hex with Conversion Table The most preferred and easiest way of converting binary to hexadecimal is by using a conversion table. We all know that binary numbers consist of 0 and 1, which are known as bits. Hexadecimal is also a positional numeral system, in which each hex digit represents 4 bits (binary digits); its digits are the numbers 0 to 9 and the letters A to F. Let’s understand how this method works by converting (00101101101)2 to hexadecimal. For example: Convert (00101101101)2 to Hexadecimal. First, group the binary digits into sets of 4 starting from the right. Every 4 binary digits become 1 hexadecimal digit. If the total number of bits cannot be divided evenly into groups of four, we add zeros to the left of the last group. 0001 0110 1101 Now, find the corresponding hexadecimal digit for each group from the binary to hexadecimal table. 0001 = 1, 0110 = 6, 1101 = D Now, combine the digits to get the final value. (00101101101)2 = (16D)16 Method 2: Convert Binary to Hex Without Conversion Table The following method allows you to convert binary numbers to hexadecimal without using a conversion table. In this method, binary numbers are first converted to decimal, and the decimal value is then translated into hexadecimal. The binary number can be translated into a decimal number by multiplying each binary digit by the respective power of 2. Then, convert decimal to hexadecimal by repeatedly dividing by 16 until the quotient is zero. The following binary to hexadecimal example will help you get familiar with the method. For example: Convert (0101010101011)2 to Hexadecimal. First, we translate the binary number to decimal.
(0101010101011)2 = 0 × 2^12 + 1 × 2^11 + 0 × 2^10 + 1 × 2^9 + 0 × 2^8 + 1 × 2^7 + 0 × 2^6 + 1 × 2^5 + 0 × 2^4 + 1 × 2^3 + 0 × 2^2 + 1 × 2^1 + 1 × 2^0
(0101010101011)2 = 0 × 4096 + 1 × 2048 + 0 × 1024 + 1 × 512 + 0 × 256 + 1 × 128 + 0 × 64 + 1 × 32 + 0 × 16 + 1 × 8 + 0 × 4 + 1 × 2 + 1 × 1
(0101010101011)2 = 0 + 2048 + 0 + 512 + 0 + 128 + 0 + 32 + 0 + 8 + 0 + 2 + 1
(0101010101011)2 = 2731
Therefore, (0101010101011)2 = (2731)10.

Now that we have the decimal number, we convert it to hexadecimal by dividing 2731 by 16 until the quotient is zero:
2731 / 16 = 170, remainder 11
170 / 16 = 10, remainder 10
10 / 16 = 0, remainder 10
Reading the remainders from bottom to top gives 10, 10, 11. Since the hexadecimal number system writes 0-9 as digits and 10-15 as the letters A-F, these remainders become A, A, B, so the hexadecimal number is AAB.
Hence, (0101010101011)2 = (AAB)16.

Hexadecimal Number System
Hexadecimal, often shortened to hex, is a base-16 system used to simplify how binary numbers are represented. The hexadecimal numeral system uses 16 symbols, so an 8-bit binary number can be written with just two hex digits, one hex digit for each nibble (4 bits). Hexadecimal is more convenient than other number systems because numbers are shorter to write. It uses the decimal digits 0-9 plus six extra symbols, A B C D E F: letters taken from the English alphabet and used as numerical symbols for values greater than nine. For instance, hexadecimal "A" represents decimal 10, and hexadecimal "F" represents decimal 15.
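Both conversion methods described above can be sketched in a few lines of Python (an illustration for this article, not part of any converter's actual source):

```python
def bin_to_hex_grouping(bits: str) -> str:
    # Method 1: pad to a multiple of 4 bits, then map each nibble
    # to its hex digit via a lookup table.
    bits = bits.zfill((len(bits) + 3) // 4 * 4)
    table = {format(i, "04b"): format(i, "X") for i in range(16)}
    groups = (table[bits[i:i + 4]] for i in range(0, len(bits), 4))
    return "".join(groups).lstrip("0") or "0"

def bin_to_hex_via_decimal(bits: str) -> str:
    # Method 2: binary -> decimal (sum of powers of 2),
    # then repeated division by 16, reading remainders bottom to top.
    n = sum(int(b) << p for p, b in enumerate(reversed(bits)))
    digits = "0123456789ABCDEF"
    out = ""
    while n:
        n, r = divmod(n, 16)
        out = digits[r] + out
    return out or "0"

print(bin_to_hex_grouping("00101101101"))      # 16D
print(bin_to_hex_via_decimal("0101010101011"))  # AAB
```

Both functions agree with the worked examples: (00101101101)2 = 16D and (0101010101011)2 = AAB.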
Uses of Hexadecimal Number System
The hexadecimal number system is preferred by software developers and coders to simplify the base-2 number system. A computer works in binary, but humans use hexadecimal to shorten binary numbers, and converting binary to hex format makes the values easier for humans to read. The primary uses of the hexadecimal system are as follows:
- Hexadecimal numbers are often used to define locations in memory. Each byte can be described with two hexadecimal digits, compared to eight digits in binary format.
- Web developers use hexadecimal numbers to define colors on web pages. An RGB color is written with two hexadecimal digits per channel: RR for red, GG for green, and BB for blue.
- Hexadecimal is also used to represent Media Access Control (MAC) addresses, which consist of 12 hexadecimal digits.
- Hexadecimal is used to display error codes, which helps programmers find and fix errors.

Binary Number System
The binary system is a base-2 system that contains two digits (0, 1). Humans mostly use the decimal system, whereas computers and other digital devices use the binary system. Strings of zeros and ones encode the commands a computer receives and the results it produces. Professionals who work with computers tend to group bits for easier reading.
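The RGB use mentioned above is easy to demonstrate: each color channel (0-255) becomes exactly two hex digits. A quick Python illustration (the function name is just for this example):

```python
def rgb_to_hex(r: int, g: int, b: int) -> str:
    # Each channel fits in one byte, so it formats as exactly
    # two hex digits, giving the familiar #RRGGBB notation.
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

print(rgb_to_hex(255, 165, 0))  # #FFA500
print(rgb_to_hex(0, 0, 0))      # #000000
```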
SharePoint parent page refresh on close of Modal Dialog, IE issue
I have a webpage in which two webparts are installed. This is a system wherein a user can make a holiday request and a manager can approve or reject it. In the first webpart, the holiday requested by a user is shown ...
Jul 10 '12 at 12:50
Define S_RFC permissions using usage data

Retain the values of the permission trace in the role menu. If you only want to translate the description of the role, it is recommended to record the PFCG transaction and to change the source language of the role using the Z_ROLE_SET_MASTERLANG report before the LSMW script runs. The report on how to change the source language can be found in SAP Note 854311. Alternatively, you can use the SECATT transaction (Extended Computer Aided Test Tool, eCATT) to perform the translation instead of the LSMW transaction. If, after an upgrade or after applying a support package, you have used the SU25 transaction with steps 1 or 2a to bring suggested values up to the latest SAP system state, you must restore the suggested values to the customer's organisation levels with the PFCG_ORGFIELD_UPGRADE report. You must run the report for each field; the report's search function shows only the affected organisation levels.

Correct settings of the essential parameters: the authorization check for the authorization objects PS_RMPSORG and PS_RMPSOEH runs as follows after a user entry.
1. The system determines the organizational unit to which the user is assigned.
2. Starting from this organizational unit, the system creates a list of all organizational units that are superior to it in the hierarchy.
3. The system determines the set (M1) of all organizational objects that are assigned to these organizational units.
4. The system determines the organizational unit to which the object to be processed is assigned (this corresponds to the lead organizational unit in the attributes of the object).
5. Starting from this lead organizational unit, the system creates a list of all organizational units that are superior to it within the hierarchy.
6. The system determines the set (M2) of all organizational objects assigned to these organizational units.
7. The system forms the intersection of M1 and M2, the matching organizational objects of the user and of the object to be processed, and so determines the organizational levels that match for both.
8. Once a matching organizational level is found, the system performs the authorization check for the other fields of the authorization object (e.g., type of object or activity). If the system cannot determine a common organizational level, processing is rejected; if the user is allowed to perform the requested activity, processing is allowed.

Define explicit code-level permission checks whenever you start transactions from ABAP programs or access critical functions or data. This is the easiest and most effective defence for protecting your business applications from misuse, because permission checks at the programming level can ensure two things: incomplete or incorrect validation of transaction-start permissions, which would otherwise result in compliance violations, is avoided, and complex permission checks can be performed adequately even for the parameterized use of CALL TRANSACTION.

For the assignment of existing roles, regular authorization workflows require a certain minimum turnaround time, and not every approver is available at every go-live. With "Shortcut for SAP systems" you have options to assign urgently needed authorizations anyway and to additionally secure your go-live. A central basis for extensively digitized processes is a set of structured specifications that regulate system access and control access rights. Since the maintenance effort would be too great if individual authorizations were entered in the user master record, authorizations can be combined into authorization profiles.
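The core of the check described above is an intersection of two ancestor sets. As a rough sketch in Python (hypothetical data structures for illustration only; this is not SAP code, and the names `parent`, `org_objects`, `permitted` are invented):

```python
def ancestors(unit, parent):
    # Walk up the org hierarchy, collecting the unit and all superiors.
    chain = []
    while unit is not None:
        chain.append(unit)
        unit = parent.get(unit)  # None at the hierarchy root
    return chain

def permitted(user_unit, object_unit, parent, org_objects):
    # M1: org objects of the user's unit and all its superiors.
    m1 = {o for u in ancestors(user_unit, parent) for o in org_objects.get(u, ())}
    # M2: org objects of the object's lead unit and all its superiors.
    m2 = {o for u in ancestors(object_unit, parent) for o in org_objects.get(u, ())}
    # Processing continues only if a common organizational level exists.
    return bool(m1 & m2)

parent = {"team-a": "dept-1", "team-b": "dept-1", "dept-1": None}
org_objects = {"team-a": {"ORG_A"}, "team-b": {"ORG_B"}, "dept-1": {"ORG_ROOT"}}
print(permitted("team-a", "team-b", parent, org_objects))  # True (shared ORG_ROOT)
```

In the real check, a non-empty intersection is followed by the authorization check on the remaining fields (activity, object type); an empty intersection rejects processing outright.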
How to Obtain a French Passport
French passports are issued to nationals to allow them to pursue international travel and to certify their identity. Since 2006, the conditions of issuance have become stricter. French nationals can apply for the document in France or at French consulates around the world. The passport includes information such as the bearer's place of birth, nationality, surname, photo, gender, height and signature.
Go to a French consulate if you live in the United States and are applying for a passport. French consulates are located in major cities including Chicago, Boston, Houston, Atlanta, Los Angeles, Miami, San Francisco, Washington, D.C., New Orleans and New York.
Present official identification and proof of French nationality, such as the long-form copy of your French birth certificate. If you have been married or divorced, or have changed your name, get a recent copy of your birth certificate. You can also furnish a certificate of French nationality, a naturalization certificate or a declaration of acquisition of French nationality.
Display proof of your status if you are living in the United States. Valid documents include an American passport, green card, employment authorization card or U.S. visa. Bring a utility bill, rent receipt or tax return to prove your address. If you are a student living on campus, you can show a letter from your university with your address on it.
Present one recent passport-size photo with a white or light gray background. Do not wear anything on your head when having the picture taken.
Pay for your French passport using cash or a credit card. Immediate delivery is not available, but the document is usually processed within three weeks. Appear in person at the consulate to pick up your French passport.
Waimea is a free standalone open-source chronogram editor (Github page). Its aim is to provide a way to produce clear waveform sheets from simple text definitions and intuitive controls. It was inspired by Wavedrom. It can be run on Windows XP and later, and also easily on Linux through Wine. No installation is required. It is currently written in VB6, but plans are being made to move away from it: speed isn't an issue, portability is. Rendering is done with OpenGL.
- Simple markdown syntax (see Github page) saved as human-readable text files
- Fast rendering
- Rulers (colored vertical lines)
- 6 predefined colors
- Auto-refresh timeout to avoid unnecessary refreshing

Pins (popup notes)
Pins are a way to annotate waveforms at precise times without cluttering the view. The text note pops up when the cursor is over the pin, and all notes in view can be seen by holding down the ALT key. Pins can be assigned colors.

Drag scrolling with fixed labels
Navigation in large chronograms is done simply by drag scrolling. The name labels for each waveform are transparent and locked horizontally so they are always visible. A left double-click resets the sheet's position.

Anti-aliasing and inverted colors
Fully customizable themes aren't implemented yet, but anti-aliasing and color-inversion options are available. The color saturation setting can also be used to adjust readability. Time (or ticks) can be measured easily by "tracing a line" between two points with the right mouse button.

- Scaling: Stretch (or shrink) the sheet by this factor.
- Groups opacity: Group colored box opacity.
- Ticks opacity: Tick marks opacity (can be disabled).
- Color saturation: Sets how vivid colors are. A low setting results in almost grey.
- Live refresh: Automatically refresh the chronogram render while editing the definition. If disabled, refresh is done manually by pressing the F5 key.
- Alt shows all pin notes: When enabled, holding the ALT key shows all visible pin notes.
- Load last opened file: Does exactly that.
- Default / Inverted: Choose the color scheme.
- Anti-aliasing: Smooths out graphics.
Settings are saved when Waimea is closed. The Extend waves dialog is a beta feature meant to allow extending wave definitions easily and uniformly. It isn't well tested and can give unexpected results.
Hello, my name is Steve Mathias, Microsoft Premier Field Engineer (PFE), and I wanted to spend a moment discussing the "mechanics" of the Intel microcode updates that you may see coming down from Microsoft Update or the Windows Catalog. The security implications of why you should update the microcode on your processors are already covered in the documentation from us and our partners (Spectre/SSB/etc.). The purpose of this blog is to explain why Microsoft is collaborating with our partners Intel and AMD on these microcode updates, with a little background on how the updates work.

To start the discussion, we need to lay down a key fact: when processors are manufactured, they have a baseline microcode baked into their ROM. This microcode is immutable and cannot be changed after the processor is built. Modern processors do have the ability, at initialization, to apply volatile updates that move the processor to a newer microcode level. However, as soon as the processor is rebooted, it reverts to the microcode baked into its ROM. These volatile updates can be applied to the processor in one of two ways: by the system firmware/BIOS via the OEM, or by the operating system (OS). Neither, as stated earlier, updates the microcode in the processor's ROM. If you were to remove the processor from one computer and install it in a computer with an older system firmware/BIOS and an un-updated OS, you would be vulnerable again.

A couple of common questions:

Why is Microsoft collaborating with Intel and AMD and publishing microcode updates via Microsoft Update? The answer is simply that Windows offers the broadest coverage and quickest turnaround time to address these vulnerabilities. Microcode updates delivered via the Windows OS are not new; as far back as 2007, some updates were made available to address performance and reliability concerns.

Can I skip taking updates delivered via Windows and only take updates from my OEM via a system firmware/BIOS update?
Technically speaking you could, but as mentioned earlier, Microsoft Update often has the microcode updates to address issues much sooner. Work with your OEM to help make this decision, or simply take the updates from Microsoft Update.

Is there a problem if I update my system firmware/BIOS with one version of a microcode update and allow Windows to install a different version? When the processor boots, versioning ensures it uses the latest microcode update regardless of where it comes from. So installing system firmware/BIOS updates and microcode updates from Microsoft Update is perfectly acceptable. It is possible that the OEM updates the microcode to one level and the OS updates it to an even higher level during the same boot.

In Windows, how are microcode updates delivered to the processor? Microcode updates install like any other update. They can be installed from Microsoft Update, WSUS, or SCCM, or installed manually if downloaded from the Catalog. The key difference is that the payload of the hotfix is primarily one of two files:
mcupdate_GenuineIntel.dll - Intel
mcupdate_AuthenticAMD.dll - AMD
These files contain the updated microcode, and Windows automatically loads them via the OS Loader to patch the microcode on the bootstrap processor. The payload is then passed to additional processors as they start up, as well as to the Hyper-V hypervisor if enabled. Hopefully this information helps demystify what these microcode updates are and allows you to confidently install them proactively.
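The "latest revision wins" versioning described above can be modelled in a few lines. This is a toy Python sketch of the concept (revision numbers are made up; this is not how Windows or the CPU actually implements it):

```python
def effective_revision(rom_rev, *volatile_revs):
    # The ROM microcode is the immutable baseline. Volatile updates
    # (from firmware/BIOS or the OS loader) only take effect if newer,
    # so the highest revision offered during this boot wins.
    return max(rom_rev, *volatile_revs) if volatile_revs else rom_rev

# BIOS delivers revision 0x24, the OS later delivers 0x28: the newer wins.
print(hex(effective_revision(0x20, 0x24, 0x28)))  # 0x28
# After a reboot with no volatile update applied, back to the ROM baseline.
print(hex(effective_revision(0x20)))  # 0x20
```

This also illustrates why mixing sources is safe: applying an older update on top of a newer one has no effect.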
For those looking for a way to learn the C# programming language, this course is the perfect answer. The course has high-quality videos that contain everything required to learn C#. By the end of the course, learners will have a full grip on a number of concepts: how to work with primitive types and expressions, what the C# language is and how it works, how to use arrays and lists, how to work with text, date and time, how to debug C# applications effectively, and much more. This course can thus guide a person through the C# language in a highly efficient way. We have given a detailed review of this course below for our readers, but first some highlights:
- 52,000+ students enrolled
- 13,745 reviews available
- 10 hours of on-demand video
- 10 articles
- 2 supplemental resources
- Access on mobile and TV
- Certificate of completion
- Lifetime access
- 30-day money-back guarantee
The most attractive feature is the 30-day money-back guarantee, which means there is no risk: if you don't like this online course, you can get your money back within 30 days.
Who should take this course? This course can be taken by the following people:
- Anyone interested in learning the C# programming language.
- Any beginner who wants to start a career in this field.
- Any experienced professional who wants to improve or upgrade his skills.
Social Proof - Reviews of International Students & Professionals about this course:
Let us now discuss the part that has the most influence on viewers: the social proof. This section contains a number of comments from people who have taken this course and completed it. Let us take a look at some positive and negative feedback.
- This is a very informative course that has helped me a lot in revising all the fundamentals of the language.
I had been out of practice for quite a while and this course turned out to be really helpful. (Eamonn Reilly/5 star)
- After completing this course, I can easily give positive feedback about it. The course is very well organized, and it was quite challenging to solve the exercises given. The instructor is amazing; he is fluent in English and really knows what he is doing. Overall, an amazing course. (Gela Pataraia/5 star)
- I have a job that I really like, but then came a task where I needed to learn C#. This course provided me with a full guide to what C# is and how I can use it to do the job I was assigned. All the C# basics were explained beautifully and the instructions were very simple. (Aloksatpathy/5 star)
- The structure and pacing of the course are very good. Each step was explained thoroughly, and I even managed to take notes and listen to the instructor at the same time. (Wyatt/5 star)
- I used to study this subject at university, but I can definitely say that this course has taught me more than I managed to learn there. (Moris Kabba/5 star)
- I really enjoyed the way the lectures are delivered; everything explained in the course was engaging and easy to understand. Overall, the course is a winner and totally worth our time and effort. (Fernando Albert/5 star)
- I really like that all the difficult topics have been split into smaller ones and then explained one by one. After completing this course, I can easily recommend it to anyone who wishes to learn C#. (Vedang Patil/5 star)
- Thank you for providing this course. I am a person who had zero background in this field, and I am leaving with a solid foundation. (Denisa Novotna/5 star)
- I didn't like the course at all, as it is not beginner friendly.
I would advise against taking this course if you are a complete beginner with no prior experience in this field, because you won't be able to get hold of the topics taught. (Leucioiu Stefan/2 star)
- After completing this course and evaluating what I managed to learn, I'd say the course is a little overrated, because most of the concepts are under-explained for a beginner, and the examples carry too few details to be sufficient. (Pera Casian/3 star)
- I have started to feel like I am wasting my time with this course because I am not able to gain anything from it. The explanations are not sufficient for a complete beginner to understand. This course is not what I thought it would be when I took it. (Dalton D Carpenter/2 star)
With every course come some alternatives that can be taken if one feels the main course is not enough. By taking a look at these alternatives, learners can get an idea of what the courses offer, making it easier to decide which course suits them.
Programming for Everybody: Programming for Everybody is an excellent choice for every learner; it covers not only basic programming but also builds computer programming skills in Python. The course gives you a sense of programming: why people use programming and how it has changed so many lives. You will learn about conditional code, different variables and expressions and how to work with them, and how to install Python and write your first program. The course has an amazing rating of 4.8, and the instructor, Charles Severance, delivers everything there is to know about the basics of programming. In short, taking this course can give you a push toward advanced programming.
Code Yourself! An Introduction to Programming
The course contains everything there is to know about how to write programs. A learner will gain a number of skills by the end of the course: computer programming, what algorithms are and how to write them, and programming from scratch. The course aims to provide complete guidance to all the people out there who have an interest in this field and wish to start a career in it, and it ensures that a person who completes it will have enough knowledge about programming to take his understanding to an advanced level. How to reuse your code, how to test and document your programs, and most importantly how to create your very first computer program will all be taught. Hence, we can say that the course is worth buying.
We can conclude our discussion by saying that this course is indeed a helpful and easy-to-understand course that covers everything there is to know about the C# programming language. The course is perfect for beginners, even those with no prior experience in the field, and it contains a number of quizzes and real-world examples that help a learner understand the concepts clearly. The rating of this course is 4.5, more than ninety thousand students have enrolled in it, and the feedback is largely positive. Also, if you take the course now you will get it at a huge discount, so don't waste any more time: just click the Take This Course button to get lifetime access to its contents.
Kafka quotas enforce limits on produce and fetch requests to control the broker resources used by clients. Using quotas, administrators can throttle client access to the brokers by imposing network bandwidth limits, data limits, or both. Kafka quotas are supported in IBM Event Streams 2018.3.1 and later.

About Kafka quotas
In a collection of clients, quotas protect against any single client producing or consuming significantly larger amounts of data than the other clients in the collection. This prevents broker resources from becoming unavailable to other clients, DoS attacks on the cluster, and badly behaved clients impacting other users of the cluster. After a client with a defined quota reaches the maximum amount of data it can send or receive, its throughput is stopped until the end of the current quota window. The client automatically resumes receiving or sending data when the quota window of 1 second ends. By default, clients have unlimited quotas. For further information about quotas, see the Kafka documentation.

You can set quotas by using the Event Streams CLI as follows:
- Log in to your cluster as an administrator by using the IBM Cloud Private CLI:
cloudctl login -a https://<Cluster Master Host>:<Cluster Master API Port>
The master host and port for your cluster are set during the installation of IBM Cloud Private.
- Run the following command to initialize the Event Streams CLI:
cloudctl es init
- Use the entity-config command option to set quotas. Decide what you want to limit by choosing a quota type, and set it with the --config <quota_type> option, where <quota_type> can be one of the following:
producer_byte_rate - limits the number of bytes that a producer application is allowed to send per second.
consumer_byte_rate - limits the number of bytes that a consumer application is allowed to receive per second.
request_percentage - limits all clients based on thread utilisation.
Decide whether you want to apply the quota to users or to client IDs. To apply a quota to users, use the --user <user> option. Event Streams supports 2 types of users: actual user principal names, and application service IDs.
- A quota defined for a user principal name is applied only to that specific user name. To specify a principal name, prefix the value with u- (for example, u-testuser1).
- A quota defined for a service ID is applied to all applications that are using API keys bound to that service ID. To specify a service ID, prefix the value with s-.
To apply a quota to client IDs, use the --client <client id> option. Client IDs are defined in the application using the client.id property; a client ID identifies the application making a request. You can apply a quota setting to all users or to all client IDs by using the --user-default or --client-default parameters, respectively. Quotas set for specific users or client IDs override the default values set by these parameters.
By combining the quota type and the user or client ID parameters, you can set quotas in the following ways:
cloudctl es entity-config --user <user> --config <quota_type>=<value>
cloudctl es entity-config --user-default --config <quota_type>=<value>
cloudctl es entity-config --client <client id> --config <quota_type>=<value>
cloudctl es entity-config --client-default --config <quota_type>=<value>
For example, the following setting specifies that user u-testuser1 can only send 2048 bytes of data per second:
cloudctl es entity-config --user "u-testuser1" --config producer_byte_rate=2048
And the following setting specifies that all application client IDs can only receive 2048 bytes of data per second:
cloudctl es entity-config --client-default --config consumer_byte_rate=2048
The cloudctl es entity-config command is dynamic, so any quota setting is applied immediately without the need to restart clients.
Note: If you run any of the commands with the --default parameter, the specified quota is reset to the system default value for that user or client ID (which is unlimited):
cloudctl es entity-config --user "s-consumer_service_id" --default --config producer_byte_rate
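The quota-window behavior described above (throughput stopped until the 1-second window ends) can be illustrated with a toy Python model. This is only a sketch of the concept, not how the Kafka broker actually implements throttling:

```python
class ByteRateQuota:
    """Toy model of a producer_byte_rate quota: once a client exceeds
    its byte allowance inside the current 1-second window, further
    sends are refused until the window rolls over."""

    def __init__(self, bytes_per_second, now=0.0):
        self.limit = bytes_per_second
        self.window_start = now
        self.used = 0

    def try_send(self, nbytes, now):
        if now - self.window_start >= 1.0:   # window ended: reset usage
            self.window_start = now
            self.used = 0
        if self.used + nbytes > self.limit:  # throttled for this window
            return False
        self.used += nbytes
        return True

q = ByteRateQuota(2048)              # like producer_byte_rate=2048
print(q.try_send(1500, now=0.1))     # True
print(q.try_send(1000, now=0.5))     # False - would exceed 2048 this window
print(q.try_send(1000, now=1.2))     # True - new quota window
```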
- NESICIDE developer
- Posts: 1099
- Joined: Mon Oct 13, 2008 7:55 pm
- Location: Minneapolis, MN
Shiru, very nice article! One thing I've added to NESICIDE is the ability to see mixed-mode source during debugging. That helps (I think, anyway) people who want to develop in C to see exactly what the C compiler generates for any particular line of C source code.
Shiru wrote: The problem is that there are no comfortable debuggers for C code compiled into 6502 assembly around (yet) - ones that allow you to put breakpoints on arbitrary C lines and see what the variables contain at the moment. Usually you only have an assembly-level debugger in some emulators, and it is not very helpful with compiled code.
That statement is not really true: NESICIDE can do that. I've been stepping through AlterEgo at the C level for quite a while now. Thank you for such a polished and useful example of programming for the NES in C.

Something I don't understand so well: then you write C code that is less portable, less readable and more low-level. I don't have anything against it, but doesn't this kill the purpose of using C instead of assembly?
Shiru wrote: Due to very limited NES resources, such as CPU speed, RAM and ROM size, writing proper, clean C code isn't very effective. To make it faster and shorter you have to do things that otherwise aren't considered acceptable. They disable some of C's advantages, making the code more low-level and less structured. There are suggestions that will make your code more effective, but certainly less readable.
A random example: you have a level map that is larger than 256 bytes, let's say 32x32, and you need to get a value from it using two 8-bit variables, mx and my.
So, for C code:

Code: Select all

    lda my
    sta ptr_l
    lda #0
    sta ptr_h
    dup 5
    asl ptr_l ;<<5
    rol ptr_h
    edup
    lda ptr_l
    clc
    adc mx
    sta ptr_l
    lda ptr_h
    adc #0
    sta ptr_h
    lda ptr_l
    clc
    adc #<map
    sta ptr_l
    lda ptr_h
    adc #>map
    sta ptr_h
    ldy #0
    lda [ptr_l],y

In the past Tepples has stated that the Koei simulation games were written in C, and that is probably why they seem sluggish. I bet that we could do better. I seriously considered making some sort of an RPG as my next nesdev project, and coding part of it in C, but I lack the time to take on such a project right now.

You'll run into the other problem with RPG battle logic in C - the resulting code will be huge, so the problem with bankswitching has to be solved somehow.

Not sure what exactly you're asking here. As long as the runtime functions (pusha/popa etc.) are placed in a fixed bank it should work OK. The switching itself has to be done manually, of course.

mic_ wrote: Do you know if CC65 handles cross-bank calls well (linking symbols that have been compiled into separate banks)? This was quite a hassle with SDCC (Z80) and I ended up having to write some custom tools for it, and it still didn't work perfectly.

It doesn't happen automagically, but depending on the calling convention used, you may be able to write a bunch of stubs that make cross-bank calls using a trampoline in the fixed bank.

mic_ wrote: Nice info and cleanly written. Do you know if CC65 handles cross-bank calls well (linking symbols that have been compiled into separate banks)? Let's say you split your code/data into separate banks which are compiled and linked separately (to avoid having duplicates of the same stuff for different combinations of banks) and then combined into a .nes file:

bank0.c -> bank0.obj -> bank0.bin
bank1.c -> bank1.obj -> bank1.bin
header + bank0.bin + bank1.bin + ...
-> game.nes

If bank0 wants to call a function in bank1, can you tell CC65 to resolve the address of that function without actually putting a copy of the function in bank0 as well? Using hardcoded addresses is a PITA, and using a proxy function at a fixed address to delegate calls isn't really that nice either, IMO. With SDCC I ended up writing a tool that parsed the .SYM files generated by the linker and it would output a header file with named function pointers for any given bank that other banks could use. It didn't work when there were cross-dependencies (bank X and bank Y both wanted to call each other), so in some instances I still had to use hardcoded addresses.
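The fixed-bank trampoline mentioned in the thread can be sketched roughly as follows. All names here (banked_call, set_prg_bank) are hypothetical, and the mapper register write is simulated with a plain variable so the sketch stands alone; on real hardware set_prg_bank would poke the mapper, and banked_call would live in the fixed bank so it stays mapped while banks are swapped under it:

```c
#include <stdint.h>
#include <assert.h>

/* Simulated mapper state: on hardware this would be a write-only
   mapper register, tracked in a shadow variable. */
static uint8_t current_bank = 0;
static uint8_t bank_seen_during_call;

static void set_prg_bank(uint8_t bank) { current_bank = bank; }

typedef void (*bank_fn)(void);

/* Trampoline kept in the fixed bank: switch to the callee's bank,
   call through the pointer, then restore the caller's bank. */
static void banked_call(uint8_t bank, bank_fn fn)
{
    uint8_t prev = current_bank;
    set_prg_bank(bank);
    fn();
    set_prg_bank(prev);
}

/* Stand-in for a function that the linker placed in bank 3. */
static void func_in_bank3(void) { bank_seen_during_call = current_bank; }
```

A caller in bank 0 would then write banked_call(3, func_in_bank3) instead of a direct call; the per-function stubs discussed above are just one-liners that bake in the bank number.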
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178381803.98/warc/CC-MAIN-20210308021603-20210308051603-00461.warc.gz
CC-MAIN-2021-10
4,344
26
https://globalmedicalaid.com/en/country/ukraine
code
Helps pregnant women fight serious complications such as haemorrhage or heavy and uncontrolled bleeding. (in English - without subtitles)

Vingmed donates blood fractionation machines to Ukraine via GMA. Excerpt from press conference at Ukraine Crisis Media Center (UCMC). (in Danish - without subtitles)

Report from GMA's transport of 4 ambulances from Falck in Vejle to Kiev, Ukraine. Interview with Hans Frederik Dydensborg. (in Ukrainian - without subtitles)

After a long drive from Vejle to Kiev, the recipients of the first ambulance donated by Falck to Ukraine are interviewed via GMA.

In early July 2015, representatives of GMA paid an inspection visit to Boryspil Central Hospital, which has received several donations from GMA.

Collaborators
Global Medical Aid cooperates with the Ministry of Healthcare of Ukraine.
https://en.wikipedia.org/wiki/Ministry_of_Healthcare_(Ukraine)

Information about the country
You can find general information about Ukraine here:
https://en.wikipedia.org/wiki/Ukraine

The state of health in Ukraine is reviewed here:
https://en.wikipedia.org/wiki/Health_in_Ukraine

WHO informs about Ukraine:
https://www.who.int/countries/ukr/
https://www.euro.who.int/en/countries/ukraine

Ministry of Healthcare website:
https://en.moz.gov.ua/
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474660.32/warc/CC-MAIN-20240226130305-20240226160305-00110.warc.gz
CC-MAIN-2024-10
1,259
13
https://escalate.projects.uvt.ro/data-analytics-skills-escalator-video/
code
Data Analytics Skills Escalator

The Escalator Model is a relatively new development that helps to assess whether a region has sufficient citizens skilled in a particular field/sector critical to economic success, and whether the skills and training needed to enter or progress in this field/sector are available locally, at all levels. The Escalator Model looks at which particular skills are really critical for a region. This video presents the role that the University of Exeter has taken, together with its regional partners, in creating a ‘data analytics skills escalator’ for the Exeter City region. It looks at how and why a university, working with other skills providers and economic development bodies, decided that ‘data analytics’ should be a regional skills priority, and at the steps taken to deliver the skills we agreed were needed.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475711.57/warc/CC-MAIN-20240301225031-20240302015031-00527.warc.gz
CC-MAIN-2024-10
850
4
https://fgiasson.com/blog/index.php/2007/05/22/browsing-musicbrainzs-dataset-via-uri-dereferencing/?replytocom=12322
code
Musicbrainz’s dataset can finally be browsed, node-by-node, using URI dereferencing. What does this mean? Since the Musicbrainz relational database has been converted into RDF using the Music Ontology, all relations existing between Musicbrainz entities (an entity can be a Music Artist, a Band, an Album, a Track, etc.) form a graph of musical relations. Each node of the graph is a resource and each arc is a property between two resources. Welcome to the World of RDF.

This means that from a resource “Madonna” we can browse the musical relations graph to find other entities such as Records, People, Bands, etc.

Kingsley, inspired by Diana Ross, said: “URI Everything, and Everything is Cool!” This is cool! Now Diana Ross has her own URI on the semantic web: http://zitgist.com/music/artist/60d41417-feda-4734-bbbf-7dcc30e08a83

Have their own too!

URIs for Musical Things

These URIs are not only used to refer to Musicbrainz entities. In fact, they are used to refer to any Musical Entities that you can describe using the Music Ontology. In the near future, the Musicbrainz data will be integrated along with data from Jamendo and Magnatune. Further on, we will be able to integrate any sort of musical data in the same place (radio stations data, user FOAF profiles’ relations to musical things, etc.), so from a single source (http://zitgist.com/music/) all these different sources of musical data will be queryable at once.

The URI schemes are defined in the Musicbrainz Virtuoso RDF View. All these URI scheme terms refer to their Music Ontology classes’ descriptions.

I am getting closer and closer to the first goal I set for myself when I first started to write the Music Ontology: to make the Musicbrainz relational database available in RDF on the Web. Months later, with the help of the Music Ontology community (especially Yves Raimond, who worked tirelessly on the project) and the OpenLink Software Inc. team, we have finally made this data available through URI dereferencing. From there, we will build up new music services, integrate more musical datasets into the Music Data Space, etc. It is just the beginning of something much bigger.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100227.61/warc/CC-MAIN-20231130130218-20231130160218-00243.warc.gz
CC-MAIN-2023-50
2,195
13
https://support.thorfx.com/en/support/solutions/articles/44002046401-thorfx-i-want-to-switch-the-device-i-am-using-for-two-factor-authentication
code
I would like to change the device I am using for Two-Factor Authentication. How do I proceed?

You just need to disable the method you are using on one device, then enable whichever Two-Factor Authentication method you want to use on the other device.

Perfect! Can you remind me of the path I need to follow?

Of course. The example below shows how to disable Two-Factor Authentication via SMS. You can either enable the same method on another device or enable Two-Factor Authentication using an authenticator app. Click on Disable 2FA, then enter the 6-digit code that was sent to your phone.

What do I do if I don't receive the code?

You will need to click on Resend OTP. Once you receive the code, click on Disable.

Now that's done, do I just enable the other method?

Yes, that is correct. You will just need to click on Enable 2FA with the authenticator app, then scan the QR Code.

What if I can't scan the QR Code?

Click on "Unable to scan? Try entering the code manually." You will then have the option to enter a long code into your authenticator app. Once that is done, enter the 6-digit code that shows on the authenticator app and click on Submit.
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303917.24/warc/CC-MAIN-20220122224904-20220123014904-00183.warc.gz
CC-MAIN-2022-05
1,149
19
https://www.instructables.com/community/?search=protoboard
code
Now that I have a new oscilloscope (Rigol DS1054Z), and have learned a LOT about programming and electronics while down at NASA Langley for a research program (semi-intern), I feel like I should start working on a real project (other than my quadcopter), and was thinking about making a nixie tube clock. I would like to make it from scratch, of my own design, as I feel one learns the most about electronics by not using others' designs. However, I do not have the resources or time to bother trying to etch my own PCB. I attempted that before and was not able to get usable results. What kind of costs am I looking at if I locate a relatively local company for the job? I hate calling people and companies, but I suppose that is still the most practical way to figure out what I need to do and place an order, I don't know. Economies of scale are a great thing; bulk production makes things cheap! However, would the cost of ordering one or two PCBs be non-economical and/or impractical? Should I consider many smaller projects and stuff to be created on a breakout board? I hate messy, sloppy protoboard, it is just nasty, though easy for non-high-density boards. However, I would love to hear the opinions of others on these things.

When dealing with nixie tubes, are sockets for the IN-14 available? A lot of the new old stock seems to have very thin and long legs, almost as if they are meant for through-hole soldering directly onto a PCB like capacitors and resistors. I would prefer a socketed tube to make replacing them quicker and easier.
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178378872.82/warc/CC-MAIN-20210307200746-20210307230746-00222.warc.gz
CC-MAIN-2021-10
1,552
1
https://www.sitepoint.com/tag/local-storage/
code
Tag: local storage

James Hibbard demonstrates how to persist checkbox checked state (useful for remembering users' preferences) and implement a check/uncheck all button.

HTML5's Local Storage API has fantastic browser support and there are plenty of neat little tools and utilities that ease its use, 9 of which are examined here.

Luis Vieira describes how you can use font subsetting along with asynchronous loading and local storage to improve the performance of web fonts.

This article shows how jQuery and the Web Storage API can be used to auto populate form data based on historic data.

This article shows how jQuery and local storage can be combined to create a simple to-do list application.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710771.39/warc/CC-MAIN-20221130192708-20221130222708-00293.warc.gz
CC-MAIN-2022-49
699
6
https://www.hellopearl.com/so-imaging-setup/sidexis-4
code
1. Open your Sidexis 4 application, then access the 'Options' menu by clicking the gear icon in the toolbar on the upper right.
2. From the options screen, navigate to 'General Settings', and select 'Communication Partners'.
3. In the 'Name' field, enter the file name you would like to assign to the export file which Second Opinion will access (e.g. Second_Opinion).
4. Click the folder icon next to the 'Mailbox file (SLIDA)' field to define the location where your Second Opinion accessible export file will be saved (e.g. Desktop).
5. Enable the following options:
- Receives copies of new exposures automatically
- Integrate patient name in image name
6. Scroll down to the 'SLIDA 3D' section and for 'Output format' select ".dcm".
7. Scroll back up to the 'Mailbox file (SLIDA)' field and copy the entire file path text.
8. Return to the Second Opinion setup wizard and paste this text in the 'Path to Image' field, as described in Step 2.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949958.54/warc/CC-MAIN-20230401094611-20230401124611-00027.warc.gz
CC-MAIN-2023-14
938
10
https://www.jetbrains.com/help/upsource/2017.1/searching-for-text.html
code
Searching for text

Upsource lets you search the source code for text occurrences across all projects. To perform a full-text search:

Enter your text query into the omni-search box in the top-right corner. You'll be redirected to a text search page where your search results will be listed.

Narrow down your search to a particular project by selecting it from the drop-down list.
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572289.5/warc/CC-MAIN-20190915195146-20190915221146-00261.warc.gz
CC-MAIN-2019-39
375
6