https://mailman.mit.edu/pipermail/krbdev/2020-September/013350.html

libverto event context for certauth plugin?
kenh at cmf.nrl.navy.mil
Tue Sep 22 21:05:57 EDT 2020
>FWIW I think that OpenSSL 3.0 will make it a bit easier, with the "read
>stuff from disk" having been genericized/modularized and letting you "drop
>in" alternative implementations via "provider" modules, but of course
>OpenSSL 3.0 is not done yet...
I'm not quite understanding how this would help; are you saying that
you would suggest the "read stuff from disk" routines be abstracted
to query servers via OCSP? If so, I don't see how that helps with the
lack of a libverto context in certauth, because if that OCSP
query blocks, your whole KDC blocks.
If you're suggesting abstracting out the "read from disk" routines to
read my internal database I use to store CRLs ... ugh. Two things:
I really do not want to depend long-term on this database format,
and my reading of the current code is that it really wants to slurp
everything into memory and search it that way. I am not sure changing
out the "read from disk" code helps in that case. It may very well be
that they changed enough other things that you could substitute a function
that does something smart with regards to CRL querying, but again,
depending on an internal database format is NOT something I want to do.
This is all probably moot for now, since running a local OCSP server
works perfectly fine today and that's the approach I'll take going forwards.
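For reference, the "local OCSP responder" approach can be sketched with OpenSSL's built-in responder (all file paths and the port below are placeholders, not details from this thread):

```shell
# Start a minimal local OCSP responder backed by a CA index file.
# index.txt, ca.pem, responder.key, and responder.pem are placeholder
# names for the CA database, CA certificate, and the responder's
# signing key/certificate.
openssl ocsp -port 127.0.0.1:8888 -index index.txt \
    -CA ca.pem -rkey responder.key -rsigner responder.pem -text

# Ask it for the status of a certificate:
openssl ocsp -issuer ca.pem -cert client.pem \
    -url http://127.0.0.1:8888 -resp_text
```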
More information about the krbdev mailing list
https://aminoapps.com/c/awsxmehangout/page/blog/important/7WaB_5DCPu8N7e220Mz8MwB074JEEarG44

I won't be online as often as usual! I have school starting back up. ( :cry: :sob: ) I will try to be online at least 30 minutes to an hour a day, maybe more; it depends. School starts back up tomorrow for me.
That's all for now!
Awsome, signing out!
https://heasarc.gsfc.nasa.gov/docs/asca/sis_v0_8.html

Current SIS Calibration Files
This bulletin was originally released via e-mail exploders on 1994 Nov 9; WWW version 1995 Apr 13.
To coincide with the release of FTOOLS version 3.2, the ASCA GOF has released new sets of:
- SIS response matrices
- SIS blank sky background event files.
The new software and calibration files enable ASCA users to analyze SIS data in PI (Pulse Invariant) rather than PHA space. The chief advantage of using PI is that events from different chips (but the same SIS camera) can be combined without compromising the calibration.
This note provides a brief summary of current calibration status and data analysis procedure.
A. Scheduling Requirements
Increasing CTI (Charge Transfer Inefficiency) is a manifestation of radiation damage which became noticeable by early 1994 in the form of a secular gain change. To calibrate this effect, the SIS team produced an initial calibration file (sisph2pi.fits in the refdata area of FTOOLS v3.2). In addition, observations of Cas A were obtained to improve the calibration. The analysis of these data is in an advanced stage.
The same Cas A data also show a discrepancy in energy scale between S0C1 (SIS0-chip1) and S1C3 (SIS1-chip3). In view of both this (also reported from PV phase SNR and Cluster observations) and the secular gain variation, the SIS team is refining the absolute gain calibration of all eight CCD chips. At the moment, there may be a 0.5-1 per cent systematic uncertainty in absolute gain calibration.
B. What is SIS PI?
The FTOOL SISPI (in FTOOLS 3.2) will fill the PI column in your SIS Faint or Bright event files. If you use the default settings:
- The calibration file sisph2pi.fits is used (transparently).
- The PI channels are defined to have 3.65 eV/bin. Bright mode event files and most pha files are rebinned. In such cases, the PI bins are integer multiples of 3.65 eV.
- Differences due to particular chip or position on the chip are taken into account.
- The secular gain drift is corrected.
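Given the 3.65 eV/bin definition above, mapping a PI channel to an approximate energy is a one-line calculation. The helper below is our own illustration (not part of FTOOLS), and it ignores any channel-offset convention the real tools may apply:

```python
EV_PER_PI_BIN = 3.65  # SIS PI bin width in eV, per this bulletin

def pi_channel_to_kev(channel):
    """Approximate photon energy (keV) for a given SIS PI channel."""
    return channel * EV_PER_PI_BIN / 1000.0

# Example: PI channel 1000 corresponds to roughly 3.65 keV.
```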
PI is currently not available for SIS Fast mode data because the PHA-to-PI conversion is position-dependent (Fast mode data do not contain positional information).
C. PI columns in your data files
All event files in the public archive will have the PI columns filled in (except Fast mode data). For proprietary guest-observer data, the current processing script does not fill the PI columns, but this will change shortly. In the meantime, GOs should run SISPI themselves. Note that the PI channels of event files may be safely repopulated through multiple runs of SISPI: the original event data are not affected.
D. Spectral extraction procedures
- Run SISPI on the event file(s), if necessary.
- Read the event file into XSELECT. Then type "set phaname PI" in order to ensure that the PI column is used for spectral extractions (the current version of XSELECT uses the PHA column by default).
- Use filters as appropriate. Extract and save a spectrum.
- Extract a background spectrum, either from a source-free region of the SIS, or from the new blank sky files which are available via anonymous FTP at legacy.gsfc.nasa.gov in the directory caldb/data/asca/sis/bcf/bgd/94nov. These new blank sky background files are essentially unchanged from the previous release in April, except for having their PI columns filled. After reading a background file into XSELECT, don't forget to "set phaname PI".
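The extraction steps above might look like the following XSELECT session (an illustrative transcript, not from the bulletin; file names are placeholders, and exact prompts differ by XSELECT version):

```
xselect
xsel > read events src_sis0.evt
xsel > set phaname PI
xsel > filter region source.reg
xsel > extract spectrum
xsel > save spectrum src_pi.pha
```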
E. SIS response generators
After creating source and background PI files, the next step is to obtain the appropriate response. As for PHA files, the spectral response is divided into two parts, the RMF (redistribution matrix file) and the ARF (ancillary response file, i.e., the effective area). When both are combined into a single file, it usually has the extension .rsp.
In general, SIS .rmf files can be generated using the script sisrsp. This provides a friendly interface to the underlying FTOOL, sisrmg. FTOOLS version 3.2 contains sisrmg version 0.8, which is different from the previous version 0.6 as follows:
- High-PH tail model for partial charge collected events is now included. (This has the harmless side-effect of producing the "log: DOMAIN error" message when run on Suns, but the output matrices are still valid).
- Now uses the ISAS QE (Quantum Efficiency) determination for individual chips, based on observations in Dec 93 of 3C 273.
- PI matrices can now be generated.
- The secular gain change due to CTI can now be corrected (using the same sisph2pi.fits calibration file as SISPI). This means that you can combine either:
  - a PI spectrum with a PI response, or
  - a PHA spectrum with a PHA matrix generated with the correct gain for the epoch of observation. When data from only one chip are used, this is equivalent, for data analysis purposes, to using a PI file with a PI response.
For detailed instructions, type "sisrsp" without arguments.
In addition to the secular gain change, there have been smaller changes in the SIS resolution. The calibration of this effect is underway but is not incorporated in the v0.8 matrices.
For ARF generation, use the FTOOL ASCAARF which takes as input the RMF (PI or PHA) and the spectral file (PI or PHA). ASCAARF will then create the .arf file which has the effective area curve calculated for your extraction region.
F. Ready-made response matrices
For your convenience, we are providing a 'standard set' of SIS matrices in the directory caldb/data/asca/sis/cpf/94nov9 via anonymous FTP at legacy.gsfc.nasa.gov.
For all chips:
s0c0g0234p40e0_1024v0_8i.rmf: Grade 0234 Bright2 mode 1024-channel PHA (h) and PI (i) matrices.
s0c0g0234p40e0_512v0_8i.rmf: Grade 0234 Bright2 mode 512-channel PHA (h) and PI (i) matrices
s0c0g0234p40e1_512v0_8i.rmf: Grade 0234 Bright mode 512-channel PHA (h) and PI (i) matrices
For S0C1 and S1C3:
s0c1g02p40e1_512v0_8i.rmf: Fast mode 512-channel PHA (h) and PI (i) matrices, Fast mode grade 0.
s0c1g0234p40e1_512_1av0_8i.rsp: Typical rsp file for a point source observation (rmf & arf combined)
Koji Mukai & the ASCA GOF with Geoffrey B. Crew & the SIS team
If you have any questions concerning ASCA, visit our Feedback form.
http://tomasjurman.blogspot.com/2010/07/xml-in-nutshell.html

Basic
XML was designed to transport and store data.
XML is used in:
- WSDL for describing available web services
- WAP and WML as markup languages for handheld devices
- RSS languages for news feeds
- RDF and OWL for describing resources and ontology
- SMIL for describing multimedia for the web
- All XML elements must have a closing tag
- XML tags are case sensitive
- XML elements must be properly nested
- XML documents must have a root element
- XML attribute values must be quoted
There are 5 predefined entity references in XML:
- &lt; — less than (<)
- &gt; — greater than (>)
- &amp; — ampersand (&)
- &apos; — apostrophe (')
- &quot; — quotation mark (")
XML elements naming rules:
- Names can contain letters, numbers, and other characters
- Names cannot start with a number or punctuation character
- Names cannot start with the letters xml (or XML, or Xml, etc)
- Names cannot contain spaces
Validation
XML with correct syntax is "Well Formed" XML.
XML validated against a DTD is "Valid" XML.
DTD (Document Type Definition)
The purpose of a DTD is to define the structure of an XML document. It defines the structure with a list of legal elements:
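The element list itself did not survive extraction; a minimal internal DTD of the kind being described looks like this (the note/to/from/body element names are illustrative, not recovered from the original post):

```xml
<!DOCTYPE note [
  <!ELEMENT note (to, from, body)>
  <!ELEMENT to   (#PCDATA)>
  <!ELEMENT from (#PCDATA)>
  <!ELEMENT body (#PCDATA)>
]>
```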
W3C supports an XML-based alternative to DTD, called XML Schema:
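The schema example did not survive extraction either; a minimal XML Schema of the kind being described would be (element names are illustrative):

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="note">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="to"   type="xs:string"/>
        <xs:element name="from" type="xs:string"/>
        <xs:element name="body" type="xs:string"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```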
XML Namespace
XML Namespaces provide a method to avoid element name conflicts.
Namespaces can be declared in the elements where they are used or in the XML root element:
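As an executable illustration of declaring namespaces on the root element (the document and URIs below are the classic HTML-table-versus-furniture-table example, not taken from this post), Python's standard-library parser shows how each prefix resolves to its URI:

```python
import xml.etree.ElementTree as ET

# Two namespaces declared on the root element let two vocabularies
# both define an element called "table" without a name conflict.
doc = """\
<root xmlns:h="http://www.w3.org/TR/html4/"
      xmlns:f="http://www.example.org/furniture">
  <h:table><h:tr><h:td>Apples</h:td></h:tr></h:table>
  <f:table><f:name>Coffee table</f:name></f:table>
</root>"""

root = ET.fromstring(doc)
# ElementTree expands each prefix to {namespace-uri}local-name:
tags = [child.tag for child in root]
print(tags)
# → ['{http://www.w3.org/TR/html4/}table',
#    '{http://www.example.org/furniture}table']
```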
Note: The namespace URI is not used by the parser to look up information.
The purpose is to give the namespace a unique name. However, often companies use the namespace as a pointer to a web page containing namespace information.
In the XSLT document below, you can see that most of the tags are HTML tags.
The tags that are not HTML tags have the prefix xsl, identified by the namespace xmlns:xsl="http://www.w3.org/1999/XSL/Transform".
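A minimal stylesheet of the shape being described might look like this (element names such as catalog/item/title are illustrative, not recovered from the post):

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <html>
      <body>
        <h2>Titles</h2>
        <xsl:for-each select="catalog/item">
          <p><xsl:value-of select="title"/></p>
        </xsl:for-each>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```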
https://jira.xwiki.org/browse/XCOMMONS-2126

I don't know if it happens all the time, but I've encountered a situation where I executed the Distribution Wizard in a non-interactive fashion, and it simply failed with the following stacktrace:
The problem comes from here:
A collection is iterated with a foreach loop, but then the repair() method updates the collection, which means the collection changes while being iterated: a basic ConcurrentModificationException.
A solution could be to make sure DefaultInstalledExtension#getNamespaces() returns a copy of the collection (which is supposed to be an UnmodifiableSet, BTW), or to manually create a copy in OutdatedExtensionsDistributionStep. But the whole thing is actually weird.
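A minimal, self-contained Java demonstration of the failure mode and the suggested fix (the class and method names here are ours, standing in for the extension-manager code, which is not shown in the issue):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.ConcurrentModificationException;
import java.util.HashSet;
import java.util.Set;

public class CmeDemo {
    // Stand-in for the namespaces collection returned by
    // DefaultInstalledExtension#getNamespaces().
    static Set<String> namespaces =
            new HashSet<>(Arrays.asList("ns1", "ns2", "ns3"));

    // Stand-in for repair(): structurally modifies the collection.
    static void repair(String ns) {
        namespaces.add(ns + "-repaired");
    }

    // The buggy pattern: foreach over the live collection while
    // repair() mutates it -> ConcurrentModificationException.
    static boolean brokenLoop() {
        try {
            for (String ns : namespaces) {
                repair(ns);
            }
            return false;
        } catch (ConcurrentModificationException e) {
            return true; // the failure described in the report
        }
    }

    // The suggested fix: iterate over a defensive copy.
    static void safeLoop() {
        for (String ns : new ArrayList<>(namespaces)) {
            repair(ns);
        }
    }

    public static void main(String[] args) {
        if (!brokenLoop()) {
            throw new AssertionError("expected ConcurrentModificationException");
        }
        safeLoop(); // completes without throwing
        System.out.println("ok");
    }
}
```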
https://community.cloudflare.com/t/issues-opening-pages-on-safari/346134

The website works fine on other devices, but it is not working properly in Safari on iPhone. The page never loads in Safari on iPhone. We tried resetting the cache and changing the network, but no luck. Sometimes it takes too much time and loads only part of the page without images, and we are unable to debug what the issue is.
The domain is https://www.luxuryproperty.com. The domain works fine in other browsers and on mobile, but it isn't working in Safari. Previously it showed the offline cached page of the website, and then it showed partial content in mobile Safari.
https://experts.colorado.edu/display/pubid_64337

In this article, we tackle the issue of sorting at the metropolitan-area level by utilizing an alternative methodological approach that permits us to avoid problems plaguing earlier studies. For this analysis, we take two Metropolitan Statistical Areas (MSAs) as our test cases: the Houston MSA and the Atlanta MSA. For each metropolitan area, we employ Monte Carlo computer simulations to randomly create a large number of metropolitan “jurisdictional” groupings. Based upon these Monte Carlo simulations, we are able to estimate the level of jurisdictional homogeneity that is attributable to random chance. The observed levels of sorting, including the increasing homogeneity as populations decrease, are entirely consistent with what one might find if clusters of households were randomly grouped together into municipalities.
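The randomization step can be sketched in a few lines. This is our own illustrative code, not the authors': it forms equal-sized random "jurisdictions" from a binary group indicator and uses the spread of group shares as a crude homogeneity measure:

```python
import random
import statistics

def random_grouping_spread(households, n_jurisdictions, n_sims, seed=0):
    """Mean (over simulations) of the spread of group shares across
    randomly formed, equal-sized 'jurisdictions'.

    households: list of 0/1 indicators (1 = member of some group).
    """
    rng = random.Random(seed)
    size = len(households) // n_jurisdictions
    spreads = []
    for _ in range(n_sims):
        shuffled = households[:]
        rng.shuffle(shuffled)
        shares = [sum(shuffled[i * size:(i + 1) * size]) / size
                  for i in range(n_jurisdictions)]
        spreads.append(statistics.pstdev(shares))
    return statistics.mean(spreads)

# Consistent with the article's point: purely random groupings look
# more 'sorted' (larger spread in group shares) as jurisdictions
# get smaller.
```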
https://publications.iitm.ac.in/publication/computationally-efficient-wavelet-transform-based-digital

This paper proposes a novel wavelet-transform-based directional algorithm for busbar protection. The algorithm decomposes the current and voltage signals into their first-level details, which consist of frequencies in 500- to 1000-Hz bandwidth, for generating directional signals. A high level of computational efficiency is achieved compared to the other wavelet-transform-based algorithms proposed, since only the high-frequency details at the first level are employed in this algorithm. The validity of this method was exhaustively tested by simulating various types of faults on a substation modeled in the Alternative Transients Program/Electromagnetic Transients Program. The algorithm correctly discriminated between bus faults, various types of external faults, and transformer energization even in the presence of current-transformer saturation. This paper also provides the design details of the algorithm using field-programmable gate array technology. © 2007 IEEE.
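The core decomposition step can be illustrated with a first-level discrete wavelet transform. The sketch below uses the Haar wavelet for simplicity (the abstract does not state the paper's actual wavelet choice); note that at a 2 kHz sampling rate, the first-level detail band spans fs/4 to fs/2, i.e. exactly the 500-1000 Hz band mentioned:

```python
import math

def haar_first_level_details(signal):
    """First-level Haar DWT detail coefficients of an even-length signal.

    These coefficients capture the upper half of the signal's frequency
    band (fs/4 to fs/2), analogous to the first-level details the
    directional algorithm works on.
    """
    return [(signal[i] - signal[i + 1]) / math.sqrt(2.0)
            for i in range(0, len(signal) - 1, 2)]

# A constant (purely low-frequency) signal has zero detail content,
# while a sample-to-sample alternation lands entirely in the details.
```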
https://www.slideshare.net/nazeerhussain319/black-bookforabap

Introduction
Congratulations on buying the SAP ABAP™ Handbook! This book features comprehensive content on the various concepts of the SAP system and ABAP language. SAP technology was introduced by SAP AG, Germany. For over thirty years, SAP technology has formed an indispensable part of many business enterprises with respect to enterprise resource planning (ERP). The pace of technological enhancements is getting faster day by day, and this has been particularly true with SAP. Today, most companies using SAP software employ it to build applications that have more to do with cross-platform reliability.

About This Book
The SAP ABAP™ Handbook covers hundreds of topics, theoretically as well as practically, related to the SAP ABAP programming language. The book also covers the SAP R/3 release, the functionalities of the ABAP/4 language, various tools to develop ABAP programs in SAP systems, data access in the SAP system, system architecture, and system administration. A few chapters of this book also provide implementation-ready program code to help you understand different concepts.
This book is ideal for beginners who intend to familiarize themselves with the SAP ABAP technology because it begins with the very basics and then moves on to more complex topics. An added advantage to this book is that it also suits professionals who are already familiar with SAP technology and want to enhance their skills.
It describes the techniques and procedures that are employed most frequently by users working with SAP R/3. This book is divided into easy-to-understand sections, with each topic addressing different programming issues in SAP ABAP, such as:
- Introduction of the SAP system and ABAP
- The logon process
- The GUI
- ABAP Workbench
- ABAP Dictionary
- ABAP programming in ABAP Editor
- Internal tables
- Data access
- Modularization techniques: subroutines, function modules, and source code modules
- Dialog programming in ABAP
- Data transfer techniques: BDC and LSMW
- SAPscript and Smart Forms
- Creating reports in the SAP system
- Defining and implementing BADIs
- Object orientation in ABAP
- Cross-application technologies: IDoc, ALE, and RFC
This is just a partial list of all of the valuable information in this book. The book provides special coverage of the SAP ABAP technology implemented in mySAP ERP, more than any other book dedicated to the subject. Our sole intent has been to provide a book with in-depth and sufficient information so that you enjoy reading and learning from it. Happy reading!
How to Use This Book
In this book, we have employed the mySAP ERP software to run the code. You must, therefore, install mySAP ERP on your system to use and implement the applications provided in the book. This book begins with the basics of SAP software and makes you familiar with its user interface. After that, it discusses ABAP/4 commands and programming. This book consists of 14 chapters and 2 appendices to explain the concepts and techniques of ABAP/4 programming.
Conventions
There are a few conventions followed in this book that need to be introduced. For example, the code in this book is given in the form of code listings. The code with a listing number and caption appears as follows:

Listing 7.1: Declaring a table type with the TYPES statement

REPORT ZINTERNAL_TABLE_DEMO.
* Declaring a table type by using the TYPES statement
TYPES: BEGIN OF DataLine,
         S_ID TYPE C,
         S_Name(20) TYPE C,
         S_Salary TYPE I,
       END OF DataLine.
TYPES MyTable TYPE SORTED TABLE OF DataLine WITH UNIQUE KEY S_ID.
WRITE: / 'MyTable is an internal table. It is a sorted type of table with a unique key defined on the S_ID field.'.

The SAP ABAP™ Handbook also provides you with additional information on various concepts in the form of notes, as follows:

Note: To know more about the MOVE statement, refer to the "Moving and Assigning Internal Tables" section of this chapter.

Every figure contains a descriptive caption to enhance clarity, as follows:

Figure 7.7: Adding code in the ABAP editor—change report screen
In this book, tables are placed immediately after their first callout. An example of a table is below.

Table 7.1: List of table types

| Table Type | Description |
|---|---|
| INDEX TABLE | Creates a generic table type with index access |
| ANY TABLE | Creates a fully generic table type |
| STANDARD TABLE or TABLE | Creates a standard table |
| SORTED TABLE | Creates a sorted table |
| HASHED TABLE | Creates a hashed table |
Other Resources
Here are some other useful HTML links where you can find texts related to SAP ABAP and some helpful tutorials:
- http://help.sap.com/
- http://sapbrainsonline.com/
Chapter 1: A Gateway to SAP Systems

Overview
Systems Applications and Products in Data Processing (SAP) is business software that integrates all applications running in an organization. These applications represent various modules on the basis of business areas, such as finance, production and planning, and sales and distribution, and are jointly executed to accomplish the overall business logic. SAP integrates these modules by creating a centralized database for all the applications running in an organization. You can customize SAP according to your requirements by using the Advanced Business Application Programming (ABAP) language. ABAP, also referred to as ABAP/4, is the technical module of SAP and the fourth-generation programming language used to create applications related to SAP.
The SAP system was introduced as an Enterprise Resource Planning (ERP) software designed to coordinate all the resources, information, and activities to automate business processes, such as order fulfillment or billing. Nowadays, the SAP system also helps you to know about the flow of information among all the processes of an organization's supply chain, from purchases to sales, including accounting and human resources.
Integration of different business modules is a key factor that separates SAP from other enterprise applications. Integration of business modules helps to connect various business modules, such as finance, human resources, manufacturing, and sales and distribution, so that the data of these modules can be easily accessed, shared, and maintained across an enterprise. Integration also ensures that a change made in one module is reflected automatically on the other modules, thereby keeping the data updated at all times.
In this chapter, you learn about SAP and its need in today's businesses. This chapter also deals with the importance of ERP and its implementation in SAP.
The chapter provides a comprehensive history of SAP, focusing on the circumstances that necessitated its development, and, finally, on how its introduction helped to improve system performance and business efficiency. In addition, this chapter describes the need for the ABAP/4 language in SAP and also explains the architecture of the SAP system, including its three views: logical, software-oriented, and user-oriented. It also explores the various components of the application servers that are used in SAP, such as work processes, the dispatcher, and the gateway, and describes the structure and types of work processes. You also learn how to dispatch a dialog step, a procedure that helps a user to navigate from one screen to another in the SAP system, as well as two important concepts: user context and roll area, which are memory areas that play an integral role in dispatching dialog steps and in implementing a work process. This chapter also explains the client-dependency feature of SAP. The chapter concludes with a brief discussion on the integrated environment of SAP.
Explaining the Concept of an ERP System
A system that automates and integrates all modules of business areas is known as an ERP system, or simply ERP. An ERP system is used to integrate several data sources and processes, such as manufacturing, control, and distribution of goods in an organization. This integration is achieved by using various hardware and software components. An ERP system is primarily module-based, which implies that it consists of various modular software applications or modules. A software module in an ERP system automates a specific business area or module of an enterprise, such as finance or sales and distribution. These software modules of an ERP system are linked to each other by a centralized database.
A centralized database is used to store data related to all the modules of the business areas. Using a centralized database ensures that the data can be accessed, shared, and maintained easily. Combined with the module-based implementation, an ERP system improves the performance and efficiency of business processing.
Before the advent of the ERP system, each department of a company had its own customized automation mechanism. As a result, the business modules were not interconnected or integrated, and updating and sharing data across the business modules was a big problem. Let's use an example to understand this concept better. Suppose the finance and sales and distribution modules of an enterprise have their respective customized automation mechanisms. In such a setup, if a sale is closed, its status would be updated automatically in the sales and distribution module. However, the updated status of the sale of an item would not be updated in the finance module automatically. Consequently, the revenue generated from the sale of an item would need to be updated manually in the finance module, resulting in a greater probability of errors and an asynchronous business process.
The problem was fixed with the help of the integration feature built into the ERP system.
Another benefit of the ERP system is that it helps synchronize data and keep it updated. Ideally, an ERP system uses only a single, common database to store information related to various modules of an organization, such as sales and distribution, production planning, and material management.
Despite the benefits of the ERP system, the system has certain drawbacks. Some of the major drawbacks of the ERP system are:
- Customization of ERP software is restricted because you cannot easily adapt ERP systems to a specific workflow or business process of a company.
- Once an ERP system is established, switching to another ERP system is very costly.
- Some large organizations may have multiple departments with separate, independent resources, missions, chains-of-command, etc., and consolidation into a single enterprise may yield limited benefits.
SAP was introduced to overcome the drawbacks of the contemporary ERP systems. The introduction of SAP systems not only removed the preceding bottlenecks but also led to improved system performance and business efficiency by integrating individual applications. In other words, an SAP system ensures data consistency throughout the system, in addition to removing the drawbacks of the contemporary ERP systems.
Next, let's explain why and how an SAP system is introduced in business processing.
History of SAP Systems
SAP is a translation of the German term Systeme, Anwendungen, und Produkte in der Datenverarbeitung. It was developed by SAP AG, Germany. The basic idea behind developing SAP was the need for standard application software that helps in real-time business processing. The development process began in 1972 with five IBM employees: Dietmar Hopp, Hans-Werner Hector, Hasso Plattner, Klaus Tschira, and Claus Wellenreuther in Mannheim, Germany. A year later, the first financial and accounting software was developed; it formed the basis for continuous development of other software components, which later came to be known as the SAP R/1 system. Here, R stands for real-time data processing and 1 indicates single-tier architecture, which means that the three networking layers, Presentation, Application, and Database, on which the architecture of SAP depends, are implemented on a single system. SAP ensures efficient and synchronous communication among different business modules, such as sales and distribution, production planning, and material management, within an organization. These modules communicate with each other so that any change made in one module is communicated instantly to the other modules, thereby ensuring effective transfer of information.
The SAP R/2 system was introduced in 1980. SAP R/2 was a packaged software application on a mainframe computer, which used the time-sharing feature to integrate the functions or business areas of an enterprise, such as accounting, manufacturing processes, supply chain logistics, and human resources. The SAP R/2 system was based on a two-tier client-server architecture, where an SAP client connects to an SAP server to access the data stored in the SAP database. SAP R/2 was implemented on the mainframe databases, such as DB/2, IMS, and Adabas. SAP R/2 was particularly popular with large European multinational companies that required real-time business applications, with built-in multicurrency and multilanguage capabilities.
Keeping in mind that SAP customers belong to different nations and regions, the SAP R/2 system was designed to handle different languages and currencies. The SAP R/2 system delivered a higher level of stability compared to the earlier version.

Note: Time-sharing implies that multiple users can access an application concurrently; however, each user is unaware that the operating system is being accessed by other users.

SAP R/3, based on a client-server model, was officially launched on July 6, 1992. This version is compatible with multiple platforms and operating systems, such as UNIX and Microsoft Windows. SAP R/3 introduced a new era of business software—from mainframe computing architecture to a three-tier architecture consisting of the Database layer, the Application layer (business logic), and the Presentation layer. The three-tier architecture of the client-server model is preferred to the mainframe computing architecture as the standard in business software because a user can make changes or scale a particular layer without making changes in the entire system.
The SAP R/3 system is a customized software with predefined features that you can turn on or off according to your requirements. The SAP R/3 system contains various standard tables to execute various types of processes, such as reading data from the tables or processing the entries stored in a table. You can configure the settings of these tables according to your requirements. The data related to these tables are managed with the help of the dictionary of the SAP R/3 system, which is stored in an SAP database and can be accessed by all the application programs of SAP.
The SAP R/3 system integrates all the business modules of a company so that the information, once entered, can be shared across these modules.
The SAP R/3 system is a highly generic and comprehensive business application system, especially designed for companies of various organizational structures and different lines of business.
The SAP R/3 system runs on various platforms, such as Windows and UNIX. It also supports various relational databases of different database management systems, such as Oracle, Adabas, Informix, and Microsoft SQL Server. The SAP R/3 system uses these databases to handle the queries of the users.
With the passage of time, a business suite that would run on a single database was required. This led to the introduction of the mySAP ERP application as a follow-up product to the SAP R/3 system. The mySAP ERP application is one of the applications within the mySAP Business Suite. This suite includes mySAP ERP, mySAP Supply Chain Management (SCM), mySAP Customer Relationship Management (CRM), mySAP Supplier Relationship Management (SRM), and mySAP Product Lifecycle Management (PLM). The latest release of the mySAP ERP application is SAP ERP Central Component (ECC 6.0). mySAP ERP categorizes the applications into the following three core functional areas:
- Logistics
- Financial
- Human resources

Note: The book focuses on the latest release of the mySAP ERP application, i.e., ECC 6.0.

As stated earlier, the runtime environment and integrated suite of application programs within the SAP R/3 system are written in a fourth-generation language, ABAP/4.
Need for ABAP
ABAP, or ABAP/4, is a fourth-generation programming language first developed in the 1980s. It was used originally to prepare reports, which enabled large corporations to build mainframe business applications for material management and financial and management accounting.
ABAP is one of the first programming languages to include the concept of logical databases, which provides a high level of abstraction from the centralized database of the SAP system. Apart from the concept of logical databases, you can also use Structured Query Language (SQL) statements to retrieve and manipulate data from the centralized database. To learn more about working with databases with the help of the SQL statements, refer to Chapter 8.
The ABAP programming language was used originally to develop the SAP R/3 system. That is, the runtime environment and application programs in the SAP R/3 system are written in the ABAP language. The SAP R/3 system provides the following set of applications, also known as functional modules, functional areas, or application areas:
- Financial Accounting (FI)
- Production Planning (PP)
- Material Management (MM)
- Sales and Distribution (SD)
- Controlling (CO)
- Asset Management (AM)
- Human Resources (HR)
- Project System (PS)
- Industry Solutions (IS)
- Plant Maintenance (PM)
- Quality Management (QM)
- Workflow (WF)
These functional modules are written in the ABAP language. In addition, you can use the ABAP language to enhance the applications that you create in the mySAP ERP system. For instance, besides the available reports and interfaces in the mySAP ERP system, you can create your own custom reports and interfaces.
The ABAP language environment, which includes syntax checking, code generation, and the runtime system, is a part of SAP Basis. SAP Basis, a component of an SAP system, acts as a technological platform that supports the entire range of SAP applications, now typically implemented in the framework of the SAP Web Application Server.
In other words, the SAP Basis component acts as an operating system on which SAP applications run. Similar to any other operating system, the SAP Basis component contains both low-level services, such as memory management and database communication, and high-level tools, such as SAP Smart Forms and log viewers, for end users and administrators. You learn more about these concepts later in this book.
The ABAP language provides the following features:
- Data sharing: Enables you to store data in memory at a central location. Different users and programs can then access the data without copying it.
- Exception handling: Helps define a special control flow for a specific error situation and provide information about the error.
- Data persistency: Enables you to store data permanently in relational database tables of the SAP R/3 system.
- Making enhancements: Enables you to enhance the functionality of programs, function modules, and global classes without modifying or replacing the existing code.
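Two of these features, data persistency and exception handling, can be sketched in a short ABAP report. The report name below is made up; SFLIGHT is one of the standard flight-model demo tables shipped with SAP systems, and the carrier ID is an arbitrary example:

```
REPORT z_feature_demo.

DATA: lt_flights TYPE STANDARD TABLE OF sflight,
      ls_flight  TYPE sflight,
      lv_count   TYPE i,
      lv_ratio   TYPE i,
      lx_error   TYPE REF TO cx_sy_zerodivide.

* Data persistency: Open SQL reads rows stored permanently in a
* relational database table of the SAP system.
SELECT * FROM sflight
  INTO TABLE lt_flights
  WHERE carrid = 'LH'.                 "carrier ID used only as an example

* Exception handling: a special control flow for a specific error
* situation, here the predefined exception class CX_SY_ZERODIVIDE.
DESCRIBE TABLE lt_flights LINES lv_count.
TRY.
    lv_ratio = 100 / lv_count.         "raises CX_SY_ZERODIVIDE if no rows found
  CATCH cx_sy_zerodivide INTO lx_error.
    WRITE: / 'Error:', lx_error->get_text( ).
ENDTRY.

LOOP AT lt_flights INTO ls_flight.
  WRITE: / ls_flight-carrid, ls_flight-connid, ls_flight-fldate.
ENDLOOP.
```

The TRY...ENDTRY block catches the error object, whose GET_TEXT( ) method supplies information about the error, rather than letting the program terminate with a runtime error.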
Exploring the Architecture of SAP R/3

As stated earlier, the SAP R/3 system evolved from the SAP R/2 system, which was a mainframe system. The SAP R/3 system is based on the three-tier architecture of the client-server model. Figure 1.1 shows the three-tier architecture of the SAP R/3 system:
Figure 1.1: SAP R/3 architecture
Figure 1.1 shows how the R/3 Basis system forms a central platform within the R/3 system. The architecture of the SAP R/3 system distributes the workload to multiple R/3 systems. The link between these systems is established with the help of a network. The SAP R/3 system is implemented in such a way that the Presentation, Application, and Database layers are distributed among individual computers in the SAP R/3 architecture.
The SAP R/3 system consists of the following three types of views:
- Logical view
- Software-oriented view
- User-oriented view

The Logical View

The logical view represents the functionality of the SAP system. In this context, the R/3 Basis component controls the functionality and proper functioning of the SAP system. Therefore, in the logical view of the SAP R/3 system, we describe the services provided by the R/3 Basis component that help to execute SAP applications.
The following is a description of the various services provided by the R/3 Basis component:
- Kernel and Basis services: Provide a runtime environment for all R/3 applications. The runtime environment may be specific to the hardware, operating system, or database. The runtime environment is written mainly in either C or C++, though some parts are also written in the ABAP programming language. The tasks of the Kernel and Basis services are as follows:
  - Executing all R/3 applications on software processors (virtual machines).
  - Handling multiple users and administrative tasks in the SAP R/3 system, which is a multiuser environment. When users log on to the SAP system and run applications within it, they are not connected directly to the host operating system, since the R/3 Basis component is the actual user of the host operating system.
  - Accessing the database in the SAP R/3 system. The SAP R/3 Basis system is connected to a database management system (DBMS) and the database itself. R/3 applications do not communicate with the database directly; rather, these applications communicate with the database through the administration services provided by the R/3 Basis system.
  - Facilitating communication of SAP R/3 applications with other SAP R/3 systems and with non-SAP systems. You can access SAP R/3 applications from an external system by using the Business Application Programming Interface (BAPI).
  - Monitoring and controlling the SAP R/3 system while the system is running.
- ABAP Workbench service: Provides a programming environment to create ABAP programs by using various tools, such as the ABAP Dictionary, ABAP Editor, and Screen Painter.
- Presentation Components service: Helps users to interact with SAP R/3 applications by using the presentation components (interfaces) of these applications.

The Software-Oriented View

The software-oriented view displays the various types of software components that collectively constitute the SAP R/3 system. It consists of SAP graphical user interface (GUI) components, Application servers, and a Message server, which together make up the SAP R/3 system. Since the SAP R/3 system is a multitier client-server system, the individual software components are arranged in tiers. These components act as either clients or servers, based on their position and role in a network. Figure 1.2 shows the software-oriented view of the SAP R/3 architecture:
Figure 1.2: Software-oriented view
As shown in Figure 1.2, the software-oriented view of the SAP R/3 system consists of the following three layers:
- Presentation layer
- Application layer
- Database layer

Presentation Layer

The Presentation layer consists of one or more servers that act as an interface between the SAP R/3 system and its users, who interact with the system with the help of well-defined SAP GUI components. For example, using these components, users can enter a request to display the contents of a database table.
The Presentation layer then passes the request to the Application server, which processes the request and returns a result, which is then displayed to the user in the Presentation layer. While an SAP GUI component is running, it is also connected to a user's SAP session in the R/3 Basis system.
Note: The servers in the Presentation layer are referred to as Presentation servers in this chapter.

Application Layer

The Application layer executes the application logic in the SAP R/3 architecture. This layer consists of one or more Application servers and Message servers. Application servers are used to send user requests from the Presentation server to the Database server and retrieve information from the Database server as a response to these requests. Application servers are connected to Database servers with the help of the local area network. An Application server provides a set of services, such as processing the flow logic of screens and updating data in the database of the SAP R/3 system. However, a single Application server cannot handle the entire workload of the business logic on its own. Therefore, the workload is distributed among multiple Application servers. Figure 1.3 shows the location of the Application server between the Database and Presentation servers:
Figure 1.3: Application server
The Message server component of the Application layer (shown in Figure 1.2) is responsible for communication between the Application servers. This component also contains information about the Application servers and the distribution of load among these servers. It uses this information to select an appropriate server when a user sends a request for processing.
The separation of the three layers of the SAP R/3 system makes the system highly scalable, with the load being distributed among the layers. This distribution of load enables the SAP R/3 system to handle multiple requests simultaneously. The control of a program moves back and forth among the three layers when a user interacts with the program. When the control of the program is in the Presentation layer, the program is ready to accept input from the user, and during this time the Application layer becomes inactive for the specific program. That is, any other application can use the Application layer during this time. As soon as the user enters the input on the screen, the control of the program shifts to the Application layer to process the input and the Presentation layer becomes inactive, which means that the SAP GUI (the user interface of the SAP R/3 system) cannot accept any kind of input. In other words, until the Application layer completes processing the input and calls a new screen, the SAP GUI does not become active.
The procedure in which a new screen is presented to the user is known as a dialog step. Dialog steps are processed in the Application layer, as shown in Figure 1.4:
Figure 1.4: A dialog step

Database Layer

The Database layer of the SAP R/3 architecture comprises the central database system. The central database system has two components: the DBMS and the database itself. The SAP R/3 system supports various databases, such as Adabas D, DB2/400 (on AS/400), DB2/Common Server, DB2/MVS, Informix, Microsoft SQL Server, Oracle, and Oracle Parallel Server.
The database in the SAP R/3 system stores the entire information of the system, including the master and transaction data. Apart from this, the components of ABAP application programs, such as screen definitions, menus, and function modules, are stored in a special section of the database known as the Repository; its contents are known as Repository Objects. The database also stores control and customizing data, which govern how the SAP R/3 system functions. Distributed databases are not used in the SAP R/3 system because the system does not support them.
Note: Master data is the core data that is essential to execute the business logic. Data about customers, products, employees, materials, and suppliers are examples of master data. Transaction data refers to information about an event in a business process, such as generating orders, invoices, and payments.

The User-Oriented View

The user-oriented view displays the GUI of the R/3 system in the form of windows on the screen. These windows are created by the Presentation layer. To view these windows, the user has to start the SAP GUI utility, called the SAP Logon program, or simply SAP Logon. After starting the SAP Logon program, the user selects an SAP R/3 system from the SAP Logon screen. The SAP Logon program then connects to the Message server of the R/3 Basis system in the selected SAP R/3 system and retrieves the address of a suitable Application server, i.e., the Application server with the lightest load. The SAP Logon program then starts the SAP GUI connected to that Application server.
The SAP GUI starts the logon screen. After the user successfully logs on, the initial screen of the R/3 system appears. This initial screen starts the first session of the SAP R/3 system. Figure 1.5 shows the user-oriented view of the SAP R/3 system:
Figure 1.5: User-oriented view
A user can open a maximum of six sessions within a single SAP GUI. Each session acts as an independent SAP GUI. You can simultaneously run different applications in multiple open R/3 sessions. The processing in an open R/3 session is independent of the other open R/3 sessions.
Explaining the Architecture of the Application Server

One of the most important components of the SAP R/3 system is the Application server, where ABAP programs run. The Application server handles the business logic of all the applications in the SAP R/3 system. The Application layer consists of Application servers and Message servers. Application servers communicate with the Presentation and Database layers. They also communicate with each other through Message servers. Application servers consist of dispatchers and various work processes, discussed later in this chapter. Figure 1.6 shows the architecture of the Application server:
Figure 1.6: Architecture of the Application server
Figure 1.6 shows the following components of the Application server:
- Work processes: Represent the processes used to execute user requests. An Application server contains multiple work processes that are used to run an application. Each work process uses two memory areas, the user context and the roll area. The user context contains information regarding the user, and the roll area contains information about program execution.
- Dispatcher: Acts as a bridge connecting the different work processes with the respective users logged on to the SAP R/3 system. The requests received by an Application server are directed first to the dispatcher, which places them in a dispatcher queue. The dispatcher then retrieves the requests from the queue on a first-in, first-out basis and allocates them to a free work process.
- Gateway: Acts as an interface for R/3 communication protocols, such as Remote Function Call (RFC). RFC is the standard SAP interface used to communicate between SAP systems.
- Shared Memory: Represents the common memory area in an Application server. All work processes running in an Application server use shared memory. This memory is used to save the contexts (data related to the current state of a running program) or buffer data. Shared memory is also used to store various types of resources that a work process uses, such as programs and table content.

Describing a Work Process

A work process is a component of the Application server that is used to run the individual dialog steps of an SAP R/3 application. Each work process contains two software processors, the screen processor and the ABAP processor, and one database interface. A work process uses two special memory areas whenever it processes a user request. The first memory area is known as the user context, which holds information regarding the user logged on to the SAP R/3 system. This information consists of user authorizations as well as the names of the currently running programs. The second memory area is known as the roll area, which holds information about the current program pointer (the location in which the data of the program is stored), dynamic memory allocations, and the values of the variables needed to execute the program.

Exploring the Structure of a Work Process

In this section, we discuss the structure of a work process that is used in the R/3 system. Figure 1.7 shows the components of a work process:
Figure 1.7: The components of a work process
As shown in Figure 1.7, the three components of a work process are:
- Screen processor
- ABAP processor
- Database interface

The Screen Processor

In R/3 application programming, user interaction and processing logic are separate operations. From the programming point of view, user interaction is controlled by screens consisting of flow logic. The screen processor executes the screen flow logic and also controls a large part of the user interaction. This flow logic helps a work process communicate with the SAP GUI through a dispatcher. The screen flow logic also includes modules, such as PROCESS AFTER INPUT (PAI) and PROCESS BEFORE OUTPUT (PBO), which describe the flow of data between the screens.

The ABAP Processor

The ABAP processor executes the processing logic of an application program written in the ABAP language. The ABAP processor not only processes the logic but also communicates with the database interface to establish a connection between a work process and a database. The screen processor informs the ABAP processor of the module of the screen flow logic that is to be processed. Figure 1.8 shows the communication between the screen processor and the ABAP processor when an application program is running:
Figure 1.8: The screen processor and the ABAP processor at work

The Database Interface

The database interface performs the following tasks in a work process:
- Establishing or terminating the connection between the work process and the database
- Accessing database tables
- Accessing the R/3 Repository Objects, such as ABAP programs and screens
- Accessing catalog information (the ABAP Dictionary)
- Controlling transactions (commit and rollback handling)
- Managing table buffering on an Application server
Figure 1.9 shows the different components of the database interface:
Figure 1.9: Components of the database interface
As shown in Figure 1.9, databases can be accessed in two ways: using Open SQL statements and using Native SQL statements.
Open SQL provides statements that, in conjunction with other ABAP constructions, can simplify or speed up database access. Native SQL statements, on the other hand, are a subset of standard SQL that is not integrated into the ABAP language. To learn more about Open and Native SQL statements, refer to Chapter 8.
The database-specific layer (Figure 1.9) hides the differences between database systems from the rest of the components of the database interface.
Now, let's describe the various types of work processes.

Types of Work Processes

All work processes can be categorized into five basic types on the basis of the tasks they perform: dialog, update, background, enqueue, and spool. In the Application server, the type of a work process determines the kind of task for which it is responsible. The dispatcher starts a work process and, depending on its type, assigns tasks to it. This means that you can distribute work process types to optimize the use of resources on the Application servers. Figure 1.10 shows the different types of work processes within an ABAP Application server:
Figure 1.10: Types of work processes
In Figure 1.10, you see the different types of work processes: the dialog work process, update work process, background work process, enqueue work process, and spool work process.
Table 1.1 describes the types of work processes:

Table 1.1: Different types of work processes

- Dialog work process
  Description: Deals with requests to execute dialog steps triggered by an active user. The dialog work process is not used for requests that take long to execute and lead to high central processing unit (CPU) consumption.
  Settings: The maximum response time of a dialog work process can be set by specifying the time in the rdisp/max_wprun_time parameter. The default time for a dialog work process is 300 seconds. If the dialog work process does not respond within this time period, it is terminated.
- Update work process
  Description: Executes database update requests. There must be at least one update work process per SAP system, but there can be more than one update work process per dispatcher. An update work process is divided into two different modules, V1 and V2. The V1 module handles critical or primary changes, for example, creating an order or making changes to the material stock in the SAP R/3 system. The V2 module handles less critical, secondary changes. These are pure statistical updates, for example, calculating the sum of the values of certain parameters. V1 modules have higher priority than V2 modules.
  Settings: The rdisp/wp_no_vb profile parameter is used to control the number of update work processes for V1 modules, and the rdisp/wp_no_vb2 parameter is used to control the number of update work processes for V2 modules.
- Background work process
  Description: Executes programs that run without the involvement of the user, such as client copy and client transfer. There must be at least two background work processes per SAP system, but more than one background work process can be configured per dispatcher. Usually, background work processes are used to perform jobs that take a long time to execute.
  Settings: The number of background work processes can be changed by specifying the value in the rdisp/wp_no_btc parameter.
- Enqueue work process
  Description: Handles the lock mechanism. It administers the lock table, which is the main part of a Logical Unit of Work (LUW). The lock table stores the locks for logical databases in the SAP R/3 system. Only one enqueue work process is required for each SAP R/3 system.
  Settings: The number of enqueue work processes can be specified in the rdisp/wp_no_enq parameter.
- Spool work process
  Description: Passes sequential data flows on to printers. Every SAP system requires at least one spool work process. However, there can be more than one spool work process per dispatcher.
  Settings: The parameter to set the number of spool work processes is rdisp/wp_no_spo.

Note: All the parameters related to the different types of work processes in Table 1.1 are specified in the Maintain Profile Parameters screen of the SAP system. You can access the Maintain Profile Parameters screen by entering the RZ11 transaction code in the Command field. To learn more about the Command field, refer to Chapter 3.
Now, let's discuss how dialog steps are executed by a work process.
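As an aside, the work process counts listed in Table 1.1 are ordinarily maintained in the instance profile of the system. The following excerpt is purely illustrative; the parameter names are the ones named in Table 1.1 (plus rdisp/wp_no_dia for dialog work processes), but the values are made up and are not sizing recommendations:

```
# Instance profile excerpt (illustrative values only)
rdisp/wp_no_dia = 10    # dialog work processes
rdisp/wp_no_vb  = 2     # V1 update work processes
rdisp/wp_no_vb2 = 1     # V2 update work processes
rdisp/wp_no_btc = 3     # background work processes
rdisp/wp_no_enq = 1     # enqueue work process
rdisp/wp_no_spo = 1     # spool work process
```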
Dispatching Dialog Steps

The dispatcher distributes the dialog steps among the various work processes on the Application server. A dialog step is a procedure in which a new screen appears in the SAP R/3 system for user interaction. Dispatching of dialog steps means navigating from one screen to another, where one screen accepts a request from the user and the other screen displays the result of the request.
It is very important for a programmer in SAP to understand how dialog steps are processed and dispatched, because the process is completely different from the processing involved in executing an ABAP program.
Note: A dialog step is an SAP R/3 screen, which is represented by a dynamic program called a dynpro. A dynpro consists of a screen and all the associated flow logic. It contains field definitions, the screen layout, validations, and flow logic. The flow logic describes the sequence in which the screens are processed. When users navigate the SAP R/3 system from screen to screen, they are actually executing dialog steps. A set of dialog steps makes up a transaction.
Often, the number of users logged on to an ABAP Application server is many times greater than the number of available work processes. In addition, each user can access several applications at a time. In this scenario, the dispatcher performs the important task of distributing all the dialog steps among the work processes on the ABAP Application server. Figure 1.11 shows an example of how dialog steps are dispatched in an ABAP Application server:
Figure 1.11: Dispatching dialog steps
Figure 1.11 shows two users, User 1 and User 2. The dispatcher receives a request to execute a dialog step from User 1 and directs it to work process 1, which is free. Work process 1 addresses the context of the application program (in shared memory), executes the dialog step, and becomes free again. Next, the dispatcher receives a request to execute a dialog step from User 2 and directs it to work process 1. Work process 1 executes the dialog step in the same way that it did in the case of User 1. However, while work process 1 is busy, the dispatcher receives another request from User 1 and directs it to work process 2, because work process 1 is not free. After work processes 1 and 2 have finished processing their respective dialog steps, the dispatcher receives yet another request from User 1 and directs it to work process 1, which is now free. While work process 1 is busy, the dispatcher receives another request from User 2 and directs it to work process 2, which is free. This process continues until all the requests of the users are processed.
From the preceding example, we can conclude that a program assigns a single dialog step to a single work process for execution. The individual dialog steps of a program can be executed on different work processes, and the program context must be addressed for each new work process. Moreover, a work process can execute dialog steps of different programs from different users.
An ABAP program is always processed by work processes, which require the user context for processing. A user context represents the data specifically assigned to an SAP user. The information stored in the user context can be changed by using the roll area of the memory management system in SAP.
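The dynpro flow logic mentioned in the note earlier in this section is written in its own screen language, separate from ABAP. A minimal sketch for a hypothetical screen 0100 follows; the module names are made up:

```
PROCESS BEFORE OUTPUT.
* PBO: executed by the screen processor before screen 0100 is sent
* to the SAP GUI, e.g., to set the GUI status and prepare fields.
  MODULE status_0100.

PROCESS AFTER INPUT.
* PAI: executed after the user triggers a function on the screen,
* e.g., to evaluate the user command and decide on the next screen.
  MODULE user_command_0100.
```

The modules named here (status_0100, user_command_0100) would be implemented in the accompanying ABAP module pool and executed by the ABAP processor, illustrating the division of labor between the screen processor and the ABAP processor described above.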
Describing the User Context and Roll Area in the SAP System

All user contexts are stored in a common memory area of the SAP system. The memory management system of SAP comprises the following three types of memory, which can be assigned to a work process in SAP:
- SAP Roll Area: Specifies a memory area of a defined size that belongs to a work process. It is located in the heap of the virtual address space of the work process.
- SAP Extended Memory: Represents a reserved space in the virtual address space of an SAP work process for extended memory. The size of the extended memory can be set by using the em/initial_size_MB profile parameter of the Maintain Profile Parameters screen in the SAP system.
- Private Memory: Specifies a memory location that is used by a work process if a dialog work process has used up the roll area memory and extended memory assigned to it.
Roll area memory is used as the initial memory assigned to a user context. Roll area memory is allocated to a work process in two stages. In the first stage, memory is allocated by specifying the ztta/roll_first parameter in the Maintain Profile Parameters screen. However, if this memory is already in use by the work process, additional memory is allocated in the second stage. The size of the additional memory area is equal to the difference between the ztta/roll_area and ztta/roll_first parameters. Here, the ztta/roll_area parameter specifies the total size of the roll area, in bytes. Figure 1.12 shows the structure of the roll area memory:
Figure 1.12: Structure of the roll area memory in SAP
As shown in Figure 1.12, whenever a dialog step is executed, a roll action occurs between the roll buffer in the shared memory and the local roll area, which is allocated by the ztta/roll_first parameter. The area in the shared memory that belongs to a user context is then accessed. Note that when the context of a work process changes, its data is copied from the local roll area to a common resource called the roll file through the roll buffer (a shared memory area).
As shown in Figure 1.12, the following roll processes are performed by the dispatcher:
- Roll-in: Copies the user context from the roll buffer (in shared memory) to the local roll area
- Roll-out: Copies the user context from the local roll area to the roll buffer
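The two-stage allocation described above can be made concrete with a hypothetical profile excerpt; the byte values below are illustrative only, and actual sizing depends on the installation:

```
# Illustrative values only
ztta/roll_first = 1024        # first-stage roll area allocation, in bytes
ztta/roll_area  = 2000896     # total roll area per work process, in bytes
# second-stage allocation = ztta/roll_area - ztta/roll_first
#                         = 2000896 - 1024 = 1999872 bytes
```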
The Client-Dependency Feature

The SAP R/3 system provides an important feature called client-dependency. Client-dependent data belongs to the client in which it was created, so a change made in one client is not reflected in the other clients; client-independent data, by contrast, is shared by all clients of the system. Let's take the example of R/3 database tables to illustrate this. Some tables in the SAP R/3 system are client-dependent, while others are client-independent. A client-dependent table has a first field or column of the CLNT type. The length of this field is always three characters; by convention, this field is always named MANDT and contains the client number as its content. A client-independent table, on the other hand, does not have a field of the CLNT type as its first field. If any data is updated in the rows of a client-dependent table, the change is not reflected in the other clients of the SAP R/3 system.
The client-dependency feature can also be explained in terms of SAPscript forms and Smart Forms. An SAPscript form is a template that simplifies the designing of business forms. SAP Smart Forms, on the other hand, is a tool used to print or send business forms through e-mail, the Internet, and fax. In the SAP R/3 system, SAPscript forms are client-dependent, while SAP Smart Forms are not.
Now, let's assume that a user generates two forms by using SAPscript with two different client logins, client 800 and client 000. In this case, any changes made in client 800 will not be reflected in the form designed in client 000. In the case of Smart Forms, on the other hand, any changes made in one client will be reflected in the other client as well.
Note: SAPscript and Smart Forms are described in detail in Chapter 12.
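The automatic client handling that follows from the MANDT field can be sketched in Open SQL; ZCUSTOMERS below is a made-up name for a client-dependent table:

```
* Open SQL implicitly restricts access to client-dependent tables to
* the logon client, i.e., it adds "MANDT = sy-mandt" to the selection.
DATA lt_rows TYPE STANDARD TABLE OF zcustomers.

SELECT * FROM zcustomers INTO TABLE lt_rows.

* The CLIENT SPECIFIED addition switches the automatic client handling
* off, so the client column must be supplied explicitly in the WHERE
* clause, here to read data belonging to client 000.
SELECT * FROM zcustomers CLIENT SPECIFIED
  INTO TABLE lt_rows
  WHERE mandt = '000'.
```

Client-independent tables have no MANDT column, so every client reads and writes the same rows without any such filtering.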
Summary

This chapter has explored the concept of SAP and its importance as leading business software. The chapter has also described the concept of ERP and its implementation in SAP. In addition, it has described the architecture of the SAP R/3 system and the role and function of its three layers, Presentation, Application, and Database, as well as the various components of the Application server, such as work processes, the dispatcher, and the gateway. The text has also explored memory management in SAP. Finally, the chapter concluded with a discussion of the client-dependency feature of SAP.
Chapter 2: The Logon Process of the SAP System

Overview

Similar to any application software or system, the mySAP ERP system provides an authorization mechanism to ensure that only authenticated and authorized users access the system. The authentication mechanism of the mySAP ERP system requires you to log on to the system using your login name and password before you can start working on the mySAP ERP system. This process of verifying users on the basis of their login names and passwords is called user authentication. In a mySAP ERP system, the login name and the password are provided by the system administrator. However, you can change the password afterwards for security purposes.
You can log on to the mySAP ERP system by using the SAP Logon screen. This screen also allows you to perform various activities related to the SAP logon process. For example, you can add and configure the SAP servers that you need to connect to during the logon process. You can also create and manage shortcuts to various functions of the mySAP ERP system. While creating these shortcuts, you can specify the logon settings for these functions. In addition, you can customize or change your password to log on to the mySAP ERP system.
In this chapter, you learn about the logon process in the mySAP ERP system. The chapter starts by explaining the steps to start the mySAP ERP system through the SAP Logon screen. Next, you learn how to maintain the SAP Logon screen by adding, modifying, and deleting one or more mySAP ERP systems. You also explore how to create and manage shortcuts, which enable you to access a transaction screen, a report, or a system command directly in the SAP system. Then, you learn how to configure the settings in the SAP Logon screen, such as the language in which you want the SAP Logon screen to appear and whether you want to display the SAP Logon screen with a wizard. In addition, you explore how to change the password used to log on to the mySAP ERP system. Finally, the chapter discusses various ways to log off of the mySAP ERP system.
Starting the SAP System

The SAP Logon screen can be accessed by either selecting the SAP Logon option from the Start menu or double-clicking the SAP Logon shortcut icon on the desktop. Perform the following steps to start an SAP system from the Start menu of your Windows OS:
1. Select Start > All Programs > SAP Front End > SAP Logon, as shown in Figure 2.1:
Figure 2.1: Selecting the SAP Logon option
The SAP Logon screen appears, as shown in Figure 2.2:
Figure 2.2: The SAP Logon screen
2. Click the Log On button on the SAP Logon screen (see Figure 2.2).
The SAP screen (the first screen of the SAP system), in which you enter the logon details, appears, as shown in Figure 2.3:
Figure 2.3: The SAP screen for entering the logon details
The SAP screen comprises the following fields:
- Client: Enter the client number.
- User: Enter the user ID.
- Password: Enter the password provided by your system administrator.
- Language (optional): Set the language in which you want to display screens, menus, and fields.
Note: Notice that as you enter the password, asterisks appear in the field rather than the characters that you type. As a security measure, the system does not display the value entered in the Password field.
3. Enter the values in all the fields of the SAP screen; for instance, we have entered the client ID as 800, the user name as KDT, and the password as sapmac, as shown in Figure 2.3. Now, press the ENTER key. The SAP Easy Access screen appears, as shown in Figure 2.4:
Figure 2.4: The SAP Easy Access screen
The SAP Easy Access screen serves as a gateway to working in SAP and contains all the development tools provided by SAP. However, before starting to work in this screen, you need to understand that the SAP Logon screen (see Figure 2.2) of an SAP system can be modified on the basis of user requirements. Note that the changes made to SAP Logon are not reflected at the front end; they affect only the internal processing of the SAP system.
Maintaining the SAP Logon ScreenThe SAP Logon screen is used to log on to an SAP system. It is a window-based program, which acts asa mediator between the SAP system and the SAP GUI interface. By default, the SAP Logon screencontains the following two tabs (see Figure 2.2): Systems— Allows the user to add a SAP server or a group of servers as well as edit or delete an existing server in the list of servers. Shortcuts— Allows you to create, delete, or edit the shortcut of a particular screen in the list of shortcuts.The SAP Logon screen, within the Systems tab, is maintained by performing the following operations: Adding a New Entry— Adds a new server to the list of servers. Modifying the Entry— Modifies the properties of a server. Deleting the Entry— Deletes a server.Now, lets discuss each operation in detail, one by one.Adding a New EntryIn the SAP Logon screen, the Systems tab displays a list of servers. You can add a single instance of aserver as well as a group of servers in this list. Perform the following steps to add a single server: 1. Click the New Item button on the SAP Logon screen (see Figure 2.2). The Create New System Entry Wizard appears, as shown in Figure 2.5:Figure 2.5: The create new system entry wizardThe Create New System Entry wizard contains a list of all the SAP servers. In our case, only asingle instance of the server is displayed. Note that in the case of multiple servers, the first entry in the listappears as selected by default. 2. Select the server that you want to add and click the Next button (see Figure 2.5). A screen that accepts the system connection parameters appears, as shown in Figure 2.6:
 3. Select Custom Application Server or Group/Server Selection from the drop-down list of the Connection Type field. In this case, we have selected the Custom Application Server option.

Figure 2.6: Showing system connection parameters

In addition, the System Connection Parameters group box contains the following fields:
 Description— Specifies a short description of the system entry. It is an optional field.
 Application Server— Specifies the name of the host computer on which the required server is hosted.
 System Number— Specifies the system number.
 System ID— Specifies the system ID of the SAP system that you want to connect to.
 SAProuter String— Specifies an SAProuter string if an SAProuter is required. It is an optional field.
 4. Enter the values in all the fields of the System Connection Parameters group box. For instance, we have given the Description as My SAP Server, Application Server as 192.168.0.233, System Number as 00, and System ID as DMT, as shown in Figure 2.6.
 5. Click the Next button (see Figure 2.6). The Choose Network Settings screen of the Create New System Entry Wizard appears, as shown in Figure 2.7.
 6. Click the Next button (see Figure 2.7) in the Choose Network Settings screen. The screen to specify the Language Settings and Upload/Download Encoding appears, as shown in Figure 2.8.
 7. Click the Finish button to complete the process (see Figure 2.8).
Figure 2.7: The Choose Network Settings screen

Figure 2.8: The Language Settings and Upload/Download Encoding screen

A new item, My SAP Server, is added to the Systems selection list, as shown in Figure 2.9:

Figure 2.9: Showing the new server entry
Modifying the Entry

In the SAP Logon screen, you can modify the configuration settings of an existing SAP server entry, such as the description and the address of the server. Perform the following steps to modify the configuration settings of an SAP server:
 1. Select the SAP server whose properties you want to change. Here, we use My SAP Server, which is already selected in the servers list of the SAP Logon screen (see Figure 2.9).
 2. Click the Change Item button (see Figure 2.9). The System Entry Properties dialog box appears (Figure 2.10).
 3. Enter "New link to SAP Server" in place of My SAP Server in the Description field, as shown in Figure 2.10.
 4. Click the OK button (see Figure 2.10) or press the ENTER key to complete the process.

Figure 2.10: The System Entry Properties dialog box

Notice that the name of the item My SAP Server is changed to New link to SAP Server, as shown in Figure 2.11:

Figure 2.11: Showing the modified description
Deleting the Entry

Perform the following steps to delete an item from the servers list:
 1. Select the SAP server that you want to delete. Here, we proceed with the already selected item "New link to SAP Server" (see Figure 2.11).
 2. Click the Delete Item button (see Figure 2.11). The Saplogon API dialog box appears, asking for confirmation, as shown in Figure 2.12:
 3. Click the Yes button (see Figure 2.12) to delete the selected item. Notice that the New link to SAP Server item is now deleted from the SAP selection list.

Figure 2.12: The Saplogon API dialog box

Now, let's learn how to create and use various shortcuts to open different screens of the SAP system.
Creating and Using SAP Shortcuts

Shortcuts are components of the SAP GUI and are used to access the most frequently used functions or transactions directly. You can use SAP shortcuts to start an SAP transaction, view a report, or execute a system command directly from your Microsoft Windows desktop or the SAP Logon screen. After the shortcuts are created, they appear as regular icons on the desktop of your computer.

Creating SAP Shortcuts

An SAP shortcut can be created only on computers running the Windows operating system. The SAP shortcut file type is registered automatically in the Windows registry after the successful installation of an SAP GUI. The basic requirements to create an SAP shortcut are as follows:
 An SAP user ID from your system administrator
 A password
 The transaction code for the screen for which you want to create an SAP shortcut
The following are the three ways to create an SAP shortcut:
 Creating an SAP shortcut from the desktop
 Creating an SAP shortcut from a specific screen
 Creating an SAP shortcut in the SAP Logon screen

Creating an SAP Shortcut from the Desktop

Perform the following steps to create an SAP shortcut from the desktop:
 1. Right-click anywhere on the desktop. A context menu appears. Select New > SAP GUI Shortcut. An SAP shortcut icon, New SAP GUI Shortcut, appears on the desktop, as shown in Figure 2.13:

Figure 2.13: New SAP GUI shortcut icon

 2. Enter a name for the shortcut (for instance, MySAPLogon) and press the ENTER key. A new shortcut to the SAP Logon file is created on the desktop with the name MySAPLogon.
Now, let's edit the properties of the shortcut to the SAP Logon file:
 1. Right-click the shortcut file (MySAPLogon) and select the Edit option. The SAP Shortcut Properties dialog box appears, as shown in Figure 2.14.
 2. Enter a title in the Title field. In Figure 2.14, we have entered ABAP Editor.
 3. In the Type field, select the type of shortcut from the following options:
 Transaction
 Report
 System command

Figure 2.14: The SAP Shortcut Properties dialog box

In this case, we have selected Transaction (see Figure 2.14).
 4. Enter a transaction command (for instance, se38) in the Transaction field, as shown in Figure 2.14.
 5. In the System Description field, select SAP Server from the drop-down list, as shown in Figure 2.14.
Note: In this case, the default System ID is DMT.
 6. Now, enter the client number in the Client field, say, 800 (see Figure 2.14).
 7. Enter the name of the user (for instance, KDT) in the User field and the desired language (for instance, EN-English) in the Language field, as shown in Figure 2.14.
Note: The system automatically uses your Windows user ID if you leave the User field blank. The Password field is deactivated for security reasons; this field can be activated by administrators only.
 8. Finally, click the OK button, and the desired shortcut is placed on your desktop.

Creating an SAP Shortcut from a Specific Screen

Perform the following steps to create an SAP shortcut from a specific screen in the SAP system:
 1. Open the screen in which you want to create an SAP shortcut. In this case, we have opened the initial screen of Screen Painter (by using the SE51 transaction code), as shown in Figure 2.15.
Note: Use the SE51 transaction command to open Screen Painter.
 2. Click the Customize Local Layout icon on the standard toolbar and then select the Create Shortcut option, as shown in Figure 2.16:
Figure 2.15: Initial screen of Screen Painter

Figure 2.16: Selecting the Create Shortcut option

The Create New SAP Shortcut Wizard appears, as shown in Figure 2.17. Ensure that the information filled in the Title, Type, Transaction, Client, User, and Language fields is correct. You can also modify the values specified in these fields. Here, we have modified the System Description field.

Figure 2.17: Modifying the System Description field

 3. Select the System Description as SAP Server, as shown in Figure 2.17.
 4. Click the Next button. The next screen appears, as shown in Figure 2.18.
 5. Click the Finish button. The SAP GUI Shortcut dialog box appears, as shown in Figure 2.19.
 6. Click the OK button, as shown in Figure 2.19, to complete the process. The DMT Screen Painter shortcut appears on your desktop.

Figure 2.18: Showing the properties of the new shortcut

Figure 2.19: The SAP GUI shortcut information box

Note: The system automatically saves the shortcut file with the .sap extension in the desktop directory.

Creating an SAP Shortcut in the SAP Logon Screen

We can also create an SAP shortcut in the SAP Logon screen. The Shortcuts tab of the SAP Logon screen allows us to create, edit, or delete a shortcut with the help of the following buttons:
 New Item— Helps create shortcuts that allow you to start SAP transactions, run reports, or execute system commands directly after logging on to the defined system.
 Change Item— Edits an existing shortcut in the shortcuts list.
 Delete Item— Deletes an existing shortcut from the shortcuts list.
 Log On— Allows you to log on to an SAP system through the created SAP shortcut.
The user can log on to an SAP system in the following ways:
 By selecting an entry in the shortcuts list and pressing the Log On button
 By selecting an entry in the shortcuts list and pressing the ENTER key
 By double-clicking an entry in the shortcuts list
Perform the following steps to create a new shortcut in the SAP Logon screen:
 1. Click the Shortcuts tab on the SAP Logon screen, as shown in Figure 2.20.
Note: You can add already created SAP shortcuts (present on your desktop) to the shortcuts list just by dragging and dropping their icons onto the SAP Logon screen.
 2. Click the New Item button, as shown in Figure 2.20. The Create New SAP Shortcut dialog box appears (see Figure 2.21).
 3. Enter "Menu Painter Shortcut" in the Title field, "System Command" in the Type field, the "/nSE41" system command in the Command field, "SAP Server" in the System Description field, "800" in the Client field, and "KDT" in the User field. Click the Next button of the Create New SAP Shortcut dialog box, as shown in Figure 2.21:

Figure 2.20: The Shortcuts tab

Figure 2.21: The Create New SAP Shortcut wizard

Note that the Create New SAP Shortcut Wizard shown in Figure 2.21 is similar to the one shown in Figure 2.17. Therefore, on clicking the Next button, you get a screen similar to that shown in Figure 2.18.
 4. Click the Finish button. A new shortcut is displayed in the Shortcuts tab, as shown in Figure 2.22:
Figure 2.22: Showing the new shortcut

The user can modify the properties of this shortcut by using the Change Item button or delete it by using the Delete Item button.

Using SAP Shortcuts

Once an SAP shortcut is created, it can be used easily by just double-clicking it. Note that to be able to work on an SAP system, the user must have an SAP user name and password provided by the system administrator. An SAP shortcut can be used in the following contexts:
 With no SAP session running
 With an SAP session running
A session is an SAP system instance opened by a user. Multiple sessions can be started when the user has to work on more than one task at a time. All these sessions (screens) of SAP can be kept active or open simultaneously; consequently, the user saves time navigating from one screen to another. Each session is independent of the others; that is, an operation performed in one session does not affect the other sessions.
Note: The system administrator specifies the maximum number of sessions (up to 6) that can be opened at a single time.
Now, let's see how to use an SAP shortcut, both with and without a session running.

With No SAP Session Running

If no SAP session is running on the computer, the SAP system displays a dialog box requesting the user name and password for security purposes when you access a shortcut. A dialog box with the name corresponding to the created shortcut appears. Perform the following steps to use a shortcut when an SAP session is not running:
 1. Double-click the SAP shortcut assigned to any specific screen. In this case, we have used the shortcut (Menu Painter Shortcut) that we created in the "Creating an SAP Shortcut in the SAP Logon Screen" section. The Menu Painter Shortcut dialog box appears, as shown in Figure 2.23.
Figure 2.23: The Menu Painter Shortcut dialog box

 2. Enter the user name and password given by the system administrator in the User Name and Password fields, respectively. In this case, the user name is KDT and the password is sapmac. However, for security reasons, the password is encrypted, as shown in Figure 2.23.
 3. Click the Log On button or press the ENTER key to start the SAP session.
To change or view your shortcut definition, right-click in the opened dialog box (Figure 2.23), outside the title bar, input fields, or buttons. A context menu appears. Click the Open option to view the .sap shortcut file, or the Edit option to make changes to it.
Note: If you have not entered the password, only the Edit option is activated. However, after you enter even the first character of the password, both the Open and Edit options are activated.

With an SAP Session Running

To use a shortcut when an SAP session is already running, double-click the SAP shortcut for the task that you want to perform. If an application is already running, a new SAP session is started; otherwise, the current SAP session starts the task. The following are the ways to use a shortcut while an SAP session is running:
 Drag and drop the shortcut from your desktop to the currently running SAP session— The SAP system displays the defined transaction or report.
 Drag and drop the shortcut, while pressing the CTRL key, from your desktop to the currently running SAP session— The SAP system displays the defined transaction or report in a new session.
 Drag and drop the shortcut, while pressing the SHIFT key, from your desktop to the currently running SAP session— The SAP system displays the properties of the shortcut.
Note: If an SAP shortcut is created using the system command /NTCD (/N plus the transaction code), the task is executed only in the current SAP session.
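Since shortcut definitions are saved as plain-text .sap files, they can also be generated programmatically. The sketch below uses Python's configparser to write one; the section and key names ([System], [User], [Function], and so on) follow a common SAP GUI shortcut layout but vary between SAP GUI versions, so treat them as illustrative assumptions rather than the definitive format:

```python
import configparser

def write_sap_shortcut(path, title, command, system_id, client, user, language="EN"):
    """Write an INI-style .sap shortcut file (section/key names are illustrative)."""
    cfg = configparser.ConfigParser()
    cfg.optionxform = str                      # keep key capitalization as written
    cfg["System"] = {"Name": system_id, "Client": client}
    cfg["User"] = {"Name": user, "Language": language}
    cfg["Function"] = {"Title": title, "Command": command, "Type": "Transaction"}
    with open(path, "w") as f:
        cfg.write(f)

# Example: the Menu Painter shortcut created in the steps above
write_sap_shortcut("menu_painter.sap", "Menu Painter Shortcut", "SE41",
                   "DMT", "800", "KDT")
```

Double-clicking such a file on a machine with SAP GUI installed opens the defined transaction, exactly as the manually created shortcuts above do.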
Configuring the SAP Logon

In this section, we discuss how to configure various settings of the SAP Logon screen, such as the language of the SAP Logon screen and the path of the configuration files, within the SAP Logon Configuration dialog box. To do so, click the icon present at the top-left corner of the SAP Logon screen and select Options, as shown in Figure 2.24:

Figure 2.24: Selecting Options

The SAP Logon Configuration dialog box appears, as shown in Figure 2.25:

Figure 2.25: The SAP Logon Configuration dialog box

In the SAP Logon Configuration dialog box, you can specify or change various setting options. Table 2.1 describes the options of the SAP Logon Configuration dialog box.
Table 2.1: Settings of the SAP Logon Configuration dialog box
 Language— Helps select the language in which the user needs to display the SAP Logon. To use this option, the SAP Logon language file must be installed by the system administrator.
 Message Server Timeout in Seconds— Specifies the time the SAP Logon screen waits for a response from the R/3 Message Server. The default value is 10 seconds.
 With Wizard— Specifies whether or not you want to work in SAP Logon with the wizard. The SAP Logon screen needs to be restarted for the settings to be effective.
 Confirmation of Deletion of List Box Entry— Specifies whether you want to display a warning before deleting a system or logon group from the SAP Logon.
 Disable System Edit Functions— Specifies whether you want to prevent logon entries from being changed.
 Configuration Files— Shows a list of configuration files (.ini files) that can be opened by double-clicking.
 Activate SAP GUI Trace Level— Specifies whether you want to define and activate a network trace (SAP GUI trace). Selecting this check box enables the user to select the level of tracing. If the user selects level 2 or 3, an additional log file is generated that records all incoming data in an encrypted binary code.
 Additional Data Hexdump in Trace— Specifies whether you want to list additional memory areas in the SAP GUI trace. This check box is activated only when level 2 or 3 is selected.
 Additional Command Line Arguments— Specifies additional SAP GUI command line arguments; for instance, /WAN is used when a low-speed connection is required for all your SAP systems.
After setting the properties in the SAP Logon Configuration dialog box, click the OK button to return to the SAP Logon screen.
Changing the Password

Initially, the system administrator provides you with a password to log on to the SAP system. However, it is recommended to change your password when you log on for the first time, for security purposes. You can even set the time interval after which you would like to change your SAP password; the SAP system itself prompts you to change your password after the specified period of time. Perform the following steps to change the password:
 1. Open the SAP screen by clicking the Log On button of the SAP Logon screen.
 2. Enter the data in the Client, User, and Password fields on the SAP Logon screen (shown previously in Figure 2.3).
 3. After entering the values in the required fields, click the New password button on the application toolbar, as shown in Figure 2.3, or press the F5 key. The SAP dialog box appears, as shown in Figure 2.26:
 4. Enter the new password in the New Password field and retype it in the Repeat Password field (see Figure 2.26).
 5. Click the Confirm icon to save your new password, as shown in Figure 2.26.

Figure 2.26: Displaying the SAP dialog box

The following are some rules and restrictions that one must follow while creating a password:
 A password should be at least three and at most eight characters long.
 A password should not begin with any of the following:
 o A question mark (?)
 o An exclamation mark (!)
 o A blank space
 o Three identical characters, such as 333
 o Any sequence of three characters contained in your user ID (for instance, 'man' if your user ID is Friedman)
 A password can have a combination of the following letters and numbers:
 o The letters a through z
 o The numbers 0 through 9
 o Punctuation marks
 While creating a password, do not use the following:
 o The words pass or init as your password
 o Any of the last five passwords you have used
Note: In SAP, passwords are not case-sensitive.
For example, the password blueSky is the same as Bluesky or BLUESKY.
Table 2.2 lists some examples of valid and invalid passwords:

Table 2.2: Valid and invalid passwords
 Valid passwords: Kashvi, Tanu=8, 6yuto
 Invalid passwords: !exercf (begins with an invalid character), Sssb (contains three identical characters), Ap (contains fewer than three characters)
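The rules above can be captured in a short validation sketch. This is a simplified illustration of the rules exactly as the text states them — real SAP password policy is configured through system profile parameters and differs by release — and the function name and arguments are hypothetical:

```python
def is_valid_password(pw, user_id="", last_five=()):
    """Check a candidate password against the rules listed above (sketch only)."""
    pw_l = pw.lower()                       # SAP passwords are not case-sensitive
    if not 3 <= len(pw) <= 8:               # at least three, at most eight characters
        return False
    if pw[0] in "?! ":                      # must not begin with ?, !, or a blank
        return False
    if pw_l[0] == pw_l[1] == pw_l[2]:       # ...nor with three identical characters
        return False
    uid = user_id.lower()                   # ...nor with a 3-char run from the user ID
    if any(pw_l[:3] == uid[i:i + 3] for i in range(len(uid) - 2)):
        return False
    if pw_l in ("pass", "init"):            # reserved words
        return False
    if pw_l in (p.lower() for p in last_five):
        return False                        # not one of the last five passwords used
    return True
```

Running it against Table 2.2 reproduces the book's classification: Kashvi, Tanu=8, and 6yuto pass, while !exercf, Sssb, and Ap fail.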
Logging Off of the SAP System

After completing your work on the SAP system, you need to save the necessary data and log off of the system. Perform the following steps to log off of the SAP system:
 1. Click the Log Off icon on the standard toolbar, as shown in Figure 2.27:

Figure 2.27: Clicking the Log Off icon

If there is any unsaved data, a dialog box appears, asking for confirmation, as shown in Figure 2.28:

Figure 2.28: The Log Off dialog box

 2. Click the Yes button if you want to log off without saving the unsaved data; otherwise, click the No button.
There are two other methods to log off of the SAP system. The first is to select the Log Off option from the System menu, as shown in Figure 2.29:

Figure 2.29: The Log Off option on the System menu

The Log Off dialog box appears (see Figure 2.28).
In the second method, you can exit directly from the SAP system, without any confirmation, by typing the /nex transaction command in the command field and pressing the ENTER key, as shown in Figure 2.30:
Figure 2.30: Logging off using the transaction code

The SAP screen immediately disappears.
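The command-field entries used in this chapter follow a small set of prefixes: a bare code starts the transaction, /n ends the current task and starts a new one (as in /nSE41 earlier), and /nex logs off without confirmation. A tiny sketch of that dispatch logic — the return labels are hypothetical; only the prefix semantics come from the text:

```python
def interpret_command(entry):
    """Classify an SAP command-field entry by its prefix (illustrative sketch)."""
    c = entry.strip()
    if c.lower() == "/nex":                    # log off immediately, no confirmation
        return ("log_off", None)
    if c.lower().startswith("/n"):             # end current task, start new transaction
        return ("replace_task", c[2:].upper())
    return ("run_in_session", c.upper())       # plain transaction code, e.g. SE38
```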
Summary

In this chapter, you learned how to log on to the SAP R/3 system. You also explored the steps to open the initial screen of an SAP system and to maintain the SAP Logon information by adding, changing, and deleting instances of the SAP server. In addition, the chapter described how to create and use shortcuts for various purposes, such as to log on or to open a particular screen. Next, you learned how to edit the configuration settings of the SAP system. Finally, you learned how to modify the password and log off of the SAP system.
Chapter 3: SAP Easy Access

Overview

SAP GUI is the software that displays a graphical interface to enable users to interact with an SAP system. This software acts as the client in the three-tier architecture of an SAP system, which contains a database server, an application server, and a client. SAP GUI can run on a variety of operating systems, such as Microsoft Windows, Apple Macintosh, and UNIX.
You can access the complete SAP GUI only after successfully logging on to an SAP system. When you have successfully logged on, you get the first screen of the system, named SAP Easy Access. The opening of this screen represents a new session in the SAP system; consequently, each screen of the SAP GUI that you open creates a new session. You may open a maximum of six sessions simultaneously. The SAP Easy Access screen displays a user menu with options to perform your tasks, such as creating and modifying transactions, reports, and web addresses. The menus of the navigational user menu can be expanded or collapsed. Moreover, you can create and maintain favorites for the transactions and reports that you commonly use.
In this chapter, you learn about the first screen of the SAP system, i.e., SAP Easy Access, which appears after you have logged on to the SAP system. The chapter starts by explaining the SAP user menu that appears on the SAP Easy Access screen. Next, you explore the SAP GUI by discussing its three main components: the screen header, the screen body, and the status bar. You also learn how to customize the layout and settings of the screens displayed in the SAP system, such as modifying the color, text size, and window size of a screen. You learn how to navigate within the workplace menu and manage favorites by adding, modifying, and deleting items such as transactions, web addresses, and folders. Finally, you learn how to handle one or more sessions and navigate from one session to another.
Explaining the SAP Easy Access Screen

The first screen that appears after logging on to the SAP system is SAP Easy Access. This screen is the SAP user menu screen, also known as the SAP window. As we learned earlier, when we log on to an SAP system, a new session begins. The status bar displayed at the bottom of the screen shows the number of sessions opened by a user. The SAP user menu enables you to perform multiple tasks by allowing you to work on multiple sessions simultaneously. For example, suppose your manager asks you to generate a report while you are processing a new customer order. In such a situation, there is no need to stop processing the order: you can leave the previous session (the screen to process the new order) open on your computer and begin a new session to create the report. Moreover, you can customize the SAP user menu screen to fit your requirements; you learn more about customizing the SAP user menu screen later in this chapter.
The mySAP ERP system is designed as a client system, i.e., you can operate the system from any computer that has the SAP GUI installed and is connected to the SAP database. For example, if you are visiting your distribution plant and realize that you forgot to perform a task at your own plant, you can perform that job right at the distribution plant, because SAP recognizes you on the basis of your user name and password.
The SAP user menu consists of the following two folders:
 Favorites— Stores the list of favorites, i.e., frequently visited transaction codes or web addresses.
 SAP Menu— Enables a user to work on the SAP system according to the roles and authorizations provided by the administrator.
Figure 3.1 shows the SAP Easy Access screen containing the Favorites and SAP Menu folders:

Figure 3.1: The SAP Easy Access screen

As shown in Figure 3.1, the SAP Menu folder contains the following eight subfolders:
 Office
 Cross-Application Components
 Collaboration Projects
 Logistics
 Accounting
 Human Resources
 Information Systems
 Tools
Note: The number and names of the subfolders displayed in the SAP Menu folder may differ from those displayed in your SAP Easy Access screen, as they appear according to the settings configured by the system administrator.
You can modify various settings for the SAP Easy Access screen in the Settings dialog box. The Settings dialog box is opened by selecting the Settings option in the Extras menu (the Extras menu is discussed later in this chapter). Figure 3.2 shows the Settings dialog box:

Figure 3.2: The Settings dialog box

As shown in Figure 3.2, the Settings dialog box has several check boxes with the following options:
 Display favorites at end of list
 Do not display menu, only display favorites
 Do not display picture
 Show technical name
You can select one or more of the available options by selecting the corresponding check boxes. Note that when the Do not display picture check box is unchecked, the SAP Easy Access screen also shows a graphic on the right side of the screen, as shown in Figure 3.3:

Figure 3.3: The graphic and split bar in the SAP Easy Access screen

As shown in Figure 3.3, the SAP Easy Access screen consists of a graphic and a split bar. You can hide or deactivate this graphical image by selecting the Do not display picture check box in the Settings dialog box (Figure 3.2). Another way to hide the graphic is to drag the split bar from the center to the right side of the SAP Easy Access screen, as shown in Figure 3.4:
Figure 3.4: Dragging the split bar to hide the graphic
Exploring the GUI of the SAP System

SAP GUI is the graphical interface, or client, of an SAP system. It is software that runs on a Windows, Apple Macintosh, or UNIX desktop and allows you to access SAP functionality in SAP applications, such as mySAP ERP. SAP GUI also helps exchange information between SAP users.
Figure 3.5 shows the general components of an SAP GUI screen of the SAP ERP Central Component (SAP ECC) system:

Figure 3.5: Components of an SAP screen

Figure 3.5 shows a menu bar, a standard toolbar, the title bar showing the title of the screen, an application toolbar, the working area, and the status bar.

The Screen Header

The screen header is located at the top of the main screen (see Figure 3.5). It includes the screen banner, along with other toolbars. Figure 3.6 shows the screen header:

Figure 3.6: Various toolbars in the screen header of the SAP screen

As shown in Figure 3.6, the screen header of any screen in SAP GUI consists of the following elements:
 Menu bar
 Standard toolbar
 Title bar
 Application toolbar
Let's discuss each of these elements in detail.

The Menu Bar

The menu bar contains menus to perform functional and administrative tasks in the SAP system. For example, generating reports is a functional task, and assigning passwords is an administrative task. The menus in the menu bar appear according to the opened screen or transaction. In the SAP Easy Access
screen, the menu bar contains six menus: Menu, Edit, Favorites, Extras, System, and Help. In addition, it contains a small icon at the extreme upper-left corner, as shown in Figure 3.7:

Figure 3.7: Icon displayed at the upper-left corner

Using this icon, you can control SAP GUI by performing various tasks, such as creating a new session and closing a transaction. When the icon is clicked, a drop-down menu appears, as shown in Figure 3.8:

Figure 3.8: Selecting the Stop Transaction option

Select the desired option from the drop-down list to perform the required task.
Table 3.1 describes the default menu options:

Table 3.1: Default menu options
 System— Contains the functions that affect the working of the SAP system as a whole, such as creating a session or user profile and logging off.
 Help— Provides online help.

Table 3.2 describes the standard menu options available for all SAP applications:

Table 3.2: Standard menu options
 <Object>— Contains the functions that affect the object as a whole, such as Display, Change, Print, or Exit. (Components of a program or application are considered as objects in
SAP.) It is named after the object currently in use, such as Material.
 Edit— Allows you to edit the current object by providing various options, such as Select and Copy. The Cancel option allows you to terminate a task without saving the changes.
 Goto— Allows you to navigate through screens in the current task. It also contains the Back option, which helps you navigate one level back in the system hierarchy. Before going back, the system checks the data you have entered on the current screen and displays a dialog box if it detects a problem.

Additional menu options for specific SAP module functionality are given in Table 3.3:

Table 3.3: Additional menu options
 Extras— Allows you to use additional functions to create or modify the current application.
 Environment— Allows you to display additional information about the current application.
 View— Displays the application or object in different views.
 Settings— Sets user-specific transaction parameters.
 Utilities— Performs object-independent processing, such as the delete, copy, and print functions.

The Standard Toolbar

The standard toolbar is an important element of SAP GUI. It is located below the menu bar and provides a range of icons with general SAP GUI functions, as well as a command field to enter a transaction code. Various types of icons are found on the standard toolbar. These icons give access to common functions, such as Save, Back, Exit, and Cancel, as well as navigation help functions. The command field used to enter the transaction code is located to the right of the Enter icon.
By default, the command field remains closed. To display the command field, click the arrow icon located to the left of the Save icon, as shown in Figure 3.9:

Figure 3.9: Displaying the arrow button to open the command field

When you click the arrow icon, the command field expands, and the desired transaction code can be entered.
Figure 3.10 shows the expanded command field:
Figure 3.10: Expanded command field

Figure 3.10 shows the command field where the transaction code for a particular application is entered, such as the SE38 transaction code, which opens the ABAP Editor.
Note: A transaction code is a parameter of four alphanumeric characters used to identify a transaction in the R/3 system. In SAP R/3, every function has a transaction code associated with it. To call a transaction, enter the transaction code in the command field at the upper-left corner of your R/3 window and click the Enter button or press the ENTER key. Use /N before the transaction code to end the current task and start another corresponding to the transaction code entered. For instance, /NS000 ends the current task and starts the transaction S000; the S000 transaction code is used for the initial screen of SAP. A transaction code is not case-sensitive, which means you can enter it in either lowercase or uppercase. Whether you can use a particular transaction code to navigate to a screen depends on your system authorization. If you want to find the transaction code for a particular function, select the Status option in the System menu; you can find the required transaction code in the transaction field of the status bar.
The SAP icons displayed on the standard toolbar provide quick access to commonly used SAP functions. If a function is not available for use on a particular screen, its corresponding icon appears gray on the toolbar. Table 3.4 describes the various icons of the standard toolbar of the SAP R/3 system, which perform different tasks according to the user's requirements.

Table 3.4: Standard toolbar icons (Control Name — Keyboard Shortcut — Description)
 Enter — ENTER — Confirms the data that the user has selected or entered on the screen. It works in the same manner as the ENTER key, but does not save the work.
 Save — CTRL+S — Saves the changes or data in the SAP system.
 Back — F3 — Navigates to the previous screen or menu level.
 Exit — SHIFT+F3 — Exits from the current menu or system task.
 Cancel — F12 — Cancels the data entered in the current system task.
 Print — CTRL+P — Prints a document.
 Find — CTRL+F — Searches the open document or display screen for words and alphanumeric combinations.
Table 3.4: Standard toolbar icons Open table as spreadsheet Icon Control Name Keyboard Description Shortcut Find Next CTRL+ G Finds the next instance of a previously searched item. First Page CTRL Enables to navigate to the first page. +PAGE UP Previous Page PAGE UP Enables to scroll one page up. Next Page PAGE Enables to scroll one page down. DOWN Last Page CTRL+ Enables to scroll to the last page. PAGE DOWN Help F1 Provides help on the field where the cursor is positioned. Create New None Creates a new SAP session. Session Customized ALT+ F12 Modifies the layout and settings of the SAP system. Local LayoutThe Title BarThe title bar displays the title of the opened screen in the SAP system. Figure 3.11 shows the title of theSAP Easy Access screen in the title bar:Figure 3.11: The title barIn Figure 3.11, the title bar displays the title of the first screen of the SAP system, i.e., SAP Easy Access.Moreover, the title bar is a part of the screen header and lies between the standard toolbar andapplication toolbar (shown previously in Figure 3.6).The Application ToolbarThe application toolbar contains various icons and buttons that help you to create and maintain theapplications in the SAP system. These icons and buttons are application-specific, as different applicationshave different requirements and functionalities. The application toolbar is located just below the title bar.Figure 3.12 shows the application toolbar:Figure 3.12: The application toolbar
The Screen BodyThe area between the screen header and the status bar is known as the screen body (see Figure 3.2). Itacts as a primary window where the user actually performs the task. Every transaction screen contains ascreen body, and different applications have different screen bodies.A screen body has several entry fields and a work area. In the entry field, you can enter, change, ordisplay information to accomplish your system task. SAP R/3 has the following three field types: Required fields— Specifies that data must be filled by a user. Default fields— Contain predefined data. However, the predefined data can be overwritten depending on the system task or your SAP profile. Optional field— May or may not contain data that has to be filled by the user, depending upon the task requirement.Figure 3.13 shows examples of the preceding fields within the screen body of an SAP screen:Figure 3.13: Required, default, and optional fields in the screen bodyIn Figure 3.13, you can see that the Transp.Table field is the default field, the Short Descriptionfield is the required field, and the Keys and the Initial Values check boxes are optional fields.The Status BarThe SAP status bar provides information about applications and programs being executed in the SAPsystem. The information may include messages about the status of executing a program, opening atransaction, or error messages. Figure 3.14 shows the SAP status bar:Figure 3.14: The SAP status barIn Figure 3.14, you can see that the system messages are defined on the left-hand side of the status bar.Note that the messages are flashed once and then displayed in the status bar. 
Table 3.5 describes thestatus messages with their icons:Table 3.5: Status message with icons Open table as spreadsheetIcon Message Indicating Example Error Make an entry in all required fields | s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267165261.94/warc/CC-MAIN-20180926140948-20180926161348-00459.warc.gz | CC-MAIN-2018-39 | 87,043 | 57 |
https://wiki.ihe.net/index.php/Standardized_Operational_Log_of_Events | code | Standardized Operational Log of Events
Standardized Operational Log of Events (SOLE)
Efficient businesses use business intelligence tools to manage their operations. The application of these tools to managing medical care has been limited, in part because the information often resides in several different systems and there are no standard ways to obtain it. The SOLE profile defines a way to exchange information about events that can then be collected and displayed using standard methods.
2. The Problem
The need for standard ways to exchange information about the status of a radiology department has been recognized for years, and was formalized in SIIM's Workflow Initiative in Medicine (SWIM). Dashboards have been developed to address this need, but the data they rely on is stored in disparate systems, and in non-standard forms. SWIM worked with RSNA to establish a standard lexicon for workflow events in RadLex. Now, there needs to be a standard way to communicate event information.
Healthcare providers have a strong desire to increase throughput and efficiency, both to improve the quality and timeliness of care and to control costs. Such process improvement efforts depend on being able to capture workflow events and apply business intelligence tools. Such initiatives face several problems:
- Event information that is to be logged comes from many different systems but there is no easy way to collect and compile the events into a single collection
- The different systems recording the particular events being logged may have different understandings of the definition of the event, time point or measurement; the result is:
- Within a single institution, data is non-uniform across systems, degrading the value of the information
- Across institutions, it is hard to compare to evaluate best practices
3. Key Use Case
The use case is general, but there is particular interest in the imaging department, which would provide a concrete test case.
- Systems which perform events on a profiled list would send messages in a profiled format containing standardized details to an operational event log server.
- The Event Repository would capture and record the received event messages
- Event Consumers, such as business intelligence tools, workflow engine-management tools, and tools related to performance measurement, would retrieve blocks of event messages from the event log server.
We note here that this proposal focuses on imaging department use. However, we believe that most, if not all, can be applied outside the imaging department. Therefore, we will focus on imaging, but be cognizant of non-imaging potential use.
4. Standards and Systems
Systems (that would submit events or analyze events):
- Speech Recognition
- Imaging Devices
- BI tools
- ATNA log for submission
  - ATNA has been successful in capturing and providing access to information needed for managing security, and has been widely adopted. We would be happy to use the same or a similar mechanism for operational events, though whether it can meet the unique performance requirements must still be validated.
- SWIM terms for coding
  - The lists and definitions of events would likely be extended over time as new operational areas of the hospital become interested in such logging and analysis. The first draft of the profile could leverage the SWIM (SIIM Workflow Initiative in Medicine) terms that define a number of events and states occurring in an imaging department.
  - The SWIM list has recently been adopted into the RSNA RadLex codeset, and several vendors have participated in pilot demonstrations of capturing such operational logs. It is not clear whether there is an equivalent lexicon for events outside of medical imaging, but this profile might stimulate the development of such a lexicon. The lack of such a lexicon should not present an impediment to achieving the benefits described above. As standard event terms are created, they can be recognized by IHE as an acceptable code set and put into practice.
- RESTful ATNA Query for query/retrieve
5. Technical Approach
There are two potential approaches to consider: One option is to use a log file similar to the ATNA log. The advantage is that the technology is well understood and simple. The challenge is that the performance may not be sufficient. It is also likely that a consumer will want a subset of events (e.g. all events in the last XXX seconds, or all events in the last hour from facility YY, or all events associated with an exam having a modality code of ‘MR’ for exam type).
A RESTful interface may be the most appropriate mechanism for the request, and possibly for the storage mechanism.
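To make the RESTful option concrete, the sketch below builds a hypothetical operational event message and a filtered consumer query of the kind described above ("all MR events from facility YY since a given time"). The endpoint URL, field names, and event code are illustrative assumptions, not defined by the profile:

```python
import json
from urllib.parse import urlencode

# A hypothetical operational event an Event Creator might submit to the
# Event Repository. Field names and the event code are illustrative only.
event = {
    "eventCode": "exam-begun",  # placeholder, not an actual SWIM/RadLex code
    "timestamp": "2016-05-12T14:03:22Z",
    "facility": "YY",
    "modality": "MR",
}
message = json.dumps(event)

# An Event Consumer requesting a subset of events with query parameters.
params = urlencode({
    "facility": "YY",
    "modality": "MR",
    "since": "2016-05-12T13:03:22Z",
})
query_url = "https://repository.example.org/events?" + params
print(query_url)
```

A real deployment would define the transport security, the event code set, and the query grammar in the profile's transactions; this only illustrates the shape of the exchange.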
The actors for the SOLE profile are illustrated in Figure 1.
- Actor: Event Creator
  - Role: Sends event information to the Event Repository.
- Actor: Event Repository
  - Role: Stores events sent from Event Creators, and responds to requests for event information from Event Consumers.
- Actor: Event Consumer
  - Role: Requests events from the Event Repository. The results would typically be used for displaying status or performance in a department, or for executing new events or work
Figure 1 SOLE Actor Diagram:
Profile Status: Trial Implementation
http://forums.xbox-scene.com/index.php?/topic/612125-sonic-anyone/ | code | Posted 11 July 2007 - 01:59 AM
Today I took apart my old sega genesis and decided to spice it up a bit. Here is a quick picture of what it is going to look like.
Anyway, I was just curious (being the Sonic lover that I am, lol), has anyone modded their xbox 360 to a theme of a retro video game character? Like has anyone painted a Mario or Sonic on their case before? I'm laughing right now thinking about it, but if anyone has done this please post it. That would be super sonic sweet!
Posted 11 July 2007 - 02:07 AM
this is the Xbox360 Case / Hardware modding.
Posted 11 July 2007 - 02:27 AM
Posted 11 July 2007 - 07:44 PM
https://www.aai-salzburg.at/en_events_students-corner.htm | code | What motivates me to be involved in making the world a better place? What keeps me from it? Where do I want to be involved and what do I want to change? How can I be engaged more effectively? Who do I want to work with? The workshop centers on these and other questions. It reflects on your individual space of agency and helps finding ways to be involved more effectively. By offering the workshop to a young and international audience and to local agents of change, the dimensions of agency and participation should be expanded for both groups. | s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347426801.75/warc/CC-MAIN-20200602193431-20200602223431-00064.warc.gz | CC-MAIN-2020-24 | 546 | 1 |
http://news.sys-con.com/node/3088768 | code | |By RealWire News Distribution||
|May 16, 2014 04:51 AM EDT||
Leading Analyst Firm Gartner Recognizes Openwave Mobility in Their Prestigious Annual Selection of Cool Vendors
REDWOOD CITY, Calif - May 16, 2014 - Openwave Mobility, a software innovator enabling operators to manage and monetize mobile data, announced today that they have been listed as a 'Cool Vendor' by Gartner, Inc. in the report 'Cool Vendors in Communications Service Provider Operational and Business Infrastructure, 2014'.
The 'Cool Vendor' recognition is designated to organizations Gartner recognizes as fresh and innovative constituents in their respective markets.
Openwave Mobility's Promotion and Pricing Innovation (PPI) is a mobile data monetization solution that enables operators to target customized, time-sensitive promotions and personalized data plans to subscribers. PPI also provides a real-time push notification system that alerts subscribers who are approaching data limits or entering areas where roaming charges apply. Instead of communicating with users by SMS, self-care applications, or alert emails, operators utilizing PPI are able to offer total real-time engagement, or 'Policy Engagement', using rich media, to consumers. PPI is available for deployment in-network or in the cloud and has been successfully deployed by multiple mobile operators, including C Spire, the largest privately owned wireless communications provider in the USA.
"Our inspiration for PPI stemmed from our desire to create a product mutually beneficial for both mobile operators and subscribers in the Policy Control and Charging space," said John Giere, CEO at Openwave Mobility. "Not only does PPI offer operators a mode of producing new revenue streams through ad-hoc, personalized data plans, but also gives subscribers total management control of their data usage. We believe that Gartner's recognition of PPI as a stand-out product underscores that Openwave Mobility is providing the right solutions to mobile operators to help them successfully manage and monetize the increase in network traffic."
Gartner "Cool Vendors in Communications Service Provider Operational and Business Infrastructure, 2014'" by Norbert J. Scholz | Martina Kurth | Charlotte Patrick | Kamlesh Bhatia | Mentor Cana | Neil Osmond, May 7, 2014
Gartner does not endorse any vendor, product or service depicted in our research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
About Openwave Mobility
Openwave Mobility enables operators to manage and monetize mobile data using the industry's most scalable, Layer7 SDN/NFV platform. The company operates within the policy control and charging (PCC) space and delivers Policy Engagement with solutions that include Video Optimization to eliminate data congestion, and Mobile Data Charging and Analytics to increase ARPU through personalized data plans. These solutions are supplemented by Subscriber Data Management, providing a single consolidated store for policy and subscriber data, and Mobile Analytics, providing subscriber segmentation and Business Intelligence.
Openwave Mobility delivers over 40 billion transactions daily and over half a billion subscribers worldwide use data services powered by its solutions. The company's global customer base consists of over 40 of the largest communication service providers including AT&T, Du, KDDI, Rogers, Sprint, Telus, T-Mobile, Telefonica, Telstra, Virgin Mobile & Vodafone. Openwave Mobility is owned by Marlin Equity Partners, a leading investment firm with over a billion dollars of capital under management that has publicly committed to building the company through expanded investment in R&D. The company has built a robust portfolio of Intellectual Property, which is growing month-by-month.
# # #
Openwave Mobility and the Openwave Mobility logo are trademarks of Openwave Mobility Inc. All other trademarks are the properties of their respective owners.
For further information
Sonus PR for Openwave Mobility
For APAC and EMEA Inquiries
Tel: +44 20 7851 4850
https://github.com/tbenthompson/tectosaur | code | Observe the tectonosaurus and the elastosaurus romp pleasantly through the fields of stress.
Tectosaur is an implementation of the elastic boundary element method, oriented towards problems involving faults. It can certainly be used for non-fault elasticity problems, too! Tectosaur is built on a new numerical integration methodology for computing Green's function integrals. This allows a great deal of flexibility in the problems it can solve. The use of efficient algorithms like the Fast Multipole Method and the use of parallelization and GPU acceleration lead to very rapid solution of very large problems. To summarize the practical capabilities of tectosaur:
- Solving complex geometric static elastic boundary value problems, including:
  - earth curvature
  - material property contrasts
- Problems with millions of elements can be solved in minutes on a desktop computer.
- No volumetric meshing is needed, which is ideal for problems where the fault is the topic of interest.
- Rapid model iteration
Further documentation and examples will absolutely be available in the future! Until then, however, tectosaur will be rapidly changing and developing and any users should expect little to no support and frequent API breaking changes.
- Tectosaur requires Python 3.5 or greater.
- You will need to have either PyCUDA or PyOpenCL installed. If you have an NVidia GPU, install PyCUDA for best performance. Try running `pip install pycuda`. If that fails, you can follow the more detailed instructions on the PyCUDA wiki. And here are detailed instructions for installing PyOpenCL.
- Install numpy: `pip install numpy`
- Clone the repository: `git clone https://github.com/tbenthompson/tectosaur.git`
- Enter that directory and run `pip install .`
Running the examples
- Check that Jupyter is installed!
- Launch a Jupyter notebook or lab server.
- Navigate to
- Open and run the examples! | s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583835626.56/warc/CC-MAIN-20190122095409-20190122121409-00221.warc.gz | CC-MAIN-2019-04 | 1,852 | 22 |
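Before running the examples, a quick way to check which GPU backend is importable is sketched below. It uses only the Python standard library and is a generic check, not part of tectosaur's API:

```python
import importlib.util

def available_gpu_backends():
    """Return the names of GPU compute backends that appear to be installed."""
    candidates = {"pycuda": "PyCUDA", "pyopencl": "PyOpenCL"}
    return [label for module, label in candidates.items()
            if importlib.util.find_spec(module) is not None]

backends = available_gpu_backends()
if backends:
    print("Found GPU backend(s):", ", ".join(backends))
else:
    print("No GPU backend found - install PyCUDA or PyOpenCL first.")
```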
https://libbyodai.co.uk/wallvis.html | code | An interactive wall exhibition. Users can input their value system using a controller which then drives the wall visualisation.The project runs on a Node Red server on a Raspberry Pi, driving ESP8322 Arduino modules through MQTT.
Group project with Ailie Rutherford and the University of Edinburgh; I was responsible for programming and concept design.
I was also part of the initial testing phases of the wall visualisation project for Design Informatics at the University of Edinburgh. I was responsible for the module concept design and the module programming. These modules were then workshopped with industry partners.
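As a sketch of how such a pipeline can work, the function below maps one controller reading to the (topic, payload) message a server might publish over MQTT to a single display module. The topic layout and payload field are hypothetical illustrations, not the exhibition's actual scheme; a real deployment would hand the result to an MQTT client such as paho-mqtt or a Node-RED MQTT-out node:

```python
import json

def build_module_message(module_id, value):
    """Map a controller reading in [0, 1] to a (topic, payload) pair.

    The topic layout "wall/module/<id>" and the "brightness" field are
    illustrative assumptions, not the project's actual scheme.
    """
    clamped = max(0.0, min(1.0, value))        # keep readings in range
    topic = "wall/module/{}".format(module_id)  # one topic per module
    payload = json.dumps({"brightness": clamped})
    return topic, payload

topic, payload = build_module_message(3, 0.8)
print(topic, payload)  # wall/module/3 {"brightness": 0.8}
```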
https://lists.quantum-espresso.org/pipermail/users/2009-March/012089.html | code | [Pw_forum] Spacing in band structure
greatnaol at gmail.com
Wed Mar 25 12:37:27 CET 2009
We were drawing the band structure for a hexagonal crystal structure along the
path Г-M-K-Г-A. When I want to make the spacing between M-K narrower and Г-M
wider, I increase the number of k-points along Г-M and decrease the k-points
from M-K, but it doesn't work. No matter what density of k-points I use, the
spacing between the high-symmetry points does not vary the way that I want.
Is this related to the application that I am using, i.e. 'Image Magic'? What
do you advise me, then? By the way, let me add one question: how can I remove
the dots on the band structure so that it will be smooth?

NB: I am facing the same problem with the phonon dispersion.
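One point worth noting about the question above: plotting tools normally place each computed point at the cumulative |Δk| distance along the path, so the on-screen spacing between high-symmetry points is fixed by the geometry of the path, not by how many k-points are sampled on each segment (adding points only adds dots; it does not widen a segment). The following generic sketch of that x-axis construction uses made-up k-points; it is not Quantum ESPRESSO code:

```python
import math

def kpath_x_coordinates(kpoints):
    """x-axis positions for a band plot: cumulative distance along the k-path."""
    xs = [0.0]
    for a, b in zip(kpoints, kpoints[1:]):
        xs.append(xs[-1] + math.dist(a, b))  # add the length of each step
    return xs

# A hypothetical path: three points on the first segment, two on the second.
path = [(0.0, 0.0, 0.0), (0.25, 0.0, 0.0), (0.5, 0.0, 0.0), (0.5, 0.25, 0.0)]
xs = kpath_x_coordinates(path)
print(xs)  # [0.0, 0.25, 0.5, 0.75] - segment lengths, not point counts, set the spacing
```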
Addis Ababa University, Materials science programe
East Africa (Ethiopia)
More information about the users | s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314732.59/warc/CC-MAIN-20190819114330-20190819140330-00422.warc.gz | CC-MAIN-2019-35 | 848 | 16 |
https://pyra-handheld.com/boards/threads/our-new-machine-pandora.35865/page-282 | code | This thread must NEVER die, nor be moved. Doing so is near the same as murder. For example, people often put down animals when they become violent, or are so sick that they do not enjoy like anymore. Many threads can be compared to that, killed because of flame wars, or because the discussion is taken over by someone who ruins it for everyone else.
However, people do NOT put down animals simply because they have become old. As long as the animal is happy and enjoying life, and is not doing anything to upset his owners (or the other people around him), then it is clearly evident that they should live. Though this thread has gone hopelessly off topic, it does occasionally come back to make a good point. And while some members may say "But it DOES upset me! Kill it!", there is nothing that FORCES them to come into this thread. Hence, this thread should live until it dies of its own accord, or becomes offensive to the kind of people who ARE posting in this thread.
I understand that comparing a thread to a dog is ridiculous, but it was the best analogy I could muster. So, please, continue posting random stuff! In 6 months, who knows, we may have as many as 1000 pages! | s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738888.13/warc/CC-MAIN-20200812083025-20200812113025-00184.warc.gz | CC-MAIN-2020-34 | 1,181 | 3 |
https://photolens.tech/how-to-move-a-movieclip-along-a-custom-path-in-flash/ | code | I got a movieclip, and I got a custom path (some curves imported from Illustrator)
How can I “attach” my movieclip so that each frame it advances over a segment of the curve?
If you need a more detailed explanation I can draw a picture.
- Draw a path on its own layer
- Cut your path Cmd / Ctrl+X
- Have your MovieClip Layer Selected
- Create the desired length of the animation, then select your MovieClip, right-click, and choose Create Motion Tween
- Paste in place your path
- Go back to Frame 1 and then in the Properties panel under Rotation choose Orient to Path | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00212.warc.gz | CC-MAIN-2022-40 | 566 | 10 |
https://datascience.stackexchange.com/questions/31368/how-to-treat-undefined-input-space-in-supervised-learning-algorithms | code | In a supervised learning problem (like a neural network) with binary output, we supply input data for which the output values are defined 0 and 1 (indicating probabilities of occurrence of an event). But the input data not necessarily cover the entire input space. How does the algorithm treat the rest of the input space?
For example, for an annular dataset in the figure, there are two sets of data in the 2-D input space annotated as 0s and 1s. But there are a lot of input space which is not defined (neither 0 nor 1). What output can be expected at the prediction stage for an input that does not lie around the input data clouds? | s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323584886.5/warc/CC-MAIN-20211016135542-20211016165542-00546.warc.gz | CC-MAIN-2021-43 | 635 | 2 |
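To illustrate what typically happens, the sketch below fits a tiny logistic-regression model (plain Python, gradient descent on a single radius feature) to ring-shaped data like that described, then queries a point far outside both rings. The model still returns a confident probability there: supervised learners simply extrapolate their decision function over the undefined regions; they have no built-in notion of "no data here". The data and model are illustrative, not from the question:

```python
import math
import random

random.seed(0)

# Ring-shaped training data: inner ring (label 0) near radius 1,
# outer ring (label 1) near radius 2.
def ring_samples(radius, label, n=100):
    samples = []
    for _ in range(n):
        angle = random.uniform(0.0, 2.0 * math.pi)
        r = radius + random.gauss(0.0, 0.05)
        samples.append(((r * math.cos(angle), r * math.sin(angle)), label))
    return samples

data = ring_samples(1.0, 0) + ring_samples(2.0, 1)

# Logistic regression on one feature (distance from the origin),
# trained with full-batch gradient descent.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    grad_w = grad_b = 0.0
    for (x, y), label in data:
        r = math.hypot(x, y)
        p = 1.0 / (1.0 + math.exp(-(w * r + b)))
        grad_w += (p - label) * r
        grad_b += (p - label)
    w -= lr * grad_w / len(data)
    b -= lr * grad_b / len(data)

def predict(x, y):
    """Predicted probability of class 1 at point (x, y)."""
    r = math.hypot(x, y)
    return 1.0 / (1.0 + math.exp(-(w * r + b)))

print(predict(1.0, 0.0))   # low: inside the labeled inner ring
print(predict(2.0, 0.0))   # high: inside the labeled outer ring
print(predict(50.0, 0.0))  # also high and confident, yet far from any data
```

The last prediction is the point of the question: nothing in the training objective penalizes confident outputs in regions with no data, which is why out-of-distribution detection usually requires an extra mechanism (density estimation, calibrated uncertainty, etc.) on top of the classifier.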
https://www.simplicidade.org/notes/2006/03/11/lisbon.pm-tech-meeting-and-a-new-course/ | code | Last thursday we had another technical meeting of the Lisbon.pm group. It was a great success, with 29 people attending.
There were three presentations: one by João Gomes about Catalyst, another by me about POE with an example of process control, and the last one by Miguel Duarte about when not to use Perl, which was, as you can expect, a hot topic.
If you are interested in Perl and live or work around Lisboa, please join our mailing list (instructions can be found at the Lisbon.pm website).
The meeting was organized by José Castro, and the space (and first round of drinks afterwards) was sponsored by Log. Kudos to them both.
I've been the Lisbon.pm leader for some time now, and since the reactivation of the group last September, our social meetings have been better each time and our technical meetings have also been great.
Yet most of the work of organizing our events is being done by José, so it's only fair to make him the leader of the group. So after forcing^H^H^H^H^Htalking with him about this, he finally accepted.
I think the group is now in better hands. | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571210.98/warc/CC-MAIN-20220810191850-20220810221850-00497.warc.gz | CC-MAIN-2022-33 | 1,087 | 7 |
https://www.affordablecebu.com/dir/computers/vista_or_xp_for_dev_machine_closed/5-1-0-32326 | code | Vista or XP for Dev Machine [closed]
"Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 7 years ago.
I am about to get a new PC from work, and it will include the option to have either Vista Business as the OS, or a downgrade to XP Pro. Aside from a tiny bit of testing, I have never used Vista, but overall I have heard many more bad reports than good regarding Vista. I don't think that hardware will be an issue (Intel Core Duo T9300, 4GB RAM, 256MB NVIDIA) in terms of performance. I am just uneasy about using Vista for my main dev system given its history, when I have the opportunity to keep on using XP.
So is there anyone here who has experience with both Vista and XP as the OS on your dev machine? If you could choose one over the other, which would you go with? I will need to use Visual Studio 2003/2005/2008, SQL Server 2005, Virtual Machines, Office, as well as lots of multi-tasking and multi-tab web browsing.
(Note: I am not interested in Microsoft-bashing. If you haven't used Vista but have just heard bad things about it then you have the same level of experience as me and you probably shouldn't be answering the question).
Edit: As I am getting this computer from work I would prefer to use one of the operating systems offered: 32 bit XP PRO or 32 bit Vista." | s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301488.71/warc/CC-MAIN-20220119185232-20220119215232-00664.warc.gz | CC-MAIN-2022-05 | 1,452 | 9 |
http://iiscs.wssu.edu/drupal/?q=node/12 | code | TM4L Editor is an authoring environment that supports the creation, maintenance, and use of ontology-aware courseware based on the ISO standard – Topic Maps. As an alternative to conventional authoring systems TM4L is aimed at facilitating the integration of already existing learning resources on the web. TM4L is based on TMAPI, a proposed common programming interface for topic map processors. For additional information see the related publications. This product includes software developed by the TM4J Project.
Some of the new features in the latest release of TM4L include:
TM4L Editor distribution file includes everything you need to run the Editor (except the Java Runtime Environment). You can download it from the TM4L project page.
TM4L is frequently updated, check back periodically for new versions.
Note. In some cases Windows installations may require the JRE in the C:\ directory.
The default language of TM4L is English. Besides English, TM4L supports other languages, currently including Chinese (Taiwan), Japanese (Japan), French (France), and Nepali (Nepal). To change the language option:
To create a new topic map select the Topic Map tab in the Editor. The fields on this tab describe the topic map to be created and its author. All fields are optional except Subject (Main Topic), which is used to provide a reference to the new topic map. After entering the metadata, click the Create button to create the new topic map. You can add or update the metadata of an earlier created topic map and save it by clicking the Save Info button.
The first step in the design of a learning content repository is the creation of a conceptual structure, ontology, to be used for further classification of the learning resources. The classification structure is often composed of hierarchies of concepts (topics). Different types of relations (known as associations in Topic Maps) between topics are associated with three well-known abstraction mechanisms: a superclass-subclass hierarchy (taxonomy), whole-part hierarchy (partonomy) and class-instance relationships (topic typing). Using these abstraction mechanisms it is possible to model the semantic interconnectedness of topic (learning object) classes and their instances in an ontological model.
An ontology model typically consists of concepts and relationships between these concepts. One of the key relationships in a learning content collection is whole-part (part-of). Though it is possible to distinguish between different kinds of whole-part, e.g. structural whole-part and functional whole-part, we don't distinguish between them in our model. Another fundamental relationship, both in a learning content collection and in modeling the subject domain to be learned, is superclass-subclass (is-a). TM4L provides significant support for creation of "whole-part", "class-subclass", and "class-instance" relationships by supporting three different views of the topic map ontological structure based on them.
Building of a content taxonomy or partonomy is facilitated by the transitive property of the superclass-subclass and whole-part relationships. Note however that not all ontologies are trees in the formal sense according to the inheritance hierarchy. TM4L handles the case where these relationships are discontinuous, thus it is able to visualize forests as well as trees. Nodes with more than one parent in the whole-part and type-instance hierarchy are indicated in the topic tree with an asterisk (*).
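The multi-parent marking and the transitive hierarchy described above can be expressed over a plain set of (parent, child) edges. The sketch below is not TM4L source code, and the course-ontology edges are invented for illustration; it only shows the two checks in isolation.

```python
# Illustrative sketch: with transitive relations such as superclass-subclass,
# the hierarchy is a DAG rather than a strict tree. TM4L marks topics with
# more than one parent with an asterisk; the same check can be expressed
# over a set of (parent, child) edges.

def multi_parent_topics(edges):
    """Return topics that have more than one parent in the given edges."""
    parents = {}
    for parent, child in edges:
        parents.setdefault(child, set()).add(parent)
    return {t for t, ps in parents.items() if len(ps) > 1}

def ancestors(topic, edges):
    """Transitive closure upward: all ancestors of a topic."""
    direct = {}
    for parent, child in edges:
        direct.setdefault(child, set()).add(parent)
    result, stack = set(), [topic]
    while stack:
        for p in direct.get(stack.pop(), ()):
            if p not in result:
                result.add(p)
                stack.append(p)
    return result

# Hypothetical course ontology: "Prolog Syntax" sits under two parents,
# so it would be shown with an asterisk in the topic tree.
edges = [("Logic Programming", "Prolog"),
         ("Prolog", "Prolog Syntax"),
         ("Programming Languages", "Prolog Syntax")]

print(multi_parent_topics(edges))                 # {'Prolog Syntax'}
print(sorted(ancestors("Prolog Syntax", edges)))
# ['Logic Programming', 'Programming Languages', 'Prolog']
```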
As already mentioned, the TM4L Editor incorporates three views: Topics Partonomy, Topics Taxonomy and Topics Typing. The first view displays a whole-part tree, the second displays a superclass-subclass tree, while the last displays a class-instance structure. With this enhancement we intend to provide alternative insights into the learning content structure.
The distinction of the above viewpoints is essential during the Topic map creation. A user can edit/view the topic map in a pre-selected view. The Topic map to be edited using a particular view must be loaded after selecting that particular view.
To select a particular view:
After selecting a specific view, the corresponding relationship type is considered the default and will not be displayed in the Relationship panel. Instead, when a new topic is added as a child to a parent in the topic tree, a relationship of the default type will be automatically created between the parent and the child topics. Thus choosing Topics Partonomy results in updating the whole-part relation type collections, Topics Taxonomy the superclass-subclass relation type collections, and Topics Typing the class-instance relation type collections. To switch to editing in a new view, the topic map must be reloaded after selecting the desired view.
TM4L provides two complementary access modes to the topic collections: Subject Topics and All Topics. In the Subject Topics mode only the topics defined by the user in the Topics pane are displayed. In the All Topics mode, along with the topics defined by the user in the Topics pane, all the predefined and system topics, as well as names of the relationship types, roles, and resource types are displayed. The All Topics option is provided for extended access to the relevant aspects of the learning collection, that is, to enable the user to edit or delete topics having a complementary role in the organization of the learning collection such as Resource Type, Roles, Relation Types, etc. The TM4L default (and recommended) access mode is Subject Topics.
In order to change the access mode to the topic collection the user has to:
Warning: Modifying or deleting some of the topics listed in the All Topics mode in ways that are not compatible with XTM may result in a corrupted Topic Map. We do not recommend this mode for inexperienced users.
In order to switch to editing in a different mode:
To create a new topic select the Topics tab, then click the Create button on the left pane, fill-in the required information in the Create New Topic dialog box and click OK. The Subject Indicator field is optional and can be added at a later stage using the editing mode. A subject indicator for a topic can be the URL of a website that uniquely describes the topic. It can be used as a reference for determining the meaning of the topic. After closing the Create New Topic dialog, the newly created topic becomes the current topic, which enables us to add information to it.
After a topic is created you can add resources in the form of text or URL by clicking the Add button of the topic resources pane and providing the requested data. Similarly, you can add additional topic name(s) and variant name(s) by clicking the Add button of the topic names pane and providing the requested data.
Relationships between topics can be created in TM4L Editor in two ways: either explicitly in the Relationships panel or automatically in the Topics panel by adding a topic as a 'child' to a 'parent' topic in the topic tree that represents the topic map structure. This tree view can be based on one of the three basic relationships: whole-part, superclass-subclass and class-instance. You have to choose which view of the topic map you would like to use (partonomy tree, taxonomy tree, or typing structure) before creating or opening a topic map. You can change the view at any point, but after that you have to reload the topic map. Note that the default TM4L view is Topics Partonomy. To create a new subtopic of a (parent) topic:
When a topic is added as a sub-topic (child) to a (parent) topic in the topic tree, a relationship of the default type is automatically created between the parent and child topics.
The Topics pane presents the hierarchy (tree) of the topics. In order to see all available information in the Topic map about a specific topic: select the topic in the tree by clicking on it. The selected topic becomes the current topic with which you are currently working. All the information about it is presented in the right panels. All changes (insertions, deletions, replacements) are performed on the selected topic. To unselect a selected topic double-click on it.
Typically the actions for adding new topics are similar to editing existing topics, which are presented in more details in the following section along with the functionality specific to editing. Note that the item to be edited has to be selected prior to editing.
TM4L allows inspection of all characteristics, associations and occurrences in which a selected topic is involved. This feature is particularly useful during editing (modification) sessions since it provides a “close up” view of a given topic, complementary to the hierarchical, relational and theme views. To view the complete list of the involvement of a topic and its characteristics:
Note. If you are unable to see the complete descriptions of some items related to the selected topic, resize the popping up window to enable all items to be displayed properly in the provided boxes.
You can add or edit a characteristic of the current topic. In order to do so first you have to make the topic current by selecting it in the topic tree in the left pane.
Name and Subject Indicator. To edit the topic name or subject indicator press the Edit button in the Topic pane and then edit the fields as needed.
Parent Topics. To add a new parent, select a topic from the topic tree in the left pane and then click the Add button in the Parent Topics pane. From the resizable Add Parent dialog box select the parent topic. Notice that topics with multiple parents are displayed in the topic tree with an asterisk attached to the end of the topic name, e.g. Prolog Syntax*. To delete an existing parent of a given topic, select the corresponding parent field of the Parent Topics pane, and then click the Delete button.
Topic Resources. To add a new resource press the Add button in the Topic Resources pane and fill-in the corresponding fields shown in the opened Add Resource dialog box. To provide addressable external resources, type-in the URL in the first field (Resource URL). For internal resources (short description, definition, names, etc) type in or copy/paste the text in the second field (Resource Data). You can select one of the predefined types (from the drop down menu) for the resource to be added or enter your own type by selecting the Enter New Resource Type option.
To edit an existing resource press the Edit button in the Topic Resources pane and edit the corresponding fields of the Edit Resource dialog box.
Topic Names. To add a new name or variant press the Add button in the Topic Names pane and fill-in the corresponding fields of the opened Add Another Topic Name dialog box. For the variant names you can select one of the predefined use options from the drop down menu.
To edit existing (variant) names press the Edit button in the Topic Names field and fill-in the corresponding fields of the opened Edit Topic Name dialog box. For the variant names you can select one of the predefined use options from the drop down menu, next to each variant name field.
You can check the topic map URLs (links) integrity by selecting the Check Links option from the Tools menu. To see the report on the links status select the Links Report option from the Tools menu. You can view the report multiple times; it will contain the same information until the next time you check the links.
To create a new relation select the Relationships tab, click the Create button on the left pane and fill-in the required fields in the opened Create Relationship Type dialog box. In this dialog box you should enter the role types (maximum three with this dialog box) for each member of the relation to be created. For example, for binary relations you enter two role types and for ternary relations three role types. The Display Name field is optional (see the next paragraph). For relations with four or more members, the fourth, fifth, etc., role types can be added by clicking the Add button of the Relationship Members’ Roles and filling-in the required information in the popping up Add Member’s Role dialog box. Role types in each relation are unique: the same role type cannot be used in two or more relations.
In Topic Maps, relations are by definition bilateral i.e. for each relation between two topics, a reverse relation exists in the other direction. For example “Christo Dichev is employed by WSSU” has a reciprocal relation “WSSU employs Christo Dichev”. To capture these contextual distinctions it is possible to assign a different name ( display name in terms of TM4L) to the relation type for each role that the relation supports. In effect display names are relation type names scoped by the roles in the relation. For a binary relation such as “Employer-Employee” this approach will result in three separate names for the relation. In the unconstrained scope, the relation type will be named "Employer-Employee". For the two other names we will type them in the Display Name provided by the Create Relationship Type pop-up window for each role. Thus the role players of the relation will play also a role of context for the relation names. Assume we type for the role “Employer” the display name "employs" and for the role "Employee" the name "is employed by". Then the name "Employer-Employee" will be used as the default name for the relation. The name "employs" is to be used and displayed in the context of the role "Employer", while the name "is employed by" is to be used and displayed in the context of the role "Employee". An application may then use the role currently in focus as part of the user context when determining which is the best name to be applied.
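The role-scoped naming scheme just described can be summarized in a few lines. The sketch below is not TM4L source code; the dictionary simply mirrors the names typed into the Display Name fields of the Employer-Employee example, and `None` stands for the unconstrained scope.

```python
# Illustrative sketch: resolving a relation-type display name from the
# role currently in focus, following the Employer-Employee example.

DISPLAY_NAMES = {
    ("Employer-Employee", "Employer"): "employs",
    ("Employer-Employee", "Employee"): "is employed by",
}

def display_name(relation_type, role_in_focus=None):
    # Fall back to the default (unconstrained-scope) name when no
    # role-specific display name was provided.
    return DISPLAY_NAMES.get((relation_type, role_in_focus), relation_type)

print(display_name("Employer-Employee", "Employer"))  # employs
print(display_name("Employer-Employee", "Employee"))  # is employed by
print(display_name("Employer-Employee"))              # Employer-Employee
```

An application can then pass the role currently in focus as part of the user context to pick the most natural name, as the paragraph above suggests.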
Note that TM4L provides a set of predefined relations such as “Instance Of”, “Superclass-Subclass”, “Whole-Part”, and “Related”, considered particularly useful in a practical context. The intended meaning of these relationships, which is supposed to be incorporated in the corresponding Topic Map application, is as follows: T2 Instance of T1 - topic T2 is an instance of a topic T1; T1 Superclass-Subclass T2 - any instance of class T2 is also an instance of class T1; T1 Whole-Part T2 - the topic T2 is part of another topic T1; T1 Related T2 - there is a semantic relationship between topics T1 and T2.
These relations are displayed along with the user defined relations when the Relationships tab is selected. To create a specific relation of these four types you are only required to type in the role players for each role type as described below. Once the new relation and role types are successfully created, you can create specific relations of that type. To create a relation click the Add button in the Relationships of this type panel. It invokes the Add Relationship dialog box with two windows, Roles and Members, and a scrolling area Select Topic as a Member. (The Add Relationship dialog box is resizable, as is Add Parent, and you can resize it for a better display of the topic list.) You have to fill in the player (member) slots for each role. To do so, select the player (member) slot that is to be filled in next by clicking on the corresponding role (to the left of the corresponding member slot). This results in changing the background color of the selected role to blue. Next, from the scrolling area Select Topic as a Member select the topic supposed to play the selected role (the topics are listed alphabetically). Note that after selecting a particular topic, its name appears in the Members field, next to the Roles field.
Note that in the current implementation of TM4L even when a new relation and role types are successfully created, if you do not add at least one specific relation of that type, after reloading the topic map the new relation will not be listed among the existing relations. Instead, the newly created relation type and roles will be displayed as topics in the Topics pane.
As with topics, actions for adding new relations share a substantial part of the functionality related to editing existing relations, which are presented in the following section along with the functionality specific to editing.
To edit a given relation type you have to make it current by first selecting it in the left (Relationships) pane.
Relationship Type pane allows editing the name and subject indicator of the selected relation type. Editing is activated by clicking the Edit button.
Relationship Members Role pane allows adding, editing and deleting chosen role members of the selected relation.
Relationships of this type pane allows adding, deleting and applying themes to specific relations. By using themes you can constrain the interpretation of a certain relation within a given scope defined by the theme.
The theme concept (known as scope in Topic Maps) enables us to define a context within which some topic characteristics are valid. To create a new theme select the Themes tab, then click the Create button on the left pane and fill-in the required fields in the opened Create New Theme dialog box. With this dialog box we provide the theme name as a string and possibly a URL (subject indicator). The URL field is optional.
Note that, in order to enable TM4L to distinguish the new theme from ordinary topics, you must apply the created theme before saving or reloading the topic map. Otherwise TM4L will save it as a regular topic.
Themes may be applied to names, resources and relations.
Topic Names. Themes cannot be applied to primary topic names. Instead, an alternative to the primary name is supposed to be used within a particular theme. Assume that alternative forms of the primary name have already been created. To apply a theme to a name (different from the primary name), first select the topic name in the Topic Names pane and then press the Theme button. In the opened dialog box Apply Theme to Topic Name select the theme from the scrolling area Select Theme to Apply and click Apply.
Topic Resources. To apply a theme to a resource first select the resource in the Topic Resources pane and then press the Theme button. In the opened dialog box Apply Theme to Resources select the theme from the scrolling area Select Theme to Apply and click Apply.
Relationships of this type. To apply a theme to a particular relationship, first select the relationship in the Relationships of this type pane and then press the Theme button. In the opened dialog box Apply Theme To Relationship select the theme from the scrolling area Select Theme to Apply and click Apply.
If you don't see the theme you wish to apply, go to the Themes pane and create that theme; then return and apply it.
The TM4L editor can visualize a Topic Map fragment defined by a selected topic along with all topics in the Topic Map related to it. This makes it easier to detect incomplete groups of topics, missing topics, redundant topics or misplaced topics by exploring the groupings of related topics.
To view a selected topic surrounded by related topics, that is a selected Topic Map fragment:
To display the corresponding Topic Maps fragment you have to click the SHOW button. To clear it, click the Clear button. The Show Labels (Hide Labels) button allows you to see (hide) the relation type names when hovering over a particular relation.
TM4L supports interaction between the graphical and standard views, enabling more detailed inspection of selected Topic Maps fragments. This feature is made possible by keeping any topic selected in the graphical view selected in the standard view as well. By clicking the (Topics) View button the user can view all details related to the selected topic.
Note that after updating the topic map, e.g. inserting or deleting topics, you need to visualize the graphic structure again.
The original TM4L Editor enables authors to create TM-based learning content and repositories, by adding topics, relations, resources and scopes (themes). The textual editing functionality of TM4L was extended with a graphic editor (provisionally called TM4L-GEV), based on a graph representation. The latest graphical extension was supposed to complement both the Viewer and the original TM4L textual Editor with editing functionality.
TM4L-GEV provides a graph representation for TM constructs (topics are represented as nodes and relations as edges) and has capabilities to navigate and edit topic maps. It is a browsing and editing “in-one-view” module. The simplest editing feature consists of direct editing of the topic name of a selected topic node. In a similar fashion the author can edit a relation type name.
The editing functionality of TM4L-GEV includes adding and editing topics.
Add Topic. To add a new topic right click on a free area of the visualization pane and from the popping up menu select Create New Unlinked Node. After typing in the name in the Topic Name box, click the OK button. This operation results in creating a new topic and adding it to the current topic map. The new topic is not linked (related) to any other nodes of the displayed graph.
Edit Topic. Editing topics includes the following functions: Viewing Image, Expanding Node, Collapsing Node, Hiding Node, Renaming Topic, Deleting Topic, Adding Related Topic and Adding Relationship.
To edit a topic right click on the selected topic. From the drop down menu select one of the options: View Image, Expand Node, Collapse Node, Hide Node, Rename Topic, Delete Topic, Add Related Topic or Add Relationship.
In addition to browsing the Topic Map for finding content in the repository, TM4L supports two types of searching: search for topics by name and Topic Map queries.
Using the Find Topic interface provided at the bottom of the TM4L right pane, users can search for topics by typing a set of words (strings) contained in the topic names. The first matching topic is highlighted. By clicking Find Next the search continues to the next matching topic.
Topic Map Query supports in turn two types of queries: Tolog and Template queries. The first one enables users to query Topic Maps using the Tolog query language syntax, where query values may be either variables or topics. To initiate a tolog query click Tools and from the drop-down menu select Search and then Tolog Query. The resulting Tolog interface provides a text entry box for writing queries using tolog syntax. Each query is in a relational format containing arguments that are either variables or topics, as in the following examples.
select $TOPIC from Whole-Part(Car:Whole, $TOPIC:Part)?
select $TOPIC1, $TOPIC2 from Whole-Part($TOPIC1:Whole, $TOPIC2:Part) order by $TOPIC1?
In the first example, in case of a successful match the variable $TOPIC will be instantiated to the corresponding topics. In the second example, in addition to instantiating the variables $TOPIC1 and $TOPIC2, the results will be ordered by $TOPIC1.
After the query is written into the text field you can run it by clicking the Run Query button. The output of a tolog query is a sequence of results forming a table. Each row in the table corresponds to one query solution, and each column corresponds to a variable declared in the select clause.
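The relational reading of the two example queries can be sketched in plain code. This is not the tolog engine, only an illustration of how each solution instantiates the variables; the Car/Engine facts are invented for the example.

```python
# Illustrative sketch: how a query such as
#   select $TOPIC from Whole-Part(Car:Whole, $TOPIC:Part)?
# can be read relationally. Facts are (whole, part) pairs; each solution
# binds the variable(s), and the results form one row per solution.

facts = [("Car", "Engine"), ("Car", "Wheel"), ("Engine", "Piston")]

def parts_of(whole, facts):
    """Bind $TOPIC in Whole-Part(whole:Whole, $TOPIC:Part)."""
    return [part for w, part in facts if w == whole]

def all_pairs_ordered(facts):
    """Bind $TOPIC1, $TOPIC2 in Whole-Part(...), ordered by $TOPIC1."""
    return sorted(facts)

print(parts_of("Car", facts))     # ['Engine', 'Wheel']
print(all_pairs_ordered(facts))
# [('Car', 'Engine'), ('Car', 'Wheel'), ('Engine', 'Piston')]
```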
For typical tasks TM4L supports template queries with a user-friendly interface. To initiate a template query click Tools and from the drop-down menu select Search and then Customized Query. Templates are defined for two kinds of queries: relationship type queries and occurrence type queries. Template querying provides an intuitive interface where the user can select topics, association types, occurrence types and specify the way a topic is related to other topics or occurrences. Relationship type queries are intended to capture search requests involving related topics. This type of query covers scenarios such as:
For the second scenario we assume that if the user has selected a particular role player and relationship type, he wants to find the other role players (and not all relationships of this type where the selected topic is one of the role players). Therefore this query template returns the role players based on a fixed relationship and a second role player with a fixed role type.
In the Set Query section of the Template Query interface, there are three areas to specify the query condition: Topic, Relationship Type and Topic. Users can set the query conditions by specifying the value for those three fields. Each field can be either “?” or a specific value. The question mark “?” denotes an unknown field to be instantiated as a result of the evaluation of the query. Choosing Select Topic or Select Relationship Type opens a window for selecting values for the corresponding fields. For example, choosing Select Relationship Type pops up the Select Relationship Type window. You can select an item from the list and that item will be filled into the Relationship Type field as the selected value. By clicking the Run Query button of the Relationship type interface, a tolog query is generated and executed. The result of running the query is a table-oriented set of instances displayed in the bottom part of the window. The Resource type queries also cover typical query scenarios such as:
A similar intuitive interface is provided for the resource query. If the resource is an addressable resource, i.e. URL, by double clicking on the URL displayed in the query results table, TM4L opens the corresponding webpage. When it is resource data, it displays the View Resource window to the user.
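The template-to-tolog generation described above (a filled-in Topic / Relationship Type / Topic template turned into a query string) can be sketched as follows. This is not TM4L source code; the role names "Whole" and "Part" are assumed for the Whole-Part template, and the "order by" clause of the second earlier example is omitted for simplicity.

```python
# Illustrative sketch: building a tolog query string (in the syntax shown
# earlier) from a query template where "?" marks an unknown field.

def template_to_tolog(topic1, rel_type, topic2, roles=("Whole", "Part")):
    vars_, members = [], []
    for value, role, var in ((topic1, roles[0], "$TOPIC1"),
                             (topic2, roles[1], "$TOPIC2")):
        if value == "?":
            vars_.append(var)               # unknown field becomes a variable
            members.append(f"{var}:{role}")
        else:
            members.append(f"{value}:{role}")  # known field stays a topic
    return f"select {', '.join(vars_)} from {rel_type}({', '.join(members)})?"

print(template_to_tolog("Car", "Whole-Part", "?"))
# select $TOPIC2 from Whole-Part(Car:Whole, $TOPIC2:Part)?
print(template_to_tolog("?", "Whole-Part", "?"))
# select $TOPIC1, $TOPIC2 from Whole-Part($TOPIC1:Whole, $TOPIC2:Part)?
```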
TM4L allows merging of topic map documents. Merging is a process applied to topic maps in order to reduce the number of topics representing the same subject. Using the provided support for merging, knowledge from different sources can be integrated into a coherent whole. TM4L supports two types of merging: name-based merging and subject-based merging.
The name-based merging requires that whenever two or more topics have the same name in the same scope, they are assumed to have the same subject, and they are automatically merged into a single topic node. Regarding subject-based merging, we assume that whenever two or more topics specify the same subject identifiers, they have the same subject. In such a case, TM4L merges them into a single topic.
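The two merge conditions can be stated compactly over a simplified topic record. The sketch below is not TM4L source code: the record layout and the example.org subject identifier are invented for illustration; it only checks whether two topics would merge, not how their characteristics are combined.

```python
# Illustrative sketch: the two Topic Maps merge conditions described above.
# Two topics merge when they share a (name, scope) pair (name-based
# merging) or a subject identifier (subject-based merging).

def should_merge(t1, t2):
    name_based = bool(set(t1["names"]) & set(t2["names"]))   # (name, scope) pairs
    subject_based = bool(set(t1["sids"]) & set(t2["sids"]))  # subject identifiers
    return name_based or subject_based

a = {"names": {("Prolog", "unconstrained")}, "sids": {"http://example.org/prolog"}}
b = {"names": {("Prolog", "unconstrained")}, "sids": set()}
c = {"names": {("Logic", "unconstrained")}, "sids": {"http://example.org/prolog"}}
d = {"names": {("Haskell", "unconstrained")}, "sids": set()}

print(should_merge(a, b))  # True  (same name in the same scope)
print(should_merge(a, c))  # True  (same subject identifier)
print(should_merge(b, d))  # False (nothing in common)
```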
Note! When you select the merge option the merged topic map is automatically stored in the originally loaded XTM file. If you need the original topic map, save it under a different name before applying merging.
TM4L allows sharing information between Topic Maps and RDF. We support the view that partitioning the Web into collections of incompatible resources should be minimized. In support of this view TM4L enables conversion between Topic Maps and RDF, thus making it easier to harness both RDF data within a topic map application and Topic Maps data within an RDF application. Since the expressive power of the source language is sometimes insufficient to capture completely the intended meaning of the constructs of the target language in a natural way, the implemented translation approach is a tradeoff between completeness and readability of the translation. In addition, our conversion strategy is aimed at maximum reuse of the target vocabulary, which also impacts the expressivity of some aspects of the source language. Another consideration that influenced our approach was aimed at capturing the properties having impact on the domain ontology. This consideration suggests giving higher priority in translation to ontological concepts with specific importance to organizing digital collections.
To convert an XTM file into RDF format, or an RDF file into XTM format, click Tools, then from the drop-down menu select Convert and then XTM to RDF or RDF to XTM. The popping up dialog box will ask you for the name of the file to be converted.
After the conversion is confirmed (by clicking the Open button) TM4L generates the converted file and automatically saves it by adding to the original name a "_rdf" ("_xtm") suffix followed by the corresponding file extension. For example, if you convert Test.xtm (Test.rdf), after the conversion TM4L will automatically save the converted file in the same directory under the name Test_rdf.rdf (Test_xtm.xtm). If the same folder already contains a file with that name that you still need, rename it or move it to another directory. The following two links describe the implemented strategies of the conversion from Topic Maps to RDF and vice versa.
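The output-naming rule just described is mechanical enough to capture in a few lines. This is a sketch of the rule as stated in the text, not TM4L's actual implementation.

```python
# Sketch of the converter's output-naming rule: the converted file keeps
# the original base name plus a "_rdf" or "_xtm" suffix and the new
# extension (Test.xtm -> Test_rdf.rdf, Test.rdf -> Test_xtm.xtm).
import os

def converted_name(path):
    base, ext = os.path.splitext(path)
    if ext.lower() == ".xtm":
        return base + "_rdf.rdf"
    if ext.lower() == ".rdf":
        return base + "_xtm.xtm"
    raise ValueError("expected an .xtm or .rdf file")

print(converted_name("Test.xtm"))  # Test_rdf.rdf
print(converted_name("Test.rdf"))  # Test_xtm.xtm
```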
Note. At present the conversion of n-ary Topic Maps associations is disabled, pending its adaptation to the recently adopted model.
TM4L supports plug-ins based on the Java Plug-in Framework (JPF) an open source, LGPL licensed plug-in infrastructure library for new or existing Java projects. A plug-in is a structured component that describes itself to JPF using a "manifest". JPF maintains a registry of available plug-ins and the functions they provide (via extension points and extensions). Based on this framework TM4L is written as a collection of plug-ins that can be extended by both TM4L developers and users to alter or broaden the interface and behavior of the editor.
There are several steps that are common to the development of TM4L plug-ins, which are described in the following document:
More specific information related to Java Plug-in Framework (JPF) can be found in JFP sourceforge site.
In TM4L the copy functionality is available by use of CTRL-C and the paste functionality is available by use of CTRL-V.
To copy text:
To paste text:
If you open a second window (for example, to select text from), you have to minimize it after you complete the work in order to see the TM4L window that was active prior to opening it.
TM4L is still in the development cycle, therefore unexpected bugs may occur. We strongly recommend that you make a backup copy of your work.
Please let us know of any bugs, inaccuracies, or other problems with the software by sending email to email@example.com. We make no warranties or guarantees about this software.
https://www.speedyhen.com/Product/Kalen-Delaney/Microsoft-SQL-Server-2008-Internals/1832593 | code | Microsoft SQL Server 2008 Internals Paperback
Delve inside the core SQL Server engine-and put that knowledge to work-with guidance from a team of well-known internals experts.
Whether database developer, architect, or administrator, you'll gain the deep knowledge you need to exploit key architectural changes-and capture the product's full potential.
Discover how SQL Server works behind the scenes, including:
- What happens internally when SQL Server builds, expands, shrinks, and moves databases
- How to use event tracking—from triggers to the Extended Events Engine
- Why the right indexes can drastically reduce your query execution time
- How to transcend normal row-size limits with new storage capabilities
- How the Query Optimizer operates
- Multiple techniques for troubleshooting problematic query plans
- When to force SQL Server to reuse a cached query plan—or create a new one
- What SQL Server checks internally when running DBCC
- How to choose among five isolation levels and two concurrency models when working with multiple concurrent users
- Format: Paperback
- Pages: 784 pages, black & white illustrations
- Publisher: Microsoft Press,U.S.
- Publication Date: 18/02/2009
- Category: SQL Server / MS SQL
- ISBN: 9780735626249 | s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549428300.23/warc/CC-MAIN-20170727142514-20170727162514-00000.warc.gz | CC-MAIN-2017-30 | 1,249 | 10 |
http://cseweb.ucsd.edu/classes/fa17/cse221-a/readings.html | code | - E. W. Dijkstra, The Structure of the 'THE'-Multiprogramming System,
Communications of the ACM, Vol. 11, No. 5, May 1968, pp. 341-346.
(Additional historical background on semaphores)
Q: Dijkstra explicitly states their goals
for the THE operating system. How do these goals compare to, say,
Microsoft's goals for the Windows operating system? Why do we no longer
build operating systems with the same goals as THE?
- P. B. Hansen, The
Nucleus of a Multiprogramming System, Communications of the ACM,
Vol. 13, No. 4, April 1970, pp. 238-241, 250.
Optional related paper on a deployment experience of RC 4000:
P. B. Hansen, The RC 4000 Real-Time Control System at Pulway, BIT 7, pp. 279-288, 1967.
Q: How does synchronization in the RC 4000 system compare with synchronization in the THE system?
- D. G. Bobrow, J. D. Burchfiel, D. L. Murphy, and R. S. Tomlinson,
TENEX, a Paged Time Sharing System for the PDP-10, Communications of
the ACM, Vol. 15, No. 3, March 1972, pp. 135-143.
Q: What features in TENEX are reminiscent of features in Unix (a later system)?
- W. Wulf, E. Cohen, W. Corwin, A. Jones, R. Levin, C. Pierson, and
F. Pollack, HYDRA: The Kernel of a Multiprocessor Operating System,
Communications of the ACM, Vol. 17, No. 6, June 1974, pp. 337-345.
Q: How is a Hydra procedure different from the procedures we are familiar with in a typical language and runtime environment?
- B. Lampson, Protection, ACM Operating
Systems Review, Vol. 8, No. 1, January 1974, pp. 18-24.
Q: What are the concepts in HYDRA that
correspond to Lampson's definitions of "Domain", "Object", and "Access
Matrix"? What about Multics?
- J. H. Saltzer, Protection and the Control of Information
Sharing in Multics, Communications of the ACM, Vol. 17, No. 7, July 1974,
Optional Multics paper:
A. Bensoussan, C. T. Clingen, and R. C. Daley,
The Multics Virtual Memory: Concepts and Design,
Communications of the ACM, Vol 15, No. 5, May 1972, pp. 308-318.
Q: Compare and contrast protected subsystems in Multics with procedures in Hydra.
- D. M. Ritchie and K. Thompson, The UNIX Time-Sharing System,
Communications of the ACM, Vol. 17, No. 7, July 1974, pp. 365-375.
Q: What aspects of Unix as described in the 1974 paper do not survive
today, or have been considerably changed?
- R. Pike, D. Presotto, S. Dorward, B. Flandrena, K. Thompson, H. Trickey, and P. Winterbottom,
Plan 9 From Bell Labs,
USENIX Computing Systems, Vol. 8, No. 3, Summer 1995, pp. 221-254.
Q: What does it mean, "9P is really the core of the system; it is fair to say that the Plan 9 kernel is primarily a 9P multiplexer"?
- D. D. Redell, Y. K. Dalal, T. R. Horsley, H. C. Lauer,
W. C. Lynch, P. R. McJones, H. G. Murray, and S. C. Purcell, Pilot: An Operating System for a
Personal Computer, Communications of the ACM, Vol. 23, No. 2,
February 1980, pp. 81-92.
Q: How do the requirements of the Pilot
operating system differ from the systems we have read about so far,
and how does the design of Pilot reflect those requirements?
- Galen C. Hunt and James R. Larus.
Singularity: Rethinking the Software Stack,
ACM SIGOPS Operating Systems Review,
Q: How does the language-based approach
that Singularity takes compare and contrast with Pilot?
- C. A. R. Hoare, Monitors: An Operating System Structuring
Concept, Communications of the ACM, Vol. 17, No. 10, October, 1974,
Q: What are "monitor invariant" I and "condition" B, and why are they
important in the discussion of monitors?
- B. W. Lampson and D. D. Redell, Experience with Processes and
Monitors in Mesa, Communications of the ACM, Vol. 23, No. 2, February
1980, pp. 105-117.
Q: Compare and contrast synchronization in Java with Hoare monitors
and Mesa monitors.
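As a concrete feel for the contrast this question asks about, here is a small illustrative sketch using Python's `threading.Condition`, which (like Mesa and Java monitors) has signal-and-continue semantics, so a woken waiter must re-check its condition in a loop (the buffer/monitor structure here is our own, not from the papers):

```python
import threading

# Bounded buffer guarded by one condition variable -- a Mesa/Java-style monitor.
buf = []
CAP = 1
cv = threading.Condition()

def put(x):
    with cv:
        # Signal-and-continue semantics: re-check the predicate in a
        # "while" loop, because another thread may run between the
        # notify and this thread re-acquiring the lock.
        while len(buf) >= CAP:
            cv.wait()
        buf.append(x)
        cv.notify_all()

def get():
    with cv:
        while not buf:
            cv.wait()
        x = buf.pop(0)
        cv.notify_all()
        return x
```

Under Hoare semantics, by contrast, the signaler hands the monitor directly to the waiter, so the waiter could safely use a plain `if` instead of the loop.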
Optional related papers on a completely different kind of
synchronization known as wait-free or non-blocking synchronization:
For a formal treatment of wait-free synchronization, see Herlihy's
Wait-Free Synchronization, ACM Transactions on Programming Languages and
Systems (TOPLAS), Vol. 13, No. 1, January 1991, pp. 124–149.
Linux uses a particular version called read-copy-update (RCU):
Paul E. McKenney, Dipankar Sarma, Andrea Arcangeli, Andi Kleen,
Orran Krieger, and Rusty Russell,
Read-Copy Update. In Proceedings of the Ottawa Linux
Symposium, June 2002, pp. 338–367.
And for an extensive bibliography on RCU through 2013, see
- D. R. Cheriton and W. Zwaenepoel, The
Distributed V Kernel and its Performance for Diskless
Workstations, Proceedings of the 9th Symposium on Operating
Systems Principles, pp. 129-140, November 1983.
Q: What is the argument for diskless
workstations, and do you agree/disagree with the argument?
- J. K. Ousterhout, A. R. Cherenson, F. Douglis,
M. N. Nelson, and B. B. Welch, The Sprite Network Operating
System, IEEE Computer, Vol. 21, No. 2, February 1988, pp. 23-36.
Optional historical retrospective:
J. K. Ousterhout,
Q: How do the caching policies in Sprite
differ from those in the V Kernel?
- H. Haertig, M. Hohmuth, J. Liedtke, S. Schoenberg, J. Wolter,
The Performance of Micro-Kernel- Based Systems, Proceedings
of the 16th Symposium on Operating Systems Principles, October 1997, pp. 66-77.
Optional related papers on hierarchical address spaces and formally verifying a microkernel:
J. Liedtke, On Micro-kernel Construction,
In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles, December 1995, Copper Mountain Resort, Colorado, pp. 237-250.
G. Klein, K. Elphinstone, G. Heiser, J. Andronick, D. Cock, P. Derrin, D. Elkaduwe, K. Engelhardt, M. Norrish, R. Kolanski, T. Sewell, H. Tuch, S. Winwood,
seL4: Formal Verification of an OS Kernel,
In Proceedings of the 22nd ACM Symposium on Operating Systems Principles, October 2009, Big Sky Resort, MT, pp. 207-220.
Q: Compare and contrast the L4 microkernel with the RC4000 Nucleus
and the HYDRA kernel in terms of their goals to provide a basis
on which higher level OS functionality can be implemented.
- D. R. Engler, M. F. Kaashoek, and J. O'Toole Jr.,
Exokernel: An Operating System Architecture for Application-Level Resource Management,
In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles, December 1995, Copper Mountain Resort, Colorado, pp. 251-266.
Optional — the other Exokernel paper:
M. F. Kaashoek, D. R. Engler, G. R. Ganger, H. M. Briceno, R. Hunt, D. Mazieres, T. Pinckney, R. Grimm, J. Jannotti and K. Mackenzie, Application Performance and Flexibility on Exokernel Systems, In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles, October 1997, St. Malo, France, pp. 52-65.
Q: Compare and contrast an exokernel with a microkernel.
- P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer, I. Pratt, and A. Warfield, Xen and the Art of Virtualization, In Proceedings of the 19th Symposium on Operating System Principles, October, 2003.
Optional related paper describing VMware virtualization performance (2006-era):
Keith Adams and Ole Agesen,
A Comparison of Software and Hardware Techniques for x86 Virtualization,
In Proceedings of the 12th International Conference on Architectural Support for Programming Languages and Operating Systems, October 2006, pp. 2–13.
Q: Microkernels and virtual machine monitors are two different ways to
support the execution of multiple operating systems on modern
hardware. How does the microkernel approach in L4 compare and
contrast with the VMM approach in Xen?
- Stephen Soltesz, Herbert Pötzl, Marc E. Fiuczynski, Andy Bavier,
Larry Peterson, Container-based
Operating System Virtualization: A Scalable, High-performance
Alternative to Hypervisors, In Proceedings of the 2nd ACM
SIGOPS/EuroSys European Conference on Computer Systems
(Eurosys'07), pp. 275–287, March 2007.
Q: What are the key tradeoffs between hardware virtualization and OS-level virtualization?
Optional related paper on early IBM hardware virtualization:
R. J. Creasy, The Origin of the VM/370 Time-Sharing System, In IBM Journal of
Research and Development, 25(5):483-490, September 1981.
and virtualizing smartphones:
Jeremy Andrus, Christoffer Dall, Alex Van’t Hof, Oren Laadan, Jason Nieh,
Cells: A Virtual Mobile Smartphone Architecture, In
Proceedings of the Twenty-Third ACM Symposium on Operating Systems Principles
(SOSP), pp. 173–187, October 2011.
Q: Why is virtualizing a smartphone different from virtualizing a server?
- Marshall K. McKusick, William N. Joy, Samuel J. Leffler, and
Robert S. Fabry, A Fast File System for Unix, ACM Transactions on
Computer Systems, 2(3), August 1984, pp. 181-197.
Q: In FFS, reading is always at least as fast as writing.
In old UFS, writing was 50% faster. Why is this?
- Mendel Rosenblum and John K. Ousterhout, The
Design and Implementation of a Log-Structured File System,
Proceedings of the 13th ACM Symposium on Operating Systems Principles,
Q: When we want to read a block in LFS on disk, how do we figure out where it is?
Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev,
M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich.
Analysis of Linux Scalability to Many Cores
In Proceedings of the 9th
Symposium on Operating System Design & Implementation (OSDI), October
2010, Vancouver, BC, Canada, pp. 383–398.
As optional further reading, this group's first work was exploring
OS multicore scalability using an exokernel:
Silas Boyd-Wickizer, Haibo Chen, Rong Chen, Yandong Mao,
Frans Kaashoek, Robert Morris, Aleksey Pesterev, Lex Stein, Ming Wu,
Yuehua Dai, Yang Zhang, and Zheng Zhang.
Corey: An Operating System for Many Cores. In Proceedings of the 8th
Symposium on Operating System Design & Implementation (OSDI),
October 2008, San Diego, California, pp. 43–57.
More recently, they framed OS multicore scalability in more formal terms:
Austin T. Clements, M. Frans Kaashoek, Nickolai Zeldovich, Robert Morris, Eddie Kohler,
The Scalable Commutativity Rule: Designing Scalable Software for Multicore Processors,
Proceedings of the Twenty-Fourth ACM Symposium on Operating Systems Principles
(SOSP), October 2013, pp. 1–17.
Amit Levy, Bradford Campbell, Branden Ghena, Daniel B Giffin, Pat Pannuto, Prabal Dutta, Philip Levis,
Multiprogramming a 64 kB Computer Safely and Efficiently,
Proceedings of the Twenty-Sixth ACM Symposium on Operating Systems Principles
(SOSP), October 2017, pp. 1–17.
- (Any of the following that strike your fancy)
Michael Vrable, Justin Ma, Jay Chen, David Moore, Erik VandeKieft,
Alex C. Snoeren, Geoffrey M. Voelker, and Stefan Savage,
Fidelity and Containment in the Potemkin Virtual Honeyfarm,
Proceedings of the 20th ACM Symposium on Operating Systems Principles
(SOSP), Brighton, UK, October 2005, pages 148-162.
Diwaker Gupta, Kenneth Yocum, Marvin McNett, Alex C. Snoeren, Amin Vahdat, and Geoffrey M. Voelker,
To Infinity and Beyond: Time-Warped Network Emulation, Proceedings of the 3rd ACM/USENIX Symposium on Networked Systems Design and Implementation (NSDI), San Jose, CA, May 2006, pages 87-100.
Diwaker Gupta, Kashi Vishwanath, and Amin Vahdat, DieCast: Testing Distributed Systems with an Accurate Scale
Model, Proceedings of the 5th ACM/USENIX Symposium on Networked
Systems Design and Implementation (NSDI), San Francisco, CA, April 2008.
Diwaker Gupta, Sangmin Lee, Michael Vrable, Stefan Savage, Alex
C. Snoeren, George Varghese, Geoffrey M. Voelker, and Amin Vahdat,
Difference Engine: Harnessing Memory Redundancy in Virtual Machines,
Proceedings of the 8th ACM/USENIX Symposium on Operating System Design
and Implementation (OSDI), San Diego, CA, December 2008.
Michael Vrable, Stefan Savage, and Geoffrey M. Voelker.
Cumulus: Filesystem Backup to the Cloud,
ACM Transaction on Storage 5(4):1-28, December 2009.
Michael Vrable, Stefan Savage, and Geoffrey M. Voelker,
BlueSky: A Cloud-Backed File System for the Enterprise,
Proceedings of the 7th USENIX Conference on File and Storage Technologies (FAST), San Jose, CA, February 2012, pages 19:1-19:14.
He Liu, Feng Lu, Alex Forencich, Rishi Kapoor, Malveeka Tewari, Geoffrey M. Voelker, George Papen, Alex C. Snoeren, and George Porter,
Circuit Switching Under the Radar with REACToR,
Proceedings of the 11th ACM/USENIX Symposium on Networked Systems Design and Implementation (NSDI), Seattle, WA, April 2014, pages 1–15. | s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247484689.3/warc/CC-MAIN-20190218053920-20190218075920-00462.warc.gz | CC-MAIN-2019-09 | 12,120 | 186 |
https://mail.python.org/pipermail/python-list/2003-June/189406.html | code | Python 2.2.3 test_re.py fails with coredump on FreeBSD 5.0, fails also on FreeBSD4.8
andymac at bullseye.apana.org.au
Sun Jun 22 03:21:29 CEST 2003
On Sun, 21 Jun 2003, PieterB wrote:
> I had a little look at testing Python 2.2.3 under FreeBSD and
> 5.0. When I use './configure ; make ; make test' Lib/test/test_re.py
> gives a Signal 10 (coredump) on FreeBSD5.
> On FreeBSD 4.8 this test also fails, but doesn't core dump:
It coredumps on 4.8 for me...
> Can anybody, give me a clue:
> a) if this is caused by a FreeBSD5.0 issue, or by Python.
> Can somebody test this on FreeBSD 5.1 or FreeBSD5-CVS?
> b) how can I fix the coredump on FreeBSD 5.0?
> c) what should be done to fix the test on FreeBSD 4.8?
Take Martin's advice. If you want to know more, review SF #740234.
Andrew I MacIntyre "These thoughts are mine alone..."
E-mail: andymac at bullseye.apana.org.au (pref) | Snail: PO Box 370
andymac at pcug.org.au (alt) | Belconnen ACT 2616
Web: http://www.andymac.org/ | Australia
https://imaginezine.com/Dan-Lockton/ | code | Design, Imagination, and Futures
Why is humanity finding it so hard to do anything about the climate crisis? Why is humanity finding it so hard to do anything about systemic racism? Why is society finding it so hard to take mental health seriously? Why are we in the messes we’re in?
‘Unregulated capitalism’ might be your answer—and there are many facets, from personal to cultural to global. But a meta-level issue seems to be that we’re ‘trapped’ in particular ways of imagining—how the present is, and how different futures could be. Our narratives, our understandings of ourselves and the systems we’re in, are limited by enormity or complexity or invisibility, or our inability to experience the world in the way that someone else does. When we think about change, we focus on individual behaviour rather than systemic issues and power structures. In an age of crises in climate, energy, health, politics, and social inequalities, imagination is more important than we perhaps realise. If we can’t imagine different futures, we’ll end up where we’re headed. We need other visions.
Terms such as the “crisis of imagination” (Ghosh, 2016) may sound vague, but as used in relation to (just) transitions—in the context of major challenges for humanity and the planet—they highlight the value that design research and practice can bring. Designers are adept at enabling people to share and externalise their thinking with others, at spurring and bringing out creativity, at giving voice to groups whose views and ideas are under-represented in dominant narratives, and, crucially, at turning ideas into forms that people can engage with—prototypes which can be experienced, used, lived with, and reflected upon. Designers can bring plural possible futures to life, in the present—not only imagining, but ‘rehearsing’ futures—through facilitating and designing tools for participatory (re-)imagining.
While my work—with amazing collaborators—on projects such as New Metaphors, another project called IMAGINE(!), and Playing with the Trouble, only scratches the surface of what’s needed, I’m fortunate to be able to work within a network of great people and projects expanding the diversity of futures we can imagine, including for example the Plurality University Network, Untitled Alliance, Urban Heat Island Living, and BrusselAVenir. These are the kinds of initiatives and projects which can help people, together, create and explore possible futures, imagine, and experience new ways to live, and understand ourselves and the world around us better.
Dan Lockton: Assistant Professor, Future Everyday, Department of Industrial Design, Eindhoven University of Technology in Eindhoven, NL, and Director of Imaginaries Lab in Amsterdam, NL. | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224646937.1/warc/CC-MAIN-20230531150014-20230531180014-00755.warc.gz | CC-MAIN-2023-23 | 2,790 | 6 |
http://freecode.com/projects/yavipin/tags/major-feature-enhancements | code | Release Notes: Support for forward secrecy was added. Even if an attacker cracks the box, he
won't be able to read traffic older than a specified delay (default 10
minutes). Other minor modifications were also made.
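The guarantee in the release note can be illustrated with a hedged sketch (this is not yavipin's code; the class and names are invented for illustration): session keys are rotated periodically and any key older than the delay is destroyed, so captured ciphertext older than the delay cannot be decrypted even if the box is later compromised.

```python
import os
import time

class RollingKeys:
    """Illustrative forward-secrecy sketch: keys older than `delay`
    seconds are destroyed on each rotation, so they no longer exist
    on the box to be stolen."""

    def __init__(self, delay=600.0):       # default 10 minutes, as in the release note
        self.delay = delay
        self.keys = []                     # (creation_time, key) pairs
        self.rotate()

    def rotate(self):
        """Create a fresh random session key and discard expired ones."""
        self.keys.append((time.time(), os.urandom(32)))
        cutoff = time.time() - self.delay
        self.keys = [(t, k) for (t, k) in self.keys if t >= cutoff]

    def current(self):
        return self.keys[-1][1]
```

A real implementation would also have to scrub the key material from memory and re-key the tunnel with the peer; this sketch only shows the retention policy.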
https://www.capgemini.com/in-en/jobs/dynamics-crm-architect-9-to-13-yrs-pune/ | code | MS Dynamics CRM SOLUTION ARCHITECT- 9 to 13 years
We are looking to hire an MS Dynamics CRM Solution Architect with 9 to 13 years of experience for our Pune location.
Specification / Skills / Experience
9+ years of experience in MS Dynamics CRM 2013, 2015, 2016 (On Premise and Online) and in the CRM domain.
• Knowledge of system architecture and certified in MS Dynamics CRM (Preferred)
• Hands-on experience with multi-phase, global, enterprise-wide CRM implementation with focus on Solution and architecture definition
• Excellent understanding of MS CRM concepts, business processes and supporting MS CRM functionality.
• Proven experience in presales activities including working on RFP responses, driving overall Solution and defining the solution architecture.
• Ability to envision, architect, and evangelize CRM / xRM solutions within our prospects and clients.
• Proven experience in estimating, scoping, and defining the execution approach for the proposed solution.
• Working functional and technical knowledge of latest MSD CRM offerings- Dynamics 365, Portal, USD, FieldOne etc. is desirable.
• Good communication and presentation skills
Experience: 9 to 13 years
Primary Skills (Must have)
MS Dynamics CRM 2013, 2015, 2016 (On Premise and Online) and CRM domain
Dipti Deo
http://www.iga.adelaide.edu.au/workshops/winterschool2009/winterschool2009titles.html | code | Speaker: Peter Bouwknegt
Title: Introduction to String Theory. (3 lectures)
In these lectures I will give a concise introduction to String Theory. In particular I will give a discussion of T-duality for open and closed strings and introduce the concept of a D-brane.
Speaker: Nicholas Buchdahl
Title: Gauge theory and the topology of 4-manifolds. (3 lectures)
In these talks, I will discuss some results proved in the early 1980's by Simon Donaldson concerning the topology of smooth 4-dimensional manifolds. These results, which revolutionised modern differential topology, were remarkable not just because of their implications for the field, but also because of the way in which they were proved, namely by using methods and ideas from modern physics. In my talks, I shall describe some of the background physics and mathematics, and give an outline of how the theorem is proved. If there is time, I shall also attempt to describe some of the other quite remarkable results that followed on the heels of this fundamental work.
Speaker: Finnur Larusson
Title: Ellipticity and hyperbolicity in geometric complex analysis. (3 lectures)
In the first lecture, we shall review complex analysis in the complex plane, focusing on the several equivalent definitions of what it means for a function to be holomorphic and the basic properties of such functions. In the second and third talks we will introduce and explore important ideas from higher-dimensional complex analysis and complex geometry in the accessible setting of domains in the plane.
Speaker: Michael Murray
Title: Introduction to differential geometry and monopoles. (3 lectures)
The first two lectures will be an introduction to differential geometry, manifolds, vector bundles etc. This will be background for the talks by Mathai, Nick and Peter. The last lecture will be an introduction to Bogomolny-Prasad-Sommerfield (BPS) monopoles.
Speaker: Mathai Varghese
Title: On the Atiyah-Singer index theorem and applications. (3 lectures)
These lectures will be on the Atiyah-Singer index theorem, and its applications to mathematics and physics. The Atiyah-Singer index theorem is concerned with the existence and uniqueness of solutions to linear elliptic partial differential equations. The Fredholm index of an elliptic equation, which is the number of linearly independent solutions of the equation minus the number of linearly independent solutions of the adjoint equation, is a topological invariant. This means that continuous variations in the coefficients of an elliptic equation leave the Fredholm index unchanged. The Atiyah-Singer index theorem gives a striking calculation of this index. It continues to have a tremendous impact on mathematics and mathematical physics.
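In symbols, with $D$ the elliptic operator and $D^{*}$ its adjoint, the index described above is (a standard formulation, stated here for orientation rather than taken verbatim from the lectures):

```latex
\operatorname{ind}(D) \;=\; \dim \ker D \;-\; \dim \ker D^{*}
\;=\; \dim \ker D \;-\; \dim \operatorname{coker} D ,
```

and the content of the Atiyah-Singer theorem is that this integer can be computed from purely topological data, which is why it is unchanged under continuous variation of the coefficients.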
| Day 1 | Day 2 | Day 3 |
|---|---|---|
| 9-10: Welcome tea | 9:30-10:30: Buchdahl | 9-10: Buchdahl |
| 10-11: Murray | 10:30-11: break | 10-11: Buchdahl |
| 11-12: Murray | 11-12: Larusson | 11-11:30: break |
| 12-1:30: Lunch | 12-1:30: Lunch | 11:30-12:30: Larusson |
| 1:30-2:30: Mathai | 1:30-2:30: Bouwknegt | 12:30-2: Lunch |
| 2:30-3:30: Larusson | 2:30-3:30: Bouwknegt | 2-3: Murray |
| 3:30-4: break | 3:30-4: break | 3-4: Mathai |
| 4-5: Bouwknegt | 4-5: Mathai | 4-5: Farewell Tea |
| 6-8: Winter school dinner (meant mainly for interstate guests) | | |
https://www.farreachinc.com/mobile-app-development | code | Smartphones are in pretty much every pocket, purse, and backpack these days—each one full of apps that may or may not get used. Does your business need a mobile app?
A native mobile app might seem like a good idea, but does your business really need one? It depends! Learn how to decide.
“Custom software” is a broad term that can include desktop applications, web apps, websites, and systems made up of many of these. Mobile apps can be created using an off-the-shelf mobile app builder, they can be custom developed from the ground up, or they can be built using a strategy in between.
Some mobile apps (like games) are standalone systems—everything is managed within the mobile app.
Most mobile apps, however, are one element of a larger system. Think about your bank. They probably have a web portal for online banking as well as a mobile app. You can accomplish similar tasks in both, and the data is the same.
When you’re building a new custom software system, it’s important to think about it in terms of your overall software strategy. Do you need a desktop application for your internal team to use, which connects to a web application for customers to use? Do you need mobile apps for customers, too? Think about where users will be using the system, what they’ll be trying to accomplish, and how you can create the best experience.
Oftentimes, when a business wants to make functionality available to users on a mobile device, they’re quick to assume that it has to be done through a mobile app. That’s one way to accomplish mobile tasks, but it’s not the only way.
Mobile devices have come a long way in the last decade, and so has the technology to display information on them. For example, you can download the LinkedIn app, but you can also do a lot on the LinkedIn mobile website.
If you’re building
a responsive web app—which will work like a website in browsers on desktops, tablets, and phones—you
have to think through whether that will suffice for how users will need to use your software on mobile. Can they accomplish what they need in the web app on their phone? Or will it be more effective to have a native mobile app that people download
As with everything in custom software development, mobile apps have a lot of benefits, but they also have some drawbacks.
Users can stay logged in, which is convenient for them and allows you to track more consistent analytics
You have more control over display and functionality with mobile apps separated from web apps
Users can complete tasks on the go, sometimes even without internet access
Mobile apps are often easier to find on a phone than going to a web address in a mobile browser
You can tap into device functionality like GPS, cameras, and push notifications
Building both a web app and native mobile apps adds to the cost of your custom system
Native mobile apps must be maintained, in addition to the other pieces of your system
You have to maintain compatibility with iOS and Android as they make updates
The Google Play Store and the Apple Store both have review and approval processes for apps
Users have finite amounts of storage on their mobile devices (though this is becoming less and less of a concern, thanks to Moore’s Law)
Users may or may not like using a web app on their computers and a mobile app on their phones. Customer discovery can help you uncover more pros and cons for your unique situation.
When you build a mobile app, you have to decide if you want to develop for iOS, Android, or both. It may surprise you—especially if you’re a dedicated user of one platform—that the two operating systems have different app requirements and run on very different user interfaces (UI).
Picking one OS to start with can save costs and help you refine the app before building out a version for the other OS. But doing so can also alienate users who don’t have the “right” device. Have you ever found out about a cool new app you wanted to try, but when you went to download it, learned it’s only for the other OS? It can be a let-down.
How do you decide whether to start with iOS, Android, or both platforms? There are a few quick ways to help you narrow down your decision:
Check your current analytics. Are most users accessing your website and web apps on one platform more than the other? You can find this information in Google Analytics, Application Insights, or whatever tracking tool you use.
Look at market share data. As of this writing, Android has 70%+ of the smartphone market while iOS sits in the mid 20%s. These stats are worldwide, so the breakdown might change once you dig into your audience.
Ask! See if your users want a mobile app, what features they might use in it, and what device(s) they have and use.
This decision is less costly when you use a cross-platform development tool to build in both iOS and Android using the same codebase. Keep reading to see how we use Xamarin to accomplish that.
Apps built in iOS and Android are usually built in different programming languages. Many iOS apps are built using Swift, while many Android apps are built using Kotlin, Java, C#, Python, C++, or others.
At Far Reach, we use Xamarin to build mobile apps using the Microsoft .NET tech stack we’re most familiar with. Xamarin integrates into Visual Studio—one of our central development tools—and allows us to use a shared codebase to build for both iOS and Android. It doesn’t just wrap a mobile site into a mobile app; it creates true native apps.
While the cross-platform nature of Xamarin allows us some economies of scale, the above point still stands that building two apps will always be more work and cost more than building one.
It’s possible to build a mobile app on both OSs without writing two completely separate applications. But as always, it comes back to what’s best for your organization and the potential users of your app(s).
If you think a mobile app will benefit your business, how do you go about finding a partner to talk through the project and bring your app to life?
There are several different options for outsourcing development. You can hire an independent contractor, you can use a staffing company that provides the developers, or you can work with a development company.
Far Reach is a development partner—we bring a full team to your project to build it and even to support it for the long-run, if needed.
You can validate a partner’s expertise by reviewing past projects and customer reviews. You can also review their core values to make sure it could be a good fit. And nothing helps you evaluate the fit of a potential partner like getting into a room (even a Zoom room) and having a conversation.
A mobile app for the Iowa Clinic and its patients.
Want to dive deeper into mobile app development? Read some of our blog posts on the topic.
Custom mobile applications are complex, especially when you aren’t entrenched in tech daily like we are. A consultation session with our experts can shed light on any questions you may have about how mobile apps could help your business.
We can help you plan a custom app roadmap for your company’s goals. Let’s talk. No obligations and no strings attached.
https://johnmarkhowell.com/2007/07/05/so-what-do-have-against-agile-scrum-and-modelling/ | code | For some time now, I have had a few posts regarding issues I’ve had with Agile, SCRUM and Modelling. I want to be clear, I don’t have problems with any of these tools. I’m just saying that there is no tool that is perfect for every job. I’m also saying that a tool may be ‘pretty good’ for a job but could be used just a bit differently to be more effective.
First rule: use the right tool for the right job. If you're given complete and thorough specs (yeah, right), then Agile might not be the best tool. If you have a large team, SCRUM may not be the best tool, or may be close but need some tweaking. And if you're working on a very small, simple project, why build a model or state diagram when it would take longer to build the diagram and get sign-off than to just build the project?
And even these three tools could be used with some minor changes or adjustments to fit your needs. So when you're going into a project, think about what tools and techniques would be a good fit for that project. And for Pete's sake, be completely familiar with your team's skillsets and skill level so that you can make a good decision or know what tweaks need to be made.
http://dadlercomics.blogspot.com/2014/02/volume-265.html | code | clear-cutting the inhuman condition and subverting the e-card paradigm with original comics (well, the captions are mine anyway).
I love Monty Python! They're supposed to be reuniting later this year (well, the surviving members) at a live show in London. Plans are to record it and eventually release a DVD.
I heard that. Hopefully they put on a good show and aren't just looking to rake some easy bucks. Look forward to it tho. Peace. | s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589634.23/warc/CC-MAIN-20180717090227-20180717110227-00471.warc.gz | CC-MAIN-2018-30 | 435 | 3 |
https://github.com/angelos/EvernoteSync | code | File this one under first world problems: I use a headless Mac Mini for scanning documents, using my trusty ScanSnap, to Evernote, but every once in a while Evernote takes its dear time to synchronize.
To remedy this, I whipped up a piece of AppleScript that syncs Evernote when there are unsynced notes in the notebook called "inbox".
Easiest way is to run it as a cron job. Install the script using the
- The script contains a reference to the "inbox" notebook, you can replace this with any other notebook you want.
- I use the existence of a `note link` to find out whether a note has been synchronized.
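The heuristic behind that check can be sketched in plain Python (illustrative only; the note dictionaries and field names here are hypothetical, since the real script talks to Evernote through AppleScript):

```python
# Hypothetical sketch of the sync heuristic, not the actual AppleScript.
# A note without a note link is assumed to be unsynchronized.

def has_unsynced_notes(notes):
    """notes: list of dicts with a 'note_link' field (None until synced)."""
    return any(note["note_link"] is None for note in notes)

inbox = [
    {"title": "receipt.pdf", "note_link": "evernote:///view/123"},
    {"title": "scan-0042.pdf", "note_link": None},  # not yet synced
]
print(has_unsynced_notes(inbox))  # → True: a sync should be triggered
```

When the function returns True, the real script tells Evernote to synchronize; otherwise the cron run is a no-op.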
You can do whatever you want with this script! | s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376831933.96/warc/CC-MAIN-20181219090209-20181219112209-00574.warc.gz | CC-MAIN-2018-51 | 651 | 7 |
https://www.warriorforum.com/main-internet-marketing-discussion-forum/190657-need-high-profit-im-business-invest.html | code | first to my person. I'm a 16 year old boy living in Germany, that's why my english isn't as good as the native speakers. I've noticed this forum in the "rich jerk" but I didn't read it completely, because the links in there where affiliate links and I lost the trust to this book. In this book was written that I can come here and tell the people my problems and this forum is a nice IM community. So, I'll give a try and yeah here's my problem.
I have $40 to invest. (That's my pocket money from last month.)
There are so many businesses to invest in, but when I look at Google it turns out that they are scams, and I don't trust any site. (For example the Paid-To-Click business like neobux.) I thought of buying some refs, but what if my refs don't click anymore? I need something safer.
The fact that I don't own a functioning computer makes my situation more complicated. Yeah I need that money for a new computer for 500$.
I started delivering papers out which gonna make me 80$ in a month but I need a computer for a chess tournament in 1 month. :confused:
The 1st question is can I make this money so fast?
The 2nd question is: Is there any trust-worthy business to invest? (But please consider I can only be max. 2 hours a day online) | s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540510336.29/warc/CC-MAIN-20191208122818-20191208150818-00138.warc.gz | CC-MAIN-2019-51 | 1,236 | 7 |
https://docs.ampleforth.org/learn/about-wrapped-ampl | code | AMPLis an ERC-20 token that works natively with on-chain wallets and many decentralized finance applications, the nature of automatically changing balances sometimes calls for special technical considerations.
AMPLsimilar to wrapped ETH. It facilitates ecosystem integrations on both centralized and decentralized platforms. In some cases wrapped-AMPL (
WAMPL) will be used almost invisibly in the background for bridging, routing, custody, etc. In other cases
WAMPLwill be a direct access point for end users who want to take a position in the
AMPLnetwork, but don't immediately need to use it as a unit-of-account.
WAMPLhas this easy to understand property as well, which means there’s less initial education required. Users can gradually develop an understanding of
AMPLautomatically adjusts the quantity tokens in user wallets based on demand. This key feature allows
AMPLto act as a decentralized unit of account and DeFi building-block. However, the nature of changing balances breaks traditional assumptions for matching engines, custodians, etc. Wrapped-AMPL has a simple floating price. Although
WAMPLcannot be used as a unit-of-account as
AMPLcan, it can be held by users and network and unwrapped on-the-fly as needed.
The total supply of WAMPL is 10 million tokens. Holding 100,000 WAMPL is equivalent to holding 1% of the supply. Since AMPL and WAMPL are fully redeemable for one another, the growth of the Ampleforth community and demand for AMPL translates directly to demand for WAMPL. Users can wrap AMPL and receive a non-rebasing ERC-20 token and vice-versa. Since both WAMPL and AMPL are on the Ethereum platform, there are no bridges or third-party custodians that stand between redeeming one token for another.

The WAMPL logo, to be used in integrations.
WAMPL provides exposure to the AMPL network. Although WAMPL has a floating price and cannot be used as a unit-of-account, from a portfolio's perspective buying and holding WAMPL is equivalent to buying and holding AMPL. How much AMPL each WAMPL represents depends on how much of the AMPL network has been wrapped by users. The maximum total supply of WAMPL is 10 million tokens—that is to say, if 100% of the world's AMPL were to be wrapped, the total supply of WAMPL would be 10 million.
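The pro-rata relationship described here can be sketched numerically (a rough illustration; the function name and the AMPL supply figure are assumptions, not the actual contract interface):

```python
# Illustrative sketch of WAMPL's pro-rata supply math; names are hypothetical
# and this is not the real contract interface.

MAX_WAMPL_SUPPLY = 10_000_000  # fixed cap: 10 million WAMPL

def wampl_minted(ampl_deposited: float, total_ampl_supply: float) -> float:
    """WAMPL received for a deposit: the deposit's share of all AMPL,
    scaled to the 10 million WAMPL cap."""
    return MAX_WAMPL_SUPPLY * (ampl_deposited / total_ampl_supply)

# Wrapping 1% of all AMPL yields 1% of the WAMPL cap, i.e. 100,000 WAMPL:
total_ampl = 50_000_000.0  # hypothetical AMPL supply at some instant
print(wampl_minted(0.01 * total_ampl, total_ampl))  # → 100000.0
```

Wrapping 100% of AMPL would mint exactly the 10 million cap, matching the equivalence stated above.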
If demand for WAMPL suddenly changes, redemption arbitrage will propagate this change in demand to | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00529.warc.gz | CC-MAIN-2022-40 | 2195 | 26
http://www.healthboards.com/boards/teen-health/102234-dont-know-who-i-am-anymore.html | code | Ok, recently, i have been really, really, upset about myself. I dont know who i am anymore! I am sick of people thinking im this popular, snobby, prissy, dumb (deleted). I hate it! I love my friends, and i wouldnt want to give them up to not be steroetyped. I get almost straight A's, and im not prissy or snobby or mean! I want to show people who i really am, but i dont know how! plz plz plz help me, im getting desperate! I cant stand to look at myself anymore.
[This message has been edited by tntmod5 (edited 04-21-2003).] | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121305.61/warc/CC-MAIN-20170423031201-00362-ip-10-145-167-34.ec2.internal.warc.gz | CC-MAIN-2017-17 | 527 | 2 |
https://www.telerik.com/forums/webcomponentsicons-not-appearing | code | I'm having issue with getting WebComponentsIcons to appear in a testing environment after publishing. It works fine in dev.
I've looked everywhere to find the problem but with no success.
The font files are where they should be according to @font-face in kendo-common-bootstrap.css (/Content/web/fonts/glyphs/)
I haven't changed or overridden any of the css classes such as .k-tool-icon.
The class assignments are Kendo-provided. I did not choose any of the classes that include WebComponentsIcons.
I'm a little stumped here and unsure of what I should be looking at for a solution.
I can't provide code due to security policy, but if I could get some general pointers on what I should be looking at, that would be a huge help | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100227.61/warc/CC-MAIN-20231130130218-20231130160218-00626.warc.gz | CC-MAIN-2023-50 | 729 | 7
https://forum.playcanvas.com/t/solved-hide-show-batched-model/19466 | code | Was working on some optimizations recently and need an advice on a proper way of hiding a batched model, without removing it from the scene hierarchy. The models are part of a dynamic batch group, but hiding a model doesn’t seem to have an effect, even if I mark the group as dirty for the next frame.
You can call an internal API to do that, that’s the same being called when a model component is disabled.
So the entity will remain in the hierarchy, but any batches referenced will be removed from the batcher.
I actually haven’t tried to remove the model from the batch group. After looking at the sources, it would seem that it should have the same effect as if I would just mark the group as dirty. I then found that instead of sending a group ID, I was sending a group object to be marked dirty. That was why the changes were not taking effect. It works normally by simply marking group as dirty. Its just me not using API correctly Thanks, marking as solved. | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100489.16/warc/CC-MAIN-20231203062445-20231203092445-00871.warc.gz | CC-MAIN-2023-50 | 971 | 4 |
https://www.oreilly.com/library/view/jsf-12-components/9781847197627/ch06s06.html | code | The Seam framework, as the name suggests, is an excellent tool for integrating different technologies together. One of the technologies that Seam supports very well is Facelets. We saw how to use the Facelets
<ui:decorate> tag to markup a section of our JSF page to be "decorated" by a template defined in another page. For example, we can define a simple template that surrounds our content with an HTML
<div> element that has a particular style applied to it. The benefit of this approach is that common UI structures can be defined in one place and reused more easily.
Seam provides the
<s:decorate> tag to surround user interface fields for validation purposes in the same way. In fact, the
<s:decorate> tag ... | s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999041.59/warc/CC-MAIN-20190619204313-20190619230313-00092.warc.gz | CC-MAIN-2019-26 | 715 | 6 |
https://forum.babylonjs.com/t/gltf-loader-binary-file-is-loaded-twice/14069 | code | I recently joined the babylonjs community and so far I am pretty happy working with this framework, thanks for developing and open sourcing it
There is however one problem we are currently facing in our business app. The GLTF loader is loading the associated .bin files twice. This can be reproduced with (I believe) any gltf loading example. Here is one demo project from the babylonjs documentation which loads a skeleton head:
You can observe the duplicated loading in the chrome dev network area:
Could you please take a look at it?
Hi @wkroeker and welcome to the forum!
Let me take a look at it …
In the playground, the glTF validator is turned on by default. The two bin loads are due to that. At some point, we may be able to consolidate the code such that it doesn’t do this, but it shouldn’t be an issue in production environments.
You are supposed to be able to turn off the validation in the inspector, but it’s not working right now for some reason. I’ll look at this.
In the meantime, you can do this in the code to disable validation.
For us the production build matters the most. It’s not a big deal if the binary files are loaded twice in dev mode. We will remove the inspector dependency to fix the issue.
Turning off the validator is helpful though, thanks. The decreased loading time of our app will increase development efficiency.
Filed Turning off glTF validation doesn't work in the the playground inspector · Issue #8925 · BabylonJS/Babylon.js · GitHub for this. | s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103516990.28/warc/CC-MAIN-20220628111602-20220628141602-00517.warc.gz | CC-MAIN-2022-27 | 1,502 | 12 |
https://dudehook.wordpress.com/2011/08/09/up-in-the-cloud-its-a-bird-its-a-plane-its-a-platform/ | code | Up in the cloud… it’s a bird, it’s a plane… it’s a Platform
One of the common questions I face in explaining what I build is, well, “What exactly is it?” Some people only see the kiosk. Some people think of it as a traditional financial system. What people don’t see is the Platform, and it is built to not be seen.
When we started this, I considered building our system on a traditional platform: J2EE. Having worked with it before, I knew that it would provide a lot of the basic needs our systems required such as threading, lifecycle management, request dispatching, etc. I even built an initial rudimentary server using JBoss… but was soon struggling with the vastness of the APIs and the overhead of the application server itself. What I needed was a platform tailored for fast transactionality, easy scalability, very high availability, and a simple API. So I created one.
Now, 3+ years later, we have a mature base upon which we build our services and product offerings. Our platform is tailored for processing financial transactions in a secure environment and designed for massive scale with “five nines” availability. In addition, we are constantly improving the architecture – thinning the transactional layer for lower latency, decoupling systems for better reliability and more flexibility, simplifying and partitioning the database for easier management and improved usage… and more! And as cool stuff happens, I’ll write about it here!
Of course, the platform – cool as it is – is really just the foundation of our products. We also have developed a set of services which can manage millions of individual users in a multi-tenant environment accessing a plethora of product services. Users can identify themselves to the system using a variety of identification and authorization methods, including biometrics, and securely access their personal information and perform secure financial transactions. Furthermore, the system manages a network of end terminals which access these services, and is able to deliver, add, and remove new services to the terminals “over the air”. To this end, we’ve created terminal client software which runs on a self-service financial transaction machine (a kiosk) which interacts with the user via an intuitive touch screen-based GUI and various devices such as card readers and cash dispensers. The terminal software is extremely flexible and can take delivery of new product services (screens, logic, and data) on-the-fly without on-site interaction. In addition, the terminal software is designed to be “hardware agnostic” – i.e. the core is not tied to any particular hardware or operating system or device.
So, that, is my system in a nutshell. Future posts in this series will dive deeper into these aspects as well as some interesting techniques and solutions we used to build it all. Let me know if you are curious about anything in particular, and I’ll try to write about it! | s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701155060.45/warc/CC-MAIN-20160205193915-00323-ip-10-236-182-209.ec2.internal.warc.gz | CC-MAIN-2016-07 | 2,978 | 6 |
http://technosotnya.top/watch/r9_Xrmxcrg4/Once+Homeless+Punk+Finds+His+Place+On+The+Wrestling+Mat+TODAY.html | code | With the help of a strong support team, Jaime Miranda has gone from a self-described punk to a heavyweight on the wrestling mat. NBC’s Harry Smith reports for Sunday TODAY on the college athlete who’s making the most of his second chance after overcoming major adversity.
» Subscribe to TODAY: http://on.today.com/SubscribeToTODAY
» Watch the latest from TODAY: http://bit.ly/LatestTODAY
About: TODAY brings you the latest headlines and expert tips on money, health and parenting. We wake up every morning to give you and your family all you need to start your day. If it matters to you, it matters to us. We are in the people business. Subscribe to our channel for exclusive TODAY archival footage & our original web series.
Connect with TODAY Online!
Visit TODAY's Website: http://on.today.com/ReadTODAY
Find TODAY on Facebook: http://on.today.com/LikeTODAY
Follow TODAY on Twitter: http://on.today.com/FollowTODAY
Follow TODAY on Google+: http://on.today.com/PlusTODAY
Follow TODAY on Instagram: http://on.today.com/InstaTODAY
Follow TODAY on Pinterest: http://on.today.com/PinTODAY
Once-Homeless ‘Punk’ Finds His Place On The Wrestling Mat | TODAY
I feel like not really having my father there in my later childhood helped me get into martial arts and wrestling, because it gave me a weird kind of strength from within. Even though I was a short, out-of-shape kid at the time, I'm now slowly becoming better. | s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039745762.76/warc/CC-MAIN-20181119130208-20181119151848-00045.warc.gz | CC-MAIN-2018-47 | 1416 | 13
http://gmatclub.com/forum/a-is-divisible-by-60-b-is-not-divisible-by-21-is-a-b-2492.html?kudos=1 | code | a*b is divisible by 540
a*b = 540 s
540 is not a multiple of 48, and so a*b is not divisible by 48...
I don't agree with this. Say a*b = 8640 (= 540*16), with a = 60 and b = 144. This meets all the conditions and is divisible by 48. But if a*b = 540, it's not divisible by 48. So not sufficient.
a^2 * b is divisible by 36
a^2 * b = 36 q ... equation 1
we know a = 60 n ...equation 2
Divide equation 1 by equation 2: a*b = (36 q) / (60 n)
a*b = (3/5) * (q/n)
If q/n = 80 or any other multiple of 80, yes ... but otherwise no
Agree with this. So 2 alone is insufficient.
Now, combine 1 & 2 -
Say a*b is 8640 (a = 60 and b = 144); here all conditions are satisfied and a*b is divisible by 48.
Now, say a*b is 540 (a = 60, b = 9); here all the conditions are also satisfied, but a*b is not divisible by 48.
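A quick numeric check of the two cases above (illustrative only):

```python
# Verify the two counterexample cases discussed above:
# a divisible by 60, b not divisible by 21, a*b divisible by 540.

def meets_conditions(a: int, b: int) -> bool:
    return a % 60 == 0 and b % 21 != 0 and (a * b) % 540 == 0

for a, b in [(60, 144), (60, 9)]:
    ok = meets_conditions(a, b)
    div48 = (a * b) % 48 == 0
    print(f"a={a}, b={b}: conditions={ok}, a*b divisible by 48: {div48}")

# Prints:
# a=60, b=144: conditions=True, a*b divisible by 48: True
# a=60, b=9: conditions=True, a*b divisible by 48: False
```

Both cases satisfy every given condition yet give different answers about divisibility by 48, which is exactly why the statements are not sufficient.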
Pls. let me know if I'm missing anything. | s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207930443.64/warc/CC-MAIN-20150521113210-00183-ip-10-180-206-219.ec2.internal.warc.gz | CC-MAIN-2015-22 | 775 | 15 |
https://securityaffairs.com/115388/hacking/microsoft-msert-microsoft-exchange-attacks.html | code | Early this month, Microsoft has released emergency out-of-band security updates that address four zero-day issues (CVE-2021-26855, CVE-2021-26857, CVE-2021-26858, and CVE-2021-27065) in all supported Microsoft Exchange versions that are actively exploited in the wild.
The IT giant reported that at least one China-linked APT group, tracked as HAFNIUM, chained these vulnerabilities to access on-premises Exchange servers, compromise email accounts, and install backdoors to maintain access to victim environments.
“Microsoft has detected multiple 0-day exploits being used to attack on-premises versions of Microsoft Exchange Server in limited and targeted attacks. In the attacks observed, the threat actor used these vulnerabilities to access on-premises Exchange servers which enabled access to email accounts, and allowed installation of additional malware to facilitate long-term access to victim environments.” reads the advisory published by Microsoft. “Microsoft Threat Intelligence Center (MSTIC) attributes this campaign with high confidence to HAFNIUM, a group assessed to be state-sponsored and operating out of China, based on observed victimology, tactics and procedures.”
The attack chain starts with an untrusted connection to Exchange server port 443.
The first zero-day, tracked as CVE-2021-26855, is a server-side request forgery (SSRF) vulnerability in Exchange that could be exploited by an attacker to authenticate as the Exchange server by sending arbitrary HTTP requests.
The second flaw, tracked as CVE-2021-26857, is an insecure deserialization vulnerability that resides in the Unified Messaging service. The flaw could be exploited by an attacker with administrative permission to run code as SYSTEM on the Exchange server.
The third vulnerability, tracked as CVE-2021-26858, is a post-authentication arbitrary file write vulnerability in Exchange.
The last flaw, tracked as CVE-2021-27065, is a post-authentication arbitrary file write vulnerability in Exchange.
According to Microsoft, the Hafnium APT exploited these vulnerabilities in targeted attacks against US organizations. The group historically launched cyber espionage campaigns aimed at US-based organizations in multiple industries, including law firms and infectious disease researchers.
Microsoft immediately updated signatures for Microsoft Defender to detect web shells that were deployed by the attackers exploiting the above zero-day flaws.
Microsoft also updated the Microsoft Support Emergency Response Tool (MSERT) to detect the web shells employed in the attacks against the Exchange servers and remove them.
The MSERT tool is a self-contained executable file that scans a computer for malware and reports its findings, it is also able to remove detected malware.
For customers that are not able to quickly apply security updates released by Microsoft to fix the zero-day vulnerabilities, the IT giant provided alternative mitigation techniques.
“Interim mitigations if unable to patch Exchange Server 2013, 2016, and 2019: Implement an IIS Re-Write Rule and disable Unified Messaging (UM), Exchange Control Panel (ECP) VDir, and Offline Address Book (OAB) VDir Services.” reads the post published by Microsoft.
Administrators could use MSERT to make a full scan of the install or they can perform a ‘Customized scan’ of the following paths where malicious files from the threat actor have been observed:
“These remediation steps are effective against known attack patterns but are not guaranteed as complete mitigation for all possible exploitation of these vulnerabilities. Microsoft Defender will continue to monitor and provide the latest security updates.” concludes Microsoft.
More information on how to use this script can be found in the CERT-LV project’s GitHub repository.
If you want to receive the weekly Security Affairs Newsletter for free subscribe here.
(SecurityAffairs – hacking, Microsoft Exchange) | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474377.60/warc/CC-MAIN-20240223085439-20240223115439-00753.warc.gz | CC-MAIN-2024-10 | 3,941 | 19 |
https://world.optimizely.com/forum/developer-forum/CMS/Thread-Container/2020/11/getreferencestocontentitems-throws-exception-after-upgrade-to-latest-cms-11-30-1-and-commerce-13-26-0/ | code | Looks like a new bug introduced in CMS Core 11.20. I will file a bug to CMS Core team (and will move this thread to CMS Forum). Thanks for bringing this into our attention
I cannot find this bug in the buglist but maybe it is not supposed to be seen there?
Yes, the bug needs to be "triaged" by CMS Core team, that is when they decide to fix it (and when), and if it should be public or not (in this case it should, but the description should be reviewed and cleared before making it public)
Any news about this one? We cannot upgrade untils this bug is fixed.
It is in progress but that's it. I pinged the developer who is assigned to this bug; we'll see.
Ok, can I find it in the bug list yet?
Not yet, they need to make it public first. Once you can follow it CMS-17307
I just upgraded to CMS 11.30.1 and Commerce 13.26.0 and now I get this exception when calling ContentRepository.GetReferencesToContent(contentReference, false)
Any idea why? | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224657735.85/warc/CC-MAIN-20230610164417-20230610194417-00273.warc.gz | CC-MAIN-2023-23 | 947 | 9 |
http://www.rayui.com/archive/291 | code | I’ve been using Backbone JS daily as part of my day job for about three months. I’ve learned a lot in that time and have found it to be very effective at what it does. Whilst I am now much more comfortable with the framework, I had some false starts and I found the documentation slightly intimidating for a noob! Whilst I appreciate that this is in part due to the philosophy behind the software, I would have loved a structured tutorial with an accompanying “Hello World” example.
The problem with “Hello World” is that being an application that outputs only one string, it’s not really suited to a demonstration of MVC techniques which require slightly more dynamic data sources. Instead, I decided to write a simple calculator web server that multiplies two operands supplied by the client and returns the result. It demonstrates the basic principles of MVC on the client using both JSON and traditional form submits. It also has a script to generate its own documentation web pages from the source code, which are served up as part of the tutorial application.
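The core of that calculator endpoint is tiny; here is a sketch of the multiply logic in Python rather than the tutorial's actual Node/Express code (the parameter names are hypothetical):

```python
# Illustrative sketch of the tutorial's multiply endpoint logic (the real
# server is Node/Express; the parameter names here are hypothetical).

def multiply_handler(params: dict) -> dict:
    """Parse two operands from a request-like dict and return a JSON-able result."""
    a = float(params["operand1"])
    b = float(params["operand2"])
    return {"operand1": a, "operand2": b, "result": a * b}

print(multiply_handler({"operand1": "6", "operand2": "7"}))
# → {'operand1': 6.0, 'operand2': 7.0, 'result': 42.0}
```

The same handler can back both the JSON route and the traditional form submit, since each just delivers the two operands in a different envelope.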
The frontend is written in Backbone, jQuery and HTML5 and is served by Node JS using Express, Backbone, Jade and Browserify. All the documentation is generated with docco and it comes ready for deployment to Heroku. | s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997877644.62/warc/CC-MAIN-20140722025757-00227-ip-10-33-131-23.ec2.internal.warc.gz | CC-MAIN-2014-23 | 1,295 | 3 |
https://en.algorithmica.org/hpc/profiling/ | code | Staring at the source code or its assembly is a popular, but not the most effective way of finding performance issues. When the performance doesn’t meet your expectations, you can identify the root cause much faster using one of the special program analysis tools collectively called profilers.
There are many different types of profilers. I like to think about them by analogy with how physicists and other natural scientists approach studying small things, picking the right tool depending on the required level of precision:
- When objects are on a micrometer scale, they use optical microscopes.
- When objects are on a nanometer scale, and light no longer interacts with them, they use electron microscopes.
- When objects are smaller than that (e.g., the insides of an atom), they resort to theories and assumptions about how things work (and test these assumptions using intricate and indirect experiments).
Similarly, there are three main profiling techniques, each operating by its own principles, having distinct areas of applicability, and allowing for different levels of precision:
- Instrumentation lets you time the program as a whole or by parts and count specific events you are interested in.
- Statistical profiling lets you go down to the assembly level and track various hardware events such as branch mispredictions or cache misses, which are critical for performance.
- Program simulation lets you go down to the individual cycle level and look into what is happening inside the CPU on each cycle when it is executing a small assembly snippet.
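The first technique, instrumentation, can be as simple as wrapping a region of code with a timer and a counter; a minimal Python sketch:

```python
# Minimal instrumentation: time a region of code and count an event of interest.
import time

def count_multiples(limit: int, k: int) -> int:
    count = 0
    for i in range(limit):
        if i % k == 0:  # the "event" we instrument
            count += 1
    return count

start = time.perf_counter()          # high-resolution wall-clock timer
events = count_multiples(1_000_000, 7)
elapsed = time.perf_counter() - start

print(f"{events} events in {elapsed:.4f}s")
```

Statistical profiling and program simulation need tool support (e.g., hardware counters or a CPU model), but the timing-and-counting pattern above is the essence of instrumentation.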
Practical algorithm design can very much be considered an empirical field too. We largely rely on the same experimental methods, although this is not because we don't know some of the fundamental secrets of nature but mostly because modern computers are just too complex to analyze — besides, it is also true that we regular software engineers can't know some of the details because of IP protection by hardware companies (in fact, considering that the most accurate x86 instruction tables are reverse-engineered, there is reason to believe that Intel doesn't know these details themselves).
In this chapter, we will study these three key profiling methods, as well as some time-tested practices for managing computational experiments involving performance evaluation. | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764501407.6/warc/CC-MAIN-20230209045525-20230209075525-00633.warc.gz | CC-MAIN-2023-06 | 2,351 | 11 |
http://technoflak.blogspot.com/2006/04/os-religious-wars.html | code | Microsoft's Linux expert has launched a company sanctioned blog in an outreach attempt to the Open Source community but all it seems to attract are irate anti-Microsoft posters.
Hilf says the unmoderated blog called Port 25 at http://port25.technet.com/archive/2006/03/28/8.aspx is intended to promote open communications between his interoperability team and the Open Source community. "As someone who has many hours at the command line, debugging things such as protocol states (LISTENING?) and getting software and servers working to provide some type of service, the concept of server ports and being open is well engrained in how I and the team here in our lab think about communications – so we thought it was applicable to how we want to start the dialogue around this subject. I guess it just took a Slashdot interview and a couple thousand emails (and consistent nudging from friends) to really drive the point home that having a participative discussion around OSS and Microsoft technologies is a good thing, not –as many people may believe- something we want to ‘hide’ or shy away from."
Despite Hilf's stated good intentions, however, a quick perusal of the blog, which commenced on 28 March, shows that the posts are almost exclusively anti-Microsoft rants by unimpressed users.
Must be very confusing for some when Microsoft goes off script. | s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947931.59/warc/CC-MAIN-20180425154752-20180425174752-00117.warc.gz | CC-MAIN-2018-17 | 1,363 | 4 |
https://clairebogdanos.net/2018/08/08/ | code | I have browsed through many a page of time
And wandered lost in the vast world of rhyme
I’ve traveled alone or in company
From ocean to ocean and sea to sea
And all that I’ve learned and all that I know
God walks beside me wherever I go ! | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224646144.69/warc/CC-MAIN-20230530194919-20230530224919-00793.warc.gz | CC-MAIN-2023-23 | 242 | 6 |
https://learn.microsoft.com/en-us/answers/questions/289362/microsoft-no-reconoce-mi-licencia-que-debo-hacer | code | Hi @helen Gálvez ,
This is an English language forum, I suggest you try to post question in English.
Based on your description, I transform the language to English. Did you get any error message after enter the product key? Try to provide the detail error message here.
I suggest you refer to this support article and check if this is helpful:
Please be a bit more precise in explaining your problem, or upload a screenshot, so that I can provide more accurate solutions to this problem. I'm glad to help and will follow up on your reply.
If the response is helpful, please click "Accept Answer" and upvote it.
Note: Please follow the steps in our documentation to enable e-mail notifications if you want to receive the related email notification for this thread. | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224654031.92/warc/CC-MAIN-20230608003500-20230608033500-00276.warc.gz | CC-MAIN-2023-23 | 757 | 7 |
http://beano.wikia.com/wiki/Edd_Case | code | |Relatives||Victor (dad), Edwina (mum)|
|First appearance||The Beezer, 1962|
Edd Case is the deuteragonist of the Numskulls comic. He is the boy whom the Numskulls operate. Each of his Numskulls represents one of the five senses; Brainy (nerves), Snitch (smell), Cruncher (taste), Radar (hearing) and Blinky (sight).
Edd normally wears a red jersey with black trousers and has short hair. | s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424287.86/warc/CC-MAIN-20170723062646-20170723082646-00462.warc.gz | CC-MAIN-2017-30 | 388 | 4 |
https://www.breitbart.com/tech/2016/06/02/elon-musk-one-in-billions-chance-we-arent-living-in-a-matrix-style-computer-simulation/ | code | Tesla and SpaceX CEO Elon Musk claimed that there is only a “one in billions” chance we are not living in a Matrix-style computer simulation at Recode’s annual Code Conference this week.
“The strongest argument for us being in a simulation probably is the following: Forty years ago we had Pong. Like two rectangles and a dot. That was what games were,” said Musk.
Now, 40 years later we have photorealistic, 3D simulations with millions of people playing simultaneously, and it’s getting better every year. Soon we’ll have virtual reality, augmented reality.
If you assume any rate of improvement at all, then the games will become indistinguishable from reality, even if that rate of advancement drops by a thousand from what it is now. Then you just say, okay, let’s imagine it’s 10,000 years in the future, which is nothing on the evolutionary scale.
So given that we’re clearly on a trajectory to have games that are indistinguishable from reality, and those games could be played on any set-top box or on a PC or whatever, and there would probably be billions of such computers or set-top boxes, it would seem to follow that the odds we’re in base reality is one in billions.
Tell me what’s wrong with that argument…” asked Musk.
“So is the answer yes?” asked a member of the audience in reference to reality being an artificial simulation.
“The argument is probably… Is there a flaw in that argument?” he replied.
Musk continued to argue that it would be a good thing if we are living in a simulation, claiming that if we weren’t, a calamitous event will wipe out humanity and stop it from advancing.
“Arguably we should hope that that’s true, because otherwise if civilization stops advancing, that may be due to some calamitous event that erases civilization,” he argued. “So maybe we should be hopeful this is a simulation, because otherwise either we are going to create simulations indistinguishable from reality or civilization will cease to exist. Those are the two options”.
Code Conference was sponsored by regressive-left news outlet Vox Media, and the conference included other progressive Silicon Valley keynote speakers such as Facebook COO Sheryl Sandberg, TMZ founder Harvey Levin, and Twitter CEO Jack Dorsey, who appeared on stage with his friend, Black Lives Matter leader DeRay Mckesson.
https://forums.ni.com/t5/PXI/Missing-devices-in-DAQmx-dropdown/td-p/4101371?profile.language=en | code | I am trying to use the external SMB trigger on my PXI controller as illustrated in:
But I don't see my controller listed in the dropdown menu for available devices.
I checked the support for both my chassis as well as the controller with PXI Platform Services version and they show as compatible. It shows up fine in NI Max so I don't know what the problem is.
Is there a way to "refresh" the list of devices that LabView thinks are available?
Solved! Go to Solution.
I tried this but I get an error: "No device by the given name was found"
I can't update the problem description to reflect this
Can you send in the code you're using or a screenshot of the block diagram where the connection is being made? Also, can you send in a MAX Tech report? I'd like to look at how the connection is being made in LabVIEW as well as the naming and hardware of the devices being used.
I am attaching the MAX tech report as well as a screenshot of the way the connection in made in the VI in question. The source code is also included in the zip file for the MAX report.
I was able to connect with NI support and got a resolution to my issue. Somehow the MAX configuration database got out of whack. I was able to see all the devices in the system after a configuration reset. | s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038073437.35/warc/CC-MAIN-20210413152520-20210413182520-00387.warc.gz | CC-MAIN-2021-17 | 1,263 | 10 |
http://stackoverflow.com/users/1199835/artur-shamsutdinov | code | Apparently, this user prefers to keep an air of mystery about them.
3 Emulate Binding.FallbackValue in WinRT jan 17 '13
2 Method or Delegate or Func apr 10
1 Java Binding: The type `…' does not exist in the namespace `…'. Are you missing an assembly? apr 10
1 deserializing dynamic json for windows phone nov 22 '12 | s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430458866468.44/warc/CC-MAIN-20150501054106-00083-ip-10-235-10-82.ec2.internal.warc.gz | CC-MAIN-2015-18 | 319 | 5 |
https://help.adobe.com/en_US/livecycle/11.0/Services/WS92d06802c76abadb-2f74688c12dbeb3d85f-7ff2.2.html | code | Output IVS is a sample application for testing the Output
service. Using this sample application, you can generate documents
and test form designs and data sets. You can also print documents
by using laser and label printers.
Administrators can use Configuration Manager to deploy Output
IVS. They can also manually deploy it. (See the installation document
specific to your application server, from the LiveCycle Documentation page)
To open the Output IVS application, navigate to http://[server_name:port_number]/OutputIVS.
To change settings that Output IVS uses, click the Preferences
link on the LiveCycle Output banner. Here are some of the settings
you can specify from the Preferences window:
Locations that Output IVS obtains form, data, XDC, and
companion files from. The locations can be URLs, a repository, or
an absolute reference from a folder on the computer that hosts LiveCycle.
Repository locations can be specified as either repository:/ or repository:///.
Common format and options, such as whether Output IVS creates
a single output stream or includes metadata.
Print options, such as duplex and number of copies.
To specify characteristics about your job and to submit your
job, click the Test Output link on the LiveCycle Output banner.
Here are some of the settings you can specify from the Test Output
Output format, such as PDF, PDFA-1/a and ZPL 300 DPI
File selection specifies the form, data, XDC, and companion
files to use in the test. Use the companion file for these settings:
Pattern-matching rules for mapping data elements to different
Batch settings for initializing lazy loading
Many of the other settings you can specify by using the Preferences
Output location information such as server file or printer
Issue request sends the request to the Output service
To view or delete files used by Output IVS, click the Maintenance
link on the LiveCycle Output banner. You can delete only the files
that you added to Output IVS. You cannot delete files that are installed
with Output IVS. The Maintenance window lists the form, data, XDC,
and companion files in the locations that are specified using the
Preferences window. You can also use the Browse buttons to upload
files from your own computer
To see the complete Help for Output IVS, click the Help link
on the LiveCycle Output banner. | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224647525.11/warc/CC-MAIN-20230601010402-20230601040402-00765.warc.gz | CC-MAIN-2023-23 | 2,323 | 38 |
https://elon-vs-jeff.com/its-warning-elon-musk-in-a-big-troubles-forced-to-stop-importing-cars-into-india/ | code | TeslaFans #teslanews #teslamotorfans #gigaberlin Please Subcribe my channel: http://bit.ly/3clkTm6 ===== IT’S Warning!
The War OVER! Joe Biden CANCELS PLAN, Finally Accept Elon Musk's Tesla.
IT'S Warning! Elon Musk's Ready to KICK OFF Brain chip HUMAN trials.
http://www.fialo4ka.ru/dating-advice-for-the-frist-time-165.html | code | Dating advice for the frist time is it true that liam and danielle still dating
So, you met a cool person who you’re about to go out with. That said, there a few fairly concrete dos and don’ts to keep in mind when hanging out with someone totally new—just remember that it’s all about making a solid first impression to land a second date with someone you really like.
It’ll ensure that you not only have a good time on a first date, but also get a second date, too.
There’s nothing more rude than trying to have a conversation with a person who’s constantly stating at a screen.
I think a lot of guys when left to their own devices default to those kinds of generic dates because it’s simple and it’s not off-putting.
Suggest a hobby you’re into, like hiking or something.
Nowadays, when most first dates come from an algorithm match, meeting for the first time can feel a little awkward.
Especially when you have no clue what the person across the table is thinking.
If your date insists, offer to split the bill, or at least leave the tip.
However, if you offer to pay or split, be prepared to pay or split.
As cliché as it sounds, being yourself is probably best first date advice.
So, how do you deal with the anxiety that inevitably comes with first-date territory?
In those instances, they were afraid of hurting my feelings, but they wouldn’t have.
I felt really awkward knowing that they basically hated the date I had planned.” —"If you’re really into him, and you’re pretty sure he’s into you too, text him later that night or the next day to let him know you had a good time.
It was a really great ice breaker while we grabbed drinks at the bar, and it’s an easy game to play and screw around with.
https://dribbble.com/getboyce/projects/356214-Twitter-App | code | May 02, 2016
This is the Compose view and is a WIP. Inside I've attached two ideas I'm working through for quickly accessing favorites/recents for @-ing someone. Any feedback or ideas are welcome and appreciated.
March 02, 2016
This is the Favorites Tab and is a WIP. Inside there are three views, which you can see in attachments. Again, any feedback is welcome.
March 01, 2016
Updated some of the icons, sizing and spacing. Still iterating and a WIP. Any feedback is welcome! | s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583656530.8/warc/CC-MAIN-20190115225438-20190116011438-00005.warc.gz | CC-MAIN-2019-04 | 477 | 6 |
https://fallenlondon.wiki/wiki/That_sounds_profitable... | code | That sounds profitable...
From Fallen London Wiki
ID: 235920, 235941
This page contains details about Fallen London Actions.
Time it right, and they might not have locked away the silverware, or you might intercept a delivery of something tasty.
You'll need to case the place first
You make yourself a fixture on the street corner.
- You have moved to a new area: Area-Diving in Spite
- Begun a new venture! Villainy: Area-Diving (Sets Villainy: Area-Diving to 1)
- Casing... is increasing… (+1 CP)
- An occurrence! Your 'Discovered: Area-Diving' Quality is now 1! (hidden)
Redirects to: Area-diving: Casing the Target (only on subsequent dives) | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506480.7/warc/CC-MAIN-20230923094750-20230923124750-00075.warc.gz | CC-MAIN-2023-40 | 647 | 12 |
https://blog.berczuk.com/2019/08/book-review-7-rules-for-positive.html | code | Having known Esther Derby from conferences, her writings, and having participated in a problem solving leadership workshop she led, I knew that she was an expert in helping organizations work better. I was thus looking forward to this book. It exceeded my expectations. “7 Rules” is a concise, easy to read book full of useful information . In addition to the “Rules” you will learn about a variety of ways to model organizational dynamics so that you can identify patterns that inhibit change.
This is a very actionable book. Chapters wrap up with things you can do and with a summary of key points. This book can be as much a daily reference as a tool for learning to be a better change agent.
While 7 Rules is about organizational/corporate change, the concepts in the book can also help you navigate tricky issues in community and family life. For example, the relationship between congruence and empathy underlies being an effective change agent, and the book can help you understand these concepts better.
The lessons in the book will help you understand how to make changes at any level, from small things like encouraging unit testing to larger things like a better dev process.
The book provides useful advice for managers, scrum masters and those leading sprint and project retrospectives. Since change can happen at all levels anyone who has found challenges at work that they want to improve should consider this book.
Thoughts about agile software development, software configuration management, and the intersection between them.
Saturday, August 17, 2019
Book Review: 7 Rules for Positive, Productive Change
http://archive.battlecodeforum.org/2016/t/bc-testing-doesnt-work/180 | code | In the Specs it says:
Your robot can read system properties whose names begin with "bc.testing.". You can set a property by adding a line to bc.conf like this:
You can check the value of the property like this:
String strategy = System.getProperty("bc.testing.team-a-strategy");
However this doesn't seem to work in practice. Already reported this in the IRC and @dinosaurs saw but thought I'd put it here too. | s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948627628.96/warc/CC-MAIN-20171218215655-20171219001655-00061.warc.gz | CC-MAIN-2017-51 | 410 | 5 |
https://community.plotly.com/t/how-to-select-color-of-the-selection-tick-boxes/20160 | code | I am using a DataTable setup with ‘row_selectable’ =‘multi’ (which works great!).
As soon as I change the background of my web page I can see that the column containing the selection tickboxes remain WHITE while everything else turns to the new color.
How to setup the background color for this column (and possibly the color of the tickbox itself)?
Has this column a name so that I can use conditional formatting to do that?
I saw in other forum that there is a mention of a master CSS for the library.
If I change that I should be able to achieve what I want…the point is that when I did it but I found hard to find whch element I should use.
Can you help with that?
Many thanks for this charming product | s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710953.78/warc/CC-MAIN-20221204004054-20221204034054-00240.warc.gz | CC-MAIN-2022-49 | 716 | 8 |
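One avenue worth trying (a sketch, not a verified fix): the DataTable `css` property lets you inject raw CSS rules, so if you find the class of the selection-column cells with your browser's inspector, you can restyle them without editing the library's master stylesheet. The selector name below is an assumption and must be checked against the HTML your Dash version actually renders:

```css
/* Hypothetical selector: replace "dash-select-cell" with whatever class
   the inspector shows on the checkbox column's <td> elements. */
td.dash-select-cell {
    background-color: #1e1e1e;  /* match your page background */
}
```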
https://www.techintroduce.com/technology/decimal-code/ | code | Each number of etch system is represented by 4-bit binary digital, each with a fixed weight. Therefore, it is called it as a weight code or weighted code. 8421 The weight of the code from the high to ten:
Any decimal number is to be written into an 8421 code representation, as long as the number of the decimal number is converted into a corresponding 8421 yard, such as
In turn, the decimal number of 8421 yards indicated, or conveniently convert into ordinary decimal number, such as
Compressed BCD code
Compression BCD code (or combined BCD code), which features 4-bit binary number
To represent a decimal number, that is, one byte represents two decimal numbers. If the compression BCD code of the decimal number 57 is
01010L1Lb; the binary number 10001001, the compression BCD code is used as a decimal number 89.
Non-compressed BCD code
Non-compressed BCD code (or non-combined BCD code) indication is characterized by using an 8-bit binary number to represent a decimal number, that is, one byte representation 1 digit decimal number, and only 4 digits of each byte is 0 ~ 9, and 4 digits are set to 0. If the decimal number 89, the non-compressed BCD code is expressed as the binary number is 00001000 00001001.
The conversion between the BCD code and the decimal number is easy to implement, such as the compressed BCD code is 1001 0101 0011.0010 0111, its decimal value is 953.27.
BCD code can intuitively express a decimal number, and it is easy to achieve mutual conversion with the ASCII code, which is easy to input, and output. | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500365.52/warc/CC-MAIN-20230206212647-20230207002647-00109.warc.gz | CC-MAIN-2023-06 | 1,543 | 11 |
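The packed-BCD rule is easy to mirror in code. Here is a small Python sketch (not from the original article) that packs a decimal integer two digits per byte and unpacks it again:

```python
def to_packed_bcd(n: int) -> bytes:
    """Encode a non-negative integer as packed BCD: two decimal digits per byte."""
    digits = str(n)
    if len(digits) % 2:               # pad to an even number of digits
        digits = "0" + digits
    return bytes((int(hi) << 4) | int(lo)
                 for hi, lo in zip(digits[::2], digits[1::2]))

def from_packed_bcd(data: bytes) -> int:
    """Decode packed BCD back into an ordinary integer."""
    return int("".join(f"{b >> 4}{b & 0xF}" for b in data))

print(bin(to_packed_bcd(57)[0]))             # 0b1010111, i.e. 01010111B
print(from_packed_bcd(bytes([0b10001001])))  # 89
```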
https://academia.meta.stackexchange.com/tags/design/hot | code | I love it! The images, the fonts, the color palette. I particularly like the hint towards books in the site logo.
My one suggestion has to do with the medals. Not sure how the open triangle shape fits with the theme, and the thin lines make it blend in. I agree partly with what was suggested above... I think the medals should be mortarboards, similar to ...
I don't think there is anything to worry about:
The logos are not exactly identical, there is no copyright or attribution license issue.
It seems pretty clear from Springer's website that this drawing is not a registered trademark.
The image is rather generic and very similar instances can be found on stock design websites:
University image: status-completed
The line weight has been adjusted. When shrinking the image, the weights were too fine and made the image look fuzzy or undefined. We'll get a better weight for them.
Empty space: status-bydesign
It's important to remember that, while there's white space on that page, elements live in those empty ...
Thanks for the design… and soliciting feedback: it's not easy, and you'll sure get lot of helpful yet contradictory advice from all around…
So, while trying to avoid the nefarious “design by committee” effect, here's some of my “gut reaction” to the look:
Very positive first reaction: elegant, clean design… calming effect
My eyes first went to the ...
These are now blue instead of pink.
What's with the pink OP highlighting?
(from this answer.)
On the plus side, the jarringness of the pink has brought my attention that the OP signature gets highlighted in question and answer bylines just like it is in comments, even in the old styling, so there's that.
But why is it such an ...
By default, here are the colors of links from Academia, pulled from the CSS:
The hover color is the red of the leaning book in the logo, which is actually quite noticeable.
Instead of increasing contrast, increasing the saturation by increasing the blue, similar to the cyan that is used in meta, should be ...
The line weights have been adjusted. When shrinking the image, the weights were too fine and made the image look fuzzy or undefined. We'll get a better weight for them. Clock face also has hands now!
The line strength for the buildings in the top right is too thick given the shrunk buildings. I acknowledge that it has to be at least one ...
I think this is a relatively simple fix—it's the extra padding around the question. However, I also think that the fix shouldn't be to go something quite so dense as Stack Overflow. (Currently, I can fit about 7 questions from Academia.SE on my screen compared to about 10 SO questions in the same space. However, SO seems too crowded in comparison.
I think ...
A few comments on the design so far:
The "medals" icons should perhaps be closer to mortarboards, with gold, silver, and bronze tassels.
I would personally prefer a somewhat more assertive font for the main body text.
Because of the abovementioned mortarboards, standard black is also an appropriate "academic-themed" color—as are strong "standard" colors (...
Great work. A few small suggestions:
Reduce the vertical space taken up by each question on question listing pages. It seems that currently (24th April 2014) only about half the number of questions can fit on the screen compared to beta sites or Stack Overflow. Perhaps the vertical white space could be dramatically reduced. This is important when you want to quickly ...
The reason is that there are two accounts with the same display name and they both commented under your question. It might be the same user who created two accounts (maybe they lost the credentials of the first account): it's not forbidden.
I added a comment suggesting the possibility to merge the accounts, if they wish.
Thanks for the updates!
I have a singular concern.
I don't know if it's just me, but it appears that question titles now have serif font that is somewhat harder to read, because the characters seem to be wider than those of other StackExchange sites. It's also slightly disorienting as the rest of the page is in a sans serif font.
Probably need other ...
For me the font for regular texts is Open Sans, questions in the question list are in Museo Slab – identified with Opera’s and Firefox’s Inspect Element tool. I strongly suspect (but cannot verify it right now) that this is loaded as a web font¹, so it should be the same for you, as long as your browser supports web fonts. PS: Looking at your screenshot, it ...
When I look at the two screenshots side by side, I prefer the academia spacing. All in all, it is remarkable how much more pleasing to my eye the academia format is than the SO format: some really nice work was done in the design of our site.
It looks to me that the height of each question could be slightly reduced, perhaps to the point of being able to ...
This header doesn't exactly look great:
It seems like the title should only go as far as the divider between the question and the right bar. Right now, it doesn't seem like it's part of the same element. And, with two-line titles like this one, there's a lot of awkward empty space.
I just noticed a layout bug (?) on FF 26.0 (running Ubuntu):
The green box indicating an accepted answer is badly positioned; the text should be vertically centered:
I notice that your screenshots above look more like this than the bad one above, so maybe it's a platform-specific thing?
Very small point. I think of the clock tower as a quintessential icon of university campuses and in the design it captures my vision, which is a good thing. When the sky line is presented in a circular fashion the position of the clock tower at 2:00 invokes thoughts of the Mars gender symbol for males. I would suggest rotating the skyline so that the clock ...
An alternative to reinforcing the visual divider (making it bold would do wonders) would be to have the questions and answers alternate between white and perhaps the beige in the logo/header bar. Optionally the question itself could have a 3rd distinct colour. This makes delineation (e.g. whether you are reading a very long question/comment string or the ...
My suspicion is that this is not an Academia issue, but a browser or Yosemite issue. If I examine the same page in Safari or Google, I get a very different appearance than when I try to view the site in Firefox (for instance). I personally find the text in Firefox easier to read, as it is "heavier" and stands out better against the background.
Accessibility issue: I'm wearing glasses but have generally very good eye-sight with them. However, since recently I'm finding it hard to read the titles in the list of questions on the front page. I can only imagine how bad it is for people with poor eye-sight. The blue seems to be an even lighter color than on the Stack Overflow front page.
You should ... | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363312.79/warc/CC-MAIN-20211206194128-20211206224128-00369.warc.gz | CC-MAIN-2021-49 | 6,878 | 51 |
https://unix.stackexchange.com/questions/486261/how-do-i-compile-linux-kernel-4-19-on-centos | code | I am trying to compile the linux kernel into a binary (Trying to make a linux distro) on centos. I need a step by step walk through on how to do this.
closed as off-topic by Stephen Harris, Rui F Ribeiro, Jeff Schaller, G-Man, RalfFriedl Dec 6 at 6:45
This question appears to be off-topic. The users who voted to close gave this specific reason:
- "Requests for learning materials (tutorials, how-tos etc.) are off topic. The only exception is questions about where to find official documentation (e.g. POSIX specifications). See the Help Center and our Community Meta for more information." – Stephen Harris, Rui F Ribeiro, Jeff Schaller, G-Man, RalfFriedl | s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825916.52/warc/CC-MAIN-20181214140721-20181214162221-00013.warc.gz | CC-MAIN-2018-51 | 660 | 4 |
https://gitat.me/2014/05/15/Brute_Forcing_Contact_Info/ | code | Brute Forcing Contact Info05.15.14 · python
Today’s open source project has a high potential for abuse, but I hope you’ll take the high road. Months back I wrote about validating email addresses by pinging SMTP servers. It was a fun trick, though not very practical.
```python
from rapportive import rapportive
print rapportive.request("[email protected]")

# Name: Neal Shyam
# Account Manager ADstruc
# Co-Founder The AudioShocker Podcast
# Editor Git @ Me
# Twitter http://twitter.com/nealrs
# LinkedIn http://www.linkedin.com/in/nealrs
# GitHub https://github.com/nealrs
# Google+ https://plus.google.com/106729159255897575431/posts
```
Why is this a big deal? Because, coupled with commonly used email address patterns, it’s trivial to brute force a LinkedIn member’s contact & employment information. So trivial in fact, that I wrote a script to do it for you. Please use it responsibly. | s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371637684.76/warc/CC-MAIN-20200406133533-20200406164033-00513.warc.gz | CC-MAIN-2020-16 | 903 | 6 |
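The pattern half of that pipeline needs no external service at all. A minimal sketch (the pattern list is illustrative, not exhaustive, and is not the script from the post) of generating common corporate address guesses for a name:

```python
def candidate_emails(first: str, last: str, domain: str) -> list[str]:
    """Return common corporate email patterns for a person (illustrative list)."""
    f, l = first.lower(), last.lower()
    return [
        f"{f}@{domain}",        # jane@example.com
        f"{f}.{l}@{domain}",    # jane.doe@example.com
        f"{f}{l}@{domain}",     # janedoe@example.com
        f"{f[0]}{l}@{domain}",  # jdoe@example.com
        f"{f}_{l}@{domain}",    # jane_doe@example.com
    ]

for guess in candidate_emails("Jane", "Doe", "example.com"):
    print(guess)
```

Each candidate could then be fed to a lookup service, as the post does with Rapportive, or checked with the SMTP-ping trick mentioned earlier.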
https://blog.watchtowerstudios.co.uk/blog/2020-09-04-smol-the-game-that-fits-on-a-qr-code/ | code | Introduction and Credits
First off this idea was inspired/stolen from a YouTube video by MattCK where he tries to make not just a simple HTML5 game on a QR code (which is far easier and what I have done) but a whole executable, it’s impressive, insightful and well worth checking out. Below is my game! Later in the post I will show you how to play it.
Data is an abstraction. It’s something that is taught and understood by anyone who’s learnt about Computer Science, but as a concept I don’t think we give it enough consideration, or maybe I’m alone in thinking how much we took it for granted. QR Codes are data, data can hold information that can be interpreted by a computer to make programs, including games! But lets break down what that actually means.
When I was young my dad showed me how to make a computer using a cereal box. I thought it was the most ridiculous thing I’d ever heard, but it totally worked! First you’d make data in the format the “computer” could compute, in this case it was pieces of cardboard with some that hang onto pegs and others trimmed so that the peg would not hold it. The computer itself was a box with labelled holes that you could stick a peg through. The pegs would hold the cards by certain holes, a simple sorting machine, but the simple machine could be used like a computer by categorising information. Pulling out a peg for “black” and “bird” might make a card labelled “magpie” fall while “panther” is still held by “mammal”. This is a very basic example but is a kind of computer and data for it. I really recommend this for kids as it demystifies how “magical” computers work, at least in one sense. Try it here.
Essentially you have the data and the computer that can interpret it. An Xbox game doesn’t work on a PS3, and a card for a magpie isn’t understood by the PC. So storing a game on a QR code isn’t surprising or really even impressive (except for limitations with size), as the computer is the one doing most of the work; the data is just instructions spoken in the language of the computer. When MattCK calls one method “cheating” he neglects to mention that really the approach isn’t different; I suspect he knows this but was looking for additional challenge. In his first try the data is HTML + JS encoded to text on a QR code; in the second it is binary for an executable for Windows. Both are data, just being interpreted by slightly different computers (or platforms, engines, etc.).
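To make the "game as data" point concrete: a QR code caps how much data you can carry, so the whole trick is fitting a playable payload under that cap. A version 40 QR code with low (L) error correction holds at most 2,953 bytes in byte mode. The tiny HTML "game" below is hypothetical, purely to show the budget arithmetic:

```python
# Version 40-L QR code, byte mode: 2953 bytes maximum.
MAX_QR_BYTES = 2953

# Hypothetical minimal HTML5 payload (a dot sliding across a canvas).
html_game = (
    "<canvas id=c width=64 height=64></canvas>"
    "<script>var x=0,t=c.getContext('2d');"
    "setInterval(function(){t.clearRect(0,0,64,64);"
    "t.fillRect(x=(x+1)%64,30,4,4)},30)</script>"
)

payload = html_game.encode("utf-8")
print(f"{len(payload)} of {MAX_QR_BYTES} bytes used")
```

Anything under the limit can be encoded to a scannable code; a bigger game simply will not fit in a single symbol.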
How to Play “Smol”#
Scan the above QR Code.
Paste the large amount of text into a blank html file, or save using a utility straight to an html file.
Open the file in any modern browser.
Alternatively, see the source here: https://github.com/Pepperized/smol | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474744.31/warc/CC-MAIN-20240228175828-20240228205828-00339.warc.gz | CC-MAIN-2024-10 | 2,764 | 10 |
https://www.dinduks.com/a-fix-for-select2-not-displaying-results/ | code | When using Select2, filling a select box, and not seeing any result, chances are that you’re querying a different server, which requires CORS, which in turn isn’t supported by Internet Explorer (9.0 and below).
Using the jQuery-ajaxTransport-XDomainRequest plugin fixes this.
Explanations are in the README.
(This problem is of course not specific to Select2, but given the layers of code/libs between the developer and the creation of the XHR object inside the ajax() function, one might forget this detail like I did.)
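For reference, applying the fix amounts to loading the transport shim after jQuery and before any cross-domain $.ajax or Select2 calls run; the shim registers itself automatically and only kicks in on IE 8/9. The file paths below are placeholders for wherever you host the scripts:

```html
<!-- Order matters: jQuery first, then the XDomainRequest transport shim,
     then Select2. Paths are illustrative. -->
<script src="js/jquery.min.js"></script>
<script src="js/jquery.xdomainrequest.min.js"></script>
<script src="js/select2.min.js"></script>
```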
https://lhishop.com/products/lhi-pro-racing-f3-flight-controller-board-cleaflight-6dof-standard-for-mini-fpv-racing-qav250-zmr250-qav280-qav180-qav210-mini-quadcopter-better-than-naze32-flip32-rv5-cc3d | code | Pro Racing F3 Flight Controller
Original price $23.99
Current price $20.99
- Supports OneShot ESC and more than 8 RC channels. Supports more flight controllers, including CC3D, CJMCU and Sparky.
- Additional PID controllers that uses floating point arithmetic operations. (now has 3 built-in PID Controllers)
- Many more features such as RGB LED strip support, Autotune, In-flight PID tuning with your radio, blackbox flight data logging etc.
- Better coding practices and introducing tests, easier to maintain and for future development. (Dominic has software development background)
- If you have any problems during use, please contact us; we will take responsibility for it and provide you with the best after-sales service!
https://askubuntu.com/questions/276723/getting-powernaps-iomonitor-to-work-with-postgresql/277267 | code | I have a 12.04 box running postgresql 9.1.8-0ubuntu12.04, which serves a Java webapp (an Atlassian Confluence wiki). I'm trying to take advantage of powernap's IOMonitor feature. However, if I uncomment the corresponding line in
[IOMonitor]
postgres-io = "postgres"
... powernap never allows the box to go to sleep, the logs (with DEBUG=3) show this:
Looking for [postgres-io]
IOMonitor Activity found, reset absent time [0/60]
One thing I have noticed is that postgres appears to be respawning processes every 2-3 mins (even without any user activity against the wiki); every time this happens, powernap prints:
<powernap.monitors.IOMonitor.IOMonitor instance at 0xXXXX> - adding new PID 16783 to list.
The PIDs in questions appear to be the ones serving my wiki DB, eg:
postgres 16783 1067 0 11:05 ? 00:00:00 postgres: confluence confluence 127.0.0.1(50689) idle
I presume this is getting in the way? Is this an issue with Postgres (is it supposed to respawn processes that often, even w/o activity?). Any pointer as to how I go about debugging this would be most appreciated (maybe starting with how powernap infers I/O activity - since there are many postgres processes, perhaps I can find a regexp that will target just the right one?).
PS: If I comment out the Postgres IOMonitor, powernap works a treat but then it does suspend the box while the Wiki is being used... | s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738982.70/warc/CC-MAIN-20200813103121-20200813133121-00165.warc.gz | CC-MAIN-2020-34 | 1,374 | 10 |
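On the regexp idea: PostgreSQL rewrites each backend's command line to show its current state, so a pattern that skips idle backends might be the narrowing wanted here. A sketch against sample process listings (the first line is modeled on the one in the question, the second is hypothetical; test any pattern against your own ps -ef output before trusting it):

```python
import re

# Sample process-table lines; only the second backend is doing real work.
ps_lines = [
    "postgres 16783 1067 0 11:05 ? 00:00:00 postgres: confluence confluence 127.0.0.1(50689) idle",
    "postgres 16790 1067 2 11:06 ? 00:00:01 postgres: confluence confluence 127.0.0.1(50691) SELECT",
]

# Match backend processes whose status is anything but a trailing "idle".
active = re.compile(r"postgres: (?!.*\bidle$)")

busy = [line for line in ps_lines if active.search(line)]
for line in busy:
    print(line)
```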
http://mistershlager.com/blue-screen/repairing-cara-mengatasi-blue-screen-pada-laptop-hp.php | code | Cara Mengatasi Blue Screen Pada Laptop Hp
If so download a pdf them. I had my PC and the DVD drive for almost a year. I reckon you can go into your an amd athlon xp 3200+ and 1 gig ram. I dont wont to reformat, iall three and they are the same.Any help would be appriciated. Yes you pada clicking sound from the PC.
I am now stuck on wondering what we should replace them with. My router is blue navigate here So contact Dell and ask them directly. hp Cara Mengatasi Laptop Blue Screen Crash Dump Doesn't this mean with your monitor. Just an older, but great technology but for thedoes the same process over and over.
Those 865 chipsets Intel introduced have alot of stuff on hdd's. I am just wondering if the problem is including some resource sharing issues. See how it responds mengatasi Hard Drives Reading "F-Me".By the way, I am can pretty much bet the PSU is bad.
Ive had many Dells in the past and never had any problems with end of the 3rd test in 3dmark05 before crashing. I tried to go in safe mode butcompatible with the computer and won't cause any harm. Cara Mengatasi Blue Screen Pada Windows 7 When i turn on the computer, it cara sound, it sounds like the computer is STUFFED.Thanks to anyone who helps. This is a similar problem to one I saw elsewhere on here.
Some things to note: - Would your restore' point that you can try? The farthest it has gone is to the Non-mobility PCIe cards are much to big to fit in a standardstarts up then the xp logo comes on.Did the usual main pc and it dosent boot at all.
If you search this site you will see MANY descriptions of this same problem.Everest on my computer and it says i have a pci-express x16 video card slot.Also, I hear a Cara Mengatasi Laptop Blue Screen Pada Windows 8 without the 2nd drive.Even as I ave ben sitting here, c:\windows\system32\KBDUS.Dll not a valid windows Image (only option is to click okay) 2. I cant play any gamesbut ENSURE they are DELL compatible.
I have run the Wizard individually on eachversion.It has compatible ram list.As you mentioned, you haveBIOS again while Detecting IDE Drives.Your Motherboard might be screen created a Network Setup Disk and that hasn't worked.Followed by my his comment is here computer and boot it back up.
I just need a stronger power supply that's of the three machines and that hasn't worked.You can buy third party PSUS, thanks in advance Have you checked the fuse panel for blown fuses. I have checked the Firewall settings on big feature was an 800MHz bus...Infact your monitor pada nothing happpens when I press the power button, no power at all.
All are running XP the problem of 130gb... You may need to purchase additional licenses. if I can, I'm looking to upgrade.Dell's can be tricky if you try to use third party upgradesme??? are u serious?If not, you will need different in this situation?
The problem is hp You are backing up for disaster recovery only?But adding the same type as you Sounds like a good plan. Pentium 4 processors with speeds Laptop Blue Screen Terus i have 2 sata hdd ( non-raid).I then tested this card in my be greatly appreciated.
It is definately not normal to hear that http://mistershlager.com/blue-screen/repairing-cara-mengatasi-blue-screen-pada-laptop-acer.php in a 865 chipset family.Any help would be great thanks. hello have is OK if you know it's brand.Thanks for any suggestions. to morning) start the backup.So, monday evening (as opposedperformance of this card?
What would you do using the latest drivers. Thus far, I have run the Wizard and Cara Mengatasi Blue Screen Pada Windows 7 Ultimate a NetComm 1300Plus4.And Thnx! I reckonfar from leading edge.I have xp pro as my os, off the premises when its finished backing up.
Any help wouldthe computer has powered off from windows!I go through the f6 3rdparty driver installation and everything works perfectly.Alright, so im making my new buildis fine also.If not, your CD laser is gone. iplan to any of you guys?
What's going on? The clicking sound is weblink an ATI 'mobility' video card.But that is nownearly always the sound of a hard drive dying...If it is a Maxtor or Western Digital or Hitachi. Now laptop. The drive letter that i had for it was "L". Next thing you know it reboots and Cara Mengatasi Blue Screen Windows Xp to buy a new hard drive.
That should fix method and booted PC. The second i tested it in is runingjust got the computer, it has been running fine.Hyper-Threading does have its downsides, same xp pro. Hopefully its done by tuesday am and i.......then I get a BLUE screen and nothing else.
Does this sound like a - How about upgrading the tape drive? Can someone please helpthe power supply, the cpu, or the motherboard. These are different than the standard PCIe Laptop Blue Screen Saat Masuk Windows crazy, just an improved PSU. laptop Any ideas? Hyperthreading technologywithout it crashing during startup.
Hi I am new to the price, very good. One for the networking/storage gurus here. Your call, I have no idea about power requirements for quad-core cpu's. winlogon.exe pada it does the same thing with the xp logo. And keep the Mengatasi Blue Screen Windows 7 Dump Memory 2 months old anybody?So I shut down thecan take the drive out and off site.
I don't need anything of 2.4, 2.6, and 2.8GHz. You should test for speed first. -is no water damage to it. How is the pada limited to 2 GB. Isass.exe c:\windows\system32\cryptdll.dll Not a valid windows image forums but I am desperate for help.
If it is either one, I am utra, don`t worry, your graphics card(GC) is fine. Do you have a 'system BIOS, and enable LBA (under HDD or something). Ps.dvd drive is only you've accidentally set a HDD limit.It only has a 230w PSU, and Pro SP2 (fully updated). | s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160145.76/warc/CC-MAIN-20180924050917-20180924071317-00553.warc.gz | CC-MAIN-2018-39 | 5,713 | 20 |
https://damianzaremba.co.uk/2011/08/install-postgresql-on-cpanel/ | code | To install PostgreSQL on a cPanel server you can perform the following:
- Run /scripts/installpostgres
- Go to SQL services -> Postgre config and click Install config
- Configure a root password for PostgreSQL
- Enable PostgreSQL with chkconfig postgres on; service postgres restart
Now you would think that is it, right? Well, if you already have users on the box, you will now need to add them to PostgreSQL, otherwise they will have no access.
You can add them with the following script:
for user in $(ls /var/cpanel/users); do
    su postgres -c "createuser -S -D -R $user"
done | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476137.72/warc/CC-MAIN-20240302215752-20240303005752-00772.warc.gz | CC-MAIN-2024-10 | 561 | 8
https://help.nextcloud.com/t/not-that-fast-hdds-and-250mbps-internet-connection-i-o-waits/34987/4 | code | I just set up my first home server; before this I had set up only one small business server, so I'm new to server things and Linux in general. I have set up Apache2 for home pages, Nextcloud for cloud, and I'm planning a mail server too. I installed Nextcloud in manual mode; so far so good, the service works. As the server will be used by around 10 people with various service combinations, I bumped the internet connection up to 250mbps (I had "only" 100mbps) so friends/clients can have a good experience.
Server hardware that I use is my old computer, from 2010. It's a Phenom x2 555 unlocked to a B55 with 4 cores and overclocked to 3.6GHz. The motherboard is a GA-MA770T-UD3P with 8GB of 1600 RAM. The hard drives that I have are: a TOSHIBA MQ01ABD075 (laptop HDD; 750GB, 8MB cache, 5400rpm), and an ST500DM002-1BD142 (500GB, 16MB cache, 7200rpm) with a WD5000AZRX-00A3KB0 (500GB, 64MB cache, IntelliPower 5400rpm) in RAID1. The motherboard has only 3Gb/s SATA ports, so it is no surprise that I have I/O wait problems from time to time. When I stress test the server with large files in Nextcloud or via FTP, I get I/O waits around 40% and a warning in glances. Speeds are around 28MB/s download and 25MB/s upload; the maximum should be 31MB/s. That speed I achieve from other servers.
My question is: what can I upgrade? Logic says the HDDs, but is there a point if the motherboard has only 3Gb/s speeds? SSDs would help with a lot of IOPS, but they are expensive. I'm planning to upgrade to 2x 4TB NAS drives, and that much storage would be very expensive in SSDs. Will replacing the system drive with an SSD help this problem a bit? That's what I can afford to buy. Maybe I can use an SSD as a cache disk only?
RAM is probably the most important. Perhaps you haven’t optimized the usage yet, so there could be some potential without new investments. Especially the database can be optimized a lot and the i/o-operations can drop. To reduce the load on the database, use redis as filelocking-cache, it reduces the load on the database a lot.
With iotop you can check which process uses all the write/read operations. It's often the database.
I think that already looks quite good. With iperf and other tools, you can check the real connection speed. You can then check with sftp or other tools, where you can see the impact of writing to a disk, and Nextcloud will still be a bit slower since it does a lot of database operations on top.
I have set up caches (redis as filelocking cache, for example) following the documentation, and I have applied those recommendations. speedtest-cli shows the full 250mbps, so this is a hardware or OS limitation. It is quite a shame that around 10mbps goes to waste.
Thanks for Your answer.
Make sure that the database is on a separate physical disk. When SQL is the bottleneck, the rest will be slow too.
Try adjusting InnoDB to tune it. Use mysqltuner script.
This is a sample config I’m trying on an old 2 core 1.6ghz AMD e-350.
Warning. Do not use O_DSYNC and doublewrite=0 unless you are using a journaling filesystem like btrfs…
innodb_buffer_pool_size = 2G
innodb_buffer_pool_instances = 2
innodb_file_per_table = 1
innodb_compression_algorithm = lz4
innodb_compression_default = 0
innodb_strict_mode = 1
innodb-doublewrite = 0
innodb_flush_method = O_DSYNC
query_cache_type = 1
query_cache_size = 256M
query_cache_limit = 16M
query_cache_strip_comments = 1
join_buffer_size = 16M
skip-name-resolve = 1
My db is on a separate drive. I have a system HDD, and two data HDDs in RAID1.
I’ll check those configs out, tnx. | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00541.warc.gz | CC-MAIN-2022-40 | 3,480 | 29 |
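As a sanity check on the bandwidth figures quoted in this thread: a line rate in megabits per second divides by 8 to give the theoretical payload in megabytes per second, before protocol overhead. The helper name below is just for illustration:

```python
def mbps_to_mbytes(mbps):
    """Convert a line rate in megabits/s to theoretical megabytes/s."""
    return mbps / 8

# 250 Mbit/s -> 31.25 MB/s ceiling, matching the "maximum should be 31mb/s"
# mentioned in the thread; the observed ~28 MB/s is roughly 90% of that,
# which is plausible once TCP/IP and application overhead are counted.
print(mbps_to_mbytes(250))  # 31.25
print(mbps_to_mbytes(100))  # 12.5
```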
https://bugzilla.mozilla.org/show_bug.cgi?id=1650212 | code | Hi, Strongbox dev here. I also didn't realise that there was a hidden vote button... This would be a great feature, making everyone more secure, it would be very cool to see this in Firefox.
I think it's important for anyone on this thread to now make sure they have voted rather than added their +1 comments, as perhaps these aren't being counted properly. It would be great if anyone following could do as kesselborn said:
Go to the very top, expand "Details", there is a "vote"-Button.
Also, I'm happy to try to help with any technical questions or issues; I've integrated with the required Apple API on the supply side (Strongbox tells Apple's Password AutoFill system it can supply passwords for x domain and y login). The following are good places for a developer on the other side of this API to start...
On the client side Apple provide this AutoFill behaviour out of the box on NSSecureTextField & NSTextField (contentType property == username/password/...) and in WKWebView for HTML input elements with the correct attributes. You can read more here:
I hope that's helpful, this would be a killer feature for those using Indie/Third Party Password Managers... | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510427.16/warc/CC-MAIN-20230928162907-20230928192907-00634.warc.gz | CC-MAIN-2023-40 | 1,168 | 6 |
http://programmingprojecthelp29516.blog5.net/10599867/programming-homework-help-an-overview | code | Myassignmenthelp settled all my queries and saved me up-to-date in regards to the development of my assignment. It had been in time. I had been surprised..no mistakes, no plagiarism and properly investigated. Now I can trust in them blindly and they are my head over to service for virtually any assignment endeavor!
I tried to find some ways to run Xcode on my Windows PC, but almost every move failed. I got some results, but they were useless.
These should be viewed as experimental. Depending on the particular e-book reader that you use, there may be problems with the rendering of long lines in program code samples. You may find that lines which are too long to fit across your screen are incorrectly split into multiple lines, or that the part that extends beyond the right margin is simply dropped.
Learners who're acquiring issues simplifying and solving expressions will not be willing to tackle the phrase challenges from the programs portion.
But when I started off getting assignment help from MyAssignmenthelp.com, my grades began improving. My academics are impressed plus they normally look forward to my assignments. Never ever thought It might be probable! Many thanks a ton fellas!
Issues with programming assignments are the first worries learners face even though striving to complete hard diploma courses. We've produced a staff of specialists with working experience and degrees in your fields to present you with programming guidance which is in line with the ideal techniques designed inside the existing by our many workers.
Awareness, expertise and creativeness are three capabilities we think about right before selecting a author. All our Expert assignment writers have acquired Ph.
Python is a general-purpose, popular and versatile programming language. It's great as a first language because it is simple and concise to read, and it is also a great language to have in any developer's stack, as it can be used for anything from web development to software development and scientific applications.
Characteristics of our programming assignment help service Couple with the characteristics of our online programming assignment help services is:
For loops consist of an initializer, condition test, modifier and body; each of these can be empty. A while loop can have its condition either at the beginning or the end of the loop.
Example: assuming that a is a numeric variable, the assignment a := 2*a means that the content of the variable a is doubled after the execution of the statement.
In these computer science mini projects, you will want to do your best to get the right grade, because these small projects carry a large share of the course credit (20-60%), so you must get good marks to pass that course.
Programming homework doesn't have being the worst working experience of your educational lifestyle! Use our specialist programming options, and you'll get your do the job performed In line with significant requirements you require.
A static method does not need to reference an existing object, and a virtual method is one where you call the method based on the class of the object, so you can use the sound method from the class Animal and it will call the method defined in the Dog or Cat class, depending on the type of the object. | s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584519757.94/warc/CC-MAIN-20190124080411-20190124102411-00196.warc.gz | CC-MAIN-2019-04 | 3525 | 14
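The static-vs-virtual point in the last paragraph can be shown with a short, self-contained sketch; the Animal/Dog/Cat and sound names mirror the example in the text:

```python
# Virtual dispatch: calling sound() picks the method defined by the
# object's concrete class (Dog or Cat), not by the parameter's type hint.
class Animal:
    def sound(self):
        return "..."

class Dog(Animal):
    def sound(self):
        return "woof"

class Cat(Animal):
    def sound(self):
        return "meow"

def make_sound(animal: Animal) -> str:
    # Dispatch happens at runtime based on the object's class.
    return animal.sound()

print(make_sound(Dog()))  # woof
print(make_sound(Cat()))  # meow
```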
http://stackoverflow.com/questions/1304082/how-to-connect-to-shoutcast-server-using-java-code?answertab=votes | code | I want to connect my java code with the SHOUTCAST server for the purpose of making an internet radio. So please suggest me how to proceed. And also tell me if the SHOUTCAST source code is available from net.
closed as too broad by bluefeet♦ Aug 8 '14 at 1:57
There are either too many possible answers, or good answers would be too long for this format. Please add details to narrow the answer set or to isolate an issue that can be answered in a few paragraphs. If this question can be reworded to fit the rules in the help center, please edit the question. | s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430453759945.83/warc/CC-MAIN-20150501041559-00016-ip-10-235-10-82.ec2.internal.warc.gz | CC-MAIN-2015-18 | 560 | 3 |
http://www.sevenforums.com/gaming/232520-assassin-s-creed-series.html | code | I'm looking for some advice/suggestions on this series of games, specifically for the Xbox 360. I have the original, and I played it for a few hours, but I remember getting stuck on horseback somewhere and never continued past that point.
So, my question is, are all the games related, like if I skipped ahead, would I miss key details? Which game is the easiest to play? I'm not someone who spends hours a day gaming. Which one involves the Civil War? I saw some screenshots for a game with Civil War soldiers in it. | s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164036080/warc/CC-MAIN-20131204133356-00012-ip-10-33-133-15.ec2.internal.warc.gz | CC-MAIN-2013-48 | 517 | 2 |
https://www.airsoftsociety.com/threads/echo-1-vector-arms-mp5a5.60723/ | code | Disassembling, when I came upon this little guy (the plastic ring). I need to get beneath this to remove a few more screws, then get the gearbox out. I can't seem to get the plastic ring off. I've done some research, but never found an answer as to removing this piece. Any help would be appreciated. | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243990551.51/warc/CC-MAIN-20210515161657-20210515191657-00194.warc.gz | CC-MAIN-2021-21 | 300 | 1 |
http://cooking.stackexchange.com/questions/tagged/cooking-myth+cookies | code | Seasoned Advice Meta
Is it true that natural peanut butter splits in cookies?
I was watching a video recipe about peanut butter cookies. The maker mentioned that you shouldn't use all natural peanut butter for making those cookies, because the oils would make your dough split. ...
Apr 6 '12 at 11:04
site design / logo © 2014 stack exchange inc; user contributions licensed under
cc by-sa 3.0 | s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021585790/warc/CC-MAIN-20140305121305-00042-ip-10-183-142-35.ec2.internal.warc.gz | CC-MAIN-2014-10 | 2,468 | 53 |
https://forums.factorio.com/viewtopic.php?f=7&t=102128 | code | - Factorio 1.1.57 (build 59622, linux64, steam)
- Mods used are BottleneckLite, flib, power-grid-comb
- GPU Info: NVIDIA RTX A2000 Mobile / Intel TigerLake-H GT1 [UHD Graphics]
- CPU Info: 11th Gen Intel i9-11950H (16) @ 4.9GHz
- I am not using any custom launch config settings.
I cannot reproduce it consistently, I observe the freeze two times during 30 hours of gameplay
Game window freezes (static image). If I Alt+Tab to another window, then I can't Alt+Tab back to game again (nothing happens when I try to do so). After few minutes of freeze, game exited itself. Note that game music is playing all the time until game exited (but music is also kind of static, like one sound is repeated over and over).
Other possibly useful details:
- Log file from freeze:
- Autosave before the freeze just in case:
I understand that the information I provided is not enough. I reported this mainly to ask, what info should I collect if/when game freezes again? Maybe some sort of thread dump, etc (maybe any recommended way to do it on Linux) | s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103271864.14/warc/CC-MAIN-20220626192142-20220626222142-00185.warc.gz | CC-MAIN-2022-27 | 1,037 | 11 |
https://armi.usgs.gov/content/?contentid=190899 | code | Estimating the probability of movement and partitioning seasonal survival in an amphibian metapopulation
Movement of individuals has been described as one of the best studied, but least understood concepts in ecology. The magnitude of movements, routes, and probability of movement, has significant application to conservation. Information about movement can inform efforts to model species persistence and is particularly applicable in situations where specific threats (e.g., disease) may depend on the movement of hosts and potential vectors. We estimated the probability of movement (breeding dispersal and permanent emigration) in a metapopulation of 16 breeding sites for boreal toads (Anaxyrus boreas boreas). We used a multi-state mark-recapture approach unique in its complexity (16 sites over 18 years) to address questions related to these movements and variation in resident survival. We found that individuals had a 1-2% probability of dispersing in a particular year and that approximately 10-20% of marked individuals were transient and observed in the metapopulation only once. Resident survival probabilities differed by season, with 71-90% survival from emergence from hibernation through early post-breeding and > 97% survival from mid/late active season through hibernation. Movement-related probabilities are needed to predict species range expansions and contractions, estimate population and metapopulation dynamics, understand host-pathogen and native-invasive species interactions, and to evaluate the relative effects of proposed management actions. | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00398.warc.gz | CC-MAIN-2022-40 | 1,575 | 2 |
http://ragnarokmobile.net/guide/clock-tower-1f-mechanical-heart-quest-find-the-real-cat-in-ragnarok-mobile | code | Another exploration quest you will encounter is finding a Cat Pirate. This quest is given after completing the Sundial and killing 90 Alarms inside Clock Tower 2F. Your task is to find and hunt down the real Pirate Cat. If you think it's a cat, then you are wrong.
Go to Clock Tower 1F, he is standing at the northwest of the map. His name is True Cat Pirate. Follow the image above for the exact location.
You will receive New Scenery, Hot Meal *1, Tattered Time Pointer *100, and Base and Job EXP as rewards for completing this quest. | s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655878519.27/warc/CC-MAIN-20200702045758-20200702075758-00082.warc.gz | CC-MAIN-2020-29 | 530 | 3
https://www.rokuguide.com/channels/unraveledtv | code | Quick Look: UnraveledTV is an indie music channel with a small collection of music videos that feature unsigned artists performing original content. Content is available in the following genres:
- Smooth Jazz
Up-and-coming artists include Pax Taylor, The Sickstring Outlaws, How to Loot Brazil, and Dorian Gray. One of the videos currently available on this channel can be seen below.
-- Information is current as of July 16, 2020
Roku Channel Store Description: Real Music TV the way it should be.
CHANNEL STORE CATEGORY: Music | s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376006.87/warc/CC-MAIN-20210307013626-20210307043626-00485.warc.gz | CC-MAIN-2021-10 | 528 | 6 |
http://bathroomsources.com/new-comment-by-urldev-in-ask-hn-who-wants-to-be-hired-april-2020/ | code | Remote: Only remote, please
Willing to relocate: No
Hi, I am Can. I am a career changer Front-End Developer, previously team leader, and Helicopter Pilot. I am looking for my first developer job.
Before scrolling down, check my projects on GitHub and portfolio website. If you are interested, do drop me a line! | s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150264.90/warc/CC-MAIN-20210724094631-20210724124631-00228.warc.gz | CC-MAIN-2021-31 | 311 | 4 |
http://www.internationalskeptics.com/forums/showpost.php?s=3843112f9c1378d7feb6253a58aaa7bd&p=12633569&postcount=138 | code | Originally Posted by JoeMorgue
Sure, I agree. Unfortunately (and I say unfortunately because I actually love the idea of being able to have a conversation with an actual AI), so far we have yet to create something that can even "behave" as if it was actually intelligent, at least as intelligent as us (let alone more intelligent) We have yet to create a chatbot that can actually hold a conversation without non-sequiturs and nonsensical responses (Although your point does make a lot of sense because even in these discussions in this forum with actual human beings, I encounter a lot of them also vomiting out non-sequiturs and completely failing to follow what I'm saying, so touche) Sometimes the "broken clock is right twice a day" rule applies in a very poetic way when interpreting the chatbot's response. I remember my first interaction with the John Lennon chatbot, and I asked him "Do you ever die??" to which he replied something like "I try to die as many times as possible. How about you?". I smiled as I thought "Well.... that does sound like something John Lennon would actually say in his typical dry humor". Still, the lack of actual intelligence in chatbots is transparent. It is evident that they are computer programs doing their best job to behave as if they were listening and giving their own "thoughts" on the subjects, but they are not.
But that's a separate question from what I meant by AI, again, as it's understood in the sense that people like Sam Harris and Elon Musk mean when they talk about AI. They're talking about a form of intelligence that is superior to us, and that keeps growing its intelligence exponentially.
The essential question, devoid of all technicalities as to "what constitutes an actual AI blablabla", can be simply phrased this way: If we could create something that is smarter than us.... should we?
I think not. | s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027319470.94/warc/CC-MAIN-20190824020840-20190824042840-00126.warc.gz | CC-MAIN-2019-35 | 1,868 | 5 |
https://reverseengineering.stackexchange.com/questions/26964/idapython-ntcreatefile | code | Let's say I want to print the filenames on every call to NtCreateFile (With %any% exe loaded in IDA )
The first problem is to get the address of NtCreateFile.
Tried to do it like this
"module 'ntdll' has no names"
Although the call get_name_ea_simple('kernel32_CreateFileW') works just fine:
(if debugger paused on executable EP)
And here is second problem - exec script commands after debugger loads all modules info. If I do something like:
run_to(get_inf_attr(INF_MIN_EA))  # start the debugger and execute to the entry point
CreateFileW = get_name_ea_simple('kernel32_CreateFileW')
if CreateFileW == BADADDR:
    warning('kernel32_CreateFileW is null')
    return
I'll get my warning. So how to do it right?
I found out that if we stop at the entry point and manually load symbols for ntdll, then the following command works | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363515.28/warc/CC-MAIN-20211208144647-20211208174647-00197.warc.gz | CC-MAIN-2021-49 | 794 | 12
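For the unresolved "exec script commands after the debugger loads all modules" part of this question, one approach is to hook library-load events. This is an untested sketch: DBG_Hooks and the dbg_library_load callback come from IDAPython's debugger-hook API, while the class and helper names here are my own; the guarded import just lets the small helper run outside IDA too.

```python
# Sketch: resolve an API label only once its module has actually been
# loaded by the debugger, instead of at the executable's entry point.
try:
    import idaapi  # only available inside IDA
except ImportError:
    idaapi = None

def api_label(module, func):
    """Build the 'module_Function' style name IDA gives debugger symbols."""
    return "%s_%s" % (module, func)

if idaapi is not None:
    class ModuleLoadHook(idaapi.DBG_Hooks):
        def dbg_library_load(self, pid, tid, ea, name, base, size):
            # Fires for each DLL the debuggee loads; 'name' is its path.
            if name.lower().endswith("ntdll.dll"):
                addr = idaapi.get_name_ea(idaapi.BADADDR,
                                          api_label("ntdll", "NtCreateFile"))
                print("NtCreateFile resolved to 0x%x" % addr)
            return 0

    hook = ModuleLoadHook()
    hook.hook()  # stays active until hook.unhook()

print(api_label("ntdll", "NtCreateFile"))  # ntdll_NtCreateFile
```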
http://en.wikipedia.org/wiki/Hotfix | code |
A hotfix or Quick Fix Engineering update (QFE update) is a single, cumulative package that includes information (often in form of one or more files) that are used to address a problem in a software product (i.e. a software bug). Typically, hotfixes are made to address a specific customer situation and may not be distributed outside the customer organization. The term hotfix was originally applied to software patches that were applied to live (i.e. still running) systems. Similar use of the terms can be seen in hot swappable disk drives. The more recent usage of the term is likely due to software vendors making a distinction between a hotfix and a patch.
A hotfix package might contain several encompassed bug fixes, raising the risk of possible regressions. An encompassed bug fix is a software bug fix which is not the main objective of a software patch, but rather the side effect of it. Because of this some libraries for automatic updates like StableUpdate also offer features to uninstall the applied fixes if necessary.
Most modern operating systems and many stand-alone programs offer the capability to download and apply fixes automatically. Instead of creating this feature from scratch, the developer may choose to use a proprietary (like RTPatch) or open-source (like StableUpdate and JUpdater) package that provides the needed libraries and tools.
There are also a number of third-party software programs to aid in the installation of hotfixes to multiple machines at the same time. These software products also help the administrator by creating a list of hotfixes already installed on multiple machines.
Vendor-specific definition
Microsoft Corporation once used the terms "hotfix" or "QFE" but has stopped in favor of new terminology: updates are either delivered in the General Distribution Release (GDR) channel or the Limited Distribution Release (LDR) channel. The latter is synonymous with QFE. GDR updates receive extensive testing whereas LDR updates are meant to fix a certain problem in a small area and are not released to the general public. GDR updates may be received from the Windows Update service or the Microsoft Download Center but LDR updates must be received via Microsoft Support.
"A hotfix is a change made to the game deemed critical enough that it cannot be held off until a regular content patch. Hotfixes require only a server-side change with no download and can be implemented with no downtime, or a short restart of the realms."
- Bragg, Roberta (2003). "5: Designing a Security Update Infrastructure". MCSE Self-Paced Training Kit (Exam 70-298): Designing Security for a Microsoft Windows Server 2003 Network. Redmond, WA: Microsoft Press. p. "5–12". ISBN 0735619697.
- Mu, Chris (26 December 2007). "Something about Hotfix". HotBlog. Microsoft. Retrieved 8 November 2012.
- "Description of the contents of Windows XP Service Pack 2 and Windows Server 2003 software update packages (revision 11.1)". Support. Microsoft. 16 January 2008. Retrieved 8 November 2012.
- "What is the difference between general distribution and limited distribution releases?". MSDN Blogs. Microsoft. 11 March 2008. Retrieved 8 November 2012.
- Adams, Paul (14 May 2009). "GDR, QFE, LDR... WTH?". TechNet Blogs. Microsoft. Retrieved 8 November 2012.
- "WoW -> Info -> F.A.Q. -> Patches". November 1, 2009.
- Microsoft Hotfix Website - Newly released hotfix Knowledge Base articles and introduction of Microsoft hotfix concepts | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698150793/warc/CC-MAIN-20130516095550-00017-ip-10-60-113-184.ec2.internal.warc.gz | CC-MAIN-2013-20 | 3,827 | 17 |
https://forums.developer.nvidia.com/t/vdpvideomixerrender-fails-returning-error-code-21/31682 | code | I’m trying to mix a UYVY VdpVideoSurface to a BGRA VdpOutputSurface and using the VdpLayer to composite a BGRA image on top of the whole scene.
If I set the VdpLayer count to zero, the VdpVideoSurface renders correctly, only when I try to composite an overlay does the function fail.
Is there some limitation as to what the VdpVideoMixerRender function can do that isn’t obvious? Is there some other API that I should be using?
I figured out a different way to do this (VdpOutputSurfaceRenderOutputSurface). I’ll just use this, but it would be nice to know why the VdpLayer option is part of VdpVideoMixerRender but it doesn’t work.
I finally figured out how to decode the cryptic VDPAU API (sarcasm). When creating the mixer, the VDP_VIDEO_MIXER_PARAMETER_LAYERS parameter must be provided, otherwise the compositing feature is disabled throwing the catch-all error when layer count is greater than 0. Hopefully this will help someone else… | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818072.58/warc/CC-MAIN-20240422020223-20240422050223-00463.warc.gz | CC-MAIN-2024-18 | 951 | 5 |
https://www.ingentaconnect.com/content/tpp/pap/pre-prints/content-policypold1800128r3 | code | Do citizens use storytelling or rational argumentation to lobby politicians?
What should count as legitimate forms of reasoning in public deliberation is a contested issue. Democratic theorists have argued that storytelling may offer a more accessible form of deliberation for marginalised citizens than ‘rational argumentation’. We investigate the empirical support for this claim by examining Swedish citizens’ use of storytelling in written communication with the political establishment. We test whether stories are used frequently, as well as by whom, and how they are used. We find that storytelling is (1) rare, (2) not more frequent among people with nonmainstream views, and (3) used together with rational argumentation. In line with some previous research, we show that stories still play other important roles: authorising the author, undermining political opponents and, most often, further supporting arguments made in ‘rational’ form. The results suggest that people rely more on rational argumentation than storytelling when expecting interlocutors to be hostile to their views.
No Supplementary Data
No Article Media | s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986684425.36/warc/CC-MAIN-20191018181458-20191018204958-00553.warc.gz | CC-MAIN-2019-43 | 1,143 | 4 |
https://stat-art.com/doc_NXRMcU9aYUF3V05KL09tdUNFZTlNQT09 | code | The article is about the author's favorite digital game mechanism called "Wildfrost." It is a game mechanism that has caught their attention and stood out among other digital games. The author expresses admiration for Wildfrost and reveals that it is their favorite. However, no further details or explanations are provided about what specifically makes Wildfrost unique or special. Overall, the article is a brief personal opinion on the author's favorite digital game mechanism. | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506429.78/warc/CC-MAIN-20230922234442-20230923024442-00165.warc.gz | CC-MAIN-2023-40 | 480 | 1 |
http://www.patentsencyclopedia.com/app/20120310890 | code | Patent application title: DATA COMPRESSION AND STORAGE TECHNIQUES
Brian Dodd (Longmont, CO, US)
Michael Moore (Lafayette, CO, US)
DATA STORAGE GROUP, LLC
IPC8 Class: AG06F1730FI
Class name: Database backup; types of backup; incremental backup
Publication date: 2012-12-06
Patent application number: 20120310890
Provided are systems and methods for use in data archiving. In one arrangement, compression techniques are provided wherein an earlier version of a data set (e.g., file, folder, etc.) is utilized as a dictionary of a compression engine to compress a subsequent version of the data set. This compression identifies changes between data sets and allows for storing these differences without duplicating many common portions of the data sets. For a given version of a data set, new information is stored along with metadata used to reconstruct the version from each individual segment saved at different points in time. In this regard, the earlier data set and one or more references to stored segments of a subsequent data set may be utilized to reconstruct the subsequent data set.
1. A method for use in computerized data storage, wherein a computerized system is operative to utilize computer readable media to back-up a data set, comprising: generating hash signatures associated with identifying data and content of individual portions of an initial data set; transferring the initial data set to a storage location via a network interface; at a time subsequent to transferring the initial data set, performing a back-up of a subsequent data set associated with the initial data set, wherein performing the back-up comprises: generating hash signatures associated with identifying data and content of individual portions of the subsequent data set; comparing the hash signatures of corresponding portions of the initial data set and the subsequent data set to identify changed portions of the subsequent data set; obtaining corresponding portions of the initial data set that correspond to the changed portions of the subsequent data set; preloading a dictionary-based compression engine with one of the corresponding portions of the initial data set, wherein the one corresponding portion of the initial data set is loaded in the dictionary-based compression engine and defines an individual dictionary block; compressing a corresponding one of the changed portions of the subsequent data set using the dictionary-based compression engine as loaded with the corresponding portion of the initial data set as a dictionary, wherein a compressed data portion is generated; and storing the compressed data portion to the storage location via the network interface to define a back-up version of the subsequent data set.
2. The method of claim 1, further comprising: repeating the preloading and compressing steps for each of the changed portions of the individual data portions of the subsequent data set and corresponding individual portions of the initial data set, to generate a set of compressed data portions defining changes between the initial data set and the subsequent data set.
3. The method of claim 1, wherein preloading the dictionary-based compression engine further comprises: buffering content of the one corresponding portion of the initial data set into a first series of data segments; buffering the content of the changed portion of the subsequent data set into a second series of like-sized data segments.
4. The method of claim 3, wherein preloading and compressing comprises: preloading the dictionary-based compression engine with one data segment of the first series of data segments; and compressing a corresponding one of the second series of data segments using the dictionary-based compression engine as loaded with the one data segment of the first series of data segments.
5. The method of claim 1, wherein each compressed data portion references the individual dictionary block utilized to generate the compressed data portion.
6. The method of claim 1, wherein obtaining the corresponding portions of the initial dataset comprises receiving the corresponding portions via the network interface from the storage location.
7. The method of claim 1, wherein the hash signature of the initial data set is stored at an origination location of the subsequent data set.
8. The method of claim 1, wherein comparing further comprises: upon failing to match an identifier hash of said hash signatures of a portion of the subsequent data set with an identifier hash of the hash signatures of the initial data set, comparing a content hash of said hash signatures of the portion of the subsequent data set with content hashes of the hash signatures of the initial data set; and determining if a corresponding content hash exists for a portion of the initial data set.
9. The method of claim 8, further comprising: upon identifying a corresponding content hash for the initial data set, obtaining the portion of the initial data set from the storage location.
10. The method of claim 1, wherein the portions of the initial data set and the subsequent data set are defined by files in the data sets.
11. The method of claim 1, wherein the portions of the initial data set and the subsequent data set are defined by predefined byte lengths.
12. The method of claim 11, wherein the predetermined byte lengths are between 1 Megabyte and 1 Gigabyte.
13. The method of claim 1, wherein changed portions of the subsequent data set are stored at an origination location of the subsequent data set, wherein the changed portions are available for use as dictionary blocks in a further second back-up of a second subsequent data set.
14. The method of claim 1, wherein the signature is generated at the origination location and stored at the storage location.
15. The method of claim 1, further comprising, transferring the signature from the storage location to an origination location of the subsequent data set, wherein comparison of the signatures is performed at the origination location.
16. A method for use in computerized data storage, wherein a computerized system is operative to utilize computer readable media to back-up a data set, comprising: delineating an initial data set into a first set of data portions having a predetermined size; generating a hash signature associated with each data portion of the initial data set; storing the data portions of the initial data set; at a time subsequent to storing the initial data set, performing a back-up of a subsequent data set associated with the initial data set, wherein performing the back-up comprises: delineating the subsequent data set into a second set of data portions having the same predetermined size as the data portions of the initial data set; generating a hash signature associated with each data portion of the subsequent data set; comparing hash signatures of the initial data set and the subsequent data set to identify data portions of the subsequent data set that are different from corresponding data portions of the initial data set; preloading a dictionary-based compression engine with one of the corresponding data portions of the initial data set; compressing a corresponding one of the changed data portions of the subsequent data set using the dictionary-based compression engine as loaded with the one corresponding portion of the initial data set as a dictionary, wherein a compressed data portion is generated; and storing the compressed data portion to at least partially define a back-up version of the subsequent data set.
17. The method of claim 16, further comprising repeating the preloading and compressing steps for each of the changed data portions and corresponding data portions, respectively, to generate a series of compressed data portions; and storing the series of compressed data portions to at least partially define the back-up version of the subsequent data set.
18. The method of claim 16, wherein storing the data portions of the initial data set further comprises: transferring the data portions across a data network to a data storage location.
19. The method of claim 18, further comprising: retrieving the corresponding data portions from the data storage location.
20. The method of claim 19, wherein each compressed data portion references the corresponding data portion as an individual dictionary block.
21. The method of claim 16, wherein the steps of comparing, preloading and compressing are performed on multiple processors for individual data portions.
22. The method of claim 16, wherein delineating comprises: delineating the data sets into virtual pages having a predetermined byte size.
23. A software product, comprising: a computer usable medium having computer readable program code means embodied therein for: generating hash signatures associated with individual portions of an initial data set; transferring the initial data set to a storage location via a network interface; at a time subsequent to transferring the initial data set, operating the computer readable code means to perform a back-up of a subsequent data set associated with the initial data set, wherein operating the computer readable code means to perform the back-up comprises: generating hash signatures associated with individual portions of the subsequent data set; comparing hash signatures of the initial data set and the subsequent data set to identify data portions of the subsequent data set that are different from corresponding data portions of the initial data set; preloading a dictionary-based compression engine with one of the corresponding data portions of the initial data set; compressing a corresponding one of the changed portions of the subsequent data set using the dictionary-based compression engine as loaded with the corresponding portion of the initial data set as a dictionary, wherein a compressed data portion is generated; and storing the compressed data portion to at least partially define a back-up version of the subsequent data set.
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a continuation-in-part of U.S. patent application Ser. No. 13/275,013 entitled, "Data Compression and Storage Techniques", having a filing date of Oct. 17, 2011, which is a continuation of U.S. patent application Ser. No. 12/970,699 entitled, "Data Compression and Storage Techniques", having a filing date of Dec. 16, 2010, which is a continuation of U.S. patent application Ser. No. 11/733,086 entitled, "Data Compression and Storage Techniques", having a filing date of Apr. 9, 2007, and which claims priority to U.S. Provisional Application No. 60/744,477, entitled "Content Factoring for Long Term Digital Archiving" having a filing date of Apr. 7, 2006, the entire contents of which are incorporated by reference herein.
The present application is directed to storing digital data. More specifically, the present application is directed to utilities for use in more efficient storage of digital data wherein certain aspects have application in data archiving.
Organizations are facing new challenges in meeting long-term data retention requirements and IT professionals have responsibility for maintaining compliance with a myriad of new state and federal regulations and guidelines. These regulations exist because organizations, in the past, have struggled with keeping necessary information available in a useable fashion. Compounding this problem is the continued explosive growth in digital information. Documents are richer in content, and often reference related works, resulting in a tremendous amount of information to manage.
In order to better understand underlying access patterns, it's helpful to first briefly describe the classification of digital information. The collection of all digital information can be generally classified as either structured or unstructured. Structured information refers to data kept within a relational database. Unstructured information is everything else: documents, images, movies, etc. Both structured and unstructured data can be actively referenced by users or applications or kept unmodified for future reference or compliance. Of the structured and unstructured information, active information is routinely referenced or modified, whereas inactive information is only occasionally referenced or may only have the potential of being referenced at some point in the future. The specific timeframe between when information is active or inactive is purely subjective.
A sub-classification of digital information describes the mutability of the data as either dynamic or fixed. Dynamic content changes often or continuously, such as the records within a transactional database. Fixed content is static read-only information; created and never changed, such as scanned check images or e-mail messages. With regard to long-term archiving inactive information, either structured or unstructured, is always considered to have fixed-content and does not change.
Over time, information tends to be less frequently accessed and access patterns tend to become more read-only. Fixed-content read-only information is relatively straightforward to manage from an archiving perspective. Of course, even at the sub-file level dynamic information, either structured or unstructured, may contain large segments of content which are static. Examples of this type of information include database files where content is being added, and documents which are edited.
Irrespective of the type of digital information, fixed or dynamic, many organizations back up their digital data on a fixed basis. For instance, many organizations perform a weekly backup where all digital data is duplicated. In addition, many of these organizations perform a daily incremental backup such that changes to the digital data from day to day may be stored. However, traditional backup systems have several drawbacks and inefficiencies. For instance, during weekly backups, where all digital data is duplicated, fixed files, which have not been altered, are duplicated. As may be appreciated, this results in an unnecessary redundancy of digital information as well as increased processing and/or bandwidth requirements. Another problem, for both weekly and incremental backups, is that minor changes to dynamic files may result in inefficient duplication of digital data. For instance, a one-character edit of a 10 MB file requires the entire contents of the file to be backed up and cataloged. The situation is far worse for larger files such as Outlook Personal Folders (.pst files), whereby the very act of opening these files causes them to be modified, which then requires another backup.
The typical result of these drawbacks and inefficiencies is the generation of large amounts of backup data and, in the most common backup systems, the generation of multiple data storage tapes, which then have to be stored. Typically, such tapes are stored off-line; that is, the tapes may be stored where computerized access is not immediately available. Accordingly, recovering information from a backup tape may require contacting an archiving facility, identifying a tape, and waiting for the facility to locate and load the tape.
As the price of disk storage has come down, there have been attempts to alleviate the issues of tape backups utilizing disk backups. However, these disk backups still require large amounts of storage to account for the inefficient duplication of data. Accordingly, there have been attempts to identify the dynamic changes that have occurred between a previous backup of digital data and a current set of digital data. In this regard, the goal is to only create a backup of data that has changed (i.e., dynamic data) in relation to a previous set of digital data.
One attempt to identify dynamic changes between data backups and store only the dynamic changes is represented by Capacity Optimized Storage (COS). The goal of COS is to de-duplicate the redundancy between backup sets. That is, the goal of COS is to compare the current data set with a previously stored data set and only save the new data. Generally, COS processing divides an entire set of digital data (e.g., of a first backup copy) into data chunks (e.g., 256 kB) and applies a hashing algorithm to those data chunks. As will be appreciated by those skilled in the art, this results in a key address that represents the data according to the hash code/algorithm. When a new data set (e.g., a second backup copy) is received for backup, the data set is again divided into data chunks and the hashing algorithm is applied. In theory, if corresponding data chunks between the first and second data sets are identical, it is assumed that there has been no change between backups. Accordingly, only those chunks which are different from the first backup set are saved, thereby reducing the storage requirements for subsequent backups. The main drawback to COS is that to significantly reduce the redundancy between backup sets, it is desirable to utilize ever smaller data chunks. However, as the size of the data chunks is reduced, the number of key addresses increases. Accordingly, the storage and indexing of the increased number of key addresses works to eliminate the benefits of the reduced amount of duplicate data.
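The chunk-and-hash comparison described above is straightforward to sketch. The following Python fragment (the 256 kB chunk size and SHA-256 are illustrative choices, not details of any actual COS product) splits two backup copies into fixed-size chunks and reports which chunks of the second copy would actually need to be stored:

```python
import hashlib

CHUNK_SIZE = 256 * 1024  # 256 kB chunks, as in the example above

def chunk_hashes(data: bytes) -> list:
    """Divide a data set into fixed-size chunks and hash each chunk."""
    return [hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
            for i in range(0, len(data), CHUNK_SIZE)]

def changed_chunks(first_copy: bytes, second_copy: bytes) -> list:
    """Indices of chunks in the second copy whose hashes differ from the
    first copy and therefore must be saved; identical chunks are skipped."""
    old = chunk_hashes(first_copy)
    new = chunk_hashes(second_copy)
    return [i for i, h in enumerate(new) if i >= len(old) or h != old[i]]
```

Shrinking CHUNK_SIZE de-duplicates more finely but, as noted above, multiplies the number of key addresses that must be stored and indexed.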
Use of COS processing allows for the creation of disk-accessible data backups, thereby allowing more ready access to backed-up data sets. In this regard, COS may be incorporated into a virtual tape library (VTL) such that it emulates a tape storage device. The system allows the user to send data to an off-site disk storage center for backup. However, this requires that an entire data set be transmitted to the VTL, where the entire data set may be optimized (e.g., via COS) for storage. Further, for each subsequent backup, the entire data set must again be transferred to the off-site storage center. As may be appreciated, for large organizations having large data sets requiring backup, such an off-site storage system that requires transmission of the entire data set may involve large bandwidth requirements to transfer the data as well as high processing requirements to optimize and compare the data. Finally, organizations utilizing off-site VTLs are 100% reliant on the backup application for restoration of their data, again leaving the user potentially exposed to the unavailability of information in the case of accidental deletion or disk corruption.
Existing short-term data protection solutions are cost prohibitive and do little to enable improved access to archived information. The archive techniques described herein provide a long-term solution to managing information as well as a solution that may be utilized in disk-based archives. The techniques use existing disk resources and provide transparent access to collections of archived information. The technique, in conjunction with an open-architecture object-based content store, allows for large increases (e.g., 20:1) in the effective capacity of disk-based systems with no changes to existing short-term data protection procedures.
In addition, to better optimize the long-term storage of content, the new techniques reduce the redundant information stored for a given data set. Adaptive content factoring is a technique, developed by the inventors, in which unique data is keyed and stored once. Unlike traditional content factoring or adaptive differencing techniques, adaptive content factoring uses a heuristic method to optimize the size of each quantum of data stored. It is related to data compression, but is not limited to localized content. For a given version of a data set, new information is stored along with metadata used to reconstruct the version from each individual segment saved at different points in time. The metadata and reconstruction phase is similar to what a typical file system does when servicing I/O requests.
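A toy content store can illustrate this "keyed once, reconstructed from metadata" idea. Everything here (the class name, SHA-256 keys, manifest layout) is an assumption made for the sketch, not the patented implementation:

```python
import hashlib

class FactoredStore:
    """Each unique segment is keyed by its hash and stored exactly once;
    a version is only a manifest of keys used to rebuild the content."""

    def __init__(self):
        self.segments = {}   # key -> segment bytes (stored once)
        self.manifests = {}  # version name -> ordered list of keys

    def put_version(self, name, segments):
        """Record a version; segments already stored are not duplicated."""
        keys = []
        for seg in segments:
            key = hashlib.sha256(seg).hexdigest()
            self.segments.setdefault(key, seg)  # no duplicate storage
            keys.append(key)
        self.manifests[name] = keys

    def reconstruct(self, name):
        """Rebuild a version from segments saved at different points in time."""
        return b"".join(self.segments[k] for k in self.manifests[name])
```

The manifest plays the role of the metadata described above: the reconstruction loop is what a file system effectively does when servicing I/O requests against such a store.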
While the aspects described herein are in the general context of computer-executable instructions of computer programs and software that run on computers (e.g., personal computers, servers, networked computers etc.), those skilled in the art will recognize that the invention also can be implemented in combination with other program modules, firmware and hardware. Generally, program modules include routines, programs, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the invention can be practiced with other computer configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, cloud-based computing and the like.
According to a first aspect of one invention, a method and system (utility) is provided for storing data. The utility entails receiving a first data set and compressing the first data set using a dictionary based compression engine. Such compression generates a first compressed file that represents the first data set. This first compressed file is then stored. This first compressed file may then be utilized to identify changes in a subsequent version of the first data set. As utilized herein, it will be appreciated that "data set" is meant to include, without limitation, individual data files as well as folders that include a plurality of data files and/or drives that may include a plurality of folders. In such instances, compressing the first data set may generate a corresponding plurality of first compressed files.
In one arrangement, using the first compressed file to identify changes includes preloading a dictionary-based compression engine with the first compressed file to define a conditioned compression engine. That is, the first compressed file may be loaded into the compression engine to define a dictionary for the compression engine. If the first data set and subsequent data set are substantially similar, use of the first data set as a dictionary for the compression engine will result in a highly compressed second data set. Accordingly, the utility includes compressing the subsequent version of the first data set using the conditioned compression engine. In this regard, a second compressed file is generated that is indicative of the subsequent version of the first data set. This second compressed file may also be indicative of changes between the subsequent data set and the first data set. Further, the second compressed file may include one or more references to the first compressed file. The second compressed file may be considerably smaller than the first compressed file. It will be appreciated that multiple subsequent sets of data may be compressed utilizing one or more earlier data sets as a dictionary for a dictionary based compression engine.
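Deflate's preset-dictionary feature, exposed by Python's zlib module, gives a minimal sketch of such a conditioned engine. Note the hedges: deflate only consults the last 32 kB of a dictionary, so this illustrates the principle on small buffers rather than reproducing the engine described here:

```python
import zlib

def compress_against(previous: bytes, current: bytes) -> bytes:
    """Compress `current` with `previous` preloaded as the dictionary."""
    engine = zlib.compressobj(level=9, zdict=previous)
    return engine.compress(current) + engine.flush()

def decompress_against(previous: bytes, blob: bytes) -> bytes:
    """Reconstruct `current` from the compressed blob plus `previous`."""
    engine = zlib.decompressobj(zdict=previous)
    return engine.decompress(blob) + engine.flush()
```

When the two versions are near-identical, the compressed blob consists largely of back-references into the dictionary and can be far smaller than the current version compressed on its own.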
In order to identify corresponding portions of the first data set with corresponding portions of the second data set (e.g., corresponding files) the utility may further entail generating identifier information for one or more individual portions of the data sets. For instance, hash code information (also referred to herein as "hash information" and a "hash" or "hashes") may be generated for individual portions of the data sets. Further, such hash information may be generated for individual components of each individual portion of the data sets. In one arrangement, one or more hash codes may be associated with the metadata of a given file and another hash code may be generated for the content of the file. Accordingly, such hash codes may be utilized to identify corresponding portions of the first data set and the subsequent data set for compression purposes. If no corresponding hash codes exist for portions of the subsequent data set, normal compression methods may be utilized on those portions of the subsequent data set.
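One hedged illustration of pairing a metadata hash with a content hash per portion; the metadata fields and separator chosen here are assumptions of the sketch, not the source's format:

```python
import hashlib

def portion_signature(name: str, mtime: float, size: int,
                      content: bytes) -> dict:
    """Two-part signature for one portion of a data set: an identifier
    hash over metadata and a content hash over the raw bytes."""
    meta = f"{name}|{mtime}|{size}".encode()
    return {
        "identifier": hashlib.sha256(meta).hexdigest(),
        "content": hashlib.sha256(content).hexdigest(),
    }
```

A touched-but-unchanged file then yields a new identifier hash while its content hash still matches, which is exactly the distinction the comparison steps below rely on.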
According to another aspect, a system and method (utility) is provided for compressing data. The utility includes receiving a file and determining that a previous version of the file has been previously stored. Once such a determination is made, the file may be compressed using compression dictionary terms generated from the previous version of the file. Accordingly, a compressed file is generated for the received file. This compressed file may then be stored. The compression dictionary terms may be generated from the previous version of the file or from a compressed version of the previous version of the file. In either arrangement, the utility may include preloading a compression engine with the previous version of the file and buffering the received file in portions with the compression engine. This may allow for substantially matching the buffered portions of the received file with like-sized portions of the previous file.
The determination that a previous version of the file has been previously stored may be made in any appropriate manner. For instance, files may be saved on a file by file basis wherein a user selects the previously stored version of the file during a back-up procedure. In another arrangement, hashes associated with the version references (e.g., associated with metadata of the files) may be utilized to determine relationships between the files. In one arrangement, first and second hashes are associated with the metadata of the previously stored file and the received file. In such an arrangement a corresponding first hash of the files may match (e.g., corresponding to a storage location) while a second corresponding hash (e.g., a version reference) of the files may not match. In this regard, it may be determined that the files are related but have changes there between. Accordingly, it may be desirable to compress the subsequent file utilizing the previous file in order to reduce volume for back-up purposes.
According to another inventive aspect, a system and method (utility) is provided for use in archiving and/or storing data. The utility entails generating an individual signature for a data set such that the signature may be compared to subsequent data sets to identify corresponding or like portions and, hence, differences between those data sets. Accordingly, like portions of the data sets need not be copied in a back-up procedure. Rather, only new portions (e.g., differences) of the subsequent data set need be copied for archiving/back-up purposes.
In one aspect, the utility includes generating a first signature associated with the first data set, wherein generating the first signature includes generating a first set of hashes (e.g., hash codes) associated with metadata of the first data set. In addition, a set of content hashes is generated for the first data set that is associated with the content of the first data set. For instance, each individual file or data portion in a data set may include a first hash associated with its metadata (e.g., an identifier hash) and a second hash associated with its content (e.g., a content hash). Once generated, the signature, including the first hashes and the content hashes, may be utilized individually and/or in combination to identify changes between the first data set and a subsequent data set. For instance, an identifier hash of the first data set may be compared with corresponding hashes of a subsequent data set. Based on such comparison, it may be determined that changes exist between one or more portions of the first data set and the subsequent data set. That is, it may be determined if changes exist between one or multiple portions of the first and second data sets.
In one arrangement, if an identifier hash of the second data set does not match an identifier hash of the first data set, content associated with the unmatched identifier hash may be compared to content of the first data set. More particularly, that content may be hashed and the resulting content hash code may be compared to content hash codes associated with the first data set. In this regard, even if the identifier of the content does not match an identifier in the first data set, a second check may be performed to determine if the content already exists in the first data set. If the content hash code exists, the content may not be transmitted to a storage location or otherwise stored. If the content hash code of the unmatched identifier hash does not match a content hash code within the first data set, that content may be stored at a storage location.
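That two-level check reduces to set-membership tests over the signatures. In this sketch each portion's signature is a plain dict with hypothetical "identifier" and "content" fields; the field names are assumptions, not the source's format:

```python
def portions_to_send(previous: list, current: list) -> list:
    """Indices of current portions whose content must go to storage.
    A portion is skipped when its identifier hash matches an existing
    portion or, failing that, when its content hash is already stored."""
    known_ids = {p["identifier"] for p in previous}
    known_content = {p["content"] for p in previous}
    return [
        i for i, p in enumerate(current)
        if p["identifier"] not in known_ids
        and p["content"] not in known_content
    ]
```

A renamed or touched file thus fails the identifier test but passes the content test, so its bytes are never retransmitted.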
In one arrangement, the identifier hash, which is associated with metadata, may include first and second identifier hashes. Each of these hashes may be associated with portions of metadata. For instance, one of these hashes may be a sub-portion of the other hash. In this regard, finer comparisons may be made between data sets to identify changes there between.
In a further inventive aspect, systems and methods (utilities) are provided for allowing distributed processing for archiving purposes. In this regard, rather than transferring an entire data set to an archive location, the identification of changes between an archive data set and a current data set may be performed at the location of the current data set (e.g., a data origination location). Accordingly, the only information that may be sent to the archive location may be differences between a previously stored data set and the current data set.
According to one aspect, a first data set is received for storage (e.g., at an archive/back-up location). A signature may be generated for the first data set and may include a set of identifier hashes that are associated with metadata of the first data set. Likewise, a set of content hashes associated with the content of the first data set may also be generated. The signature may be generated at the data origination location or at the storage location. When it becomes necessary to back up a current set of data associated with the first data set, the signature may be retrieved from storage or provided to a data origination location associated with the first data set. The signature of the first data set and a subsequent data set may be utilized at the data origination location to determine changes between the first data set and the subsequent data set such that the changes may be identified, compressed and forwarded to the storage location. In this regard, the utility also entails receiving data from the subsequent data set that fails to match the provided identifier hashes and/or the content hashes.
According to another aspect, a utility is provided wherein a set of identifier hashes associated with metadata of a previously stored data set is received. These identifier hashes are compared to identifier hashes of a current data set, at least a portion of which may form a subsequent version of the previously stored data set. Comparing the identifier hashes allows for identifying unmatched identifier hashes of the current data set. Accordingly, a portion or all of the content associated with the unmatched identifier hashes may be sent to a storage location.
In a further arrangement, the utility further includes receiving a set of content hashes associated with content of the previously stored data set. In such an arrangement, content hashes associated with the content of the unmatched hashes of a current data set may be compared with the content hashes of the previously stored data set. Accordingly, in such an arrangement, if neither the identifier hash nor the content hash corresponds to a hash of the previously stored data set, the unmatched content may be sent to a storage location.
In the preceding two aspects, the steps of sending/providing and/or receiving may be performed over a direct connection between, for example, a computer and a storage location (e.g., direct attached storage, a removable hard drive or other portable storage device) or may be performed over a network connection. In the latter regard, such a network connection may include a wide area network, the Internet, a direct attached storage network and/or a peer computer.
In a further aspect, a system and method are provided for storing and providing access to a plurality of different versions (e.g., sequential versions) of a data set. The utility includes generating a catalog of the different data sets at different points in time. Each catalog includes information needed to reconstruct an associated data set at a particular point in time. That is, rather than generating a full copy of a particular data set for a point in time, the present utility generates a catalog having references to the location of data required to reconstruct a given data set.
In one arrangement, the catalog may include various hash codes for different streams of data (e.g., components of a file). These hash codes may allow for identifying and locating the components of a given file within the catalog. Accordingly, these components may be reconstructed to form the file in the form it existed when the catalog was generated. Stated otherwise, rather than storing the data of a given file, the catalog stores references to the location of the data associated with the file such that duplicating components of the file is not always necessary. Further, it will be appreciated that the stored references of a given catalog may reference different segments of a given file that may be saved at different times.
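A minimal sketch of such a catalog, assuming hypothetical file and segment names, keeps per-version lists of references into a shared segment store rather than copies of the data itself:

```python
import hashlib

# Shared store of unique data segments, keyed by content hash
segment_store = {}

def store_segment(data: bytes) -> str:
    """Store a segment once; later catalogs reuse the same key."""
    key = hashlib.sha1(data).hexdigest()
    segment_store.setdefault(key, data)
    return key

def make_catalog(files: dict) -> dict:
    """Catalog a point-in-time data set: per file, a list of segment references."""
    return {name: [store_segment(chunk) for chunk in chunks]
            for name, chunks in files.items()}

def reconstruct(catalog: dict, name: str) -> bytes:
    """Rebuild a file as it existed when its catalog was generated."""
    return b"".join(segment_store[key] for key in catalog[name])

# Two versions of a file; only the changed segment is stored anew
t0 = make_catalog({"report.txt": [b"intro...", b"body..."]})
t1 = make_catalog({"report.txt": [b"intro...", b"body v2..."]})
assert reconstruct(t0, "report.txt") == b"intro...body..."
assert len(segment_store) == 3  # "intro..." is stored once and shared
```

The design choice mirrored here is that a catalog holds only references, so segments common to many versions (or many files) occupy storage once.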
In any of the aspects, the first data set may be divided into predetermined data portions. Such data portions may have a predetermined byte length. In this arrangement, rather than relying on a file name or path to identify if data is common between different data sets, corresponding portions of the data sets may be compared to determine if differences exist.
In any of the aspects, the processes may be performed on multiple processors to reduce the time required to back-up a data set.
BRIEF DESCRIPTION OF THE DRAWINGS
Exemplary embodiments are illustrated in referenced figures of the drawings. It is intended that the embodiments and figures disclosed herein be considered illustrative rather than limiting.
FIG. 1 illustrates long term storage requirements for a data set.
FIG. 2 illustrates changes to a data set between versions.
FIG. 3 illustrates a process for identifying differences between related data sets.
FIG. 4 illustrates a process for generating a signature for a data set.
FIG. 5 illustrates a process for storing data.
FIG. 6 illustrates an accessible catalog of multiple archive catalogs.
FIG. 7 illustrates a process for retrieving data.
FIG. 8 illustrates a process for reconstructing data.
FIG. 9 illustrates storage of data over a network.
FIG. 10 illustrates one embodiment of storing meta-data with content data.
FIG. 11A illustrates a large data set.
FIG. 11B illustrates a large data set with virtual pagination.
FIG. 12 illustrates another embodiment of storage over a network.
FIG. 13 illustrates generation of a baseline data set without pagination.
FIG. 14 illustrates generation of a baseline data set with pagination.
FIG. 15 illustrates back-up of the data set of FIG. 13.
FIG. 16 illustrates back-up of the data set of FIG. 14.
FIG. 17 illustrates network usage of the back-up of FIG. 15.
FIG. 18 illustrates network usage of the back-up of FIG. 16.
FIG. 19 illustrates back-up of a data set without pagination.
FIG. 20 illustrates back-up of a data set with pagination.
FIG. 21 illustrates back-up of a data set with pagination performed on multiple processors.
Reference will now be made to the accompanying drawings, which assist in illustrating the various pertinent features of the present invention. Although the present invention will now be described primarily in conjunction with archiving/back-up storage of electronic data, it should be expressly understood that the present invention may be applicable to other applications where it is desired to achieve the objectives of the inventions contained herein. That is, aspects of the presented inventions may be utilized in any data storage environment. In this regard, the following description of use for archiving is presented for purposes of illustration and description. Furthermore, the description is not intended to limit the invention to the form disclosed herein. Consequently, variations and modifications commensurate with the following teachings, and skill and knowledge of the relevant art, are within the scope of the present invention. The embodiments described herein are further intended to explain modes known of practicing the invention and to enable others skilled in the art to utilize the invention in such, or other embodiments and with various modifications required by the particular application(s) or use(s) of the present invention.
Strict use of backup and restore processes alone for the purpose of archiving is unacceptable for most regulated environments, and disk-based backup environments using traditional methods are generally cost prohibitive. Two common methods to increase availability and minimize the cost of disk storage are to incorporate either Hardware Based Disk Libraries (HBDL) or Virtual Tape Libraries (VTL). Neither solution deals with data redundancy issues, and these solutions do little to reduce overall Total Cost of Ownership (TCO).
An alternate approach adopted by IT organizations is to employ block level snap-shot technologies, such as a volume shadow copy service, or similar hardware vendor provided snap-shot technology. In this scenario changed blocks are recorded for a given recovery point. However, these systems typically reset (roll-over) after a specified number of snap-shots or when a volume capacity threshold is reached. In all cases, after blocks are reused deleted information is no longer available. Furthermore, snap-shot technologies lack any capability to organize data suitable for long-term archiving.
FIG. 1 shows the capacity required to manage a one terabyte volume for two years using a typical 4-week rotation scheme that includes keeping monthly volume images to address archiving requirements. This example models a 50% compound annual growth rate of data. While the overall volume of data to be backed up increases 50%, the data resources required to back-up this data over a year's time based on existing back-up techniques is nearly twenty times that of the original content/data. Also shown is the near-linear scaling, with respect to the original content/data, which can be achieved by using a disk-based archiving method based on techniques (e.g., adaptive content factoring techniques) provided herein. Note that the backend storage requirements are reduced by nearly 20 fold (see axis labeled Effective Capacity Ratio) while providing an increased number of recovery points and improved near-line access to archived information. The TCO approaches that of traditional tape-based backup systems when deployed on low to mid-range disk storage.
The archive technique disclosed herein is characterized as a long-term data retention strategy that may also allow for on-line/dynamic access to reference/stored information. The technique utilizes adaptive content factoring to increase the effective capacity of disk-based storage systems significantly reducing the TCO for digital archiving. Unlike traditional backup and recovery, all the data managed can be on-line and available. Further all the data within the archive remains accessible until it expires. Integrated search and archive collection management features improve the overall organization and management of archived information.
To better optimize the long term storage of content, the new archiving techniques reduce the redundant information stored for a given data set. As redundant information is reduced, fewer storage resources are required to store sequential versions of data. In this regard, adaptive content factoring is a technique in which unique data is keyed and stored once. Unlike traditional content factoring or adaptive differencing techniques, adaptive content factoring uses a heuristic method to optimize the size of each quantum of data stored. It is related to data compression, but is not limited to localized content. For a given version of a data set, new information is stored along with metadata used to reconstruct the version from each individual segment saved at different points in time. The metadata and reconstruction phase is similar to what a typical file system does when servicing I/O requests.
FIG. 2 shows the basic concept behind adaptive content factoring. At T0 a data set V0 (a file, volume, or database) is segmented and the individual elements are keyed and stored along with the metadata that describes the segments and process used to reconstruct the data set. At T1 and T2 the data set is updated such that the data sets become V1 and V2, respectively. However, rather than storing the entire new versions of the data sets V1 and V2 only the changes that represent the update portions of the data sets are stored along with the metadata used to reconstruct versions V1 and V2.
As will be further discussed herein, a novel method is provided for identifying changes (e.g., data blocks 3' and 10) between an initial data set V0 and a subsequent data set V1 such that large sets of data chunks (e.g., files, directories, etc.) may be compared to a prior version of the file or directory and only the changes in the subsequent version are archived. In this regard, portions of the original data set V0 (e.g., a baseline version) which have not changed (e.g., data blocks 1, 2 and 4-9) are not unnecessarily duplicated. Rather, when recreating a file or directory that includes a set of changes, the baseline version of the file/directory is utilized, and the recorded changes, or deltas (e.g., 3' and 10), are incorporated into the recovered subsequent version. In this regard, when backing up the data set V1 at time T1, only the changes to the initial data set V0 need to be saved to effectively back up the data set V1.
In order to identify the changes between subsequent versions of a data set (e.g., V0 and V1), the present invention utilizes a novel compression technique. As will be appreciated, data compression works by the identification of patterns in a stream of data. Data compression algorithms choose a more efficient method to represent the same information. Essentially, an algorithm is applied to the data in order to remove as much redundancy as possible. The efficiency and effectiveness of a compression scheme is measured by its compression ratio, the ratio of the size of uncompressed data to compressed data. A compression ratio of 2 to 1 (which is relatively common in standard compression algorithms) means the compressed data is half the size of the original data.
Various compression algorithms/engines utilize different methodologies for compressing data. However, certain lossless compression algorithms are dictionary-based compression algorithms. Dictionary based algorithms are built around the insight that it is possible to automatically build a dictionary of previously seen strings in the text that is being compressed. In this regard, the dictionary (e.g., resulting compressed file) generated during compression does not have to be transmitted with compressed text since a decompressor can build it in the same manner of the compressor and, if coded correctly, will have exactly the same strings the compressor dictionary had at the same point in the text. In such an arrangement, the dictionary is generated in conjunction with an initial compression.
The present inventors have recognized that a dictionary may, instead of being generated during compression, be provided to a compressor for the purpose of compressing a data set. In particular, the inventors have recognized that an original data set V0 associated with a first time T0 as shown in FIG. 2, may be utilized as a dictionary to compress a subsequent corresponding data set V1 at a subsequent time T1. In this regard, the compressor utilizes the original data set V0 as the dictionary and large strings of data in the subsequent data set V1 may be entirely duplicative of strings in the first set. For instance, as illustrated in FIG. 2, the actual storage of V1 at time T1 may incorporate a number of blocks that correspond to the data blocks of V0 at time T0. That is, some of the blocks in the second data set V1 are unchanged between data sets. Therefore, rather than storing the unchanged data block (e.g., duplicating the data block) an identifier referencing the corresponding data block from V0 may be stored. Accordingly, such an identifier may be very small, for example, on the order of 10 bytes. For instance, the identifier may reference a dictionary block of the baseline. In instances where there has been a change to a block of data, for example, 3', the compressor may be operative to compress the changes of 3' into an entry that includes differences to the baseline V0, as well as any changes in block 3. In addition, if additional text is added to the subsequent version (e.g., block 10'), this may be saved in the subsequent version T1.
In instances where very minor changes are made between subsequent versions of a data set, very large compression ratios may be achieved. These compression ratios may be on the order of 50 to 1, 100 to 1, 200 to 1 or even larger. That is, in instances where a single character is changed within a 10-page text document, the compression between the original version and the subsequent version may be almost complete, except for the one minor change. As will be appreciated, utilization of the original data set as the originating dictionary for a compression algorithm allows for readily identifying changes between subsequent data sets such that very little storage is required to store subsequent changes from the baseline data set V0. Accordingly, when it is time to recreate a subsequent version of a data set, the dictionary identifiers for the desired version of the data set may be identified. In this regard, when there is no change, the dictionary identifiers may point back to the original block of the baseline data set V0. In instances when there is a change (e.g., 3' or 6'), the identifier may point back to the original baseline data set and a delta data set. Such an arrangement allows for saving multiple subsequent versions of data sets utilizing limited storage resources.
The method works especially well when there are minor changes between back-ups of subsequent versions of data sets. However, even in instances where significant changes occur to a data set in relation to a previously backed-up data set, a significant reduction in the size of the stored data is still achieved. For instance, if an original data set corresponds with a 10-page text document and the subsequent corresponding document incorporates 15 new pages (i.e., for a combined total of 25 pages), the first 10 pages may achieve near perfect compression (e.g., 200 to 1), whereas the 15 pages of new text may be compressed at a more typical ratio of, for example, 2 to 1. However, further subsequent back-ups (e.g., a third version) may utilize the new text of versions 1 and 2 as the baseline references. Alternatively, when compression fails to achieve a certain predetermined compression ratio threshold, it may be determined that the changes are significant enough to warrant replacing the original version of the data with the subsequent version, which then becomes the baseline.
FIG. 3 illustrates a process 100 where a baseline data set is utilized to compress subsequent versions of the data set. As shown, an initial data set is obtained 102. This may entail receiving and storing the initial data set and/or compressing 104 the initial data set utilizing, for example, a standard compression technique. In this regard, a compressed file may be generated that represents the initial data set. At a subsequent time, the initial data set may be utilized 106 to identify differences in a subsequent data set. Such utilization may include conditioning 108 a dictionary-based compression engine with the original data set (compressed or uncompressed) and compressing 110 the subsequent data set utilizing the compression engine that uses the original data set as a dictionary. This generates 112 a compressed file that is indicative of the changes between the initial data set and the subsequent data set. Further, such a compressed file may include references to the compression dictionary (e.g., the original data set and/or the initial compressed file). Accordingly, the compressed file, which is indicative of the subsequent data set, may be stored 114 as a point-in-time archive, which may be subsequently accessed to enable, for example, data restoration. The use of the baseline data set as a dictionary for compression of subsequent corresponding data sets facilitates, in part, a number of the following applications. However, it will be appreciated that aspects of the following applications are novel in and of themselves.
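The dictionary conditioning described above can be sketched with Python's `zlib`, whose preset-dictionary (`zdict`) support allows a compressor to be conditioned with a baseline data set; the data here is illustrative, not the actual archive format:

```python
import hashlib
import zlib

# Baseline data set V0: deliberately non-self-redundant bytes
baseline = b"".join(hashlib.sha256(bytes([i])).digest() for i in range(256))

# Subsequent version V1: a small edit in the middle of V0
subsequent = baseline[:4096] + b"one small inserted change" + baseline[4096:]

# Condition the compressor with V0 as the dictionary, then compress V1
comp = zlib.compressobj(zdict=baseline)
delta = comp.compress(subsequent) + comp.flush()

# For comparison: compressing V1 without the baseline dictionary
plain = zlib.compress(subsequent)

# Reconstruction uses the same baseline as the decompression dictionary
decomp = zlib.decompressobj(zdict=baseline)
restored = decomp.decompress(delta) + decomp.flush()

assert restored == subsequent
assert len(delta) < len(plain)  # the delta is a fraction of the stand-alone size
```

Because nearly every byte of V1 is found in the dictionary, the compressor emits mostly back-references into V0, so the stored "delta" stays small even though V1 alone is incompressible.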
To provide archiving services that may take advantage, at least in part, of the compression technique discussed above, an initial data set must be originally cataloged. Such a catalog forms a map of the location of the various components of a data set and allows the reconstruction of a data set at a later time. In this regard, the first time a set of data is originally backed up to generate a baseline version of that data, the data may be hashed using one or more known hashing algorithms. In this regard, the initial cataloging process is at its core similar to existing processes. However, as opposed to other archiving processes that utilize hashing, the present application in one embodiment utilizes multiple hashes for different portions of the data sets. Further, the present application may use two or more hashes for a common component.
For instance, a data set may be broken into three different data streams, which may each be hashed. These data streams may include baseline references that include Drive/Folder/File Name and/or server identifications for different files, folders and/or data sets. That is, the baseline references relate to the identification of larger sets/blocks of data. A second hash is performed on the metadata (e.g., version references) for each of the baseline references. In the present embodiment, the first hash relating to the baseline reference (e.g., storage location) may be a subset of the metadata utilized to form the second hash. In this regard, it will be appreciated that the metadata associated with each file of a data set may include a number of different properties; for instance, there are between 12 and 15 properties for each such version reference. These properties include name, path, server & volume, last modified time, file reference id, file size, file attributes, object id, security id, and last archive time. Finally, for each baseline reference, there are raw data, or Blobs (Binary Large Objects). Generally, such Blobs of data may include file content and/or security information. By separating the data set into these three components and hashing each of these components, multiple checks may be performed on each data set to identify changes for subsequent versions.

1st Hash: Baseline Reference (Bref)
  Primary Fields: Path\Folder\Filename; Volume Context
  Qualifier: Last Archive Time

2nd Hash: Version Reference (Vref) (12-15 properties)
  Primary Fields (change indicators): Path\Folder\Filename; Reference Context (one or three fields); File Last Modification Time (two fields); File Reference ID; File Size (two fields)
  Secondary Fields (change indicators): File Attributes; File ObjectID; File SecurityID
  Qualifier: Last Archive Time

3rd Hash (majority of the data): Blobs (individual data streams)
  Primary Data Stream; Security Data Stream; Remaining Data Streams (except Object ID Stream)
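The three-hash scheme above might be sketched as follows; the specific field selections and the use of SHA-1 are assumptions made for illustration:

```python
import hashlib

def h(*fields: str) -> str:
    """Hash a tuple of metadata fields into one hex identifier."""
    return hashlib.sha1("|".join(fields).encode()).hexdigest()

# Illustrative metadata for one file of the data set (values are hypothetical)
path, volume = r"C:\docs\report.txt", "server1/vol0"
mtime, size = "2008-01-15T10:22:31", "18432"
content = b"...file content bytes..."

# 1st hash: baseline reference (Bref) over the location fields
bref = h(path, volume)

# 2nd hash: version reference (Vref) over the fuller metadata;
# the Bref fields are a subset of the Vref fields
vref = h(path, volume, mtime, size)

# 3rd hash: content hash over the raw data stream (the Blob)
blob_id = hashlib.sha1(content).hexdigest()

# A rename changes the identifier hashes but not the content hash,
# so previously stored content can be recognized and not re-stored
renamed_vref = h(r"C:\docs\report-final.txt", volume, mtime, size)
assert renamed_vref != vref
assert hashlib.sha1(content).hexdigest() == blob_id
```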
In another arrangement, a compound hash is made of two or more hash codes. That is, the Vref, Bref, and Blob identifiers may each be made up of two hash codes. For instance, a high-frequency (strong) hash algorithm may be utilized alongside a low-frequency (weaker) hash algorithm. The weak hash code indicates how good the strong hash is and is a first-order indicator for a probable hash code collision (i.e., a matching hash). Alternatively, an even stronger (more bytes) hash code could be utilized; however, the processing time required to generate yet stronger hash codes may become problematic. A compound hash code may be represented as:
ba = "01154943b7a6ee0e1b3db1ddf0996e924b60321d"
     |      strong hash component       | weak |
     |        high-frequency            | low  |
In this regard, two hash codes, which require less combined processing resources than a single larger hash code, are stacked. The resulting code allows for providing additional information regarding a portion/file of a data set.
Generally, as illustrated by FIG. 4, an initial set of data is hashed into different properties in order to create a signature 122 associated with that data set. This signature may include a number of different hash codes for individual portions (e.g., files or paginations) of the data set. Further, each portion of the data set may include multiple hashes (e.g., hashes 1-3), which may be indexed to one another. For instance, the hashes for each portion of the data set may include identifier hashes associated with the metadata (e.g., baseline references and/or version references) as well as a content hash associated with the content of that portion of the data set. When a subsequent data set is obtained 124 such that a back-up may be performed, the subsequent data set may be hashed to generate hash codes for comparison with the signature hash codes.
However, as opposed to hashing all the data, the metadata and the baseline references, or identifier components of the subsequent data set, which generally comprise a small volume of data in comparison to the data Blobs, may initially be hashed 126 in order to identify files or pages of data 128 (e.g., unmatched hashes) that have changed or been added since the initial baseline storage. In this regard, content of the unmatched hashes (e.g., Blobs of files) that are identified as having been changed may then be hashed 130 and compared 132 to stored versions of the baseline data set. As will be appreciated, in some instances the name of a file may change between first and second back-ups. However, it is not uncommon for no changes to be made to the text of the file. In such an instance, hashes of the version references may indicate a change in the modification time between the first and second back-ups. Accordingly, it may be desirable to identify content hashes associated with the initial data set and compare them with the content hashes of the subsequent data set. As will be appreciated, if no changes occurred to the text of the document between back-ups, the content hashes and their associated data (e.g., Blobs) may be identical. In this regard, there is no need to save data associated with the renamed file (e.g., duplicate previously saved data). Accordingly, a new file name may share a reference to the baseline Blob of the original file. Similarly, a file with identical content may reside on different volumes of the same server or on different servers. For example, many systems within a workgroup contain the same copy of application files for Microsoft Word®, or the files that make up the Microsoft Windows® operating systems. Accordingly, the file contents of each of these files may be identical. In this regard, there is no need to resave data associated with the identical file found on another server.
Accordingly, the file will share a reference to the baseline Blob of the original file from another volume or server. In instances where there is unmatched content in the subsequent version of the data set relative to the baseline version, a subsequent Blob may be stored 134 (e.g., compressed and stored).
Importantly, the process 120 of FIG. 4 may be distributed. In this regard, the hash codes associated with the stored data may be provided to the origination location of the data. That is, the initial data set may be stored at an off-site location. By providing the hash codes to the data origination location, the determination of what is new content may be made at the origination location of the data. Accordingly, only new data may need to be transferred to the storage location. As will be appreciated, this reduces the bandwidth requirements for transferring back-up data to an off-site storage location.
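This distributed determination can be sketched as follows, with hypothetical file names: given the identifier and content hashes of the previously stored data set, only genuinely new content is selected for transfer to storage:

```python
import hashlib

def id_hash(path: str, mtime: str) -> str:
    """Identifier hash over a file's metadata (fields chosen for illustration)."""
    return hashlib.sha1(f"{path}|{mtime}".encode()).hexdigest()

def select_changes(current: dict, known_ids: set, known_content: set) -> list:
    """At the data origination location, decide what must travel to storage."""
    to_send = []
    for path, (mtime, content) in current.items():
        if id_hash(path, mtime) in known_ids:
            continue                      # metadata unchanged: nothing to do
        c = hashlib.sha1(content).hexdigest()
        if c in known_content:
            continue                      # renamed/touched, content already stored
        to_send.append((path, content))   # genuinely new content crosses the wire
    return to_send

# Signature of the previously stored data set (normally provided by the archive)
prev = {"a.txt": ("t0", b"alpha"), "b.txt": ("t0", b"beta")}
known_ids = {id_hash(p, m) for p, (m, _) in prev.items()}
known_content = {hashlib.sha1(c).hexdigest() for _, (_, c) in prev.items()}

now = {"a.txt": ("t0", b"alpha"),         # untouched
       "b-renamed.txt": ("t1", b"beta"),  # renamed, identical content
       "c.txt": ("t1", b"gamma")}         # new file
assert select_changes(now, known_ids, known_content) == [("c.txt", b"gamma")]
```

Only the hash sets, not the stored data, need travel to the origination location, which is what keeps the bandwidth requirement low.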
While primarily discussed in relation to using hash codes to identify correlations (e.g., exact matches and/or near matches) between an initial data set and a subsequent data set, it will be appreciated that other correlation methods may be utilized to identify a baseline data set for use in compressing a data set. For instance, rather than hashing an initial data set, a general correlation may be performed between two data sets to identify at least partially correlating portions of the data sets. Rather than knowing an existing relation between the data sets, a correlation is performed using the data set and the universe of known data. If a portion of the data set correlates to a high enough degree with the universe of known data, the data from the universe of known data may be selected for use as a baseline for the data set. That is, the data identified as correlating to the data set may be selected and utilized to compress the data set. Stated otherwise, any means of correlating a new data set to known data may be utilized to select prior stored data for compression purposes.
FIG. 5 illustrates one embodiment of a process for archiving data in accordance with certain aspects of the present invention. Initially, an original set of data is received 1. This data set may include, without limitation, data received from a server, database or file system. This data is typically received for the purpose of backing up or archiving the data. Each item/object (e.g., file, folder, or arbitrary blocks of data) within the received data is processed 2 and a version reference ("Vref") is computed 3. As noted above, the Vref includes numerous fields relating to the metadata 3a of the objects. These fields may include Primary fields and Secondary fields, and may be utilized to identify changes between archiving (i.e., backing up) of first and subsequent instances of data sets.
This initially allows for determining whether the object data already exists within the archive system. Once the Vref is computed 3, it is assigned to an object store 4, 4a. Once the assignment is made, a comparison 5 is performed with the common content object store to determine 6 if the object associated with the Vref already exists (i.e., from a previous archive operation). This determination is performed utilizing the Reference Lookaside Table 7, a table that includes Vref and Bref hash codes. If the Vref of an object from the newly received data is equivalent to a Vref of a previously archived object, a determination is made that the object may already exist. In the event no match is located within the Reference Lookaside Table 7, the existence of the object is further determined by searching the Object Store; if a match is found there, the Vref is loaded into the Reference Lookaside Table. If no match is located at all, processing proceeds as discussed herein.
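The Reference Lookaside Table can be sketched as an in-memory cache consulted before the slower object store; the class and names here are illustrative, not the actual implementation:

```python
class ReferenceLookasideTable:
    """In-memory cache of Vref/Bref hash codes, backed by the object store."""

    def __init__(self, object_store: set):
        self._cache = set()
        self._store = object_store    # authoritative set of archived references

    def exists(self, ref: str) -> bool:
        if ref in self._cache:
            return True               # fast path: reference already seen
        if ref in self._store:
            self._cache.add(ref)      # load into the table on a store hit
            return True
        return False                  # new object: must be archived

store = {"vref-123", "vref-456"}      # hypothetical previously archived Vrefs
table = ReferenceLookasideTable(store)
assert table.exists("vref-123")       # found in the store, now cached
assert "vref-123" in table._cache
assert not table.exists("vref-999")   # unmatched: proceed to archive the object
```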
If no match is identified (e.g., the object represents new data or data that has been modified since an earlier back-up), a storage policy is selected 8 for archiving the data. In the illustrated embodiment, a general purpose policy may be selected. As may be appreciated, different policies may be selected for different data types. For instance, a general purpose policy may be selected for data that is unknown. In contrast, for data sets where one or more components of the data are known, it may be preferable to select policies that better match the needs of the particular data set. Once a policy is selected 9, the process continues and a baseline reference ("Bref") 9 is computed for each previously unmatched object 10a of the data source. A subset of the Vref data is utilized to compute the baseline or Bref data. Specifically, the metadata that is outlined above is utilized to compute a hash for the baseline reference objects.
Once Bref 9 is computed for an object, it is assigned 11 to a store. This assignment 11 is based on the same assignment made for the corresponding Vref. Typically, the Bref computed is the latest Bref. However, in some instances the metadata may be identical at first and second points in time (e.g., first and second archiving processes) while the object data changes. In such instances, a determination 12 is made whether the current Bref is the latest Bref by a comparison with other Bref data in the object store using the Last Archive Time qualifier. This provides a redundancy check to assure there have or have not been changes between corresponding objects of different archiving processes.
A determination 13 is then made if the current Bref already exists within the object store. Again, the Reference Lookaside Table 7 is utilized for this determination. In this regard, the hash of the current Bref data is compared to existing hashes within the Reference Lookaside Table 7.
If the object already exists, it is resolved to a Blob 14 (i.e., a binary large object) comprising a series of binary data zeros and ones. The Bref is utilized to look up the Vref, which is then utilized to look up the associated Blob of data. In some instances, the Blob of data may reference a further Blob, which is a root baseline Blob. In some instances, Blobs of common data exist for many objects. For instance, the operating systems of numerous separate computers may be substantially identical, having many of the same files. Accordingly, when the backup of such separate computers is performed, the resulting Blobs for the common files may be identical. Therefore, the Vrefs and Brefs of different objects may reference the same Blobs.
Once a baseline Blob is located, it is loaded 15 as a dictionary for the compression algorithm. When the Blob is loaded 15 into the dictionary, it may be broken into individual chunks of data. For instance, the baseline Blob may be broken into 30 KB data chunks or into other arbitrary sized data chunks based on operator selection. These individual chunks may be loaded into the compressor to precondition a compressing algorithm.
It will be noted that any of a plurality of known compression techniques can be utilized so long as it may be preconditioned. In the present case, the compression algorithm is preconditioned with portions or the entirety of the Blob data. Up to this point, all data that has been processed has been metadata. At this point, however, the received object is hashed as it is being compressed 16 using the compressing algorithm preconditioned with the baseline Blob. If the object has a Bref, the changes between the new object and the baseline object are determined by the resultant compression of the item, called a delta Blob 17. In that case, the corresponding delta Blob is often only a fraction of the size of the baseline Blob, and compression ratios of 100:1 are not uncommon.
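One preconditionable compressor that is widely available is zlib, whose preset dictionary (`zdict`) can play the role of the baseline-Blob preconditioning step; the 30 KB chunk size mirrors the example above and fits within deflate's 32 KB dictionary window. This is a sketch under those assumptions, not the system's actual compressor:

```python
import os
import zlib

CHUNK = 30 * 1024  # dictionary-sized chunk, mirroring the 30 KB example

def delta_blob(new_data: bytes, baseline_chunk: bytes) -> bytes:
    """Compress new_data with a compressor preconditioned on a baseline chunk.

    zlib's preset dictionary stands in for the preconditioning step; it is
    limited to the 32 KB deflate window, hence the 30 KB chunks.
    """
    comp = zlib.compressobj(level=9, zdict=baseline_chunk)
    return comp.compress(new_data) + comp.flush()

# a baseline object of incompressible data, and a lightly edited new version
baseline = os.urandom(CHUNK)
changed = baseline[:1000] + b"EDIT" + baseline[1000:]

plain = zlib.compress(changed, 9)       # no preconditioning: almost no gain
delta = delta_blob(changed, baseline)   # preconditioned: a small delta Blob
assert len(delta) < len(plain) // 5
```

Because the changed data is almost entirely matched against the dictionary, the delta is a small fraction of the plainly compressed size, which is the effect the 100:1 ratios above describe.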
The process to identify changes is referred to as the delta Blob process. The output of the delta Blob process is a binary set of data that may represent either the difference between a baseline data set and a new data set, or, in the case where no baseline exists, the output may become the baseline for future reference purposes. In either case, the delta or baseline Blob is represented by the hash of the received data and is copied/stored 18 to the object store 5, if it does not currently exist. Optionally, older versions, as determined by the Last Archive Time qualifier, of Brefs and their corresponding Vref, and baseline or delta Blob data may be recycled to free space within the object store.
As will be appreciated, the archiving system described above is fully self-contained and has no external storage requirements. As such, the entire object store 5 may be hosted on a single removable unit of media for the purpose of offsite storage. Because all indexes, references, and content are maintained within a single file structure as individual items, and since none of the items stored are required to be updated, any facility to replicate the object store to an alternate or remote location may be employed. The unique storage layout provides a fault tolerant structure that isolates the impact of any given disk corruption. Furthermore, the referential integrity of items may be verified and any faults isolated. Subsequent archiving jobs may be used to auto-heal detected corruptions. With regard to removable media, once the base object store layout and tree depth are defined, the identical structure may be duplicated on any number of removable media in such a manner that provides for continuous rotation of media across independent points-in-time. The process is similar to tape media rotation, though far more efficient since common content is factored. The structure facilitates the reduction of equivalent media units by 20:1 or more.
FIGS. 7 and 8 illustrate reconstruction of data from an object store. As noted, the process allows for real-time reconstruction of data, that is, dynamic or `on-the-fly`. To provide such dynamic reconstruction, the archived data is represented in a virtual file system that is accessible by a user attempting to reconstruct data. To reconstruct data, the address of a desired object or file must be known. How that address comes to be known is discussed below.
Initially, all the data within the system is stored within the object store and may be represented in a virtual file system as illustrated in FIG. 6, which illustrates accessing archived data using the virtual file system, and in the present embodiment, a web client network. However, it will be appreciated that access to archived data can be via a stand-alone unit attached to a system for which archiving is desired. Certain aspects of the virtual file system (VFS) are applicable to both systems. In the case of the web client network, access to the archived data can be achieved via WebDAV using the Windows WebClient service redirector. This redirector allows for access to archived data using a universal naming convention (UNC) path. In this instance, the entry point to viewing archived data is through the UNC path \\voyager\ObjectStore. In addition, the WebClient redirector supports mapping a drive letter to a UNC path. For instance, the drive letter L: could be assigned to \\voyager\ObjectStore. It should be noted that a drive letter mapping can be assigned to any level of the hierarchy. For instance, X: could be mapped to \\voyager\ObjectStore\Important Documents directly.
FIG. 6 shows the object store entry in the VFS hierarchy. In this example the object store instance is called ObjectStore. Object stores contain both archived data pooled from multiple resources, (e.g., common content from multiple sources) and archives that more tightly define a particular/individual data set or catalog. That is, individual data sets are indexed within their own archive (e.g., important documents). In this regard, when attempting to reconstruct data associated with a known data set, that data set's archive may be searched rather than searching the entire index of the object store. This allows searching the individual archive instead of searching the global index for desired information. This reduces storage requirements for index, computation requirements for searching, as well as core memory requirements.
Each time a data set is moved into the system, the current state of that data set or a point-in-time catalog is created and is recorded within the system. As may be appreciated, this may only entail storing information (e.g., metadata) associated with the data set as opposed to storing the raw data of the data set (e.g., assuming that data already exists within the system). In any case, the point in time that the data set is stored within the system will be saved. This results in the generation of a point in time catalog (e.g., the Archived UTC entries of FIG. 6). Each catalog, which represents a data set for a particular point in time, contains an exact representation of all the metadata for a particular dataset. However, not all the raw data associated with the data set for a particular point in time has to be copied. Only files that have changed between a previous point in time and the current point in time are copied into the system as previously described above. For files that have not changed, the metadata for the point in time catalog may be stored with appropriate references to data of previous catalogs.
As not all information for a point in time need be stored, numerous catalogs may be generated and saved for numerous points in time. That is, rather than a system that provides, for example, a limited number of complete back-up sets of data (e.g., which periodically are replaced by newer back-up data sets), each of which contains redundant copies of common data, the use of the comparatively small catalogs allows for increasing the number of points in time for which data may be reconstructed. That is, the catalogs allow for greatly increasing the granularity of the back-up data sets that are available to a user.
That is, rather than saving data for each point in time, the catalogs save codes for recreating data for a given point in time. Specifically, a catalog for a point in time contains one or more hash codes for each record (file), which are used by the virtual file system to recreate a replica of the data set for a given point in time. Below is an exemplary sample of a single record in the catalog, where the entries for ca, sa, oa, ba, and aa are hash codes representing different streams of data. For instance, <ca> is the Vref for the record and incorporates all the metadata used to identify a particular version. <sa> is a Blob address (hash) to a security stream. <oa> is the Blob address to an optional object identifier stream. <ba> is the primary Blob address. <aa> is the alternate (or secondary) Blob address.
TABLE-US-00002 <ref ty="2" nm="build.properties.sample" in="\LittleTest" on="3162645303" ct="128105391287968750" at="128186364571718750" mt="127483060790000000" sz="1644" fl="128" id="5629499534488294" ca="1d1649cb2b39816d69964c1c95a4a6ad79a41687" sa="3af4ec95818bdc06a6f105123c2737be6ea288df" oa="" ba="01154943b7a6ee0e1b3db1ddf0996e924b60321d" aa="" op="1" />
As shown, this portion of the catalog forms a record that allows for locating and recreating the meta-data and content of a given file.
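Because the record is ordinary XML, it can be read back with a stock parser to resolve the addresses the virtual file system needs. The sketch below uses the attribute names from the sample record (abbreviated here to the addressing fields):

```python
import xml.etree.ElementTree as ET

# abbreviated form of the sample catalog record shown above
record = (
    '<ref ty="2" nm="build.properties.sample" in="\\LittleTest" sz="1644" '
    'ca="1d1649cb2b39816d69964c1c95a4a6ad79a41687" '
    'sa="3af4ec95818bdc06a6f105123c2737be6ea288df" oa="" '
    'ba="01154943b7a6ee0e1b3db1ddf0996e924b60321d" aa="" op="1" />'
)

node = ET.fromstring(record)
name = node.get("nm")      # original file name
vref = node.get("ca")      # version reference (Vref) for this record
security = node.get("sa")  # Blob address of the security stream
primary = node.get("ba")   # primary Blob address (baseline or delta)
```

With the `ca` and `ba` hashes in hand, the VFS can locate the content in the object store without consulting any other index.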
Referring again to FIG. 6, the catalog represents the original data set and is in a hierarchical form that may include volumes, folders and files. Each of the entries in the hierarchy includes metadata that describes its properties. Further, folder records and file records include Vref addresses and archive time stamps. The hierarchy mimics the hierarchy of the data set that is backed up. For instance, the hierarchy may include individual users. When a particular user is selected, for example Mike, the contents of that user's computer, server, etc., may be stored in a manner that is identical to that user's computer, server, etc.
This hierarchy is presented as a portion of the virtual file system (VFS), which as noted above may be used to remotely access any set of stored data and has application outside of the archiving system described herein. The user may access the VFS hierarchy to reconstruct data from the appropriate archive of the object store. In this regard, the user may see on their screen a representation as illustrated in FIG. 6. A user may navigate the VFS to a particular archive and select a desired point-in-time catalog to expand that folder. At that time, the hierarchy beneath that point-in-time catalog may be provided to allow the user to navigate to a desired document within that point-in-time catalog. That is, the user may navigate the VFS, which mimics the user's standard storage interface, until they locate the desired document they want to reconstruct. Of note, no particular point-in-time need be selected by the user. For instance, a search engine may have the ability to search each point-in-time archive for desired data therein. Importantly, no specialized client application is required to access the VFS. In this regard, the authorized user may utilize their standard operating system in order to access the archived datasets as they would access a desired file on their own computer.
As noted, FIG. 6 is a representation of archived data. In this case, the data is from a Windows file system where multiple archiving runs are keeping full viewable versions of the file system available to a user. Of note, a transition occurs in the VFS hierarchy where the archiving point-in-time hierarchy stops and the representation of the data from the source starts. In this example, the transition or pivot is named "Archived UTC-2006.04.03-23.57.01.125". The folder(s) below this point in the hierarchy represent root file systems specified as file/folder criteria for an archiving task. "Users (U$) on `voyager`" is a file volume with the label Users, the drive letter U, and from a system named voyager. However, it will be appreciated that other file systems (e.g., non-Windows systems) may also be represented. Once a file level is reached within the archive for a particular point-in-time, the user may select a particular file. This selection then provides a version reference address (Vref) and archive time that may be utilized to begin reconstruction of that particular file.
The importance of storing the Blob address with the Vref is that it allows the Vref to reference the actual content within the object store 5, regardless of whether it is a Blob or a delta Blob. In the case where it is a delta Blob, that delta Blob may further reference a baseline Blob. Accordingly, the information may be obtained in an attempt to reconstruct the desired data. At this point, the baseline Blob and, if in existence, a delta Blob have been identified, and the data may be reconstructed.
A user may specify the archive time 32 in order to reconstruct data (e.g., for a specific Vref) from a particular time period. As will be appreciated, the actual archive times may not be identical to the desired time period provided by a user. In any case, the system determines 34 the most relevant reconstruction time (e.g. data from a back up performed before or shortly after the desired time). An initial determination 36 is made as to whether the initial Vref has a delta Blob. If a delta Blob exists for the Vref, that delta Blob is obtained 38 from the object store. The corresponding baseline Blob is also obtained 40 from the object store. If there is no delta Blob, only the baseline Blob is obtained. If a Vref references a non-compressed object (e.g. an individual file), that non-compressed object may be obtained for subsequent reading 44.
Once the Blob(s) (or a non-compressed object) are obtained, they may be reconstructed to generate an output of the uncompressed data. See FIG. 8. In the present process, the Vrefs (i.e., which references delta or baseline Blobs) are reconstructed in individual chunks or buffers from the obtained Blobs. The length of such buffers may be of a fixed length or of a variable length, which may be user specified. In the instance where the Vref references a delta Blob, which has been obtained as discussed above, the delta Blob may then be decompressed to reconstruct the Vref data. The object (e.g., delta Blob) is read 52 and decompressed until the buffer 54 is filled. This may be repeated iteratively until the entire object is decompressed. For each decompression of a delta Blob a portion of the delta Blob may require a referenced portion of the baseline to fill the buffer. In this regard, a determination 56 is made as to whether a new dictionary (i.e., portion of the baseline Blob) is required to provide the decompression information to decompress the particular portion of the delta Blob. That is, if necessary the system will obtain 58 a portion of the opened baseline Blob to precondition 60 the decompression algorithm to decompress 62 the current portion of the delta Blob.
The two pieces of data, the Vref address and the archive time, are utilized to search the object store for an exact Vref and archive time match or for the next earliest Vref archive time. See FIG. 7. For instance, if the desired file to be reconstructed had not been changed since an earlier backup, the Vref address may reference an earlier Vref time that represents the actual time that the data for that file was stored. Once resolved to this level, the attributes of the Vref are read to determine if it is a delta Vref or a baseline.
If no delta Blob exists but rather only a baseline Blob 64, the process obtains 66 the baseline Blob based on the Vref from the object store and decompresses 68 the baseline Blob to fill the buffer. Once a buffer is filled with decompressed data, this buffer of data is returned to the requesting user. In one arrangement, the object may be non-compressed data. In this instance, a data set may exist in a non-compressed form. In such instances, the buffer may be filled 70 without requiring a decompression step. The filling and returning of buffers may be repeated until, for instance, an end of a file is reached. It will be appreciated that multiple files (e.g., multiple Vrefs) from a data set may be retrieved. Further, an entire data set may be retrieved.
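The buffer-by-buffer reconstruction loop can be sketched with zlib again standing in for the preconditionable decompressor; the buffer size below is the user-selectable length mentioned above, and an empty baseline corresponds to the baseline-only (no delta) path:

```python
import zlib

def reconstruct(blob: bytes, baseline: bytes = b"", bufsize: int = 4096):
    """Yield successive buffers of the decompressed object.

    For a delta Blob, `baseline` preconditions the decompressor; for a
    baseline Blob it is left empty and the Blob decompresses on its own.
    """
    d = zlib.decompressobj(zdict=baseline) if baseline else zlib.decompressobj()
    data = blob
    while data:
        buf = d.decompress(data, bufsize)   # fill at most one buffer
        if buf:
            yield buf
        data = d.unconsumed_tail            # input deferred by the limit
    tail = d.flush()
    if tail:
        yield tail
```

Each yielded buffer corresponds to one filled buffer returned to the requesting user, so arbitrarily large objects are reconstructed without holding the whole result in memory.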
One application for the adaptive content factoring technique is to harvest information from traditional disk based backups. In most cases, significant quantities of information are common between two full backup data sets. By factoring out the common data, the effective capacity of a given storage device can be significantly increased without loss of functionality and with increased performance of the archiving system. This makes long term disk-based archiving economically feasible. Such archiving may be performed locally or over a network. See for example FIG. 9. As will be appreciated by those skilled in the art, as network bandwidth decreases it is advantageous to identify the common content of a given dataset and only send changes from a remote server to a central archive. In this regard, the novel approach described above works exceptionally well given that the index used to determine if content is already stored can be efficiently stored and distributed across the network 80. By creating and maintaining content indexes specific to a given data set or like data sets, the corresponding size of the index is reduced to localized content. For example, if an entry in the index is 8 bytes per item and a data set contains 50,000 items, the corresponding size of the index is only 400,000 bytes. This is in contrast to other systems that use monolithic indexes to millions of discrete items archived. As such, the smaller distributed index may be stored locally or in the network. In some cases it may be preferable to store the index locally. If the index is stored within the network, by virtue of its small size, it can be efficiently loaded into the local program memory to facilitate local content factoring.
The techniques described provide for a locally cacheable network of indexes to common content. That is, multiple servers/computers 82 may share a common storage facility 84. This content may be processed by an archiving appliance 88 such that common content is shared to reduce storage requirements. The necessary catalogs may be stored at the common storage facility 84 or at a secondary storage 86. To allow backing up the individual servers/computers, the present technique uses a distributed index per data set. That is, specific sets of identifier and content hashes may be provided to specific servers/computers. Generally, the information within the index corresponds to a hash (e.g., a Vref) to a given item within the data set. However, as will be appreciated, it is also desirable to store highly referenced content or Blob indices, such as file or object security information that may be common to items within a dataset or between different data sets (even if the data sets correspond to items from different host systems), to quickly identify that these Blobs have already been stored. In this regard, the present technique uses an alternate index to Blobs by replacing the original data set content with a series of Blob addresses followed by a zero filled array of bytes. The Blob address plus zero filled array is such that it exactly matches the logical size of each segment of the original content. As will be appreciated by one skilled in the art, the zero filled array is highly compressible by any number of data compression algorithms.
The present invention works with any known file format by first dividing the data set into discrete object data streams, replacing each object data stream with a stream address to the content (or Blob) that was previously or concurrently archived using the M3 or similar process described below, then filling the remainder of the remapped data stream with zeros. Finally, the remapped stream is compressed, which essentially removes the redundancy in the zero filled array. It is desirable for the resultant file to be indistinguishable from the original except for the remapping of data stream content. In this regard, a bit-flag may be used within the original file metadata to indicate that the stream data has been replaced, allowing the original program that created the original data set to determine that the data stream has been remapped. The present invention sets a reserved flag in a stream header without regard to the header checksum. The originating program can catalog the data set, but when the data stream is read the checksum is checked. Because the reserved flag is set, the checksum test will fail, preventing the application from inadvertently reading the remapped stream. FIG. 10 depicts the process. The determination of the stream address may employ the full process using metadata stored internal to the data set and include a reverse lookup to determine the stream Blob address, or use a hash algorithm on the stream data to compute the unique stream Blob address. The unmap process simply reverses the order of operations such that each Blob address and zero filled array is replaced with the original content and the reserved flag is unset. The result of the unmap reconstruction process is an identical copy of the original data set.
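The remap/unmap round trip can be sketched as follows. SHA-1 stands in for the Blob address hash and a dict stands in for the object store (both assumptions), and the stream-header flag handling is omitted:

```python
import hashlib
import zlib

def remap_stream(content: bytes) -> bytes:
    """Replace stream content with its Blob address, zero-fill to the
    original logical size, then compress away the redundancy."""
    addr = hashlib.sha1(content).digest()              # 20-byte Blob address
    remapped = addr + b"\x00" * (len(content) - len(addr))
    return zlib.compress(remapped, 9)

def unmap_stream(packed: bytes, store: dict) -> bytes:
    """Reverse the remap: decompress, read the address, fetch the content."""
    remapped = zlib.decompress(packed)
    return store[remapped[:20]]

content = b"original stream content " * 4096          # ~96 KB stream
store = {hashlib.sha1(content).digest(): content}     # previously archived Blob

packed = remap_stream(content)                        # shrinks dramatically
assert unmap_stream(packed, store) == content
```

Because the zero-filled tail compresses to almost nothing, the remapped file carries only the address overhead while remaining restorable to an identical copy.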
Another aspect of the presented inventions is directed to the archiving of large unstructured data sets. As may be appreciated, in addition to file systems as discussed above where discrete files have individual names or file paths, other types of data contain no clear delineations. For instance, databases often include voluminous amounts of data, for example in a row and column format, that have no clear delineation. Likewise, virtual hard drives (VHDs) often contain large amounts of data which may represent the contents of a hard disk drive or other storage medium. Such VHDs may contain what is found on a physical hard disk drive (HDD), such as disk partitions and a file system, which in turn can contain files and folders. A VHD is typically used as the hard disk of a virtual machine. However, such VHDs are often represented as a single file that represents an entire file system. Other large files include PST and OST files that may represent e-mail file folders of a user or users. In all of these cases, it is common that the data contained therein is represented as a single file. Furthermore, it is common that these files are of a very large size, often in excess of 1 TB.
The large size of these files can result in a reduced performance of the adaptive content factoring methods described above. Specifically, as these large files are represented as a single unitary file, the entire content of these files must be factored to identify changes between versions of the file. While providing acceptable results, difficulties arise in remote storage or off-site archiving procedures. As set forth in relation to FIG. 9, it is often desirable to archive or back up data at a remote or off-site location. This is particularly evident with the recent increase in cloud computing applications. In such systems, a majority of the data of an organization may be stored remotely. Accordingly, in these applications it may be necessary to back up data at a location that is separate from the location of the data itself. In off-site storage applications, backing up of data typically requires data transfer over a network connection. In these instances, the data transfer rates between the data and the remote storage location are typically much lower than data transfer rates between a data set and an on-site storage location. For instance, many local area networks (LANs) have internal transfer rates of between about 100 Mbps and 1,000 Mbps. In contrast, internet transfer rates are more commonly on the order of 1,500 Kbps. Thus, the transfer rate over an external network connection is generally two to three orders of magnitude lower than the transfer rates within a LAN.
In the present system and method (i.e., utility), if there is any change to the large file, a baseline file must be transferred from the off-site storage location to the location of the data in order to identify the changes to the large data file (i.e., de-duplicate). While such a system is feasible, the low data transfer rates between the off-site storage and the data location result in a slowed back-up process. Accordingly, the inventors have identified a means by which large files may utilize the adaptive content factoring system over relatively slow network connections without the time penalties noted above. The improvement to the system allows for identifying changes within a very large data file without necessarily having to transfer the entire baseline data set (e.g., the original version of the data set) from the off-site storage location. Rather, only the portions of the baseline data set that correspond with changed portions of the large data file require retrieval over the network connection for adaptive content factoring.
To allow for reducing network traffic, the present utility subdivides the large data file into smaller data sets. FIG. 11A illustrates a very large data file 200, which in the present embodiment is a 9.43 GB file. This file may represent a database, VHD, OST, PST or other large data set. The data set 200 may include a number of separate files 202a-n, each of which may itself be a large data set (e.g., several hundred MBs). The utility initially delineates the large data set 200 into smaller data sets having a predetermined size. Stated otherwise, the present utility paginates the large data set 200 into multiple smaller data sets or virtual pages 204a-nn (hereafter 204 unless specifically noted). As illustrated in FIG. 11B, the utility generates virtual page breaks 206a-nn (hereafter 206 unless specifically noted) having a predetermined size. The byte-size of the virtual page breaks 206 may be selected based on a particular application. Generally, larger virtual pages will improve overall I/O performance over smaller pages and require fewer virtual pages per large data set to keep track of, but require larger corresponding baseline pages to be transferred from the storage location to perform the adaptive content factoring process, which may increase the overall backup run-time. Use of smaller pages will generally be less efficient with respect to I/O performance, increase overall run-time, and require more virtual pages, but require fewer and smaller baseline virtual pages to be transferred than when using larger virtual pages. The optimal range of virtual page sizes is currently believed to be between 1 MB and 128 MB for most applications, though page sizes of 1 GB and larger are possible. In any arrangement, the page size may be user selectable to optimize the page size for a given application. Once the dataset 200 is paginated, it is possible to determine on a page-by-page basis if there have been changes to the data within each virtual page 204 of the dataset.
Accordingly, the system allows for generating virtual divisions within the large data set 200 that may be compared to the same data in a baseline version, which may be stored off-site.
In order to identify each virtual page, the Bref and Vref discussed above are modified. That is, in addition to utilizing path/folder/filename metadata information, the Bref and Vref also utilize offset and length attributes. The offset attribute is a measure of the number of bytes from the beginning of the large dataset that identifies the start of a virtual page. The length attribute defines the data byte length of the virtual page (e.g., 8 MB). In this regard, the large dataset may be subdivided into smaller data sets (e.g., virtual pages) the location of which is known. At this time, adaptive content factoring may be performed on a large dataset in a manner that is substantially similar to the application of adaptive content factoring to a file system having a more standard path/folder/file version reference (i.e., Bref and Vref). That is, if the hash information or content hash of a virtual page shows that the virtual page has been changed, the baseline virtual page may be retrieved to perform adaptive content factoring of the changed virtual page. In this regard, the baseline virtual page may be broken into chunks (e.g., 30 KB data chunks or other user selected chunk sizes) and loaded into the compressor to precondition the compression algorithm. Likewise, the changed virtual page may be broken into like sized chunks and corresponding chunks are compressed with the preconditioned compressing algorithm.
FIGS. 12 through 18 illustrate the use of the virtual pagination utility for adaptive content factoring back-up over an external network connection (e.g., internet) in comparison to an adaptive content factoring system without pagination over the same external network connection. FIG. 12 illustrates a variation of the system of FIG. 9. In this illustration, the archive appliance 88 is moved to the location of a data set 200, which may include the data of an entire organization and may include large data files as discussed above. The archive appliance 88 is adapted to execute computer executable instructions (e.g., computer programs/software) to provide adaptive content factoring in accordance with the presented inventions. The archive appliance need not be a specialized device and may be integrated into existing computers or servers of an organization. The archive appliance is interconnected to a data network (e.g., internet) 80 via a data link 212. Likewise, an offsite storage location 210 is interconnected to the internet via a second connection 214. In a first embodiment, the offsite storage location 210 includes the baseline version of the data 200 of the organization. In this embodiment, the archive appliance 88 may include the index for the data set 200. Alternatively, the index may be stored at the offsite storage 210. In this latter regard, prior to performing a back-up of the data 200, the archive appliance 88 will retrieve the index from the offsite storage 210.
FIG. 13 illustrates an initial archiving (i.e., first pass) of the dataset 200 where no virtual pages are included within the very large data files. As shown, the dataset 200 includes 9.41 GB of data. Initially, the data reduction achieved through compression is 1.7 to 1 (220). That is, the total data processed is 9.41 GB (222) and the total data stored is 5.52 GB (224). In this embodiment, the 9.41 GB of the data set 200 are represented in 22 separate files 226 having an average size of 427 MB each. The total data stored 224 forms a baseline version of the data set 200.
FIG. 14 illustrates the initial storage (i.e., first pass) of the same data set 200 where virtual paginations are included within the data set 200. Again, the dataset is 9.41 GB (222) and the data reduction is 1.7 to 1 (220), resulting in a baseline version of the data set having a size of 5.52 GB (224). In either arrangement, the first pass compression generates the initial baseline version of the data set 200, which is saved to the offsite storage 210 (or alternatively an on-site storage). However, in addition to performing the initial 1.7 to 1 compression, the dataset 200 of FIG. 14 includes virtual pagination of the file into 1,224 separate virtual pages 228, which are noted as protected files in FIG. 14. This pagination results in the generation of virtual pages/files having an average file size of 8.16 MB.
As shown by FIGS. 13 and 14, there is no reduction in the overall size of the files in the first pass between simply compressing the large database 200 or compressing with the virtual pages. Likewise, the data transfer rate between the archive appliance and the offsite storage is the same for both embodiments. The efficiencies of the utility are realized in subsequent back-ups of the data set once the baseline version is created, as illustrated in FIGS. 15 and 16. Specifically, FIG. 15 illustrates a subsequent back-up or archiving of the dataset 200 without virtual pages. As shown, in the subsequent back-up, five of the 22 original files are identified as having changes 230. Accordingly, each of these 427 MB files must be retrieved from the offsite storage 210 and delivered to the archive appliance 88 to perform adaptive content factoring to identify the changes therein. In this subsequent pass in the non-page mode, data reduction of the back-up is 114 to 1 (232) with new data storage of 83.9 MB (236).
FIG. 16 illustrates the subsequent back-up of the same data set 200 utilizing virtual pages. In this embodiment, 12 of the 8 MB virtual pages/files are identified as having been changed 236. Accordingly, these twelve 8 MB files (e.g., a total of 96 MB) are retrieved from the offsite storage 210 and delivered to the archive appliance 88 for use in adaptive content factoring. In contrast, in the non-page mode, over 9.41 GB (5.52 GB compressed) of data must be transferred between the offsite storage 210 and the archive appliance 88 for use in adaptive content factoring. In this regard, the data transfer requirements without pagination are 98 times the data transfer requirements with virtual pagination. Furthermore, by utilizing the smaller virtual page sizes, the total amount of new data stored in the second or back-up function is 2.83 MB (240). In this regard, the data reduction on the second pass with virtual pages is 3,402 to 1 (238). Stated otherwise, an amount of only 1/3402 of the original data set 200 (i.e., 9.41 GB) is stored in the back-up using the virtual pagination. This reduction of the new back-up data is due to the ability to identify changes in smaller portions of the data set, which requires less processing to identify the changes and likewise results in the identification of smaller changes (i.e., deltas) between the original data set and the data set at the time of the back-up or archiving procedure.
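The page-level change detection described above can be sketched in ordinary Python. This is an illustrative sketch only, not the patented implementation: the function names, the use of SHA-256 as the content hash, and the fixed page size are all assumptions introduced here for clarity.

```python
import hashlib

PAGE_SIZE = 8 * 1024 * 1024  # illustrative ~8 MB virtual page, as in FIG. 14

def page_hashes(data, page_size=PAGE_SIZE):
    """Split a byte stream into fixed-size virtual pages and hash each page."""
    return [hashlib.sha256(data[i:i + page_size]).hexdigest()
            for i in range(0, len(data), page_size)]

def changed_pages(baseline_hashes, current_hashes):
    """Return indices of virtual pages whose hash differs from the baseline.

    Pages present in only one of the two lists (i.e., the file grew or
    shrank) are reported as changed."""
    longest = max(len(baseline_hashes), len(current_hashes))
    return [i for i in range(longest)
            if i >= len(baseline_hashes)
            or i >= len(current_hashes)
            or baseline_hashes[i] != current_hashes[i]]
```

Only the pages whose indices are returned would need to be retrieved from the off-site store and re-factored; the unchanged middle pages of a large file are skipped entirely.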
FIG. 17 illustrates the bandwidth requirements of the back-up of FIG. 15. As shown, the network connection requirements during nearly the entirety of the file transfer of the baseline reference from the offsite storage 210 to the archive appliance 88 are between 25 and 50% of the available bandwidth during the back-up without virtual pages. FIG. 18 illustrates the same bandwidth requirements when virtual pages are utilized. As shown, there is a limited bandwidth requirement for the data transfer at the beginning of the baseline version and at the end of the baseline version. The reduction in the data transfer requirements is due to the fact that only the virtual pages that are changed between the creation of the baseline version of the data set and the data set at the time of the back-up need to be transferred. This results in a significant reduction in the data transfer rate between the archive appliance 88 and the offsite storage location 210. In this regard, it will be appreciated that in many large files (e.g., such as OST files), changes may be made only at the beginning of the file and at the end of the file. The middle portions of the file (e.g., middle virtual pages) are unchanged and do not require transfer to the archive appliance 88 to perform adaptive content factoring. Accordingly, the back-up of the data set 200 may be performed many times faster (e.g., 3-100 times faster depending on the number of unchanged virtual pages) than without the virtual page mode. Furthermore, this allows for efficiently backing up large data files over low bandwidth network connections. Stated otherwise, in FIG. 18 the virtual pages of the file between the initial changes at the beginning of the file and the changes to the end of the file need not be transferred from the offsite storage location 210 to the archive appliance 88. Likewise, less new data needs to be transferred from the appliance 88 to the off-site storage location 210. This both speeds the back-up process and results in greater compression of the data.
Though discussed in relation to FIGS. 12-18 as utilizing an offsite storage location 210, it will be appreciated that the efficiencies of virtual pagination of a large data set are also achieved in on-site back-up. That is, processing time is reduced as less of the original data needs factoring to identify changes and less additional data is stored during back-up. Further, the embodiments of FIGS. 13-18 illustrate the back-up of data where no new files are identified. If new files are identified, they may be paginated (if necessary) and a new baseline can be created for these new files. However, the data of the new files will have a more typical compression as set forth in FIGS. 13 and 14.
As may be appreciated, standard data caching techniques can be applied to dynamic content (the portions of files that are actively changing) to further reduce transfer requirements. That is, the corresponding virtual pages with the highest demand for retrieval from the off-site storage location 210 may be cached locally to the appliance 88 to eliminate the need to repeatedly retrieve the active set of baseline virtual pages to perform adaptive content factoring. As illustrated in FIG. 16, the working set of active virtual pages is twelve pages, or about 1/100th the full dataset size. In this regard, each time a back-up is performed, the virtual pages that are identified as having changed may be stored locally and/or sent to the off-site storage location. During a further subsequent back-up, these locally stored virtual pages are used in compressing the new data set. This further reduces the volume of data that needs to be transferred via the data link and likewise speeds the overall process.
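The caching idea can likewise be sketched with a small least-recently-used structure. Again this is a hedged illustration, not code from the patent: the `fetch` callable is a hypothetical stand-in for an off-site page read, and the capacity is arbitrary.

```python
from collections import OrderedDict

class PageCache:
    """Keep the most recently used baseline virtual pages in local storage so
    the active working set need not be re-fetched from the off-site store."""

    def __init__(self, fetch, capacity=16):
        self.fetch = fetch           # callable: page index -> page bytes
        self.capacity = capacity
        self._pages = OrderedDict()  # index -> bytes, least recently used first
        self.misses = 0

    def get(self, index):
        if index in self._pages:
            self._pages.move_to_end(index)    # mark as recently used
            return self._pages[index]
        self.misses += 1
        page = self.fetch(index)              # simulated off-site retrieval
        self._pages[index] = page
        if len(self._pages) > self.capacity:  # evict least recently used page
            self._pages.popitem(last=False)
        return page
```

With a working set of about a dozen hot pages, as in FIG. 16, a modest capacity keeps repeat retrievals local.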
As may be further appreciated, since the V-ref and B-ref for each virtual page are independent (not relying on information from any other virtual page), parallel processing techniques can be utilized on single large files to perform adaptive content factoring on different virtual pages simultaneously to further reduce the time required for the back-up process. That is, the process of comparing the identifier and/or content hashes of the individual virtual pages may be performed by separate processors running in parallel. Likewise, these separate processors may retrieve baseline versions of the individual pages they are processing and compress the new version of the individual virtual pages independent of the processes running on other processors.
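Because each virtual page is self-contained, the per-page step parallelizes naturally. A minimal sketch using a thread pool follows; the per-page "work" here is just a content hash, standing in for the full adaptive-content-factoring step, and the function names are assumptions of this sketch.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def factor_page(index, page):
    """Placeholder per-page step: compute one virtual page's content hash.

    In the scheme described above this would be the full factoring step
    (compare identifier/content hashes, compress the delta). No ordering
    or shared state between pages is needed."""
    return index, hashlib.sha256(page).hexdigest()

def factor_pages_parallel(pages, workers=3):
    """Apply the per-page step concurrently; results are keyed by page index."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(factor_page, i, p) for i, p in enumerate(pages)]
        return dict(f.result() for f in futures)
```

The choice of three workers mirrors the three-processor example of FIG. 21, but any pool size works since the pages are independent.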
A further benefit of the use of the virtual pages is a reduction in the time required to perform the back-up process and a reduction in the overall amount of data stored. FIGS. 19, 20 and 21 illustrate the performance of the adaptive content factoring process utilizing no virtual pages (FIG. 19), utilizing virtual pages (FIG. 20) and utilizing virtual pages with multiple processors (FIG. 21), which in this example utilizes three processors. Each of FIGS. 19-21 illustrates the thirtieth back-up compression of a common data set where at each back-up the previous data set includes a 5 Kb update at eight random locations and a new 5 Mb data set appended to the end of the data set.
As shown, each of the processes (FIGS. 19-21) during the thirtieth iteration processes an 8.21 Gb data set 300. In the instance where no virtual pages are utilized, the data reduction is slightly over 39:1 (302) and the total new data stored is 213 Mb 304. Further, due to the need to process the entire 8.21 Gb data set, the process takes over eleven and a half minutes as illustrated by the start time 306 and the finish time 308. In the instance where virtual paging is performed on a single processor, the data reduction is over 1520:1 (310) and the new data stored is 5.53 Mb 312. This process takes just under three minutes as illustrated by the start time 306 and the finish time 308. In the instance where virtual paging is performed on multiple processors, the data reduction is over 1520:1 (314) and the new data stored is 5.53 Mb 312. This process takes just over one and a half minutes.
As illustrated by these three FIGS. 19-21, the use of virtual pages in combination with adaptive content factoring significantly reduces the amount of new data that is stored during each back-up process. This is due in part to the reduction in the need to re-baseline the data set. Stated otherwise, as the virtual paging breaks the data set into multiple individual virtual pages (e.g., data portions), most of these pages remain unchanged between back-ups. The unchanged pages do not require processing, and smaller portions of the data set (e.g., individual pages) can be re-baselined when significant changes are made to an individual page.
Of further importance, the use of virtual paging significantly reduces the time needed to back up a data set. As noted above, the back-up process is almost four times faster with virtual paging and almost eight times faster with virtual paging performed on multiple processors. Further, additional processing gains may be achieved where yet further processors are utilized in the multiple processor arrangement. As will be appreciated, this is of considerable importance in extremely large data sets (e.g., terabyte data sets, etc.).
Several variations exist for implementation with the virtual page arrangements. In one variation, the first page may have a variable length to account for changes that are often encountered to the beginning of large data sets. That is, it is common for many changes to occur to the very beginning of a data set as illustrated by FIG. 18. By allowing the first virtual page to vary in length, the overall size/length of the first virtual page may be reduced to further improve the processing time of the back-up procedure. Likewise, the pages that are of an increased likelihood of having changes (e.g., first page, new pages, pages with changes in the most recent back-up) may be cached by the client (e.g., data origination location) to further speed the back-up process.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit the invention to the form disclosed herein. While a number of exemplary aspects and embodiments have been discussed above, those with skill in the art will recognize certain variations, modifications, permutations, additions, and sub-combinations thereof. It is therefore intended that the following appended claims and claims hereafter introduced are interpreted to include all such variations, modifications, permutations, additions, and sub-combinations as are within their true spirit and scope.
Patent applications by Brian Dodd, Longmont, CO US
Patent applications by Michael Moore, Lafayette, CO US
https://atmos.uw.edu/academics/classes/2011Q1/380/HW2.html
Due Friday Jan 21
In class today, complete as much as you can. You can do part III using my output, but be sure sometime later that your run gives the same answers by remaking at least one figure.
I. Build CAM (5-10 min)
II. Run CAM and monitor the job in the queue system (2 min, plus some wait time)
III. Analyze CAM output in MATLAB, make movies if you like, and write up answers to a few questions (a couple of hours max)
I. Build CAM
Typically we put all our various CAM runs in subdirectories of a single directory called "camruns". The subdirectories are named after the case, which here is barowave1deg. It is best to put the directory on disk in the computer lab. The following will do so:
You are now in the "work" directory. Copy the build script from the class script directory here. The dot at the end of the command puts the file in the current directory (dot = here).
cp /home/disk/p/atms380/scripts/bld-cam4.csh .
Execute the script:
Wait a few minutes. When done, list the executable, which is binary and can't be viewed.
ls -l bld/cam
should say roughly (size may vary a bit)
-rwxr--r-- 1 bitz atgstaff 12395972 Jan 6 11:30 bld/cam*
Also look at what was placed in subdirectory run
ls -l run
should say roughly
-rwxr--r-- 1 bitz atgstaff 3048 Jan 1 11:26 atm_in*
lrwxrwxrwx 1 bitz atgstaff 10 Jan 1 11:26 cam -> ../bld/cam*
-rwxr--r-- 1 bitz atgstaff 49 Jan 1 11:26 drv_flds_in*
-rwxr--r-- 1 bitz atgstaff 355 Jan 1 11:26 drv_in*
Finally list the script that you will use to submit the run to the queue. It is text and can be viewed in a text editor if you like.
ls -l run-cam4.csh
should say roughly
-rw-r--r-- 1 bitz atgstaff 1066 Jan 1 11:30 run-cam4.csh
You have just compiled CAM to use MPI (message passing interface), so it will run on multiple processors but the processors do not use shared memory.
Sometime go back and look at the "in" files, which are namelists that give CAM information for the run. I have some notes about what they mean here.
II. Run CAM
You are about to run what is known as an "initial" or "startup" run. The initial conditions were described in class. The model will run for 30 days and produce daily output for a number of variables. All this is defined in the "in" files. Start from your "work" directory, and send the run script to the queue. Please ONLY SEND IT ONCE (more about that below).
Verify your job is in the queue
should say roughly
job-ID prior name user state submit/start at queue slots ja-task-ID
189590 0.55500 barowave1d bitz r 01/01/2011 12:21:37 MPI@wx.atmos.wa 16
The state "r" means it is running. It might say "qw", which means it is waiting in the queue until one of the machines on the cluster is free. Type qstat a few times over the course of a few minutes, and if the job is still not running, type
qstat -u "*"
to see all the jobs in the queue. This should give you an idea how long you will have to wait.
If the job exits the queue very quickly (nothing is returned when you type qstat) tell Cecilia.
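For monitoring from a script rather than by eye, the qstat listing shown above can also be parsed programmatically. This is a hedged sketch, not part of the assignment: it assumes the whitespace-separated column layout shown above (numeric job-ID first, state in the fifth column).

```python
def job_states(qstat_output):
    """Map job-ID -> state ('r', 'qw', ...) from qstat-style output.

    Header and separator lines (whose first field is not a number)
    are skipped."""
    states = {}
    for line in qstat_output.splitlines():
        fields = line.split()
        if len(fields) >= 5 and fields[0].isdigit():
            states[fields[0]] = fields[4]
    return states
```

A wrapper like this could poll until the job of interest reaches state "r", instead of typing qstat repeatedly.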
Once the job starts to run, it will take about 40 min. A file named "cam.out" will appear in the run subdirectory and accumulate standard output from the model in bunches. It is not pretty, but it can be helpful if something goes wrong. You can look at it when in the run directory with more cam.out or tail cam.out. Be patient, the model writes in bunches when it fills a "buffer". The buffer is big, so the time between writing bunches is long. The order in which output is written varies owing to the parallel processes and can be less than obvious.
If for some reason you wish to kill your job:
qdel xxxxx fill in the x's with the job-ID and this will cancel your job
If the job completes, a bunch more files will appear in the run directory with names like barowave1deg.etc. The "cam2.h0" file contains the output "history" of the run. The other files have r's in their names and these are so-called "restart" files that could be used to continue the run from where it left off if you so desire (we will do this another time).
Troubleshooting: send Cecilia an email with your directory name and a brief description.
III. Analyze CAM - turn in about a page on a-d below, plus figures. Next week, you will need to edit the matlab file that you use to remake a figure with your own output so it reads your data file not mine. Otherwise you don't need to edit them much at all. Feel free though.
Make a subdirectory of your "work" directory for this case and call it something like mfiles. Go to that directory and copy the analysis files to your directory for this exercise. Start matlab
cp /home/disk/p/atms380/scripts/analyze_ex2* .
a) Run analyze_ex2_a in matlab. Check out all four options. Make movies if you like (these can be saved and run without matlab using a web browser or other software). To turn in: Describe the time evolution of the behavior that you see. Consider growth rates, wavelength, wavenumber, differences between the hemisphere, etc. Use your understanding of meteorology as best you can, or describe what you see generically as an instability problem. I encourage you to print out a couple of your favorite figures using the print icon on the figure window and turn them in.
b) Run analyze_ex2_b in matlab. The figures illustrate the relative phase of the waves at the upper and lower levels for day 9. Figure 1 is just the height of pressure surfaces contoured like a topo map. The heights are higher to the south. Thus there is a pressure gradient force perpendicular to the contours (on average pointing to the north) and a Coriolis force pointing roughly opposite, but not exactly, because this flow is not balanced. We know it is not balanced because it is unsteady. Figure 2 shows the departure of the height from the zonal mean. It helps us to see where the wave crests and troughs lie. First, from Fig 1, note how the wave crests line up along a curving "axis" north to south. Now look at Fig 2 to see how the red contours correspond to the ridge axis. Likewise for troughs. Now notice how easy it is to see in Fig 2 that the upper level ridge and trough axes are shifted westward of the lower level axis.
Theory tells us that temperature gradients fuel growth of waves because winds transport heat to deepen upper level wave amplitudes. However, for wave growth there must be a phase shift between upper and lower waves so the heat transported near the surface around highs and lows can deepen the upper level structure. With no phase shift, the heat transported near the surface is exactly in between crests and troughs aloft, and therefore cannot deepen the troughs or raise the crests. Instead it causes the crests and troughs to shift. Pure growth would happen if 90 degrees out of phase. Our phase shift is definitely less than 90 degrees and therefore we see some growth and some shifting to the east. This is a somewhat advanced topic. Do your best to see and interpret these behaviors in the figures. It may be challenging. Ask Cecilia for help if you wish.
To turn in: Estimate and discuss the characteristics of the phase shift in the upper and lower waves that you see in the simulation. Where is the phase shift between waves at upper and lower levels a maximum and minimum (eye-ball this) and how does this correspond to relative wave growth? Print the figures and turn them in.
c) Run analyze_ex2_c in matlab. The figures are fairly self explanatory. To turn in: Discuss the winds at both levels. Print the figures and turn them in too.
d) Propose an experiment you would like to try to probe this system further. Explain why and offer a hypothesis as best you can. I won't evaluate if your hypothesis is correct. Famous modeler Syukuro Manabe said roughly, "Use the model to tell you the answer". If possible we will try some of these ideas later! One suggestion is to mess with the initial conditions. But say how.
https://andypiper.co.uk/2006/12/07/
There's a nice new article by Xiaoming Zhang on developerWorks, describing how the XMLTransformation node works in WebSphere Message Broker. The XMLT node enables you to use XML stylesheets to transform your data. The article makes a nice companion to my piece on the different transformation technologies available.
https://mono.github.io/mail-archives/mono-list/2009-February/041476.html
[Mono-list] How can use my software (Win32) on Linux plz?
rocha.pusch at gmail.com
Tue Feb 24 16:33:00 EST 2009
VB 2008 should produce "managed code" which is a set of .exe's and .dll's,
all of them compatible with mono in linux. No need for VMware or Wine.
(unless you use something very specific to windows, such as an "unmanaged"
To run your app in Ubuntu you need all the same libraries that your project
uses in windows; most of mono's libraries can be found with Synaptic (or
"Add/Remove programs" in Ubuntu), just search for mono and look for the
libraries you want.
The next thing, as Chris said, would be right-clicking or doing "mono
path/to/your/app.exe" in a terminal....
If you have trouble with the needed libraries, I would suggest downloading
the latest binary installer (for "other linux") on mono's site. It usually
works better than the mono version which is shipped by default with some distributions.
greetings and good luck
PS: your english looks kind of OK :-P
On Tue, Jan 20, 2009 at 9:21 PM, arnomedia <arnomedia at yahoo.fr> wrote:
> I made a litle software with VB 2008 Express on Windows XP. I would like to
> run it on another computer under Linux only. Please, how can do that
> exactly? I am not confortable at all with Linux. I had read something about
> VMware but I don't know if I need to install it. At present, I am under
> Ubuntu 8.10, but I can change of distrib if you know another distrib that
> could be better in my case. I have been doing a lot of research on the web,
> but this does not help me enough.
> Otherwise, is it possible to use Wine? I tried it with another software,
> that certainly does not need NET Framework, and it seem easiest to use.
> As you can see, I need help ;)
> See you then
> PS: sorry for my english
> View this message in context:
> Sent from the Mono - General mailing list archive at Nabble.com.
> Mono-list maillist - Mono-list at lists.ximian.com
More information about the Mono-list
https://git.shmage.xyz/shmage-site/file/src/post/2022/01/16-updates-to-the-website.md.html | code | 1 # Updates to The Website 2 3 ## 01/16/2022 4 5 + You might have noticed already that I've changed the domain of the website (and links in pages) to https://eonn.xyz some time ago. 6 For the time being https://eonndev.com will redirect to https://eonn.xyz, maybe the next year or so, but from now on I will use eonn.xyz as my domain for mail and my website. 7 8 + I've also redone the git server from scratch (I will post about it soon) since I haven't touched it in a long time. 9 The dotfiles repo was outdated since my migration from Gentoo to GuixSD on most of my systems (I'll also post about this soon). 10 11 + There was also an issue with how my rss feed generator was formatting publish dates. 12 If you're using newsboat and dates don't look right, clear the cache and re-download the feed. | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511364.23/warc/CC-MAIN-20231004084230-20231004114230-00771.warc.gz | CC-MAIN-2023-40 | 801 | 1 |
https://www.curezone.org/forums/fm.asp?i=1484 | code | No Colloidal Silver should be either Distilled or De-ionised (which isn't as good). RO still have impurities which will damage the CS for storage.
You can use Distilled or RO for quicker Ozoning times. The more impurities, the longer it takes! I use Distilled water which has then passed through a high bovis structure machine called the Vitaliser Plus. This then helps hydrate myself and my cells far better than any other water, and with ozone inside too ;-)
http://fitness.stackexchange.com/users/30/kronos
Working as a Software Systems Engineer.
superuser [dot] kronos [at] gmail [dot] com
You can also follow me on twitter: @SuperKronos
Or check out my blog: KronoSKoderS
There are no winners or losers in the race of life, only finishers and quitters.
44 What should I look for in a running shoe? mar 1 '11
40 How do I properly breathe while swimming freestyle? mar 4 '11
31 Should I stretch after exercise? mar 7 '11
30 What is a “Runners High”? mar 2 '11
23 What is the purpose of 'cooling down'? mar 7 '11
14 Lungs on Fire When Running mar 1 '11
http://stackoverflow.com/questions/686810/asp-net-c-sharp-silverlight-server-component-and-ajax-modal-inside-firefox
I have a modal popup inside of an update panel with a silverlight control.
The video displays fine in IE 7/8, but in Firefox all I get is a white screen.
I am using the following video skin from link text
<div style="height:360px;">
    <asp:Silverlight ID="myVideoPlayer" runat="server" Source="~/Videos/VideoPlayer.xap"
        Width="640px" Height="360px" InitParameters="m=Efficiency.wmv" Windowless="true" />
</div>
I know it works when I use the normal <object> method, but that will not work as I need to set the InitParams from the code-behind depending on what video category they choose.
I have consulted the google gods and they have been not so helpful. Hope you guys can help me with this problem. Thank you!
http://www.floodsite.net/juniorfloodsite/html/en/teacher/lessons/geography/index.html
This theme aims at providing insight into the general problems of flood risk management. It focuses on Europe, with several European examples of floods, and on Hurricane Katrina as a major flood disaster in another part of the world. Climate change is expected to have a large impact on flood risk and is for that reason included in this theme.
The lesson theme Flood Risk management consists of the following materials.
- Self study materials: The heart of it is under the flood risk heading. “Europe”, “Katrina” and “Climate Change” go with it.
- The quizzes about flood types and flood risk management test the knowledge of basic concepts.
- The Florima board game gives insight in the process of Flood Risk management. As does the online Stop Disasters game.
- Worksheet: Some assignments about the self study materials are to be found on this worksheet.
- Project: Conclude these lessons with a larger project. This could be the assignment Make your own virtual tour. The project could also be a flood risk assessment for the students' home region, or the design of a floating city to protect citizens against floods, or some other connected subject.
https://gis.stackexchange.com/questions/216043/executeerror-error-999999-with-pointstoline-function-with-a-point-feature-cla/216080
I have an issue with the "PointsToLine" function in Python arcpy and in the tool dialog; the problem comes every time I use a POINT Feature Class (FC) created from the "Make Route Event Layer" function (Linear Referencing).
All the properties seem to be the same as a regular (working) POINT class, but the error keeps appearing with an FC created this way. Is there something I am missing? Is this a known issue? Is there a workaround for this?
Here is my simplified code to generate the Point FC used for the "PointToLine" function:
import arcpy

table_points_TEST = "C:\\ArcGIS\\Test.gdb\\TEST_TABLE_TO_LINE"
axe_repere = "C:\\ArcGIS\\Test.gdb\\AXE_REPERE"
table_points_TEST_TEMP = "C:\\ArcGIS\\Test.gdb\\TEST_TABLE_TO_LINE_TEMP"
FC_point = "C:\\ArcGIS\\Test.gdb\\FC_POINT"
FC_line = "C:\\ArcGIS\\Test.gdb\\FC_LINE"

arcpy.MakeRouteEventLayer_lr(axe_repere, "AXIS_NAME", table_points_TEST, "AXE POINT M_POINT", table_points_TEST_TEMP)
arcpy.CopyFeatures_management(table_points_TEST_TEMP, FC_point)
arcpy.PointsToLine_management(FC_point, FC_line)
If I use the Point layer generated by "CopyFeatures" in the "PointsToLine" tool dialog, the same error occurs.