In today’s CCNA (Cisco Certified Network Associate) 200-120 and CCENT (Cisco Certified Entry Networking Technician) 100-101 blog post, we are going to follow up on our last CCNA post about cables in a CCNA home lab. The prior post covered the different types of cables required to connect to your Cisco router or Cisco switch so you could console into it. Today we are going to cover the different types of LAN cables you will encounter as you interconnect devices. These are all things you will need to know for the real world too!

So let’s start off with our two common CCNA lab LAN cables: the Ethernet straight-through patch cable and the Ethernet crossover patch cable. It is very important that you know when to use each of these in your lab, as you can see questions about this on your CCENT or CCNA exams. So let’s take a quick look at the pinout of a normal Ethernet patch cable using the newer T-568B standard. You may be familiar with the different color coding of the older T-568A standard, so please note that they are different.

T-568B Ethernet Patch Cable

Now we will look at the pinout of an Ethernet crossover patch cable using the T-568B standard.

Crossover Patch Cable T-568B

In a nutshell, if you are going from like device to like device, you will use an Ethernet crossover patch cable. For example, when you connect two routers directly together via their Fast Ethernet ports, you will use an Ethernet crossover patch cable. The same holds true if you are daisy-chaining two Cisco switches. But then you have scenarios where you will use an Ethernet crossover patch cable to connect a PC to a router. Huh? How are those two like devices? Well, the reality is that from an EIA/TIA standards perspective you want to break the devices up into two categories:

DCE: hubs, switches
DTE: routers, PCs

So if you can remember that hubs and switches are DCE devices and that routers and PCs are DTE devices, you will have this down very easily. Any combination of like devices will use an Ethernet crossover cable to connect to each other. Any mix of a DCE and a DTE device will use an Ethernet straight-through patch cable. So let’s try a little practice:

PC to PC: Crossover
PC to Router: Crossover
PC to Switch: Straight-through Patch
Switch to Switch: Crossover
Switch to Router: Straight-through Patch

Pretty easy when it is explained in the DCE/DTE context, huh? These are all real-world scenarios and cables that you will come across in your day-to-day job duties. To make the crossover cables easy to identify in our CCNA lab certification kits, we make these cables red.

So why did we cover this information? Well, for two reasons. 1) You can and probably will see something on your CCENT or CCNA exam about what type of cable should be used, or a scenario where the wrong type of cable is being used and you need to be able to pick that out to troubleshoot the issue. 2) You will need to know this stuff for the real world when you have your job as a network engineer.

Now I do have to touch on a caveat to what I have stated above. Everything we covered above is 100% correct. However, as we continue to progress technologically, the industry tries to make things stupid-proof. So the industry has adopted a standard for FastEthernet ports referred to as Auto-MDIX, which stands for automatic medium-dependent interface crossover. Wow, that is a mouthful.
Basically, what that means is that if your ports are Auto-MDIX you don’t have to think about it: they will automagically cross or uncross themselves to work with your cable. That is why, when you test out your cables, you may see that connecting your PC directly to your router with a patch cable will work on a newer PC that is Auto-MDIX but will not work on an older PC that is not Auto-MDIX.

Wow, so there was a lot more to the differences between a standard Ethernet patch cable and an Ethernet crossover cable, and when to use each. These are two really good examples of the value of having your own lab versus using a simulator like Packet Tracer or GNS3. You don’t get to really experience these things in a CCNA simulator. It is the real hands-on experience that helps cement these types of things in your head for your CCENT or CCNA exam. So if this gives you a desire for your own CCNA lab, check out some of the various home lab kits we have to offer below! If you need some help building your lab, please use the Contact Us link in the upper right hand corner and we will be more than happy to help you! We also have our helpful lab suggestions here.
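To make the DCE/DTE rule above concrete, here is a minimal sketch in Python (our own illustration, not from the original post; the device names and function are hypothetical) that picks the classic, pre-Auto-MDIX cable type for any pair of devices:

# Classic (pre-Auto-MDIX) Ethernet cable selection using the DCE/DTE rule:
# same category -> crossover, mixed categories -> straight-through patch.
DEVICE_CATEGORY = {
    "hub": "DCE",
    "switch": "DCE",
    "router": "DTE",
    "pc": "DTE",
}

def cable_for(device_a, device_b):
    """Return the Ethernet cable type needed to connect two devices."""
    same_category = DEVICE_CATEGORY[device_a] == DEVICE_CATEGORY[device_b]
    return "crossover" if same_category else "straight-through patch"

# Reproduces the practice list from the article:
for pair in [("pc", "pc"), ("pc", "router"), ("pc", "switch"),
             ("switch", "switch"), ("switch", "router")]:
    print(pair, "->", cable_for(*pair))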
See, the first one is declared as CHAR type and the second one is declared as VARCHAR type. A CHAR type is fixed length, whereas a VARCHAR is variable length, i.e. a maximum of 20 characters. Suppose you pass 'pavan': the CHAR type takes 20 bytes, while the VARCHAR takes less than that. This is only one variable, not a group element. And in COBOL we have level numbers only up to 49, so if you use level 49 it cannot have sublevels. Hope this is what you expected from us. Please let me know if I am wrong.

Posting the same message two times cannot gain more attention, or is it that important to post twice? Take it cool or hot, the answer is here: if you want a variable of variable length, then go for the VARCHAR. The S9(4) COMP field will store the length of the VARCHAR after a value is assigned to it, so that much of the string is drawn for your use whenever you use that VARCHAR.

So either that is a pure syntax error, or it will get the whole length X(20) if it is declared so, and we will get garbage values for the rest of the characters, but I don't think this will happen. Another option is that it can be treated as a character variable, and the one without a PIC clause above this variable declaration may be treated as an error. Take the above post as an example for this explanation.
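As an illustration of the storage difference being discussed, here is a sketch in Python rather than COBOL (our own model, not actual COBOL semantics): one function mimics a fixed CHAR field like PIC X(20), the other a VARCHAR whose S9(4) COMP field holds the length:

import struct

FIELD_WIDTH = 20  # the declared size, as in PIC X(20)

def store_char(value):
    """CHAR-style storage: always pads out to the full declared width."""
    return value.encode("ascii").ljust(FIELD_WIDTH, b" ")

def store_varchar(value):
    """VARCHAR-style storage: a 2-byte binary length (modeling the
    S9(4) COMP part) followed by only the bytes actually used."""
    data = value.encode("ascii")
    return struct.pack(">h", len(data)) + data

print(len(store_char("pavan")))     # 20 bytes, no matter the content
print(len(store_varchar("pavan")))  # 7 bytes: 2 for the length + 5 for 'pavan'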
Here's how you can rearrange and group your Windows 8 tiles on the Start screen.

1. Click and drag a tile to move it on the screen.
2. If you are on a mobile device, hold and drag to move a tile.
3. Pinch to zoom out (or Ctrl-mouse wheel) to shrink the entire screen and see all tiles across multiple screens.
4. After pinching to zoom out, you can move groups of tiles around or name them.

For more, see the original article at the link below.

Group and re-arrange tiles | It Pro Portal
Apollo 11: Looking Back at the First Moon Landing 45 Years Ago

Former NASA engineers reminisce about the Apollo 11 moon mission, its historic landing and its amazing accomplishments in interviews with eWEEK.

Forty-five years ago this weekend, humans landed on the moon for the first time as astronauts from the Apollo 11 spaceflight began exploring the lunar surface on July 20, 1969. NASA astronauts Neil Armstrong and Edwin "Buzz" Aldrin spent only a few hours on the moon on that first voyage, but their accomplishment still stands tall more than four decades after humans first visited another celestial body, far from the bonds of Earth. To commemorate that historic first moon mission, eWEEK talked with several former NASA Mission Control engineers who worked to support Apollo 11 back then, helping to guide the three-man crew through the launch, the voyage to the moon, the landing and the long return to Earth. (Astronaut Michael Collins, who piloted the command module and didn't land on the moon's surface, remained in moon orbit while his crewmates explored the lunar surface.)

Forty-five years after the success of Apollo 11, the former engineers still beam with pride and excitement about the spaceflight and its events. Gene Kranz was one of several flight directors for Apollo 11 at Houston's Mission Control headquarters at the time, while John "Jack" Garman was a 24-year-old NASA computer engineer. Jerry Bostick was a 30-year-old engineer who was a member of Kranz's flight team for the mission. Garman and Bostick were two of the many flight engineers who worked with the early open-source software that helped take Apollo 11 to the moon. The nail-biting landing of Apollo 11's lunar module on the moon's surface on July 20, 1969, is still what Garman remembers the most. "It's like yesterday and at the same time, it's like it happened long ago in a galaxy far, far away," he said. "My biggest memory was when Buzz Aldrin said, 'picking up some dust'" as the lunar module's lone engine stirred up a large cloud of lunar dust as the spidery craft touched down on the surface of the moon.
Similar to viruses, Trojans copy, distribute and kill your data. Obviously, since you would never permit these actions, Trojans do not bother to ask for your permission. Trojans are deceptive programs which don’t take no for an answer; any keystroke means yes to them. They are mainly designed to steal your data from your PC. But if you have antispyware guarding your PC, you need not worry.

What Are Trojans?

Basically, Trojans appear quite harmless. They quietly enter your computer, either as email attachments or bundled in with other software programs. They are a kind of spyware. Spyware works by keeping track of what you do when you browse the web without you being aware of it. It is quite irritating and can cause major problems if you don’t use an antispyware program to keep your PC clean. Trojans are one of the worst kinds of spyware: they eventually destroy your data after stealing it. There are some Trojans called remote administration tools, which permit access to your computer every time you log in, and you won’t even know it. Whoever accesses your PC can easily pick up files from your system, remove or add programs, and even control your keystrokes.

How Trojans Land Into Your PC

The spyware installer does not care about the means used to rob your data. It is well known that there are many Internet marketers who trick you into installing certain software bundled in with Trojan spyware on your computer. They use a pop-up ad to attract you, and then ask if you want to install it. Whether you say yes or no is irrelevant. Even if you say no, they follow up with another pop-up ad to ask if you are really sure. In spite of clicking no, your keystroke simply sets off a download onto your computer without your being aware of it.

Drive-by downloads are a common method for Trojan spyware to sneak onto your PC. Here is what happens: you browse a website and see a popup asking if you want to download something, and the way it asks, you sometimes end up saying yes, thinking that you need the download to look at the web page. So when you say yes, it looks like you are allowing the download. If you say no, you are hounded by pop-ups that wait for you to just click to start off the download, making it happen without your knowledge.

Every day there are new ways being devised by spyware installers to get into your system. Get antispyware software to control this and keep your PC free of spies!

by Arvind Singh, Dallas Computer Support
December 3, 2013 - This image, captured by NASA’s Solar Dynamics Observatory, shows the explosion of a solar flare from the right side of the sun. Solar flares are powerful bursts of radiation emanating from a small area on the sun's surface. While not harmful to humans, when powerful enough they can interfere with GPS and communications signals. The flare depicted here is classified as an X1.0 flare, one of the stronger types.

According to NASA, solar flares can have significant impacts. "In the past, X-class flares of this intensity have caused degradation or blackouts of radio communications for about an hour," its website reads.

An increased number of solar flares is typically noted as the sun nears "solar maximum", the period of greatest solar activity in the sun's 11-year cycle. It is normal for there to be many solar flares a day during this period.
Fiber optic cable installation is a very special task, because fiber optic cables can be easily damaged if they are improperly installed. It is imperative that certain procedures be followed in the handling of these fiber cables to avoid damage and/or limiting their usefulness. Only a trained professional can achieve a rapid and accurate installation.

Finished fiber optic cables can be categorized as outdoor cables, indoor fiber cables, or indoor/outdoor cables. However, most of the technology of optical fiber cable installation has been borrowed from that used for copper cable, and at present most fiber cable contractors come from a copper network background. Most independent fiber cable installation contractors install indoor cables most of the time. Indoor fiber cables are mostly used as the backbone for campus networks, enterprise LAN systems and the like. And there is the submarine cable category: these are laid from ships built for that purpose and are only used by the big global network backbone builders. But there are actually many smaller divisions within each group.

When installing optical cable, what you don’t know could really hurt you or damage your cable. There are many different types of finished fiber cable installations we need to know about, so let’s examine them one by one.

1. Submarine cables. Submarine cables are laid by purpose-built ships. They are buried in a trench where the ocean is less than 200 meters deep; in deeper areas, submarine cables are laid directly on the sea floor.

2. Direct-buried fiber cables. Direct-buried cables are also called armored cables. They have aluminum foil wrapped around them for mechanical protection from rodent bites and outside forces. They are laid in a deep trench dug with a cable plow and then covered with dirt.

3. Ducted outdoor cables. For outdoor cables that are not direct-buried, a cable duct is necessary. Cable ducts are plastic tubes which provide a path and protection for outdoor cables. They are buried in trenches and then covered with dirt. Cable ducts come in various sizes and degrees of flexibility, from 1 inch to 6-10 inches; some are flexible and others are very rigid.

4. Cable ducts are buried first, without any fiber cables inside. They can be routed directly between two endpoints or through a series of access points at manholes.

5. Outdoor cables are pulled through the cable duct on a rope. The pull rope is attached to the cable's strength member, and the cable is then pulled through to the destination. You should not pull on the fiber directly, because it will break easily, and a broken cable is useless to you.

6. And then there are aerial cables. One type of aerial cable has a messenger line to provide mechanical support and can hang on poles without any lashing. This type of aerial cable is called figure-8 or self-supporting cable. Other types of aerial cables have to be lashed with a special lashing wire running around both the cable and a separate messenger wire.

7. Indoor cables can be installed within walls, through cable risers, or elsewhere in buildings. Note: only special under-carpet fiber cable should be used for laying on the floor where people walk.

8. A special type of indoor cable is called plenum cable. Plenum cables have a specially formulated outer jacket material for their fire rating. Only plenum cables can be used in the air-return spaces of an HVAC system.
Note: Fiberstore is offering a 30% discount on both indoor and outdoor cables. You are welcome to visit our website at www.fs.com.
Question 4) Test Yourself on CompTIA i-Net+.

Objective: Internet Security
SubObjective: Understand and be able to describe the various Internet security concepts
Single Answer, Multiple Choice

E-mail messages can be easily forged on the Internet. What can you use to be certain that a sender is who he or she claims to be?

Certificates (digital signatures) are primarily used to verify the identity of a sender, but they can also be used to ensure that data, such as documents, arrives at the destination untampered with. An Access Control List (ACL) is a series of conditions that determine whether network traffic should be passed on or blocked; firewalls and routers use ACLs to filter out unwanted traffic. Secure Sockets Layer (SSL) is a protocol used for secure Internet transactions. Firewalls filter unwanted traffic from private networks.

These questions are derived from the Self Test Software Practice Test for CompTIA Exam #IK0-002: i-Net+.
In principle, the use of a Wi-Fi signal for location and tracking is simple. There's no need for the approach used in RADAR, generating and bouncing a signal off of the object to be located or tracked, because Wi-Fi-equipped devices are usually transmitting data, and with that unique information identifying a given station. Once a given environment is installed with sensors or access points, as required to meet the specifications of a given solution, and any calibration required to tell the infrastructure what to expect when signals are sent from a known location - a form of RF fingerprinting, if you will - you're good to go. Several different techniques to calculate location are available in practice. The most important of these are:

- Signal strength - This technique examines the RSSI (received signal strength indication) of a given transmission and compares it to calculations of signal strength in a particular location obtained during a calibration process. Signal strength is, however, notoriously variable in wireless systems, and especially indoors. This variability is caused by the vagaries of radio propagation, including the natural exponential fading of signal strength and a wide variety of factors, from radio-obstructive and even moving objects in the environment to the echoes and reflections of radio signals known as multipath. However, with enough samples from a given object, signal strength can provide a very accurate measurement of location, and with rapid time-to-solution for any given object being tracked. These samples can be obtained very quickly, enabling, when coupled with a little artificial intelligence, the possibility of tracking moving objects, even in three dimensions.

- Time Difference of Arrival (TDoA) - Another approach is to use multiple reference sources transmitting what amounts to the value of a clock synchronized with all other transmitters. Assuming the locations of the transmitters are known with high accuracy and the receiver has a reliable view (in radio terms) of these transmitters, the receiver can resolve its position, also with high accuracy, and also in three dimensions. This is exactly, for example, how GPS works - the positions of the GPS NAVSTAR satellites are always known (with small corrective updates for the ephemeral variations in orbit caused by simple physics and by small atmospheric variations affecting the speed of the transmitted signal), so a receiver can simply measure the difference in arrival time of each signal and do a relatively simple calculation to determine location with excellent accuracy. Many E911 systems also use this technique, but not via GPS, as GPS cannot usually be reliably received indoors. Instead, signals are transmitted from multiple cellular base stations.

Both of these techniques can be improved by increasing the number of transmitters and the application of other correlation techniques, such as adding a known fixed reference point for a differential calculation. But as resolution of a square meter or so is often all that is required in most RTLS applications, exotic or expensive additions are usually of little value in indoor situations. Hybrid GPS/Wi-Fi-based solutions, enabling the mapping of indoor coordinates to world coordinates, are also possible. Just as in the case with RADAR, multiple samples and careful calculations are required in any given measurement so as to factor out errors and anomalous readings.
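As a rough sketch of the signal-strength technique, here is a small Python illustration using the common log-distance path-loss model (our own example; the calibration constants are made-up assumptions, not values from any vendor's solution):

# Estimate range from RSSI with the log-distance path-loss model.
# tx_power_dbm: RSSI measured at 1 m during calibration (assumed value).
# path_loss_exp: environment exponent, typically 2-4 indoors (assumed 3.0).
def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=3.0):
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

# Averaging many samples tames the variability described above.
samples = [-63.0, -67.0, -61.0, -65.0, -64.0]
avg_rssi = sum(samples) / len(samples)
print("estimated distance: %.1f m" % rssi_to_distance(avg_rssi))

Combining such range estimates from three or more access points at known positions then reduces to a standard trilateration calculation.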
In short, it would be wrong to think of Wi-Fi-based location and tracking technologies as an exact science - but, as we learned through this series of exercises, they can nonetheless be quite accurate and, depending upon the application, quite valuable indeed.

This story, "Location and tracking technologies: Understanding the technology" was originally published by Network World.
QKD – Quantum key distribution is the magic part of quantum cryptography. Every other part of this new cryptography mechanism remains the same as in the standard cryptography techniques currently used. By using quantum particles, which behave under the rules of quantum mechanics, keys can be generated and distributed to the receiver side in a completely safe way. The quantum mechanics principle which describes the base rule protecting the exchange of keys is Heisenberg’s Uncertainty Principle. It states that it is impossible to measure both the speed and the current position of a quantum particle at the same time. It furthermore states that the state of an observed particle will change if and when it is measured. This fairly negative axiom, which says that a measurement cannot be made without perturbing the system, is used in a positive way by quantum key distribution.

In a real communication system, if somebody tries to intercept photon-powered communication in order to get the crypto key being generated by the photon transfer, he will need to squeeze the transferred photons through his polarization filter to read the information encoded on them. As soon as he tries with the wrong filter, he will send the wrong photon onward. Sender and receiver will notice the disparity in the exchanged data and interpret it as a detected interception. They will then restart the process of generating a new crypto key.

The photon, and how it is used:

1) Photon – The smallest particle of light is a photon. It has three types of spin: horizontal, vertical and diagonal, which can be imagined as right-to-left polarization.

2) Polarization – Polarization filters are used to polarize a photon. To polarize a photon means to filter the particle through a polarization filter in order to filter out unwanted types of spin. A photon has all three spin states at the same time. We can manipulate the spin of a photon by putting a filter in its path. A photon, when passed through a polarization filter, has the particular spin that the filter lets through.

3) Spin – Spin is usually the most complicated property to describe. It is a property of elementary particles like electrons and photons. When they move through a magnetic field, they are deflected as if they had the properties of little magnets. If we take the classical world for example, a charged, spinning object has magnetic properties, and elementary particles like photons or electrons have similar ones. We know by the rules of quantum mechanics that elementary particles cannot literally spin. Regardless of this inability to spin, physicists named the elementary particles' magnetic property "spin". It can be a bit misleading, but it helps to remember the fact that a photon will be deflected by a magnetic field. A photon’s spin does not change, and it can manifest in two possible orientations.

4) LED – Light-emitting diodes are used to create photons in most quantum-optics experiments. LEDs create unpolarized (real-world) light. Modern technology has advanced, and today it is possible to use LEDs as a source of single photons. In this way a string of photons is created, which is then used in the quantum channel for key generation and distribution in the quantum key distribution process between sender and receiver. Normal optical networking devices use LED light sources which create photon bursts instead of individual photons.
In quantum cryptography, a single photon at a time needs to be sent, in order to have the chance to polarize it on its way into the optic channel and check the polarization on the exit side.

Data Transmission Using Photons

The most technically challenging part of transmitting data encoded in individual photons is the technique for reading the encoded bit of data out of each photon. How is it possible to read the bit encoded in a photon when the very essence of quantum physics makes the measurement of a quantum state impossible without perturbation? There is an exception. We attach one bit of data to each photon by polarizing each individual photon. Polarizing photons is done by filtering the photon through a polarization filter. The polarized photon is sent across the quantum channel towards the receiver on the other side. Heisenberg’s Uncertainty Principle comes into the experiment with the rule that a photon, once polarized, cannot be measured again, because the measurement will change its state (the ratio between different spins). Fortunately, there is an exception in the Uncertainty Principle which enables the measurement, but only in the special case where the photon's spin properties are measured with a device (a filter, in this case) whose quantum state is compatible with the measured particle. In the case where a photon's vertical spin is measured with a diagonal filter, the photon will either be absorbed by the filter, or the filter will change the photon’s spin properties: the photon will pass through the filter but acquire a diagonal spin. In both cases, the information which was sent from the sender is lost on the receiver side. The only way to read a photon's currently encoded bit/spin is to pass it through the right kind of filter. If it was polarized with diagonal polarization (X), the only way to read this spin is to pass the photon through a diagonal (X) filter. If a vertical filter (+) is used in an attempt to read that photon's polarization, the photon will get absorbed, or it will change spin and acquire a different polarization than it had on the source side.

The spins we can produce when different polarization filters are used:

- Linear polarization (+):
  - Horizontal spin (–)
  - Vertical spin (|)
- Diagonal polarization (X):
  - Diagonal spin to the left (\)
  - Diagonal spin to the right (/)

Key Generation or Key Distribution

The technique of data transmission using photons in order to generate a secure key at the quantum level is usually referred to as the Quantum Key Distribution process. Sometimes QKD is also wrongly referred to as Quantum Cryptography; QKD is only a part of quantum crypto. Key distribution/generation using photon properties like spin is solved by Quantum Key Distribution protocols, allowing the exchange of a crypto key with security guaranteed by the laws of physics. When finally generated, the key is absolutely secure and can be further used with all sorts of conventional crypto algorithms. The Quantum Key Distribution protocols that are commonly mentioned and mostly in use in today’s implementations are the BB84 protocol and the SARG protocol. BB84 is the first one invented and is still commonly used; it is the first to be described in papers like this one which try to describe how quantum key exchange works. SARG was created later as an enhancement which brought a different key sifting technique, described later in this paper.
1) Attaching an Information Bit to the Photon – Key Exchange

The Key Exchange phase, sometimes referred to as Raw Key Exchange in anticipation of the later need for key sifting, is a technique common to both of the listed Quantum Key Distribution protocols, BB84 and SARG. To be able to transfer numeric (binary) information across the quantum channel, we need to apply a specific encoding to the different photon states, making each photon spin carry a different binary value.

In the process of key distribution, the first step is for the sender to apply polarization to the sent photons and take note of the polarization applied. As an example, take the following sequence of sent photons and the binary values they encode:

Sender sent binary data: 0 1 0 0 0 0 0 1 0 0 1 1 1 0 0 1 0 0 1 1

If the system works with integers, this data can be formatted as an integer: the sender sent the key 267155. But this is just the start of the key generation process, in which this key will be transformed from the initially sent group of bits (0 1 0 0 0 0 0 1 0 0 1 1 1 0 0 1 0 0 1 1) into the real generated and secured key.

2) Reading the Information Bits on the Receiver Side

The question arises of how we can use the above-described properties of photons and still be able to actually read them on the receiver side. In the step above, photons with information attached to them were sent to the receiver side. The next step describes how quantum key distribution, and with it the whole of quantum cryptography, works. While sending, a list is made of each photon sent from sender to receiver and polarized with a specific spin (one bit of information encoded on each photon). In the optimal case, when the sender sends a photon with vertical spin and the receiver also applies a vertical filter at the time of the photon's arrival, they will successfully transfer a bit of data using a quantum particle (the photon). In the less optimal case, when a photon with vertical spin is measured with a diagonal filter, the outcome will be a photon with diagonal spin or no photon at all; the latter happens if the photon is absorbed by the filter. In this case, the transferred bit of data will later get dumped in the key sifting or key verification phase.

3) Key Verification – the Key Sifting Process

The key sifting (key verification) phase is handled differently by the two listed Quantum Key Distribution protocols, BB84 and SARG. The last section described the less optimal case, when a photon sent with vertical spin is measured with a diagonal filter, giving the receiver a photon with diagonal spin or no photon at all. Key verification comes into play now; it is usually referred to as the key sifting process.

In the BB84 protocol, the receiver communicates with the sender and gives him the list of filters applied to every received photon. The sender analyzes that list and responds with a shorter list back. That list is made by leaving out the instances where sender and receiver used different filters for a single photon transfer.

In the SARG protocol, the receiver gives the sender the list of results he produced from the received photons, without sending the filter orientations used (the difference from BB84). The sender then needs to use that list, plus the polarization he applied while sending, to deduce the orientation of the filter used by the receiver. The sender then reveals to the receiver the transfers for which he was able to deduce the polarization.
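To tie the raw key exchange and sifting together, here is a minimal, idealized BB84-style simulation in Python (our own sketch: perfect channel, random basis choices, no eavesdropper; none of these names come from a real QKD implementation):

import random

BASES = ("+", "x")  # rectilinear and diagonal polarization filters

def bb84_raw_exchange_and_sift(n_photons, seed=1):
    rng = random.Random(seed)
    sender_bits    = [rng.randint(0, 1) for _ in range(n_photons)]
    sender_bases   = [rng.choice(BASES) for _ in range(n_photons)]
    receiver_bases = [rng.choice(BASES) for _ in range(n_photons)]

    # Sifting: keep only the transfers where the receiver's filter matched
    # the sender's polarization basis; everything else is discarded.
    sifted_key = [bit for bit, sb, rb
                  in zip(sender_bits, sender_bases, receiver_bases)
                  if sb == rb]
    return sifted_key

key = bb84_raw_exchange_and_sift(20)
print(len(key), key)  # on average, half of the 20 raw bits survive sifting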
Sender and receiver will discard all other cases. In this whole process, the sending of polarized photons is done through a special line of optical fiber cable. If we take BB84 as the example, the key sifting process has the receiver send to the sender only the list of polarizations applied in each photon transfer; the receiver does not send the spin or the value he got as a result of that transfer. With that in mind, it is clear that the communication channel for key verification need not be a quantum channel, but rather a normal communication channel, without even the need for encryption. Receiver and sender are exchanging data that is only locally significant to their process of deducing in which steps they succeeded in sending one polarized photon and reading that photon's one bit of information on the other side. At the end of the key sifting process, assuming no eavesdropping happened, both sides will be in possession of exactly the same cryptographic key. After sifting, the key will be half of the original raw key length when BB84 is used, or a quarter with SARG; the other bits are discarded in the sifting process.

Communication Interception – Key Distillation

1) Interception Detection

If a malicious third party wants to intercept the communication between the two sides in order to read the encoded information, he will have to apply random polarization to the transmitted photons and then forward the photons on to the intended receiver. Since it is not possible to guess all the polarizations correctly, when sender and receiver validate the polarizations the receiver will not be able to decrypt the data, and the interception of the communication is detected. On average, an eavesdropper trying to intercept photons will use the wrong filter polarization in half of the cases. By doing this, the state of those photons is changed, producing errors in the raw key exchanged by the emitter and receiver. It is basically the same thing that happens when the receiver uses the wrong filter while trying to read a photon's polarization, or when that same wrong filter is used by an eavesdropper. In both cases, to prove the integrity of the key, it is enough that sender and receiver check for errors in the raw key exchange sequence.

Other things can cause raw key exchange errors, not only eavesdropping: hardware component issues and imperfections, or environmental effects on the quantum channel, can also cause photon loss or polarization change. All those errors are categorized as possible eavesdropper detections and are filtered out in key sifting. To determine how much information an eavesdropper could have gathered in the process, key distillation is used.

2) Key Distillation

Once we have a sifted key, it must be processed again to remove errors and any information that an eavesdropper could have gained. The key after key distillation will be secure enough to be used as a secret key. For example, for all the photons for which the eavesdropper used the right polarization filter and the receiver also used the right polarization filter, we do not have a detected interception. Here key distillation comes into play. The first of two steps is to correct all possible errors in the key, which is done using a classical error correction protocol. This step outputs the error rate that occurred; from this error rate estimate we can calculate the amount of information the eavesdropper could have about the key.
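Extending the idealized sketch above, an intercept-resend eavesdropper guesses the wrong filter about half the time, and each wrong guess randomizes the bit the receiver reads. That shows up as roughly a 25% error rate in the sifted key, which is the signature the error-correction step measures (again a toy Python model, not a real system):

import random

BASES = ("+", "x")

def qber_under_intercept_resend(n_photons=100000, seed=7):
    rng = random.Random(seed)
    kept = errors = 0
    for _ in range(n_photons):
        bit = rng.randint(0, 1)
        basis = rng.choice(BASES)

        # Eve measures in a random basis: a wrong basis randomizes the bit,
        # and she re-sends the photon polarized in *her* basis.
        eve_basis = rng.choice(BASES)
        eve_bit = bit if eve_basis == basis else rng.randint(0, 1)

        # Receiver measures; sifting keeps only matching sender/receiver bases.
        receiver_basis = rng.choice(BASES)
        if receiver_basis != basis:
            continue
        received = eve_bit if receiver_basis == eve_basis else rng.randint(0, 1)

        kept += 1
        if received != bit:
            errors += 1
    return errors / float(kept)

print("QBER with an eavesdropper: %.1f%%" % (100 * qber_under_intercept_resend()))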
The second step is privacy amplification, which compresses the key to squeeze out the information the eavesdropper may have gained. The compression factor depends proportionally on the error rate.
NASA this week said it is exploring setting up one of its iconic Centennial Challenge competitions for companies to build a robotic Mars landing spacecraft. NASA said it would expect to have about $250,000 worth of prize money for a robotic spacecraft that could land on the Red Planet, retrieve a sample and return it to orbit.

+More on Network World: 13 cool high-tech prize competitions+

From NASA: "This Mars Ascent Vehicle Challenge would provide opportunities to evaluate a wide range of innovative methods to insert the sample, provide sample containment, erect the launch vehicle and deploy the sample container with limited human intervention and validate a reliable methodology. This challenge especially seeks to engage the amateur robotics and rocketry communities to provide solutions. The Challenge would award prizes for successful demonstration of an end-to-end autonomous operation to sequentially accomplish the following tasks: picking up the sample, inserting the sample into a single stage rocket in a horizontal position, erecting the rocket, launching the rocket to an altitude not less than 800m, deploying a sample container with the cache internally sealed and landing the container at less than 6m/s terminal velocity."

The Mars Ascent Vehicle Challenge is only in the planning stages, and NASA is now evaluating how to proceed. The agency said it is looking to gather feedback on the competition being considered and on the prize amounts and distribution structure; to determine the level of interest in potentially competing in this challenge; and to understand the applicability of the challenge capabilities for other, non-government applications.

Centennial Challenges typically dare public and private partnerships to come up with a unique solution to a very tough problem, usually with prize money attached for the winner. Past Centennial Challenges have typically required several annual competitions before the total prize purses, which can be in the millions of dollars, have been claimed. In February, NASA said it was looking into developing two other Centennial Challenge competitions that would let the public design, build and deliver small satellites known as CubeSats, capable of operations and experiments near the moon and beyond.
Kaspersky Lab announces the successful patenting of cutting-edge anti-spam technology in Russia. The technology provides efficient, high-level detection of unwanted messages in images.

Spam filters currently have little problem detecting spam text messages. That is why spammers often use stealth technology to hide the text of unwanted messages in images. Filtering graphical spam is far more difficult: before an anti-spam filter can establish whether the text in a message is spam, it must first detect the text in an image. The majority of methods used to detect text in images are based on machine recognition of images. Machine recognition, however, requires uniformity in terms of size, style and the arrangement of symbols. This restriction is exploited by spammers, who intentionally distort and create 'noise' in images to make detection more difficult. Kaspersky Lab's cutting-edge technology was designed to effectively detect text and spam in raster images without the need for machine recognition of images. This approach provides high-speed detection and can recognize text in almost any language.

Kaspersky Lab's new anti-spam technology was developed by Eugene Smirnov. The Federal Service for Intellectual Property, Patents and Trademarks granted the patent on 13 January, 2009.

"On the one hand, the new method is quite good at detecting images that contain text in almost any language," says Eugene Smirnov, developer of the technology and manager of the Anti-Spam Development Group at Kaspersky Lab. "On the other hand, we don't attempt to read the text using machine recognition, so the method has sufficiently low resource requirements for it to be used in Kaspersky Lab's high-performance spam filter."

Double-tracked approach backs up new technology

The new patented technology is based on a probabilistic and statistical approach. Whether or not an image contains text is determined by the layout of the graphic patterns of words and lines as well as the content of the letters and words in those patterns. Dedicated filters ensure that the system is not affected by noise elements or the fracturing of text within images, while obfuscation techniques used in graphic spam, such as warping and rotating, are counteracted using a unique method of detecting text lines. The new system can also effectively determine whether detected text is spam by comparing its signature to spam templates contained in databases.

"This invention is very important for the anti-spam industry," comments Nadezhda Kashenko, Patent Law Group Manager at Kaspersky Lab. "It's worth pointing out that there are lots of different technologies for detecting spam text messages, but there are very few solutions that can recognise a spam text message in an image. These solutions are very complicated and cumbersome because they have to first find the text in the image and only then decide whether it is spam. Eugene Smirnov's method is unique. It is a new generation technology, which meant we could assert a patent right for it."

Kaspersky Lab currently has more than 30 patent applications pending in the US and Russia. These relate to a range of technologies developed by company personnel. Additionally, many of today's antivirus technologies were developed by Kaspersky Lab and are currently used under license by vendors worldwide, including Microsoft, Bluecoat, Juniper Networks, Clearswift, Borderware, Checkpoint, Sonicwall, Websense, LanDesk, Alt-N, ZyXEL, ASUS and D-Link.
About Kaspersky Lab

Kaspersky Lab is the largest antivirus company in Europe. It delivers some of the world's most immediate protection against IT security threats, including viruses, spyware, crimeware, hackers, phishing, and spam. The company is ranked among the world's top four vendors of security solutions for endpoint users. Kaspersky Lab products provide superior detection rates and one of the industry's fastest outbreak response times for home users, SMBs, large enterprises and the mobile computing environment. Kaspersky® technology is also used worldwide inside the products and services of the industry's leading IT security solution providers. For the latest on antivirus, anti-spyware, anti-spam and other IT security issues and trends, visit www.viruslist.com.
By Ann Silverthorn - According to a recent end-user survey, which included a section on "green" initiatives, nearly three-fourths of the respondents have an interest in adopting a green data-center initiative, yet only one in seven has successfully done so. For the purpose of the study, a green data center was defined as having increased efficiencies in energy usage, power consumption, and space utilization, as well as a reduction in polluting energy sources. Conducted by Ziff Davis on behalf of Symantec, the study surveyed 800 data-center managers across 14 countries, most of whom worked at Global 2000 organizations and other large companies.

In the US, only about one-third of the companies have adopted green policies. However, many US companies are making progress with the Green Grid, which is a consortium of IT vendors and users seeking to lower the overall consumption of power in data centers. The organization is chartered to develop platform-neutral standards, measurement methods, processes, and new technologies to improve energy efficiency. Green Grid board members include AMD, APC, Dell, Hewlett-Packard, IBM, Intel, Microsoft, Rackable Systems, Spray Cool, Sun, and VMware. Contributing members include nearly 30 vendors, including storage vendors such as Copan and Pillar Data Systems. General members, which number 75, include storage vendors such as Nexsan, Sepaton, and Storwize.

In the Symantec survey, 85% of the respondents said energy efficiency is at least a moderate priority in their data centers, with 15.5% citing it as a critical priority. Marty Ward, director of NetBackup product marketing at Symantec, says the numbers are not surprising because "a lot of thought is being put into it and not enough action." However, Ward is encouraged that almost three-fourths of the respondents are at least thinking about going green.

When considering approaches to making data centers greener, managers have many choices in both software and hardware. They may even decide on an entire data-center redesign. According to Ward, technologies such as data de-duplication and tiered storage architectures are examples of technologies that can dramatically reduce energy consumption. In addition, there are a variety of projects that constitute green policies.

Reducing the data footprint
Understand and Secure Your Windows DNS Infrastructure

The Internet would be brought to its knees if DNS functionality were disrupted. With the advent of Windows 2000, your Windows network has a DNS Achilles heel, too. In the days of NT 4, Windows depended on the Windows Internet Naming System (WINS) for name resolution. In this environment, NetBIOS broadcasts are redirected to a central WINS server. It is easy to set up, but not very secure. Thankfully, modern Windows DNS gives us the ability to create a secure name resolution environment with dynamic updates.

Why is DNS so important in a Windows network? DNS does an assortment of important tasks for us. It tells hosts where to find servers and workstations in the domain. You may be familiar with type "A" DNS resource records. These records link a host name to its IP address. When you map a drive to \\fileserver.domain.com\share, one of the first things that Windows does is look up the DNS A record for "fileserver". This returns the IP address for "fileserver", and Windows will then communicate with the server via its IP address. Modern Windows hosts use dynamic DNS to update their A records once every 24 hours, upon reboot, and when renewing a DHCP lease.

Windows also uses DNS to find domain controllers. To achieve this, Microsoft DNS implements SRV records. These records provide more information than the simple host-name-to-IP-address mapping of an A record. The SRV records return the name of the host that is providing a specific service as well as the port that the service is listening on. SRV records can also return priority and weight fields, but these are not used for domain controller lookup. Run the following at a command prompt to see a list of the domain controllers on your domain:

nslookup -type=SRV _ldap._tcp.dc._msdcs.YOUR_DOMAIN

For your domain's PDC emulator try:

nslookup -type=SRV _ldap._tcp.pdc._msdcs.YOUR_DOMAIN

Because DNS is so important in a modern Windows network, care must be taken to ensure high availability. Be sure that you are running at least two DNS servers. This can be done using Active Directory (AD) integrated DNS or by setting up primary and secondary DNS servers. When using AD integrated DNS, the DNS records are stored inside AD, allowing multi-master replication. In other words, each DNS server can accept dynamic updates from servers and workstations. The changes are then replicated through AD to the other DNS server(s). With primary and secondary servers, only the primary server can receive updates. The secondary server gets a copy of the DNS database and will refer any dynamic updates to the primary server. Type the following at a command prompt to see your primary DNS server:

ls -t soa YOUR_DOMAIN

With AD integrated DNS, this command will return whichever AD integrated DNS server you are currently talking to.

If you have two DNS servers set up, consider a few additional steps to increase the likelihood that at least one of your DNS servers will always be available. If you have several racks in your data center, you can locate the DNS servers in different areas. You might also want to put the DNS servers on different subnets.

Now that we have been thoroughly introduced to the importance of DNS in a Windows environment, let's take a closer look at security. Most administrators are not going to have time to manually update DNS resource records for each host in their domain every time an IP address changes. This means that dynamic updates are probably enabled. With this option enabled, hosts on your domain can dynamically update their IP address in DNS.
Ideally, you should use AD integrated DNS with the "secure updates only" option, because this will use Kerberos authentication for dynamic DNS updates. Non-AD integrated DNS cannot use secure updates. Follow these steps to see if you are running AD integrated DNS:

1. Click Start, point to Administrative Tools, and then click DNS.
2. Under DNS, double-click the applicable DNS server, double-click Forward Lookup Zones or Reverse Lookup Zones, and then right-click the applicable zone.
3. Click Properties.

The Type field should read Active Directory-Integrated. If it does not, you can click on the "Change" button and check the box for "Store the zone in Active Directory." Once your zone is stored in AD, you can set the "Dynamic updates:" option to "Secure only."

Of course, there are a few issues to deal with when switching to AD integrated DNS. Once you make the switch, each DNS resource record will have a set of permissions attached to it. Normally, when a host creates a new resource record in an AD integrated zone, it will be granted permission to update that resource record in the future. Unfortunately, when a DNS zone is converted to AD integrated, the default permissions do not grant a host the ability to update its own resource record. This means that as hosts get new IP addresses, they will not be able to update themselves in DNS. There are a few ways to get around this little problem.

First, you can delete all of your DNS host records after converting to AD integrated DNS. Within 24 hours, each Windows 2000 or newer host will create a new resource record for itself in DNS. Another option is to enable scavenging before switching to AD integrated DNS. This will put a time stamp on each of the DNS records dynamically updated by hosts. After switching to AD integrated DNS, each host record will eventually age out because the record will not be updated. Within 24 hours, each aged-out host record will be added back with the correct permissions. Finally, you can manually grant permissions to the AD computer object for each host record.

Remember, without DNS your Windows network will cease to function. Be sure that you have a good understanding of your DNS environment and take the necessary steps to secure it properly.

How to configure DNS dynamic updates in Windows Server 2003
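For administrators who prefer the command line, the same zone conversion and secure-updates setting can be scripted; a sketch using the stock dnscmd utility (server and zone names are placeholders, and the switches should be verified against your server version):

dnscmd YOUR_SERVER /ZoneResetType YOUR_DOMAIN /DsPrimary
dnscmd YOUR_SERVER /Config YOUR_DOMAIN /AllowUpdate 2

Here /DsPrimary converts the zone to AD integrated storage, and /AllowUpdate 2 restricts the zone to secure dynamic updates only.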
Electronics that dissolve when not in use? We’ve all had electronic equipment break as soon as the warranty expired, but the Defense Advanced Research Projects Agency wants to design future military gear to go a step further, melting away as soon as it’s no longer needed. The problem that DARPA is trying to solve is that the military today has so much technology, such as small sensors and even phones, that much of it gets left behind or even lost when troops relocate. The devices could be found and examined by the enemy. Given time with a confiscated device, the enemy could gain some insights as to how our technology works, compromising DOD’s “strategic technological advantage,” DARPA said. But that wouldn’t happen if the device begins to decay at the first sign of trouble. So DARPA has created the Vanishing Programmable Resources program, whose acronym, VAPR, sums up the general idea. Technology in the VAPR program has to be rugged and able to perform up to the same rigorous standards as gear does today, which means it needs to be able to handle the MIL-SPEC-810 testing. But when a device is listed as lost and receives some sort of a trigger, or after a certain amount of time has expired, it must partially or fully degrade, so that no working part could be examined by unauthorized personnel. “The commercial off-the-shelf, or COTS, electronics made for everyday purchases are durable and last nearly forever,” Alicia Jackson, DARPA program manager, said in announcing VAPR. “DARPA is looking for a way to make electronics that last precisely as long as they are needed. The breakdown of such devices could be triggered by a signal sent from command or any number of possible environmental conditions, such as temperature.” To solicit ideas, DARPA is hosting a presenters’ day, where teams of innovators can pitch their thoughts on how such technology could be designed and implemented. As with most DARPA projects, the agency expects creating vanishing technology will be a tough goal but one that can be achieved. “This is a tall order, and we imagine a multidisciplinary approach,” Jackson said. “Teams will likely need industry experts who understand circuits, integration and design. Performers from the material science community will be sought to develop novel substrates. There's lots of room for innovation by clever people with diverse expertise.” There is some precedent for degradable equipment, or “transient electronics,” Jackson pointed out. In September 2012, DARPA reported progress in developing ultrathin, biocompatible electronics that completely dissolve in liquid and could be used in implantable medical treatments. DARPA said it is hoping VAPR’s initial project will result in a degradable circuit that could be used in an environmental or biomedical sensor. Posted by John Breeden II on Feb 05, 2013 at 9:39 AM
Yesterday, the 11th of February 2014, was the eleventh annual 'Safer Internet Day', a time when the general public, and particularly those who care for children and other vulnerable people, can learn how to stay safe online.

What is Safer Internet Day?

Coordinated by Childnet International, the South West Grid for Learning, and the Internet Watch Foundation, the initiative aims to educate people who may not be aware of the ways in which their details are being shared and the common pitfalls encountered online. Peter Wanless, CEO of the NSPCC, explains further:

"Making the internet safer for children and young people is the child protection challenge of this generation. And Safer Internet Day is a chance for everyone – industry, Government, charities, schools, and families – to talk about online safety and share knowledge about what works. A safer internet is built not only by technical endeavour and policies, but by the behaviour of the people that use it. We all need to encourage young people to seek help when they are upset by something or someone online. And service providers and website owners must continue to make it easier for young people to report upsetting content and behaviour, and take swift action to tackle it."

What are the challenges?

One of the main challenges faced by companies working in online security is a lack of knowledge on the part of day-to-day users. The internet is a dynamic area which changes every day – social networking sites update their privacy policies, websites claiming to be reputable are set up for the purpose of defrauding naïve users – and keeping up with all the changes is a full-time job in itself.

Of course, one of the most sensitive yet important areas of discussion is how to keep children safe online in a realistic way. According to a recent study released by the Internet Watch Foundation, only 37% of parents have had conversations with their children about what to do if something upsets them online, and just 20% have told their children how to report behaviour that makes them uncomfortable. This lack of preemptive action can leave young people confused when difficult situations arise – often when a child is being bullied, or when someone is making advances that make them feel uncomfortable, they do not want to discuss the details with their parents or carers.

This leads some people to attempt to restrict young people's internet access, but with the proliferation of multi-device usage and computers in schools, it is difficult to make this a realistic solution. Giving a young person a level of control over their own internet usage whilst providing them with a space where they can report unwanted behaviour of any kind goes some way towards solving this problem – so why are so few parents investing time in educating their children about internet safety?

A common response to this question is simply that 'the internet' is too large a subject to broach. Websites change and update daily, children often keep their blogs and profiles hidden from their parents, and the latest trends in online life can seem incomprehensible to people who use the internet solely for work, a few personal emails and perhaps some online shopping. It is unrealistic to expect carers to spend as much time online as the children in their care, and yet this is really the only way to properly understand the social landscape navigated by young people every day.

And even when incidents are reported, investigating them can be a challenge. With internet cafés, libraries and cheap handheld devices widely accessible, proving that a specific person perpetrated a crime – or even a series of crimes – is not as simple as gaining access to their personal computer and analysing its contents. Common public misconceptions such as the CSI effect can give rise to frustration on the part of those who report internet safety breaches: members of the general public often find it difficult to understand why law enforcement agents and digital forensics professionals cannot "just hack in and find out", leaving victims with a sense of despair at not being taken seriously.

Yet education about the methodologies used and the time required to solve cases of internet safety breaches is not something that can be easily delivered to people with little or no prior training in computer science or digital forensics, particularly when fictional depictions of the field are so far-fetched. Reports of child protection cases in the news media do not generally demonstrate the scale of manpower, time and resource required to bring perpetrators to justice, instead focusing solely on results and allowing the public eye to skim over the more complex details. And yet if we want to ensure that young people are safe online, they surely need some level of understanding of both how and where to report unwanted behaviour, and the process that is set in motion once a report has been made.

Finding a realistic solution

Safer Internet Day aims to address these concerns without placing unrealistic expectations on people who are responsible for children. Rather than being asked to inform young people about the specific dangers of each type of site, or the ways in which they might be exploited online – elements which change every day – the Centre encourages parents and carers to provide children with the skills they need to navigate the web securely, and to deal with any potential danger in a productive manner. Will Gardner, the Safer Internet Centre's Director, explains:

"Everyone has responsibility to make internet safety a priority. Young people are increasingly becoming digital creators and we must equip them with the skills to continue to create and innovate by working together to make the internet a great and safe place. This Safer Internet Day is the biggest one yet – the fantastic range of supporters really reflects how widespread and important this issue is, and we are delighted to see such collaborations where schools, civil society, public and private sectors are all championing the same cause."

And what about educating young people in how investigations are conducted, or about what happens when they submit a report to a law enforcement agency or similar body? There is a certain level of knowledge which can only be gained from training in digital forensics; however, the basic concerns of young people can be addressed by giving them at least a rough understanding of how professionals deal with such reports.

With this in mind, the three bodies who make up the UK Safer Internet Centre have put together a series of online and offline resources around internet safety. On average, two schools are visited by the team per working day, where talks focus not only on how to report incidents, but also on what happens once a report has been made. Follow-up advice is given through the Safer Internet Centre's website, where a series of helplines provide in-depth, personalised information on subjects ranging from indecent images of children to cyberbullying and scams.

Over the past twelve months, 3,846 schools have been reached, over 39,000 reports of indecent content featuring children have been made to the Safer Internet Centre by members of the public, and 9,550 websites featuring such content have been removed. Whilst progress is not always as fast as people would like, it is at least being made. Everyone, from digital forensics professionals to people who care for children but are not computer literate themselves, can make a difference to the ways in which we deal with online behaviour. In the words of this year's Safer Internet Day theme, 'Let's create a better internet together'.
With the advancements of technology, autonomous or self-driving vehicles are slowly becoming a reality on some American roadways. States like California, Nevada and Florida recently passed laws that regulate the testing of autonomous vehicles. But Bryant Walker Smith, a fellow at the Center for Internet and Society at Stanford Law School and the Center for Automotive Research at Stanford University in California, said that now that these vehicles are hitting the roadways, many questions must be asked about existing driving laws and how they should be applied to self-driving vehicles, according to an analysis in NewScientist.

In addition to road infrastructure, Smith said that driving also depends on legal infrastructure – the laws that govern a vehicle and driver license requirements. "One major question remains though," Smith said. "Will tomorrow's cars and trucks have to adapt to today's legal infrastructure, or will that infrastructure adapt to them?"

Smith asked: when an individual is inside a self-driving vehicle, what is his or her legal responsibility – what should that driver be allowed (or not allowed) to do when inside the vehicle? Under Nevada's autonomous vehicle law, for instance, the vehicle operator cannot "drive" drunk, so the operator's role once inside the vehicle will need to be more clearly defined.

"For now, however, the appropriate role of a self-driving vehicle's human operator is not merely a legal question," Smith said. "It is also a technical one."

Photo: Google displays its self-driving vehicle at a press conference in Sacramento, Calif., on March 1. By Sarah Rich.
A business capability is an abstraction that helps describe what the enterprise does to achieve its vision, mission, and goals. A business capability can be further decomposed into business processes, which define how the enterprise does this, in terms of activities performed by people who utilize tools (technology). Business capabilities are the building blocks of the enterprise.

- Step 1: Work with the business to create a business capability map (see examples in Appendix A of Storyboard: Covertly Establish a Business-Centric, Value-Driven EA Capability).
  - Decompose the enterprise into business capabilities.
  - Stay at the capability level (focus on the what) and defer the definition of business processes (how) to a later time.
- Step 2: Interview key business stakeholders to identify critically important business capabilities and significant capability gaps. Draw a capability evaluation matrix using the Business Capability Evaluation Matrix Template. Consider how important the capability is to the enterprise achieving its goals, as well as how significant the gap is between the as-is and target states of the capability. A minimal scoring sketch follows this list.
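One simple way to turn Step 2's interviews into a ranked matrix is to score each capability on importance and gap, then sort by their product. The sketch below is an illustration only; the capability names and the 1-5 scales are assumptions, not part of the Info-Tech template.

```python
# Sketch: rank business capabilities by importance x gap.
# The capabilities and 1-5 scores below are hypothetical examples.
capabilities = [
    # (name, importance to goals 1-5, as-is vs. target gap 1-5)
    ("Order Management", 5, 4),
    ("Customer Analytics", 4, 5),
    ("Facilities Management", 2, 1),
]

def priority(importance: int, gap: int) -> int:
    """Capabilities that are both important and far from target rank highest."""
    return importance * gap

for name, importance, gap in sorted(
        capabilities, key=lambda c: priority(c[1], c[2]), reverse=True):
    print(f"{name:25s} importance={importance} gap={gap} "
          f"priority={priority(importance, gap)}")
```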
Composition and use of milk products for young children: Updated recommendations of the Nutrition Committee of the German Society of Pediatric and Adolescent Medicine (DGKJ) [Zusammensetzung und Gebrauch von Milchgetränken für Kleinkinder: Aktualisierte Empfehlungen der Ernährungskommission der Deutschen Gesellschaft für Kinder- und Jugendmedizin (DGKJ)]

Bohles H.J., Fusch C., Genzel-Boroviczeny O., Jochum F., and 7 more authors. Monatsschrift für Kinderheilkunde | Year: 2011

In recent years several special milk products intended for young children have been marketed, named "children's milk" or "young children's milk," similar to "milk for kids," "growing-up milk" or "toddler milk." These products are claimed to be advantageous compared to cow's milk. The Nutrition Committee of the German Society of Pediatric and Adolescent Medicine (DGKJ) reconfirms its position from the year 2001 that special milk drinks for young children, including follow-on formulae, are in principle not needed. If toddler milks are used instead of cow's milk, the nutritional composition of these products should be similar to whole cow's milk regarding nutrients such as calcium and vitamins A and B2, and similar to low-fat cow's milk regarding energy content. The content of critical nutrients such as iodine and vitamin D should be in line with the European directive for follow-on formulae. Flavoring and sweetening agents should not be added. Toddler milk should be drunk from a cup or mug, not from a feeding bottle. © 2011 Springer-Verlag.
Lagarde R. (ARDA ZI des Sables; University of Reunion Island), Teichert N. (ARDA ZI des Sables; IRSTEA), and 5 more authors. Fisheries Management and Ecology | Year: 2015

Diadromous species of tropical islands face numerous anthropogenic disturbances. Understanding how these disturbances affect the population dynamics of these species is important for developing ecologically based management measures. The upstream migration dynamics of two amphidromous gobies, Sicyopterus lagocephalus (Pallas) and Cotylopus acutipinnis (Guichenot), were described using intensive fishway monitoring (28 sampling dates over 1.5 years; 13,000 S. lagocephalus and 23,000 C. acutipinnis captured). The migration peak occurred for both species during the afternoon, at the end of austral winter/beginning of austral summer. The abundance and size of fish caught in the river showed a marked impact of a dam (six sampling dates; approximately 4,000 individuals of each species captured). This study provides critical data for better management of fishways used by tropical amphidromous gobies. It appears especially important to reduce unwanted rheotactic sources that attract the fish away from the fishway's entrance. © 2015 John Wiley & Sons Ltd.
Kaspersky Lab releases a new article: "Using leak tests to evaluate firewall effectiveness"

20 Dec 2007

Kaspersky Lab, a leading developer of secure content management solutions, has released a new analytical article on using leak tests to evaluate firewall effectiveness. The author is Nikolay Grebennikov, deputy director of the Department of Innovative Technologies.

The article describes the role played by firewalls in integrated information security systems. It also examines the principles and methods used in leak tests, one of the most objective types of firewall testing.

According to Nikolay Grebennikov, "Due to the increase in the number of malicious programs, the additional security provided by a firewall is increasingly pertinent, since firewalls block undesirable network traffic. This additional 'layer' of protection can block most types of malicious program that are not detected by the antivirus component of an integrated security system."

The only way of bypassing a firewall is by using leaks, i.e., specific techniques that enable applications to send data to recipients outside the network without the user's knowledge. The quality of protection from leaks provided by a firewall is tested using so-called leak tests: small, non-malicious programs that implement one or more leaks. Nikolay Grebennikov describes existing leak technologies and the leak tests used to determine whether a firewall is able to block them.

Although Microsoft's new operating system, Windows Vista, is better protected than previous Windows versions, third-party security programs should still be used to provide sufficient protection from leaks. According to the author, in the future malicious programs will use new methods to circumvent the protection built into Windows Vista and existing security systems. Because of this, the additional protection provided by firewalls will need to become even more reliable.

The article concludes that as malware writers increasingly use leak technologies to bypass firewalls, leak tests are becoming a crucial method for testing the reliability of a computer's protection. The full version of the article is available on Viruslist.com.
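For a concrete sense of what a leak test does, the sketch below attempts a single outbound HTTP connection and reports whether the firewall allowed it. This is a generic illustration, not one of the techniques from Grebennikov's article; the target URL is an assumption (example.com is a reserved test domain).

```python
# Minimal leak-test-style probe: try to send data out and report the result.
# A personal firewall doing its job should prompt on or block this attempt.
# Real leak tests use far stealthier channels (e.g., launching a trusted
# process to carry the traffic), which this sketch does not attempt.
import urllib.request

def attempt_outbound(url: str = "http://example.com/", timeout: float = 5.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False  # blocked, unreachable, or timed out

if __name__ == "__main__":
    leaked = attempt_outbound()
    print("outbound connection succeeded" if leaked else "outbound connection blocked")
```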
NASA this week put out a call to garner information about what sorts of technologies it might include in a spacecraft that would look for asteroids that threaten Earth. Such an asteroid-spotting system, which isn't on the drawing board, would be capable of detecting and tracking asteroids in orbits very similar to Earth's, including Earth-trojan asteroids, NASA said.

RELATED: The sizzling world of asteroids

"Very-near-Earth asteroids" are envisioned as a set of asteroids, yet to be discovered, in orbits very similar to Earth's. This request solicits information from potential sources for an instrument that could be delivered for flight as soon as 2016, NASA said. From NASA:

"This instrument might be flown on a US Government or commercially-owned spacecraft in geosynchronous orbit. The instrument would point outward from Earth and must be capable of detecting near-Earth asteroids in very Earth-like orbits, and capable of detecting asteroids of as little as 30m in diameter. Additional factors of interest include the ability to quantify spin rate and calculate the size and shape of detected asteroids. The data generated by the instrument would be delivered to the Minor Planet Center in a form suitable for orbit processing in order to confirm detections and determine the orbits of new asteroids."

The instrument development and testing, integration and accommodation, and five years of flight operations and data processing should total no more than $50M (in FY12 dollars) in life-cycle costs, NASA stated.

In May, NASA said that there were roughly 4,700 potentially hazardous asteroids (PHAs). These are a subset of a larger group of near-Earth asteroids, but have the closest orbits to Earth's – passing within five million miles (about eight million kilometers) – and are big enough to survive passing through Earth's atmosphere and cause damage on a regional, or greater, scale. NASA pointed out, too, that the "potential" to make close Earth approaches does not mean a PHA will impact the Earth. It only means there is a possibility for such a threat.

The numbers came from asteroid observations made by NASA's Wide-field Infrared Survey Explorer (WISE) satellite, which looked at objects that orbit within 120 million miles of the sun, into Earth's orbital vicinity. WISE scanned the celestial sky twice in infrared light between January 2010 and February 2011, continuously snapping pictures of everything from distant galaxies to near-Earth asteroids and comets. It has since entered hibernation mode, NASA stated. The asteroid-hunting portion of the WISE mission, called NEOWISE, has seen more than 100,000 asteroids in the main belt between Mars and Jupiter, in addition to at least 585 near-Earth objects, NASA noted.

NASA said there are roughly 4,700 PHAs, plus or minus 1,500, with diameters larger than 330 feet (about 100 meters). So far, an estimated 20 to 30% of these objects have been found. While previous estimates of PHAs predicted similar numbers, they were rough approximations.

Asteroids have been in the news a lot lately. It has been widely reported that NASA could announce this month a manned project to land on an asteroid in the future. And in April, Google executives Larry Page and Eric Schmidt and filmmaker James Cameron said they would bankroll a venture to survey and eventually extract precious metals and rare minerals from asteroids that orbit near Earth. Planetary Resources, based in Bellevue, Wash., initially will focus on developing and selling extremely low-cost robotic spacecraft for surveying missions.
This article, written by the security researcher Christian Funk, provides an overview of the pitfalls that new Internet users may encounter. It aims to describe some typical threats and scams in order to help users protect themselves, their data and their money.

Classic email threats include spam, which may carry attached files containing malware or links to sites hosting malicious programs. No matter how tempting it may be to open such emails because of their intriguing subjects, the safest thing to do is simply delete them unread. Other scams include the now-familiar phishing, which targets banks, financial institutions, e-payment systems and auction sites. Again, the advice is simple: never enter your data on sites where the link has been sent to you by email, and even if the email looks legitimate, verify it with the organisation concerned.

The section on money laundering provides an overview of how those new to the Internet can get sucked into transferring money to cybercriminal accounts. The scareware section looks at fake security software which demands money from the user in order to disinfect malware supposedly found on the computer; in fact, there is no infection present, and the consequences can go beyond spending money unnecessarily – such programs can themselves be linked to malware.

Those new to the Internet can also fall victim to social networking scams, where they're asked to transfer money to a 'friend' who is in danger. They may click on URLs in Twitter messages which have been modified using a URL shortening service and actually lead to malicious sites. Or they may unwittingly download malware from peer-to-peer networks while trying to obtain music, games, films and other entertainment media without actually purchasing it.

All of these examples are given, with further details and screenshots, in the full version of Traps on the Internet, which is available on Viruslist.com.

The material can be reproduced provided the author, company name and original source are cited. Reproduction of this material in re-written form requires the express consent of the Kaspersky Lab PR department.
On Fri, Aug 1, 2014 at 12:11 PM, Nathan Andelin <nandelin@xxxxxxxxx> wrote: This discussion about terminology has gotten rather pedantic.

The wikipedia article cited previously states the following example: "west.example.com and east.example.com are subdomains of the example.com domain, which in turn is a subdomain of the 'com' top-level domain."

"east" and "west" are subdomains only if there's a DNS server somewhere that claims to be authoritative (via SOA) for those domains. So if I owned example.com, I'd have three name servers (assuming I delegate authority to the subdomains):

- for example.com (ns1.example.com)
- for east.example.com (ns2.east.example.com)
- for west.example.com (ns3.west.example.com)

NS1, NS2, NS3 are the unqualified names of my name servers. They are resources of their respective domains. If I instead choose to have servers named "east" and "west", then they wouldn't be subdomains anymore; east.example.com and west.example.com would simply be resources of the example.com domain.

Are Charles and Ken suggesting that blahblah.dilgardfoods.com is not a subdomain of dilgardfoods.com? Correct. It's a resource in the dilgardfoods.com domain. How do we know? Again, because we can ping it. There's a physical device that will respond. Domains (and subdomains) are just a way to organize resources, logically. There's nothing physical about them. You can't ping the .COM domain. Originally (and even today by most DNS defaults, AFAIK), you couldn't ping the example.com domain either. But nowadays you can set up your DNS server to point the bare domain at a particular host (usually the www host), so you can, for instance, ping dilgardfoods.com and have the request answered by that host.

Charles suggested that blahblah.dilgardfoods.com might point to a publicly addressed "server". Jeff indicated that blahblah.dilgardfoods.com pointed to the IP address of a "firewall". Originally, when the internet was created, blahblah would have been the actual server name and it would have had a public IP address. That's just how it worked. The naming conventions reflect that beginning. Thus you created a resource record, type A for a HOST, in order to direct traffic to it.

The fact that the servers nowadays are hidden behind NATing firewalls doesn't change the meaning of the names used. Remember, NATing is transparent. From the outside, I can't tell if blahblah actually has a public IP or is behind a NATing firewall.
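One way to check the distinction being argued here is to ask DNS itself: a delegated subdomain has its own NS (and SOA) records, while a plain host name only resolves to an address record. A rough sketch using the third-party dnspython package (my assumption, not something from the thread) might look like this:

```python
# Sketch: is a name a delegated subdomain or just a host record?
# Requires the dnspython package (pip install dnspython).
import dns.resolver

def classify(name: str) -> str:
    try:
        dns.resolver.resolve(name, "NS")   # delegated zones answer with NS
        return "delegated subdomain (has its own name servers)"
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        pass
    try:
        dns.resolver.resolve(name, "A")    # plain hosts answer with an address
        return "resource (host) record in the parent domain"
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return "no NS or A record found"

# Hypothetical name from the discussion; example.com has no such delegation,
# so running this verbatim will report that nothing was found.
print(classify("east.example.com"))
```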
Storing data magnetically on hard drives means firing up wires and coils to flip spaces only tens of nanometers in area into either magnetic up or down to indicate 1s or 0s. That takes more power and space than you want to keep carrying around an airport. So the search for more efficient storage continues.

The latest potential advance comes from a research paper published in Nature this week describing a technique French and Spanish researchers found to eliminate the magnetic coils but keep writing magnetically, in far smaller spots than previously possible.

The technique creates a false magnetic effect by covering the disk with a nanometer-thick layer of cobalt and running an electric current through it. Electrons crossing the cobalt layer behave as if the electrical field were a magnetic field, due to unexplained but "subtle relativistic effects" (because the coarse, obvious ones are gauche). By varying the intensity of the electrical field, it's possible to twist the magnetization of the electrons, creating a strong enough magnetic effect to get the misled and manipulated electrons to reverse the magnetization on specific areas of the disk to store data.

The data stored on the so-called magnetic random access memory (MRAM) chips would also be more persistent than regular RAM – to the point that data on them wouldn't have to be refreshed every few milliseconds, and they could hold enough data to boot a laptop or launch an application instantly when the power is turned on.

Currently the size of the magnetized data-bits is larger than the current state of the art in magnetic-coil data storage, but the authors predict it has the potential to be made much smaller. Not having to constantly rewrite the same data using far less efficient magnetic coils would also save a huge amount of power – though it's not clear if the electrons would be told this, or if they'd have to continue living under the mistaken impression they'd encountered a genuine magnetic field rather than a faux field from Grenoble.
Definition: A function that maps keys to integers, usually to get an even distribution on a smaller set of values.

Specialization (... is a kind of me.)
different kinds: linear hash, perfect hashing, minimal perfect hashing, order-preserving minimal perfect hashing; specific functions: Pearson's hash, multiplication method.

Aggregate parent (I am a part of or used in ...)
hash table, uniform hashing, universal hashing, Bloom filter, locality-sensitive hashing.

See also simple uniform hashing.

Note: The range of integers is typically [0 ... m-1], where m is a prime number or a power of 2.

If you have suggestions, corrections, or comments, please get in touch with Paul Black.

Entry modified 20 July 2015.

Cite this as: Paul E. Black, "hash function", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 20 July 2015. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/hash.html
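As an illustration of one of the specific functions listed above, here is a minimal sketch of the multiplication method. The constant A is the reciprocal of the golden ratio (Knuth's common suggestion); the keys and table size shown are illustrative choices, not part of the dictionary entry.

```python
# Sketch: multiplication-method hash, mapping an integer key into [0, m-1].
# h(k) = floor(m * (k * A mod 1)), with 0 < A < 1.
import math

A = (math.sqrt(5) - 1) / 2  # ~0.618..., Knuth's suggested constant

def mult_hash(key: int, m: int) -> int:
    frac = (key * A) % 1.0   # fractional part of k * A
    return int(m * frac)     # scale into a table of size m

# Example: distribute a few keys into a table with m = 8 (a power of 2).
for k in [1234, 1235, 1236]:
    print(k, "->", mult_hash(k, 8))
```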
The World Wide Web Consortium developed a set of standard status codes for understanding the results of a request. The codes are 3-digit integers, where the first digit represents the class and the last two digits give you more information about the response within that class.

Note: Status code classes, status codes, labels, and descriptions are defined by the World Wide Web Consortium: please refer to http://www.w3.org/Protocols for the full current definition. The information provided here is for your convenience only.

There are 5 general classes for the response status.

| Class | Label | Description |
| --- | --- | --- |
| 1XX Series | Information | The request was received, continuing process. |
| 2XX Series | Successful | The request was successfully received, understood, and accepted. |
| 3XX Series | Redirection | Further action needs to be taken in order to complete the request. |
| 4XX Series | Client Error | The request contains incorrect syntax or cannot be fulfilled. |
| 5XX Series | Server Error | The server failed to fulfill an apparently valid request. |

1XX – Information

Information status codes are provisional responses from the web server. This is an interim status, notifying the client that the request has been received and further action will be taken.

| Code | Label | Description |
| --- | --- | --- |
| 100 | Continue | The initial part of a request has been received. The server intends to send a final response after the request has been fully received and acted upon. |
| 101 | Switching Protocols | The server must generate an Upgrade header field in the response that indicates which protocol it will be switched to. |

2XX – Successful

The Successful status codes indicate that the request was received, understood, and accepted.

| Code | Label | Description |
| --- | --- | --- |
| 200 | OK | Request has succeeded. |
| 201 | Created | Request was fulfilled and resulted in new resources being created. |
| 202 | Accepted | Request has been accepted for processing, but processing has not completed. |
| 203 | Non-Authoritative Information | Request was successful, but the enclosed payload has been modified from the origin server's 200 response. |
| 204 | No Content | Request successfully fulfilled, and there is no additional content to send in the response payload body. |
| 205 | Reset Content | Request fulfilled, and the server wants the user agent to reset the "document view". |

3XX – Redirection

Further action needs to be taken in order to fulfill the request. The agent MAY automatically redirect.

| Code | Label | Description |
| --- | --- | --- |
| 300 | Multiple Choices | The target resource has more than one representation, each with its own identifier, and information about the alternatives is being provided. |
| 301 | Moved Permanently | The target resource was assigned a new permanent URI, and future references to this resource should use one of the enclosed URIs. |
| 302 | Found | The target resource resides temporarily under a different URI. |
| 303 | See Other | The server is redirecting the user agent to a different resource, as indicated by the URI in the header field. |
| 305 | Use Proxy | No longer used. |
| 307 | Temporary Redirect | The target resource resides temporarily under a different URI, and the user agent must not change the request method if it performs an automatic redirection to that URI. |

4XX – Client Error

The client appears to have provided something in error.

| Code | Label | Description |
| --- | --- | --- |
| 400 | Bad Request | The server cannot or will not process the request due to something which is perceived to be a client error. |
| 402 | Payment Required | Reserved for future use. |
| 403 | Forbidden | The server understood the request but refuses to authorize it. |
| 404 | Not Found | The origin server did not find a current representation for the target resource or is not willing to disclose that one exists. |
| 405 | Method Not Allowed | The method received in the request-line is known by the origin server but not supported by the target resource. |
| 406 | Not Acceptable | The target resource does not have a current representation that would be acceptable to the user agent. |
| 407 | Proxy Authentication Required | Similar to 401: the client must first authenticate itself with the proxy. |
| 408 | Request Time-Out | The server did not receive a complete request message within the time that it was prepared to wait. |
| 409 | Conflict | The request could not be completed due to a conflict with the current state of the target resource. |
| 410 | Gone | The target resource is no longer available at the origin server, and this is likely to be permanent. |
| 411 | Length Required | The server refuses to accept the request without a defined Content-Length. |
| 413 | Payload Too Large | The server is refusing to process a request because the request payload is larger than the server is willing or able to process. |
| 414 | URI Too Long | The server is refusing to service the request because the request-target is longer than the server is willing to interpret. |
| 415 | Unsupported Media Type | The origin server is refusing to service the request because the payload is in a format not supported by this method on the target resource. |
| 416 | Range Not Satisfiable | None of the ranges in the request's Range header field overlap the current extent of the selected resource. |
| 417 | Expectation Failed | The expectation given in the request's Expect header field could not be met by at least one of the inbound servers. |
| 426 | Upgrade Required | The server refuses to perform the request using the current protocol, but might be willing to do so after the client upgrades to a different protocol. |

5XX – Server Error

The server is aware that it has erred or is incapable of performing the request.

| Code | Label | Description |
| --- | --- | --- |
| 500 | Internal Server Error | The server encountered an unexpected condition that prevented it from fulfilling the request. |
| 501 | Not Implemented | The server does not support the functionality required to fulfill the request. |
| 502 | Bad Gateway | While acting as a gateway or proxy, the server received an invalid response from an inbound server accessed while attempting to fulfill the request. |
| 503 | Service Unavailable | The server is currently unable to handle the request due to a temporary overload or scheduled maintenance, which will likely be alleviated after some delay. |
| 504 | Gateway Time-Out | While acting as a gateway or proxy, the server did not receive a timely response from an upstream server it needed to access in order to complete the request. |
| 505 | HTTP Version Not Supported | The server does not support or refuses to support the major version of HTTP used in the request message. |

Reference: W3C, HTTP – Hypertext Transfer Protocol (February 06, 2014, Draft 26), HTTP/1.1, part 2: Semantics and Content (http://www.w3.org/protocols).
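A client rarely needs to branch on individual codes; branching on the class (the first digit) is often enough. The sketch below uses only the Python standard library; the request against example.com is an illustrative assumption.

```python
# Sketch: bucket an HTTP status code into its class.
import urllib.request

CLASSES = {
    1: "Information",
    2: "Successful",
    3: "Redirection",
    4: "Client Error",
    5: "Server Error",
}

def status_class(code: int) -> str:
    """Map a 3-digit status code to its class via its first digit."""
    return CLASSES.get(code // 100, "Unknown")

with urllib.request.urlopen("http://example.com/") as resp:
    print(resp.status, status_class(resp.status))  # e.g. "200 Successful"
```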
Today's consumers have an array of lighting options to select from, giving them the light they want while saving energy and money. In addition to the standard incandescent bulb that's been around since Thomas Edison invented it about 130 years ago, there are energy-efficient halogen incandescents, compact fluorescent lamps (CFLs) and a new breed of softer, warmer light-emitting diodes (LEDs) to choose from.

The problem with traditional incandescents is that 90 percent of the electricity they draw generates heat instead of light. Considering the huge potential for energy savings – the equivalent of $10 billion per year in the US alone – it should come as no surprise that scientists and engineers are hard at work developing and perfecting smart lighting choices. Of these, LEDs have the most potential to offer a pleasing, natural light while using a lot less energy.

A type of solid-state lighting, LEDs are semiconductors that convert electricity into photons. Once used mainly for indicator and traffic lights, LEDs for general illumination are one of today's most energy-efficient and rapidly developing technologies. ENERGY STAR-qualified LEDs use only 20-25 percent of the energy of a common incandescent and last up to 25 times longer. Besides lower energy consumption, LEDs can also claim a longer lifetime, improved physical robustness, smaller size, and faster switching.

Even though LEDs are one of the most promising light technologies to come along in decades, there is still room for improvement. Most notably, scientists are addressing the so-called "green gap," a portion of the spectrum where LED efficiency drops significantly. Increasing the efficiency of green LEDs is a high-priority research area for the US Department of Energy. Simulations undertaken at the DOE's National Energy Research Scientific Computing Center (NERSC) have shown that nanostructures half the width of a DNA strand could ameliorate this gap, delivering more energy-efficient LEDs.

At low power, nitride-based LEDs – the ones most commonly used in white general-use lighting – are very efficient, converting most of their energy into light. But when the power is increased to create sufficient light for mainstream environmental and task lighting needs, a much smaller fraction of the electricity gets turned into light. The effect is especially prominent in green LEDs, which is how the term "green gap" came into being.

University of Michigan researchers Dylan Bayerl and Emmanouil Kioupakis are seeking to bridge this gap with the help of NERSC's Cray XC30 supercomputer "Edison." They discovered that the semiconductor indium nitride (InN), which typically emits infrared light, will emit green light when 1-nanometer-wide wires are employed. They further determined that by tailoring the width of the wire, these nanostructures emit different colors of light – with a wider wire generating yellow, orange or red, and a narrower wire making indigo or violet. It is thought that by mixing these colors, LED engineers can create natural-looking lighting that does not suffer a steep efficiency drop as power is increased.

This direct method of making LEDs is not yet practical because green LEDs are not as efficient as blue and red versions. Today, most general-use LED lights are made from blue LED light that has been passed through a phosphor. The process, akin to that used in conventional fluorescent tubes, does not fully exploit LEDs' energy efficiency potential.

As the related article at NERSC explains, "direct LED lights would not only be more efficient, but the color of light they produce could be dynamically tuned to suit the time of day or the task at hand."

"Our work suggests that indium nitride at the few-nanometer size range offers a promising approach to engineering efficient, visible light emission at tailored wavelengths," said Kioupakis.

A paper based on this study, called "Visible-Wavelength Polarized Light Emission with Small-Diameter InN Nanowires," was published online in February. The work will also be featured on the cover of the July issue of Nano Letters.
Glass optical fibers are constructed of tiny strands of glass bundled together inside application-specific sheathing, such as stainless steel, for durability and high temperatures. They attach to certain photoelectric sensors and guide light from the sensing head to the target. In short, glass fiber cable lets you use a photoelectric sensor in areas where you normally couldn't. The sensors are available with a wide range of housings, mounting styles, and features for your specific application.

Glass optical fibers have an impressive temperature range, from as low as -40 °F up to +900 °F, because the cables have no electrical components. They merely act as a conduit, or light guide, between the target and the sensor. So they can be used in high-temperature applications like furnaces, ovens, and condensers in large engines, as well as in extremely low-temperature areas such as cold storage warehouses. And because glass cores are efficient at transmitting light, allowing for significantly higher transfer speeds, they can be used at long sensing distances.

Glass fiber optic cables are extremely versatile and robust and are available in a mix of configurations, end fittings, and adapter types. They are also well suited to small spaces and small targets, can be used with both visible red and infrared light, and are compatible with a long list of fiber heads.

Plastic optical fiber (POF) is an optical fiber made of plastic rather than traditional glass. With its very small diameter, POF cable is easy to run along skirting boards, under carpets, and around tight corners. It offers additional durability for uses in data communications, as well as in decoration, illumination, and industrial applications. As a transmission medium for short-range communication networks, plastic optical fiber has an important role in military communication networks and in data transmission for multimedia equipment.

POF provides numerous advantages over glass optical fiber without compromising the performance of products and services. Because it is reliable and ruggedized, POF has been adopted in many different markets. With data rates up to 1 Gbps and guaranteed high quality of service, POF is a robust technology for Ethernet networks at 100 Mbps. Plastic optical fiber is also particularly suitable for developing new high-bandwidth applications, including IPTV and Triple Play services.

Light passes through the central portion of the plastic optical fiber, which has a diameter of about 1 mm – roughly 100 times larger than a glass fiber core – so connecting the fiber to personal computers and terminal equipment is very easy. Plastic optical fiber transmits light less well than glass: attenuation is high (early products lost around 300 dB/km) and the transmission band is narrow (limited to the visible region), making it harder to meet the needs of multimedia communication networks. However, it has a series of advantages: it is light and soft, tolerant of bending and impact, cheap, radiation-resistant, and easy to process into large diameters (1 to 3 mm, which increases the acceptance angle). In addition, plastic optical fiber installation costs are low – installation can be as simple as aligning a connector plug, and such plugs can be produced with existing technology.
Every business faces risk as long as it has something of value; the more valuable a company's assets are, the more risk it faces. Data value increases as the amount of information in a database grows and as the data can be harvested more effectively. Data should be protected or secured at a reasonable cost that is a fraction of the value of the data.

The cost of attacking a corporation's data assets usually decreases as technology improves. To attack and exploit a company's data center or reach a particular asset, an attacker needs a given investment to gain access to the data and benefit from it. The cost of certain attacks may be very low, and the enterprise needs to guard against them. If the cost of an attack becomes less than the value of the data, then the security for that asset should be upgraded to deter the attacker. Unfortunately, most attackers do not run a cost-benefit analysis on the victim before attacking. Many low-cost methods of attack, like kiddie scripts (attack tools obtainable for free on the Internet), are done for kicks. The attacker may not benefit, but the attack can still hurt the owner of the data. Enterprises need to defend against all types of attacks that threaten their assets or their ability to do business.

A general definition of risk helps show how threats factor into risk. Risk due to security attacks is the product of the threat, times the vulnerability to that threat, times the value of the asset:

Risk = Threat × Vulnerability × Asset Value

Since companies want to increase the value of their assets and cannot stop all threats, they must decrease their vulnerability to a given attack. To find the total risk a company faces, the company must inventory its data assets. With each asset tallied, the company can estimate the probability of the threats to each asset and the vulnerability to each threat. The total risk is the summation of the risks to each asset, in dollars. To justify a security upgrade, the company can evaluate the reduction of risk the upgrade buys: dividing the reduction in risk by the cost of the upgrade yields the return on security investment (ROSI). This analysis gives an estimate of the risk reduction due to countermeasures (a worked sketch of this calculation appears at the end of this paper). Reducing risk makes the enterprise safer than ignoring the threats. Enterprises can choose to install countermeasures before an attack or deal with the consequences afterward.

Risk always starts with a threat, and threats can be broken into three basic levels. The first level is unintentional, due to accidents or mistakes; while not intentional, these threats are common and can cause downtime and loss of revenue. The second level is the simple malicious attack that uses existing equipment and possibly some easily obtainable information; these attacks are less common but intentional, and usually come from internal sources. The third level is the large-scale attack that requires an uncommon level of sophistication and equipment to execute; it usually comes from an outside source and requires access, either physical or virtual. Third-level attacks are extremely rare in SANs today and take considerable knowledge and skill to execute. Table 1 summarizes the three levels of threat.

Table 1. The three levels of threat
| Level | Intent | Typical source | Characteristics |
| --- | --- | --- | --- |
| 1 | Unintentional | Accidents and mistakes | Common; causes downtime and lost revenue |
| 2 | Malicious | Usually internal | Uses existing equipment and easily obtainable information |
| 3 | Malicious, large-scale | Usually external | Requires uncommon sophistication, equipment and access |

Level 1 attacks are unintentional and usually the result of common mistakes. A classic example of a Level 1 attack is connecting a device to the wrong port.
While unintentional, a miscabling could allow a device unauthorized access to data or cause a disk drive to be improperly formatted. The incorrect connection could even join two fabrics, enabling hundreds of ports to be accidentally accessed. The unfortunate aspect of this attack is that it can be executed with little skill or thought. Fortunately, Level 1 threats are the easiest to prevent.

Level 2 threats are distinguished by the fact that someone maliciously tries to steal data or disrupt service. The variety of Level 2 attacks increases as the intruder (anyone initiating the attack) attempts to circumvent barriers. An intruder impersonating an authorized user is a common Level 2 attack. To prevent Level 2 threats, the SAN needs additional processes and technology to foil the attack.

Level 3 threats are the most troublesome. These are large-scale offensives usually perpetrated by an external source with expensive equipment and sophistication. Installing a Fibre Channel analyzer that monitors traffic on a link is one example; equipment to crack authentication secrets or encrypted data is another. These cloak-and-dagger attacks are difficult to accomplish and require uncommon knowledge and a serious commitment. Level 3 attacks are rare and complex and are beyond the scope of this white paper.

The three levels of attack are helpful in categorizing threats, but an in-depth analysis is required to address each threat. The next section provides a systematic approach to dealing with individual threats.

Administrator's Perspective – Storage Network Points of Attack

Threats to storage networks come from many places, and each point of attack may be used as a stepping-stone for later attacks. To provide high levels of security, several checkpoints should be placed between the intruder and the data. The various points of attack are helpful in identifying security methods to thwart different attacks: much as castles have several defense mechanisms against invaders, the enterprise should install many barriers to prevent attacks. The threats discussed in this paper are:

– Unauthorized Access
– Spoofing
– Sniffing

Unauthorized Access

Unauthorized access is the most common security threat because it runs the gamut of Level 1 to 3 threats. It may be as simple as plugging in the wrong cable or as complex as attaching a compromised server to the fabric. Unauthorized access leads to other forms of attack, and is a good place to start the discussion of threats. Access can be controlled at the following points of attack:

1. Out-of-Band Management Application – Switches have non-Fibre Channel ports, such as an Ethernet port and a serial port, for management purposes. Physical access to the Ethernet port may be limited by creating a private management network for the SAN that is separate from the company's intranet. If the switch is connected to the company intranet, firewalls and virtual private networks can restrict access to the Ethernet port. Access to the serial port (RS-232) can be restricted by limiting physical access and requiring user authorization and authentication. Once physical access to the Ethernet port is obtained, the switch can control the applications that access it with access control lists.
The switch may also limit the applications or individual users that can access it through point of attack 3.

2. In-band Management Application – Another exposure a switch faces is through an in-band management application, which accesses the fabric services, such as the Name Server and Fabric Configuration Server. Access to the fabric services is controlled by the Management ACL (MACL).

3. User to Application – Once a user has access to a management application, they have to log into the application. The management application can authorize the user for role-based access depending on their job function; to do so, it needs to support access control lists and roles for each user.

4. Device to Device – After two Nx_Ports are logged into the fabric, one Nx_Port can do a Port Login (PLOGI) to another Nx_Port. Zoning and LUN masking can limit the access of devices at this point. The Active Zone Set in each switch enforces the zoning restrictions in the fabric, while storage devices maintain the LUN masking information.

5. Devices to Fabric – When a device (Nx_Port) attaches to the fabric (Fx_Port), the device sends a Fabric Login (FLOGI) command that contains various parameters, such as the Port World Wide Name (WWN). The switch can authorize the port to log into the fabric, or reject the FLOGI and terminate the connection. The switch needs to maintain an access control list (ACL) of the WWNs that are allowed to attach. The real threat to data occurs after the device is logged into the fabric and can proceed to point of attack 4 or 5.

6. Switch to Switch – When a switch is connected to another switch, an Exchange Link Parameters (ELP) Internal Link Service (ILS) sends relevant information, such as the Switch WWN. The switch can authorize the other switch to form a larger fabric, or the link can be isolated if the switch is not authorized to join. Each switch needs to maintain an ACL of authorized switches.

7. Data at Rest – Stored data is vulnerable to insider attack, as well as unauthorized access via fabric and host-based attacks. For example, since storage protocols are all cleartext, administrators for storage, backup and hosts have access to stored data in raw format, with no access restrictions or logging. Storage encryption appliances provide a layer of protection for data at rest, and in some cases provide additional application-level authentication and access controls.

Controlling access with access control lists (ACLs) prevents accidents from leading to catastrophes. But ACLs will not stop attackers who are willing to lie about their identity – and most thieves don't have a problem with lying to get what they want. To prevent spoofers (anyone who masquerades as another) from infiltrating the network, the entity being authorized must also be authenticated.

Spoofing

Spoofing is another threat related to unauthorized access. It has many names and forms: impersonation, identity theft, hijacking, masquerading and WWN spoofing. Spoofing gets its names from attacking at different levels – one form of attack impersonates a user, another masquerades as an authorized WWN. The way to prevent spoofing is to challenge the spoofer to present some unique information that only the authorized party should know. For users, the knowledge challenged is a password. For devices, a secret is associated with the WWN of the Nx_Port or switch.
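To make the challenge-response idea concrete, here is a minimal sketch of CHAP-style authentication (per RFC 1994, the response is the MD5 hash of the identifier, the shared secret, and the challenge). This is a generic illustration of the mechanism, not the Fibre Channel DH-CHAP wire protocol, and the secret shown is a placeholder.

```python
# Sketch: CHAP-style challenge-response, as used to verify that a device
# actually holds the secret associated with its identity (e.g., its WWN).
import hashlib
import os

SECRET = b"placeholder-shared-secret"  # provisioned out of band; not real

def make_challenge() -> bytes:
    # The authenticator sends a fresh random challenge for every attempt,
    # so a captured response cannot be replayed later.
    return os.urandom(16)

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    # RFC 1994: response = MD5(identifier || secret || challenge)
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Authenticator side: issue a challenge, then verify the peer's response.
challenge = make_challenge()
response = chap_response(1, SECRET, challenge)  # computed by the peer
expected = chap_response(1, SECRET, challenge)  # computed locally
print("authenticated" if response == expected else "rejected")
```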
Management sessions may also be authenticated to ensure that an intruder is not managing the fabric or device. Spoofing can be checked at the following points of attack:

1. Out-of-Band Management Application – When a management application contacts the switch, the switch may authenticate the entity that is connecting. Authentication of users is addressed in point of attack 3.

2. In-band Management Application – The in-band management application uses Common Transport (CT) Authentication to prevent spoofing of commands to the fabric services.

3. User to Application – When the user logs into the application, the management application challenges the user to present a password, secret or badge. The application could also authenticate the user with biometric data such as fingerprints, retina scans or even DNA samples.

4. Device to Device – After an Nx_Port receives a PLOGI, it can challenge the requesting port to show its credentials. CHAP is the standard Fibre Channel mechanism for authenticating Nx_Ports. The requesting Nx_Port should challenge the other Nx_Port in turn, so that both ports are sure of each other's authenticity. Two-way authentication is known as mutual authentication.

5. Devices to Fabric – When a device sends a Fabric Login (FLOGI) command, the switch can respond with a CHAP request to authenticate the device. The Nx_Port should respond to the CHAP challenge and challenge the switch as well, for mutual authentication.

6. Switch to Switch – When a switch is connected to another switch, both switches should authenticate each other with CHAP.

To authenticate every point, four types of authentication are possible:

1. User Authentication
2. Ethernet CHAP Entity Authentication
3. CT Message Authentication
4. Fibre Channel DH-CHAP Entity Authentication

After entities and users are authorized and authenticated, traffic should be able to flow securely between authorized devices. Data flowing on the link could still be stolen by a sniffer, the final threat discussed here.

Sniffing

Data can be stolen in many ways. One way is sniffing the data while it is in flight. Sniffing is also referred to as wire-tapping and is a form of man-in-the-middle attack. A Fibre Channel analyzer is a good example of a sniffer that can monitor traffic transparently; done properly, sniffing does not affect the operation of the devices on the link.

The cure for sniffing is encryption: transforming raw data so that it is unreadable without the correct secret. Without the correct key, stolen data is worthless. Several encryption methods work, and there are different encryption algorithms for different kinds of traffic. Rather than discussing encryption for each point of attack separately, note that encryption applies to both in-band and out-of-band traffic: Encapsulating Security Payload (ESP) can encrypt Fibre Channel traffic to ensure confidentiality, and Ethernet traffic can be encrypted with Secure Sockets Layer (SSL) or similar protocols. These techniques can use different levels of encryption to make stolen data worthless.

As SANs have grown in complexity, with many terabytes of data aggregated and replicated in shared systems, customers are increasingly concerned with the security of data at rest. Government regulations around privacy have further increased the importance of protecting stored customer information.
McDATA has developed integrated solutions with partners to provide transparent, wire-speed encryption for data at rest. These appliances use hardware-based encryption and key management to lock down data at rest and to enforce overall fabric security and access controls. These McDATA-certified solutions have been deployed by government and enterprise customers with minimal impact on application performance or management overhead. McDATA's experienced consultants can work with your team to ensure a seamless integration that addresses your unique security requirements.

The most common threats are mistakes made by users, and various access control lists can limit the risk from many forms of error. However, access control lists only stop users who do not spoof an authorized device; to prevent spoofing, authentication services are required to catch a lying intruder. If the intruder still manages to obtain physical access to the infrastructure and attaches a sniffer to a link to steal data, encryption can render the stolen data worthless. Together, these solutions help IT organizations manage the risk associated with the three common threats.

Referring back to the levels of attack: access control lists prevent Level 1 attacks by stopping miscabling from proceeding past the initialization stage. If an intruder tries to use an application under an authorized user's name, the Level 2 attack is stymied by authentication. And if the intruder installs a wire tap in a Level 3 attack, encryption can spoil the intruder's ill-gotten goods. The three levels of attack require different types of defensive maneuvers, and each threat must be dealt with individually.
Lately I tried visiting some websites and couldn't help but notice these pages kept giving me warnings about an insecure website. I was wondering what the issue could be; meanwhile, the solution was staring me straight in the face, singing "here I am". :)

What is an SSL Certificate?
SSL Certificates are small data files that bind a cryptographic key to an organization's details. The protocol activates the padlock that secures connections from a web server to the browser currently in use. SSL Certificates help protect all your sensitive documents and information, such as your credit card details, usernames, passwords and many more. They keep data secure between server and browser, they can increase your Google and other search engine rankings, and they build and strongly enhance customer trust.

Fixing: The server's security certificate is not yet valid
To cut a long story short, let's land on the fix for this annoying "server's security certificate is not yet valid" problem. The guide below is quick, brief and detailed; with it, you can easily fix this security certificate issue.

Steps on Fixing: Server's security certificate is not valid
You won't believe how simple this is, as all you have to do is set and synchronize your system clock. The quick steps below will guide you through synchronizing or setting your time to the current date.
- From your task bar, open your Time and Date settings and set the current date and time of the year.
- Locate and click on the Internet Time tab, then click on the Update now button.
- Once done, this will initiate synchronization with the internet time server.
- Wait a moment until you get a notification message saying you have synchronized correctly with the server.

For Windows 10 and 8
The process is similar, just a little bit different and more streamlined.
- Right-click on your clock and click on "Adjust Time and Date".
- Ensure Set time automatically is set to ON.
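As an extra option beyond the original steps, Windows also ships a command-line tool for its time service. Run these from an elevated Command Prompt to force a resync and check the result:

w32tm /resync
w32tm /query /status

The first command asks the Windows Time service to synchronize immediately; the second reports the current time source and the last successful sync, which is handy for confirming the fix took hold.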
Two MIT researchers have created Wi-Vi, an experimental system that uses Wi-Fi signals to track moving objects – usually people – behind walls and in closed rooms. The system works on the same principle as sonar: it emits (in this case) Wi-Fi radio waves and detects and measures their intensity as they rebound off walls and objects.

"The technology can also determine the motion of different persons in a closed room," says the project's page. "It can answer questions such as: Is the person moving towards the device or away from it? What is the angle of motion of a person inside a closed room relative to the location of WiVi?" The system can detect simple gestures and track up to three moving objects.

So far, the system's resolution is low, but the researchers hope to improve it with time. Wi-Vi could be used by law enforcement agents to avoid walking into an ambush and to help control hostage situations, and by emergency responders to find casualties buried beneath rubble and collapsed structures. It could also be useful for private citizens in situations such as suspected home invasions. In a more distant future, its use in considerably more low-risk situations is also possible: non-invasive monitoring of children and the elderly, controlling household appliances via gestures, gaming, and more.
At a recent street fair, I was mesmerized by the one-man band. Yes, I am easily amused, but I was impressed nonetheless. Combining harmonica, banjo, cymbals, and a kick drum -- at mouth, lap, knees, and foot, respectively -- the veritable solo symphony gave a rousing performance of the Led Zeppelin classic "Stairway to Heaven" and a moving interpretation of Beethoven's Fifth Symphony. By comparison, I'm lucky if I can pat my head and rub my tummy in tandem. (Or is it pat my tummy and rub my head?)

Lucky for you, the UNIX® operating system is much more like the one-man band than your clumsy columnist. UNIX is exceptional at juggling many tasks at once, all the while orchestrating access to the system's finite resources (memory, devices, and CPUs). In lay terms, UNIX can readily walk and chew gum at the same time. This month, let's probe a little deeper than usual to examine how UNIX manages to do so many things simultaneously. While spelunking, let's also glimpse the internals of your shell to see how job-control commands, such as Control-C (terminate) and Control-Z (suspend), are implemented. Headlamps on! To the bat cave!

A real multitasker

On UNIX (and most modern operating systems, including Microsoft® Windows®, Mac OS X, FreeBSD, and Linux®), each computing task is represented by a process. UNIX runs many tasks seemingly at the same time because each process receives a little slice of CPU time in a (conceptually) round-robin fashion. A process is something of a container, bundling a running application, its environment variables, the state of the application's input and output, and the state of the process, including its priority and accumulated resource usage. Figure 1 pictures a process.

Figure 1. A conceptual model of a UNIX process

If it helps, you can think of a process as its own sovereign nation, with borders, resources, and gross domestic product.

Each process also has an owner. Tasks you initiate -- your shell and commands, say -- are typically owned by you. System services might be owned by special users or by the superuser, root. For example, to enhance security, the Apache HTTP Server is typically owned by a dedicated user named www, which provides access to the files the Web server needs, but no others. Ownership of a process might change but is otherwise strictly exclusive: a process can have only one owner at any given time.

Finally (and simplifying for this introduction), each process has privileges. Typically, a process's privileges are commensurate with those of its owner. (For instance, if you can't access a particular file from your command-line shell, programs you launch from the shell inherit the same limitation.) An exception to this inheritance rule, where a process might acquire greater privileges than its owner, is an application with the special setuid or setgid bit enabled, as the ls listings below show.

The setuid bit can be set using chmod u+s. setuid permissions look like this:

$ ls -l /usr/bin/top
-rwsr-xr-x 1 root wheel 83088 Mar 20 2005 top

The setgid bit can be set using chmod g+s. setgid permissions look like this:

$ ls -l /usr/bin/wall
-r-xr-sr-x 1 root tty 19388 Mar 20 2005 /usr/bin/wall

A setuid process, such as top, runs with the privileges of the user who owns the file. Hence, when you run top, your privileges are promoted to those of root. Similarly, a setgid process runs with the privileges associated with the group owner of the file.
For instance, on Mac OS X, the wall utility -- short for "write all," because it writes a message to every physical or virtual terminal device -- is setgid tty (as shown above). When you log in and are assigned a terminal device to type in (the terminal becomes standard input for your shell), you're made the owner of the device, and tty becomes the group owner. Because wall runs with the privileges of group tty, it can open and write to every terminal.

Like all other system resources, your UNIX system has a finite, albeit large, pool of processes. (In practice, a system almost never runs out of processes.) Each new task -- say, launching vi or running xclock -- is immediately allocated a process from the pool. On UNIX systems, you can view one or more processes using the ps command. For example, if you want to see all the processes you own, type ps -w --user username:

$ ps -w --user mstreicher

You can view the entire list of processes using ps -a -w -x. (The format and specific flags of the ps command vary from UNIX flavor to UNIX flavor. See the online documentation for your system to find specifics.) -a selects all processes running on a tty device; -x further selects all processes not associated with a tty, which typically includes all the perpetual system services, such as the Apache HTTP Server, the cron job scheduler, and so on; and -w shows a wide format, useful for seeing the command line or full pathname of the application associated with each process.

ps has a legion of features, and some versions of ps even allow you to customize the output. For example, here is a useful custom process listing:

$ ps --user mstreicher -o pid,uname,command,state,stime,time
PID USER COMMAND S STIME TIME
14138 mstreic sshd: mstreicher S 09:57 00:00:00
14139 mstreic -bash S 09:57 00:00:00
14937 mstreic ps --user mstrei R 10:23 00:00:00

-o formats the output according to the order of the named columns. Here, the columns are process ID, user name, command, state, start time, and CPU time. state reflects the process state, such as sleeping (S) or running (R). (More on process state in a moment.) stime shows when the command started, and time shows how much CPU time the process has consumed.

Daddy, where do processes come from?

On UNIX, some processes run from system boot to shutdown, but most processes come and go rapidly as tasks start and complete. At times, a process can die a premature, even horrible death (say, due to a crash). Where do new processes come from?

Each new UNIX process is the spawn of an existing process. Further, each new process -- let's call it the "child" process -- is a clone of its "parent" process, at least for an instant, until the child continues execution independently. (If each new process is the offspring of an existing process, that begs the quandary, "Where does the first process come from?" The short answer: the kernel creates the very first process, init, at boot time, and every other process descends from it.) Figures 1-4 detail the spawning, uh, process:

- In Figures 2 and 3, Process A is running a program represented by the blue box. It runs the instructions numbered 10, 11, 12, and so on. Process A has its own data, its own copy of the program, its own set of open files, and its own collection of environment variables, which were initially captured when Process A sprang into existence.

Figure 2. Process A running code

- In UNIX, the fork() system call (so named because it's a call, or request, for operating system assistance) is used to spawn a new process. When Program A executes fork() at Instruction 13, the system immediately creates an exact clone of Process A, named Process Z.
Process Z has the same environment variables as A, the same memory contents, the same program state, and the same files open. The state of Processes A and Z immediately after Process A spawns Process Z is shown in Figure 3.

Figure 3. Process A spawns a clone of itself

- At inception, Process Z begins execution at the same place where Process A left off. That is, after inception, Process Z begins execution at Instruction 14. Process A continues execution at the same instruction.

- Typically, the programming logic at Instruction 14 tests whether the current process is the child or the parent process -- that is, Instruction 14 in Process Z and Instruction 14 in Process A separately determines whether its process is the progeny or the progenitor. To differentiate, the fork() system call returns 0 in the progeny but returns the process ID of Process Z to the progenitor.

- After the previous test, Process A and Process Z diverge, each taking a separate code path, as if both came to a fork in the road and each took a distinct branch.

The process of spawning a new process is more often called forking, given the metaphor of two travelers reaching a fork in the road. Hence, the system call is named fork().

After the fork, Process A might continue running the same application. However, Process Z might immediately choose to metamorphose into another application. The latter operation of changing what program is running within a process is called execution, but you can think of it as reincarnation: although the process ID remains the same, the instructions within the process are replaced entirely with those of the new program. Figure 4 shows the state of Process Z some time later.

Figure 4. Process Z is now independent of its progenitor, Process A
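Here is a minimal C sketch of the fork-and-reincarnate pattern just described. It is not from the original listing and pares down error handling; the choice of date as the new program is arbitrary:

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();               /* clone the calling process */

    if (pid == 0) {
        /* progeny: fork() returned 0 */
        printf("Child, pid %d\n", (int) getpid());
        execlp("date", "date", (char *) NULL);  /* reincarnate as date */
        perror("execlp");             /* reached only if the exec failed */
    } else if (pid > 0) {
        /* progenitor: fork() returned the child's process ID */
        printf("Parent, spawned child %d\n", (int) pid);
    } else {
        perror("fork");               /* the clone could not be created */
    }
    return 0;
}

Both processes run the same test after the call; only fork()'s return value tells them apart.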
You can experience forking right from the comfort of your private command line. To begin, open a new xterm. (You likely now realize that xterm is its own process and, within xterm, the shell is a separate process spawned by xterm.) Next, type:

ps -o pid,ppid,uname,command,state,stime,time

You should see something like this:

PID PPID USER COMMAND S STIME TIME
16351 16350 mstreic -bash S 11:23 00:00:00
16364 16351 mstreic ps -o pid,ppid,u R 11:24 00:00:00

According to the PID and PPID fields in this list, the ps command is a child of the bash shell. (The -bash indicates that the shell instance is a login shell.) To run ps, bash forks to create a new process; the new process reincarnates itself using execution, turning into ps.

Here's another experiment to try. Type:

sleep 10 & sleep 10 & sleep 10 & ps -o pid,ppid,uname,command,state,stime,time

You should see something like this:

$ sleep 10 & sleep 10 & sleep 10 & ps -o pid,ppid,uname,command,state,stime,time
PID PPID USER COMMAND S STIME TIME
16351 16350 mstreic -bash S 11:23 00:00:00
16843 16351 mstreic sleep 10 S 11:42 00:00:00
16844 16351 mstreic sleep 10 S 11:42 00:00:00
16845 16351 mstreic sleep 10 S 11:42 00:00:00
16846 16351 mstreic ps -o pid,ppid,u R 11:42 00:00:00

The command line spawns four new processes. Typing an ampersand (&) after each command runs that command in the background, in parallel with the shell. ps is another spawned process, but it runs in the foreground, preventing the shell from running another command until it terminates. Again, all four processes are the spawn of the shell, as shown by the values of PPID. The three sleep commands are marked S, because none of those processes consume resources while they sleep.

For convenience, the shell keeps track of all background processes it spawns. Type jobs to see a list:

$ sleep 10 & sleep 10 & sleep 10 &
[1] 16843
[2] 16844
[3] 16845
$ jobs
[1]   Running                 sleep 10 &
[2]-  Running                 sleep 10 &
[3]+  Running                 sleep 10 &

Here, the three jobs are labeled 1, 2, and 3 for convenience. The numbers 16843, 16844, and 16845 are the process IDs of each respective process. Thus, background task 1 is process ID 16843. You can manipulate your background jobs from the command line using these labels. For instance, to terminate a command, type kill %N, where N is the command's label. To move a command from the background to the foreground, type fg %N:

$ sleep 10 & sleep 10 & sleep 10 &
[7] 17741
[8] 17742
[9] 17743
$ kill %7
$ jobs
[7]   Terminated              sleep 10
[8]-  Running                 sleep 10 &
[9]+  Running                 sleep 10 &
$ fg %8
sleep 10

Running multiple commands simultaneously and asynchronously from the command line is a great way to juggle your own set of tasks. A long-running job -- say, number crunching or a large compilation -- is perfect to place in the background. To capture the output of each background command, consider redirecting the output to a file using the redirection operators, such as > or >>. Whenever a background command finishes, the shell prints an alert message before the next prompt:

$ whoami
mstreicher
[1]-  Done                    sleep 10
[2]+  Done                    sleep 10
$

To the great process pool in the sky

Some processes live forever (such as init), and some processes reincarnate themselves into a new form (such as your shell). Ultimately, most processes die of natural causes when a program runs to completion. Additionally, you can place a process in a kind of suspended animation, where it waits to be reanimated. And as the previous example shows, you can terminate a process prematurely with kill.

If a command is running in the foreground and you want to suspend it, press Control-Z:

$ sleep 10
(Press Control-Z)
[1]+  Stopped                 sleep 10
$ ps -o pid,ppid,uname,command,state,stime,time
PID PPID USER COMMAND S STIME TIME
18195 16351 mstreic sleep 10 T 12:44 00:00:00

The shell has suspended the command and assigned it a label for convenience. You can use this label as before to terminate the job or return it to the foreground. You can also use the bg command to resume the process in the background:

$ bg %1
[1]+ sleep 10 &

If a command is running in the foreground and you want to terminate it, press Control-C:

$ sleep 10
(Press Control-C)
$ jobs
$

Your shell makes suspending and terminating a process easy, but a little voodoo is working beneath the shell's innocent facade. Internally, your shell uses UNIX signals to affect the state of processes. A signal is an event, and it's used to alert a process. The operating system originates many signals, but you can send signals from one process to another, or even have a process signal itself.

UNIX includes a wide variety of signals, most of which have a special purpose. For example, if you send the signal SIGSTOP to a process, the process suspends. (For a complete list of signals, type man 7 signal or kill -l.) You send signals with the kill command:

$ sleep 20 &
[1] 19988
$ kill -SIGSTOP 19988
$ jobs
[1]+  Stopped                 sleep 20

The sleep command started in the background with process ID 19988. After kill sent SIGSTOP, the process changed state, becoming suspended or stopped. Sending another signal, SIGCONT, reanimates the process, and it resumes where it left off. In other words, your shell sends SIGSTOP to the foreground process each time you press Control-Z, and the bg command sends SIGCONT. Control-C sends SIGTERM, which requests that the process terminate immediately.

Some signals can be blocked by a process, and applications can be designed to explicitly "catch" signals and react to each event in a special way.
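As an illustrative aside (a minimal sketch of my own, not a listing from the original article), this C program catches SIGALRM instead of letting the default action occur:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* Install a handler for SIGALRM, arm a five-second timer, and wait. */
static void buzz(int sig)
{
    (void) sig;
    write(STDOUT_FILENO, "Bzzzzt! Time's up!\n", 19);  /* write() is safe inside a handler */
}

int main(void)
{
    signal(SIGALRM, buzz);   /* catch SIGALRM rather than dying */
    alarm(5);                /* ask the kernel for a SIGALRM in 5 seconds */
    pause();                 /* sleep until any signal arrives */
    return 0;
}

Compile and run it; after five seconds the handler fires and the program exits normally. Real daemons use the same mechanism.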
For instance, the system service xinetd, which launches other network services on demand, re-reads its configuration files upon receipt of SIGHUP. On Linux, sending signals to init can change the system runlevel and even initiate system shutdown. (Here's a question: what's the difference between kill %1 and kill 1?)

A process can even signal itself. Imagine that you're writing a game and want to give the user five seconds to respond. Your code can set a five-second timer and continue, say, redrawing the screen. When the timer runs out, SIGALRM is sent back to your process. Bzzzzt! Time's up!

(Here's the answer to the question: kill %1 kills your background job labeled 1. kill 1 terminates init, which is a signal to the operating system that it should shut down the entire system.)

Still other signals are transmitted from the operating system to processes in special circumstances. A memory violation can spur SIGSEGV, killing the process instantly while leaving a core dump behind. One special signal, SIGKILL, can't be blocked or caught, and it kills a process immediately. As with many other resources in UNIX, you can only signal processes that you own. This prevents you from terminating important system services and the processes of other users. The superuser, root, can signal any process.

More magic demystified

UNIX has many moving parts. It has system services, devices, memory managers, and more. Luckily, most of these complex machinations are hidden from view or are made convenient to use through user interfaces, such as the shell and windowing tools. Better yet, if you want to dive in, specialized tools, such as kill, are all readily available. Now that you know how processes work, you can become your own one-person band. Just one request: Freebird!

Resources
- Speaking UNIX: check out the other parts in this series.
- AIX and UNIX: the AIX and UNIX developerWorks zone provides a wealth of information relating to all aspects of AIX systems administration and expanding your UNIX skills.
- New to AIX and UNIX? Visit the New to AIX and UNIX page to learn more.
- AIX 5L Wiki: a collaborative environment for technical information related to AIX.
- Check out other articles and tutorials written by Martin Streicher.
- Search the AIX and UNIX library by topic.
- Safari bookstore: visit this e-reference library to find specific technical resources.
- IBM trial software: build your next development project with software for download directly from developerWorks.
- Participate in the developerWorks blogs, the AIX and UNIX forums, and the developerWorks community.
- zsh: collaborate, discuss, and share your expertise of zsh on the zsh wiki.
Analytics play a major role in today's business world, but they are also pivotal to the development of artificial intelligence systems. One of the expected qualities of AI is machine learning, the ability of a system to identify and learn from trends. This extremely practical aspect of artificially intelligent systems has its uses in a business setting.

What Is Machine Learning?
The concept is that an AI program can use machine learning to examine a large amount of data of a particular kind, and then apply what it learns in a real-world setting to meet a goal. The way the algorithm moves forward depends on what it's allowed to do: the program could be left to its own devices, or it could be told to examine specific information that's provided to it. The following are methods of machine learning, as explained by TechRepublic:
- Supervised learning: the "trainer" presents the computer with certain rules that connect an input (an object's feature, like "smooth," for example) with an output (the object itself, like a marble).
- Unsupervised learning: the computer is given inputs and is left alone to discover patterns.
- Reinforcement learning: a computer system receives input continuously (in the case of a driverless car receiving input about the road, for example) and is constantly improving.

In some instances, another offshoot called "deep learning" can be used. Deep learning is defined as algorithms layered together and designed to process data and reach predictions. The main difference between machine learning and deep learning is that deep learning doesn't require the assistance of humans.

How It's Being Used
Some common uses of machine learning can be seen in IBM's Watson, famous for being a Jeopardy powerhouse. Google's DeepMind was capable of using machine learning to beat the world champion of Go, a board game similar to, yet far more complex than, chess. Microsoft and Amazon also offer machine learning platforms designed to help organizations build programs themselves.

In a business setting, machine learning can be used to help build an automated help desk solution, where a "chatbot" virtual assistant can answer commonly asked questions. For example, Pizza Hut allows customers to place orders through Facebook Messenger and Twitter, utilizing intelligent technology that can personalize offers and make it easy for customers to quickly reorder their favorite menu items. Popular ride-hailing app Uber utilizes chatbots to let users request rides and get status updates. This is particularly useful for businesses that don't have the time to handle each individual inquiry.

However, we understand that not every client you have wants to deal with an automated chatbot, and would rather deal with a person. In order to help your business devote more time to its operations, Nerds That Care offers automated managed IT services that are designed to handle your technology solutions in a hands-off fashion. This allows you to focus on your business while we keep your technology in proper working order. To learn more, give us a call at 631-648-0026.
By Alexander Damisch, Director IoT, Wind River

The Internet of Things (IoT) is not just a technology, a system, or an architecture. It is mainly a business case, and it requires a combination of all these pieces to fulfill its promise: a smarter way to do business.

A major use case for IoT is predictive and preventive maintenance. The ability to accurately diagnose and prevent failures in real time is a major advantage for companies and is vital for critical infrastructure applications. A failure of high-tech machinery and equipment can prove highly expensive in terms of repair costs, in addition to lost productivity from the resulting downtime. Historically, technicians have been sent to carry out routine diagnostic inspections and preventive maintenance according to fixed schedules, which can be a costly and labor-intensive process with little assurance that failure will not occur between inspections.

One example in the renewable energy sector is a wind farm, and in the extreme case, an offshore farm. Wind turbine systems contain a great deal of technology, including a generator, a gearbox, and a multitude of electronics, including control systems to adjust blade pitch and many other parameters. If any element fails, due to dust buildup or cumulative vibrations, for example, the remote location means the repair cost will be extremely high. In addition, given that weather will always be a significant factor in this particular example, a turbine may not be producing electricity for some time.

However, onsite sensor-equipped systems can collect data from multiple turbines, not just a single turbine, enabling failure analysis that predicts when a system or component is likely to malfunction due to stress or overheating, and thereby enabling better operator or autonomous decision-making for maintenance. For example, if there is a high likelihood of the gearbox breaking down within a turbine, then switching to a lower performance mode with a reduced mechanical load, while still delivering 80 percent efficiency, could mean continued operation and further electricity generation for several weeks. This would allow scheduled maintenance that combines the repair and maintenance of more than just one turbine, and it clearly shows that the adaptive element of control and analytics is key to the best possible performance.

A second important use case is adaptive analytics, which involves looking at an overall system or a system of systems. Based on much the same data already being collected for predictive maintenance, adaptive analytics enables equipment and devices to analyze enormous amounts of data and make real-time decisions to help refine and improve operational processes.

The adaptive analytics and predictive maintenance capabilities of IoT can also play a significant role in providing opportunities for new revenue streams, not just reducing OPEX. In industrial markets, for example, the big players have historically had two main ways of generating revenue: the traditional way of selling devices such as control systems, motor drives, or human-machine interfaces (HMIs); and complete hardware and software system solutions including maintenance service level agreements (SLAs). But both of these business models are under intense pressure from a cost point of view, with increasingly strong competition and significantly reduced margins, especially after the recent financial crisis, which has meant over-saturation in the market for manufacturing equipment.
Major industrial and automotive equipment OEMs have a huge base of already installed equipment at customer premises, which can be a challenging situation for innovation. However, in addition to new products or assets, one way to increase revenue is to develop new recurring revenue streams based on existing, already deployed devices. The ability to innovate and deploy simple solutions that connect equipment to IoT can give customers significantly reduced operational costs and additional value via predictive maintenance and adaptive analytics. A service fee could be based on production volume, the number of deployed devices, or a certain amount of data. This could give rise to a subscription-based business model, including the leasing of equipment with ownership retained by the device manufacturer.

However, while this model can work quickly and easily in some markets (for example, many smaller companies now moving from M2M to IoT have already transitioned from charging for devices to charging for data volume or a specific analytics service), many large industrial systems are well entrenched technologically, so there are many challenges to overcome.

There has always been data collection in manufacturing and process automation markets via Supervisory Control and Data Acquisition (SCADA) industrial control systems. But data is collected in a very static way in a SCADA system, with no real-time access to information, and the existing OPC and OPC/UA protocols are not nearly enough. The reality is that over the past decade, much automation equipment has moved into a more network-like environment, where most parameters are exchanged between devices via primarily IP-based industrial Ethernet protocols such as PROFINET, EtherNet/IP, EtherCAT, TSN, or Ethernet POWERLINK.

If a data-aggregation device or gateway is deployed, though, it can become an interface between devices, albeit devices that were designed for local operations technology (OT) communication only, without expectations of connecting to an IT system, and certainly not of sending vast quantities of data up to an IoT cloud-based platform. This gateway therefore presents a huge challenge: the security perimeter requirement, essentially protecting against hackers and other threats. In the energy generation industry, there are Critical Infrastructure Protection (CIP) cyber security standards, created and administered by the North American Electric Reliability Corporation (NERC) for the U.S., and there are various International Society of Automation (ISA) standards for industrial control and automation markets. Security is absolutely critical in the IoT environment: protecting equipment and assets from the outside during system boot time and run time, and preventing possible system shutdown or even potential threats to functional safety, while still enabling communication with the devices to obtain the data.

A second necessary component is system manageability and customization; there is limited value in the gateway if devices or platforms cannot be given orders because they object to a slight deviation from the process parameters. In most applications, there is no need to receive data on hundreds of parameters every other millisecond, so the system needs to be managed in a post-deployment phase, by connecting to a device and installing a new software application to filter the volume of information available, for example.

A third key requirement for IoT is, obviously, connectivity.
In industrial automation in particular, there is a move away from old-style static or cyclic data gathering toward Ethernet-based collection. Protocols now becoming standard in IoT include the Extensible Messaging and Presence Protocol (XMPP), which is primarily a one-way protocol and therefore largely secure, and MQ Telemetry Transport (MQTT), a lightweight publish/subscribe messaging transport that is especially useful for communication with remote locations requiring a small code footprint.

In Figure 1, a simplified end-to-end IoT architecture shows the combination of different layers requiring expertise from different market segments. At one end are sensors and devices: parking sensors or a traffic flow sensor in a smart city, or an actuator such as a valve in an industrial application. Through well-known wires or bus systems, they connect to a device that is in most cases called a controller. Today, this is where most devices connect with a data aggregation or SCADA system. Currently these are mostly local supervisory systems connected to statically provisioned controls. While modern systems can be reconfigured "on the fly" for additional data tags, they still require commissioning, and they do not offer event-based data aggregation supported by dynamic local intelligence, such as an algorithm that could be updated based on analytics.
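As a purely illustrative aside (the broker address and topic are invented for the example), the lightweight publish/subscribe pattern MQTT enables can be tried with the open-source Mosquitto command-line clients. A remote gateway publishes a small reading, and a subscriber anywhere on the network receives it:

$ mosquitto_sub -h broker.example.com -t "plant/turbine7/gearbox/temp" &
$ mosquitto_pub -h broker.example.com -t "plant/turbine7/gearbox/temp" -m "78.4"

mosquitto_sub subscribes to a topic, and mosquitto_pub publishes a message to it. The tiny footprint of such clients is what makes MQTT attractive for remote, resource-constrained devices.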
Opinion: The next-generation Cell processor needs a realistic road map.

Much has been written about the next-generation Cell processor, a collaborative effort among IBM, Toshiba and Sony. It is an intriguing technology, already being dubbed a "supercomputer on a chip," but IT executives should not be making plans for using it in their enterprises any time soon. The chip landscape has been littered with innovations that did not work out for any number of reasons: lack of demand, lack of tools support and lack of applications. The last processor to receive this level of hype and speculation was Intel's Merced, which as the Itanium has achieved decidedly mixed results.

The Cell processor comprises a main PowerPC processor core linked to eight special-purpose "synergistic processing element" cores. If all goes well, the processor will be the building block of powerful distributed computing grids. The design is very different from the multicore processors starting to arrive, which pack more computing power into today's processors without radically changing the computing model.

Based on articles and rumors flying around the Cell processor, the technology will first find its place in consumer markets, from HDTV and gaming systems to high-end supercomputing. Mere speculation about the Cell becoming the next processor for Apple, in a partnership with Sony, was enough for financial analysts to raise their estimates for Apple's stock.

If IT managers have learned anything from the Merced/Itanium debacle, it is that they cannot plan for the next generation until promises turn into products. Too often "the next big thing" is superseded by improved old technology that is backward compatible. This is what happened with AMD's Opteron processor, which proved that 64-bit technology can support the 32-bit world, something the 64-bit-only Itanium could not do.

No matter how impressive the Cell processor becomes, all the innovation will be for nothing if developers have a hard time creating applications for it. Cell processor vendors need to be upfront about what the processor can do and offer a realistic road map for when the chip will be ready, avoiding the lofty promises and debilitating delays that the Merced will long be remembered for. Just as the leap from 32-bit processing to 64 bits has been hard to digest, the Cell architecture will challenge developers. Cell processors will not rule the desktop market in the near future, but the architecture could be used in distributed computing environments.

No matter how much performance the Cell can churn out, the key values that will determine whether it is a success will not be transactions per second or gigabits per second of bandwidth; they will be the number of applications written for it and the number of processors sold. Right now, all that is sure is that the Cell processor will be powering the PlayStation 3 and Toshiba HDTVs; any prediction of it going beyond the world of fun and games is dangerous.
The experts at Kaspersky Lab present their monthly report about malicious activity on users' computers and on the Internet.

May in figures. The following statistics were compiled in May using data from computers running Kaspersky Lab products:
- 242.7 million network attacks blocked;
- 71.3 million attempted web-borne infections prevented;
- 213.7 million malicious programs detected and neutralized on users' computers;
- 84.3 million heuristic verdicts registered.

Rogue antivirus programs for Mac OS X. In May, there were 109,218 attempts to infect users' computers with rogue antivirus programs via the Internet. This is half the peak activity seen in February-March 2010, when some 200,000 security incidents occurred each month. Nevertheless, rogue antivirus attacks came as a surprise to users of Apple computers. The first attacks were detected on 2 May, when the web was abuzz with news about the death of Osama bin Laden. Some users searching Google for information about this event did not receive search results, but were instead presented with a notification in their browser windows that a Trojan had been detected on their machines and could be removed. If a user agreed to try the suggested anti-malware software, the rogue antivirus (MAC Defender in this case) would claim it had detected several malicious programs on the computer (which in fact were not there) and ask for $59-80 to remove them. If the victim paid for the fake program, they received a registration key; when the user entered this key, the program stated the system was now malware-free. Interestingly, the purported number of "signatures" in MAC Defender's "antivirus database" is 184,230. For comparison, the number of malicious programs created for the Mac to date amounts to hundreds, not tens of thousands.

Malware for Win64. The growth in the number of users who prefer 64-bit operating systems did not go unnoticed. In May, Brazilian cybercriminals, whose main "specialization" over the last several years has been banking Trojans, released the first banking rootkit for 64-bit Windows (Rootkit.Win64.Banker). They targeted users' logins and passwords for online banking systems. During the attack, users were redirected to phishing pages that imitated the websites of reputable banks.

May was also marked by ZeroAccess's comeback, this time capable of functioning on x64 systems. Computers were infected using a drive-by download attack. After ZeroAccess penetrates a victim's computer, it determines whether the computer runs a 32- or 64-bit operating system and downloads the appropriate version of the backdoor.

Sony targeted yet again. The hackers did not give Sony a chance to relax. After attacks on the Sony PlayStation and Sony Online Entertainment networks in late April and early May, they compromised Sony's Thai site on 20 May. As a result, a phishing page targeting Italian credit card owners was hosted on hdworld.sony.co.th. However, this was not the end of it. On 22 May, the Greek site SonyMusic.gr was attacked, making user data publicly accessible, including users' nicknames, real names and email addresses. Two days later, several vulnerabilities were detected on sony.co.jp, although this time the stolen database did not contain users' personal data.

In our forecasts for 2011, we suggested that information of any type would become the target of many attacks. Unfortunately, the number of attacks on Sony reinforces the accuracy of this prediction.
Currently, IT security issues are extremely important, as services such as PSN and iTunes harvest as much information as possible. The legislation surrounding personal data security is not always clear, and all users can really do is stop using these services. There can be no doubt that the attacks on Sony were well planned and executed. We can confidently predict that in the future, services similar to PSN will become the targets of such attacks. Therefore, users need to be very careful when using these services and when dealing with the companies that provide them.

More detailed information about the IT threats detected by Kaspersky Lab on the Internet and on users' computers in May 2011 is available at www.securelist.com.
If someone told you passwords were a thing of the past, you might well laugh in disbelief. Undoubtedly, passwords have been the cornerstone of digital security for a long time. As technology has improved, however, passwords have become increasingly easy to hack, forcing the IT community to search for new solutions. Most people regularly use weak passwords - in fact, we're getting worse at this - and with the constantly expanding list of websites and services, the demand on us to remember unique usernames and passwords is growing all the time.

No matter which way you look at it, the problem is enormous. Consider, for instance, the fact that 42 per cent of the world's 7.2 billion population are now connected to the Internet, and that 52 per cent of these people use the Internet on a mobile device. In the US, these statistics are even higher, with 64 per cent of adults enjoying the Internet via a connected smartphone - a number that shoots up even further in the 18 to 24 age range. Most of these people subscribe to email accounts such as Gmail, and have social media accounts such as Twitter and Facebook. Others use workflow management applications like Trello, and messengers like Skype or Slack. Then there are forums, online magazine subscriptions, banking and PayPal accounts, online shopping accounts, and so much more. In other words, with a seemingly endless list of reasons for passwords and an ever-increasing number of connected people, we can't help but see the enormity of the problem. This is exacerbated by the fact that most people do not do enough to ensure their digital security.

A recent string of celebrity hacks has reinforced the danger of weak password management. No-one, it would appear, is safe from cybercriminals and their malevolent technical prowess. Peer a little closer at the startling phenomenon, however, and you'll discover that the reason Facebook's Mark Zuckerberg and Twitter's Jack Dorsey were hacked is that they had weak passwords - passwords they had unwisely reused across various online accounts.

The problem with passwords is that unless they are incredibly complicated (and by that, I mean impossible to remember), they are easy to hack. Astonishingly, despite that knowledge, Zuckerberg reportedly used 'dadada' as his password. The next problem is that you can't use the same password for different accounts, because if one gets hacked, then they all will. This, again, is precisely the mistake Zuckerberg made, and is why his accounts were hacked after his password was stolen from a different service's (LinkedIn) servers.

If passwords are so insecure, then, and easier to crack with every passing moment, what are we supposed to replace them with? For some time now, firms with an online presence have employed a number of techniques to shore up their websites' security. A famous example is the somewhat annoying 'Captcha' feature. The hard-to-read characters are designed to distinguish you from a bot attempting to 'brute force' the password stage of your login. Though Captchas do work well for this purpose, they aren't a convenient security implementation, as they negatively affect the user experience.

The security feature that has so far emerged as the winner in the rush to abolish the password is two-factor authentication (2FA), which mainly uses SMS messages to verify users' identity.
With almost 100 per cent of young adults in the US owning at least one phone, just about all of them could receive a two-factor authentication code. Logging in with a code that arrives via SMS is a very robust form of security, because it requires you to actually possess a phone verified as belonging to you. While it is, of course, possible for you to be mugged or to lose that phone, it should (hopefully) also have built-in password security, so the steps involved in getting into your account are much stronger than with a password alone.

Sadly, hackers have ways of accessing your phone's SMS messages. A Remote Access Tool (RAT), such as the one discovered in an unofficial Pokemon Go .apk file, gives a hacker permission to use all of a phone's features, meaning they have access to any 2FA codes sent via SMS.

Another solution to the password problem is to use password management tools such as KeePass. These protect your passwords behind strong encryption. You need only remember one password, and can then give all your accounts long, random, and (most importantly) different passwords. This certainly decreases your chance of getting hacked, but with the rise of quantum computing (for example, the kind of systems being developed by D-Wave), we are fast approaching a time when government intelligence agencies such as the NSA or GCHQ will be able to crack even the military-grade encryption protocols used by password managers. So the password truly is dead.

So, now what? Wells Fargo & Co. (WFC) says the password will be gone in five years, and the firm is predicting that biometrics are the solution. Most people agree. Retina scanning for bank account access is one area currently being examined by WFC. These scans work by taking pictures of veins and other unique physical attributes in people's eyes. The unique features are then turned into a digital code, which is matched against a stored template. Smartphones have also introduced fingerprint scanners as a replacement for passwords for unlocking mobiles and tablets, so there is certainly hope in tech security circles that our physical features are the solution to the problem.

Once again, however, there are problems, and they involve the fast-paced evolution of technology. D-Wave founder Geordie Rose has already described the second generation of his company's technology as 'like standing in front of an alien altar'. NASA and Google have claimed that one in their possession is 100 million times faster than a regular single-chip traditional computer. With so much power to hand - and so much more on the way - we have to wonder whether WFC's servers, which contain the digital templates of people's eyes, are going to be the weak link. After all, crack them, and you have access to all the unique eye signatures stored there.

So what is the answer to the password problem? Nobody knows, but I can tell you one thing: the password is dead.

Ray Walsh at BestVPN
The basic architecture of fiber optic communication consists of "fiber transmitter" and "fiber receiver" converter units. To transmit signals over fiber, the signals must first be converted into optical signals at the transmitting end, then carried through the fiber to the receiving end, where the optical signal is finally converted back into the original electrical signal. The "optical transceiver" is therefore a very important part of fiber optic communications, and the components and equipment this kind of communication needs have formed a supply chain of their own. Taking optical receivers as an example, quite a few companies have now developed optical modules that reach speeds of 10Gbps.

The Development Of Fiber Optic Technology
With growing requirements from a variety of video and data CCTV applications, conventional video signal transmission distances can no longer meet demand. The progressive development of optical multiplexing for combined video and control signals (WDM or DWDM multiplexer technology) therefore allows transmission over much longer distances.

8 Advantages Of Fiber Optic Transmission
1. High sensitivity; immune to electromagnetic noise;
2. Small size, light weight, long life, low price;
3. Insulating; withstands high voltage, high temperature and corrosion, suiting it to demanding working environments;
4. Cable geometry can be adapted to environmental requirements, and signals are transmitted easily;
5. High bandwidth, low attenuation, long transmission distance;
6. Low signal crosstalk, high transmission quality;
7. High security;
8. Easy installation and handling.

Fiber Optic Transmission In Architecture And Application Of CCTV System
Beyond the requirement to combine video and control signals, the architecture of CCTV transmission is the main principle behind an all-fiber transmission build-out; deployed in different ways, it serves different applications and functions. The applications of fiber optic communication are extremely wide and can be broadly divided into five categories, including telecom networks, datacom networks, CCTV and CATV fiber optic transmission networks, and Fiber In The Loop (FITL). Fiber optic communication is also applied in national defense and the military. In the CCTV network area, fiber is mostly used as the backbone of a monitoring system; it may simply combine video and control signals converted by FOT/FOR units, or, over a TCP/IP network, convert digital video into TCP/IP traffic for transmission and recovery.
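To put the "low attenuation, long transmission distance" advantage in concrete terms, here is a small, hedged C sketch of the classic optical link-budget arithmetic. All the numbers are invented but typical; they are not taken from any specific product datasheet:

#include <stdio.h>

/* Toy link-budget arithmetic: how far can a link reach before the
   received power drops below the receiver's sensitivity? */
int main(void)
{
    double tx_power_dbm = -3.0;           /* transmitter launch power */
    double rx_sens_dbm  = -24.0;          /* receiver sensitivity */
    double margin_db    = 3.0;            /* safety margin for splices, aging */
    double fiber_loss_db_per_km = 0.35;   /* typical single-mode loss at 1310 nm */

    double budget_db = (tx_power_dbm - rx_sens_dbm) - margin_db;
    printf("Max reach: %.1f km\n", budget_db / fiber_loss_db_per_km);
    return 0;
}

With these assumed figures, the budget is 18 dB and the reach works out to roughly 51 km, which is why fiber backbones can span distances that coaxial video cabling never could.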
Hackers have targeted the open-access encyclopaedia Wikipedia to spread a Windows virus. The attackers created a page on the German edition of the online encyclopaedia containing a link to a download site where users could get a security patch purportedly protecting against a variant of the Blaster worm. But the "patch" was in fact malicious code aimed at Windows users. The attackers attempted to lure more people towards their fake article with a spam e-mail designed to look as if it had come from Wikipedia.

Graham Cluley, senior technology consultant at security firm Sophos, said, "The good news is that the authorities at Wikipedia quickly identified and edited the article on their site. Unfortunately, however, the previous version of the page was still present in the archive and was continuing to point to malicious code. The hackers were thus able to send out spam pointing people to the page on Wikipedia, and try to lead them into infection."

Wikipedia has now confirmed that the archived page has been permanently deleted. The open nature of the online encyclopaedia - which can be amended by anyone - has brought controversy before, when false information was added to a journalist's biography, linking him with the assassination of John F Kennedy.
The University of Maine envisions opportunities for businesses around the state to gain access to sophisticated computational resources that might otherwise be unobtainable due to cost and a lack of expertise. An effort led by the state's university system, called Cyberinfrastructure Investment for Development, Economic Growth and Research (CIDER), has just received a $250,000 grant from the Maine Technology Institute to start stringing together 500-700 machines into a new supercomputer dedicated to serving public and private sector organizations.

According to Bruce Segee, director of the project and associate professor of computer and electrical engineering at the university, the idea is to turn the supercomputer into a cloud-based resource, or a "central hub where companies' applications and data could live, completely separate from the physical computing hardware in their offices."

As a report today in Maine's Business Source magazine noted, CIDER is "one piece of a broader vision for expanding Maine's cyberinfrastructure, aiding not only Maine businesses, but ideally providing a competitive advantage to attract new ones."

The author notes that Maine has a rather unique, unexpected advantage that might make it more attractive to companies that have formerly rented space in out-of-state datacenters, on both the power and cost fronts: paper mills. As Jackie Farwell explains, "power represents roughly 90% of the cost of operating a datacenter. By sharing space with a mill, a center could run on the affordable, reliable power generated by the mill through hydropower and biomass. That would also serve as a boon for the mill, which could sell the power to the center free of any distribution costs."

Farwell also notes that paper mills are most often located in rural areas, which provide a perfect location "for computing warehouses that most of their customers never visit and cold Maine river water is ideal for cooling the centers."
Wireless communication has not only changed workplace communication between individuals within the same office but has also changed the nature of communication between different parts of the supply chain. This means communication between distribution center and store managers, along with communication between those managers and the inventory they oversee. The two leading technologies in this area of wireless communication are Radio Frequency Identification (RFID) and the Global Positioning System (GPS). RFID uses small radio transmitters to keep track of the objects they are attached to. RFID tags are no bigger than a grain of rice and are relatively cheap to buy; they are effective within 300 meters of an RFID transmitter. GPS relies on larger equipment used to track objects. Since the increase in commercial use, GPS units have become extremely cost efficient, with prices around $100 a unit. The satellite system used with GPS is made up of a total of 27 satellites, of which 24 are in use and 3 remain as back-ups in case of satellite failure. Both technologies are beginning to be used in innovative ways to communicate inventory status within the supply chain. The Tuck School of Business at Dartmouth has done recent studies on the increased use of these wireless technologies as a substitute for traditional forms of inventory management. RFID is being used within distribution centers and brick-and-mortar stores to keep track of the different pieces of inventory moving around inside the building or out of it. These systems work given the fixed size of the space and the unplanned movement of inventory inside a store when it is moved by customers. GPS is used to manage the large truck fleets that many major retailers use to transport products from distribution centers to brick-and-mortar stores. The capability of these technologies to connect the physical layer and information layer of operations alters traditional communication in these business areas. If a product was lost in the store as a result of being wrongly shelved by a customer, in the past the only option was having an employee search the shelves to locate it; with RFID that problem no longer exists. Employees are able to locate the product and save man-hours searching for it. The system also helps ensure products aren't stolen: the current systems in most stores can only identify theft at the main entrances, but RFID can track movement through all exits in real time. GPS also alters communication between truck drivers and the distribution centers and brick-and-mortar stores by giving real-time location without drivers needing to pull over to give periodic check-ins over the phone. GPS ping systems can be configured to ping very often if the product is high value and less often if the product is lower cost (a simple scheduling rule of this kind is sketched below). As a result, tracking is more precise for products whose exact location is more important to communicate to the necessary managers. As these two types of technology grow in sophistication and become more ingrained in supply chain systems, we will see less need for routine person-to-person communication and greater efficiency. This leaves managers in the supply chain free to focus more on their employees and to suffer less from the information overload of keeping track of thousands of products.
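The value-based ping scheduling described above reduces to a few lines of logic. The sketch below is illustrative only: the thresholds, intervals, and function name are invented for the example rather than taken from any real fleet-tracking product.

#include <chrono>

// Toy rule: choose how often a truck's GPS unit should report,
// based on the declared value of the cargo it is carrying.
std::chrono::seconds pingInterval(double cargoValueUsd) {
    using namespace std::chrono;
    if (cargoValueUsd >= 100000.0) return seconds(30);   // high value: ping often
    if (cargoValueUsd >= 10000.0)  return seconds(120);  // mid value
    return seconds(600);                                 // low value: ping rarely
}

A real system would layer retries, geofencing, and exception alerts on top of a rule like this, but the core idea is exactly the value-to-frequency mapping described in the article.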
<urn:uuid:06c82534-cfc7-4b33-8ff8-7cad65e53bb5>
CC-MAIN-2017-04
https://www.ibm.com/developerworks/community/blogs/025bf606-020a-48e9-89bf-99adda13e9b1/entry/gps_never_lose_track?lang=en_us
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00123-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960056
628
3.125
3
Holographic to DNA Storage: What about the Hardware Requirements? August 18, 2012 About 20 years ago, I received an award from a professional association. I was nominated by someone at Bell Labs, and as part of the activities, I was invited to tour a Bell Labs facility in New Jersey. On that tour, my hosts introduced me to a Bell Labs researcher who had developed a holographic storage technology. The demo was in a small, gray room. I asked the inventor, "What hardware is required?" I recall that a research assistant opened a door to reveal a large room stuffed full of gizmos. I asked, "What's the challenge to commercialize the multi-terabyte storage system?" The answer: "Making everything small." I thought of this demo when I read "Harvard Cracks DNA Storage, Crams 700 Terabytes of Data into a Single Gram." The write-up points out: "The work, carried out by George Church and Sri Kosuri, basically treats DNA as just another digital storage device. Instead of binary data being encoded as magnetic regions on a hard drive platter, strands of DNA that store 96 bits are synthesized, with each of the bases (TGAC) representing a binary value (T and G = 1, A and C = 0). To read the data stored in DNA, you simply sequence it — just as if you were sequencing the human genome — and convert each of the TGAC bases back into binary. To aid with sequencing, each strand of DNA has a 19-bit address block at the start (the red bits in the image below) — so a whole vat of DNA can be sequenced out of order, and then sorted into usable data using the addresses." I like the word "simply." Now what about the hardware required to make this stuff work? No information. Fancy Dan storage technologies are fascinating. Practical too … if you have the resources to make these breakthroughs work when you are checking email at a coffee shop. Stephen E Arnold, August 18, 2012
<urn:uuid:e28738e0-5b3a-4e60-ba05-85ec89aba7dd>
CC-MAIN-2017-04
http://arnoldit.com/wordpress/2012/08/18/holographic-to-dna-storage-what-about-the-hardware-requirements/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00243-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952174
445
2.765625
3
Social engineering – the act of manipulating people into giving up confidential information – has long been a threat to businesses. One of the more common social engineering tricks, in terms of IT, is scammers posing as Windows technicians who call Windows users and try to trick them into believing their computers have viruses and that they need to pay to have the problem fixed. Have you had these calls? These scams have long been a part of the Windows environment. Despite users being fully aware of these attacks, some people still fall into the trap. These deceptions generally follow the same formula: A person calls you pretending to be from the Windows technical team at Microsoft. The scammer usually tells you that you need to renew your software protection licenses to keep your computer running. Most of the time, these scammers spread the conversation out over a number of phone calls and emails, the goal being to gain the trust of the user. Once trust is established, or the user seems interested enough, the scammer will offer a sweet-sounding deal: a service that will make your computer run like new, usually for a seemingly reasonable price. The scammer will then use remote PC support software to show you 'problems' your computer is having. They will usually show you the Windows Event Viewer – a part of the OS that lists errors, usually harmless, that your computer has generated. The scammer will then convince you that these errors are harmful, and once you have paid, they will make it look like they are cleaning your computer. If you give them your credit card number, you will likely see ridiculous charges, or even have people trying to access your accounts. What's being done? Governments are aware of this increasingly common trend, and some organizations, like the FTC, have taken measures to shut down scammers. This article from Ars Technica gives a good overview of what exactly the FTC is doing, while another article provides a first-hand account of how the scammers operate. What can we do? While action is being taken, these scams are still continuing. From what we can tell, they likely won't stop in the near future. To ensure you don't fall prey to this trickery, a few rules of thumb should help you identify when an attempted scam is at play: If you get an unsolicited call about your computers and IT security, it's likely not genuine. If these criminals provide you with a website, do a quick Google search to see if there have been any scam reports. You can also join the National Do Not Call Registry if you are in the United States. To learn more about these scams, please contact us.
<urn:uuid:237bebe0-d664-46eb-8c9e-a66088212d10>
CC-MAIN-2017-04
https://www.apex.com/beware-windows-tech-support-scammers/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00243-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959053
559
2.65625
3
Cloud computing is the latest hot trend in the IT world and among technology consulting companies – to the point where almost every meeting I attend touches on the subject, and usually in a very misinformed way. The perception in the marketplace is that the cloud is cheaper, more reliable and more secure. That is simply not the case unless the proper steps and procedures are followed. When will we see cloud standards? That is a great question, because the security questions of encryption and penetration resistance still have not been addressed. How reliable is the data in the cloud? The protocol, data-format and program-interface standards for using cloud services are mostly in place, which is why the market has been able to grow so fast. But standards for configuration and management of cloud services are not here yet. The crucial standards for practices, methods and conceptual architecture are still evolving, and we are nowhere close. Cloud computing will not reach its full potential until the management and architectural standards are fully developed and stable. Until these standards are formalized and agreed upon, there will be pitfalls and mishaps. The foundation of cloud protocols is TCP/IP, and the cloud usually uses established standard Web and Web-service data formats and protocols. When it comes to configuration and management, though, the lack of effective, widely accepted standards is beginning to be felt, and I have seen the negative results. There are several agencies and organizations working on cloud configuration and management standards, including the Distributed Management Task Force (www.dmtf.org), the Open Grid Forum (www.ogf.org), and the Storage Networking Industry Association (www.snia.org). Currently there are, as yet, no widely accepted frameworks to assist the integration of cloud services into enterprise architectures. An area of concern is the possibility of changing cloud suppliers. You should have an exit strategy before finding a provider and signing a cloud contract. There's no point in insisting that you own the data and can remove it from the provider's systems at any time if you have nowhere else to store the data, and no other systems to support your business. When selecting an enterprise cloud computing provider, its architecture should ensure the following: • the cloud services form a stable, reliable component of the architecture for the long term; • they are integrated with each other and with the IT systems operated by the enterprise; and • they support the business operations effectively and efficiently. Other groups that are looking to establish industry standards include the U.S. National Institute of Standards and Technology (http://csrc.nist.gov), the Object Management Group (www.omg.org) and the Organization for the Advancement of Structured Information Standards (www.oasis-open.org).
<urn:uuid:71ac0bc5-29cf-4dd0-bfb2-e64ee116c4ee>
CC-MAIN-2017-04
http://www.bvainc.com/establish-standards-for-cloud-computing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00499-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935113
564
2.5625
3
Apple has contributed both technical innovation and dedicated compassion to education and learning. Over the years, Apple has continued to introduce technologies that transform and enhance learning, help teachers better engage with students, and create interactive learning opportunities for students of all ages. Apple knows that the needs, the purpose, and the success of any technology are tied to the people that create it, implement it, and use it. As we embark on our efforts to introduce personalized and transformative learning programs, let's take a moment to review a few technologies that Apple has introduced to benefit educational programs. - The Device Enrollment Program (DEP) makes it easy for schools to deploy or re-deploy new hardware, with minimal involvement from IT and a simple, streamlined process for teachers and students. - The Volume Purchase Program (VPP) makes it easy for IT organizations to get the right learning apps to the right students at the right time for a transformative and personalized learning experience — all while saving school districts significant costs. - Guided Access technology enables teachers to focus student iPads on specific learning apps, websites, and attention cues to guide students through activities and help teachers regain instruction time. - iBeacons enable resources such as content, printers, and AirPlay mirroring to be automatically made available to student iPads as they enter a classroom. iBeacons can also be used to automate attendance and apply specific settings to student iPads while they are in areas where standardized tests are being conducted. - iBooks provide powerful, interactive digital learning materials to students that far exceed textbooks or PDFs, with the ability to embed videos, polls, presentations, and other transformational learning tools. Learn more about the power of Apple in education.
<urn:uuid:b3963334-dbbf-4293-afd7-37361d75b085>
CC-MAIN-2017-04
https://www.jamf.com/blog/five-apple-technologies-that-will-revolutionize-the-digital-classroom/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00315-ip-10-171-10-70.ec2.internal.warc.gz
en
0.922217
345
3.125
3
The National Science Foundation has banned a researcher for using agency-funded supercomputers to mine bitcoins, a virtual currency that can be converted into traditional currencies through exchange markets. According to a recently surfaced report from the National Science Foundation Office of the Inspector General, the NSF banned the unnamed researcher after receiving reports that NSF systems at two universities had been used for personal gain. Bitcoin mining refers to how the virtual currency is generated: miners solve math problems that serve to verify bitcoin transactions, and in exchange they are issued a certain number of bitcoins as a reward. "The researcher misused over $150,000 in NSF-supported computer usage at two universities to generate bitcoins valued between $8,000 and $10,000," according to the March 2014 Semi Annual Report to Congress. "Both universities determined that this was an unauthorized use of their IT systems. The researcher asserted that he was conducting tests on the computers, but neither university had authorized him to conduct such tests — both university reports noted that the researcher accessed the computer systems remotely and may have taken steps to conceal his activities, including accessing one supercomputer through a mirror site in Europe." This is the latest case of university systems being commandeered to mine for digital currency. Other notable incidents involve a researcher at Harvard and a student at Imperial College London. The audacity of the crime is not the only disturbing element: bitcoin mining and research supercomputers are a complete mismatch from a resource standpoint. "Using a Supercomputer to mine for bitcoin is both appalling and shocking to common sense," opines Brian Cohen at Bitcoin Magazine. Cohen goes on to describe the likely economics of the situation: the breach may have netted the perpetrator $8,000 to $10,000 worth of bitcoins, but the electricity cost to run the machines was likely 10 to 20 times that amount. Today's bitcoin mining rigs employ far more specialized hardware than a PC or even a supercomputer, as this statement from Michael B. Taylor, a professor at the University of California, San Diego, attests: "Today, all of the machines dedicated to mining Bitcoin have a computing power about 58,600 times the capacity of the United States government's [second] mightiest supercomputer, the IBM Sequoia," says Taylor. "The computing capacity of the Bitcoin network has grown by around 1,300 percent since the beginning of the year." The virtual currency system was designed this way: when bitcoins first came on the scene, they were fairly easy to generate, but the more bitcoins that exist, the harder new ones are to mine. The researcher, who has not been identified, had his account suspended government-wide, and his access to all NSF-funded supercomputer resources was terminated.
<urn:uuid:f618432f-7579-42b5-9227-68b63962831a>
CC-MAIN-2017-04
https://www.hpcwire.com/2014/06/09/us-researcher-caught-mining-bitcoins-nsf-iron/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00223-ip-10-171-10-70.ec2.internal.warc.gz
en
0.956813
575
2.703125
3
Having created a loopback interface and assigned it an IP address, we want to use it for management purposes, so it must be reachable from other routers and hosts. What we need to do is advertise the loopback's prefix via our routing protocol(s). This requires "network" statements, interface commands, or route redistribution ("redistribute connected"), as appropriate. Of course, if we're using non-default masks with our loopbacks (i.e., "/32", as is typically done to conserve IP address space), any routing protocol advertising the loopback prefixes needs to be classless. Speaking of addressing, from what address space are loopback addresses assigned? Since router management functions are usually accomplished in-house, loopback addresses are typically taken from RFC 1918 private address space, but if you want the loopbacks reachable from the outside world, their addresses can certainly be assigned from the organization's public address space. In general, whether the addresses used are public or private, each individual loopback's address should be unique within your organization (there are exceptions to this, such as PIM-SM "anycast" RPs). When advertising "/32" loopbacks via routing protocols that support automatic route summarization (RIPv2, EIGRP and BGP), if the loopback's address is on a different classful network from those used on the interfaces and subinterfaces, auto-summary should be disabled, so that the loopback's actual prefix is advertised (and not its classful network). Of course, loopbacks within a region can be advertised as a summary block to other regions (between OSPF areas, for example). Next question: can we have more than one loopback on a single router? Absolutely! In fact, within reason, you can have as many loopbacks on a router as you want. You might have multiple loopbacks on a single router for different purposes, for example: - Router identifiers (BGP, OSPF, LDP, MPLS-TE) - Rendezvous points (Sparse-Mode PIM) - General management (ping, Telnet, SSH, SNMP, SDM, etc.). The first loopback created is commonly "Loopback0", but since the range of available loopback numbers goes into the billions, you can use pretty much any numbers you want ("Loopback1", "Loopback1001" or "Loopback1234567890"). Loopbacks do not need to go in sequential order, so you can have "Loopback0", "Loopback10", and "Loopback99" on a router, if you like. Also, unlike IP addresses, loopback numbers are locally significant, so you can have a particular loopback number on as many routers as you want (for example, you can have a "Loopback0" on every router). Aside from router management, another common use of loopbacks is to simulate networks for testing, in both production and lab environments. Unlike static routes, which can be advertised using routing protocols ("redistribute static") but don't "answer" to pings, Telnet or whatever, you could configure ten or twenty loopbacks with addresses and masks (not necessarily "/32") and advertise them into the routing protocol(s). Not only would the loopback prefixes appear in routing tables, they would respond to IP utilities just like a "real" host.
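Pulling these pieces together, here is a minimal configuration sketch in the Cisco IOS syntax this post already quotes. The interface number, addresses, and routing-process numbers are illustrative only; adapt them to your own addressing plan.

! Create the loopback and give it a host (/32) address
interface Loopback0
 ip address 10.255.0.1 255.255.255.255
!
! Advertise the /32 into OSPF (a classless protocol, so the
! host route is carried as-is)
router ospf 1
 network 10.255.0.1 0.0.0.0 area 0
!
! If EIGRP is used instead, disable auto summarization so the
! /32 is not collapsed into its classful network
router eigrp 100
 no auto-summary
 network 10.255.0.1 0.0.0.0

Once the route is learned, the loopback answers pings and Telnet/SSH sessions from anywhere in the network, regardless of which physical interface the traffic arrives on.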
<urn:uuid:e08e773d-7400-459d-833a-c2df50f3c096>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2010/12/07/loopbacks-part-2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00344-ip-10-171-10-70.ec2.internal.warc.gz
en
0.916562
782
2.75
3
They also have a high dropout rate, she said. Before MIT joined its online course efforts with Harvard in edX, it offered "Circuits and Electronics" under the name MITx. Nearly 155,000 people signed up, according to MIT. Of these students, fewer than 15 percent tried the first problem set — and fewer than 5 percent passed the course. The dropout rate is really not exceptionally high in context, DeMillo said. A 20 percent retention rate in these courses is good; in other businesses, an online conversion rate of 1 to 2 percent is considered a win. Since January, top research universities have banded together to offer courses featuring their rock-star professors. Georgia Tech started offering classes through Coursera in July and had 90,000 students registered in two months. "The high-quality portion of this story is really important," DeMillo said. "The reason people are flocking to these courses is that the quality of the courses is so high, and it's such a compelling experience for students that they're drawn to it." Online classes like these will be just one of the alternative paths that students can take down the road, Wildavsky said. Students will choose from multiple options, including online classes, traditional course credits and competency-based learning. Traditional course credits measure time spent learning, while competency-based learning measures mastery of skills and knowledge. Western Governors University — an accredited online university founded by 19 state governors — follows the competency-based learning path. A start-up called StraighterLine offers online classes a la carte for $99 a month, which is part of a trend called unbundling, Wildavsky said. Unbundling disassembles higher education into pieces and parcels them off to whoever can provide them at the highest quality for the lowest price. Think of it as contracting out teaching, curriculum, advising and other services. Once companies like StraighterLine can get universities to recognize their classes for credit, this will be yet another option for students to access higher education. "We're going to move to a world where academic results matter much more than how you get there," he added. No matter how students get there, they need to earn a recognized credential that gets them into the workplace in larger numbers, Evans said. According to the 2011 Pathways to Prosperity report from the Harvard Graduate School of Education, 56 percent of students at four-year colleges earn a bachelor's degree within six years, and less than 30 percent earn an associate's degree in three years. Students will not complete all of their learning at one institution. But students who currently transfer to multiple institutions end up with more credits than they need to finish a degree. States will need to think about ways to have credits and academic experience transfer to any public institution across their state system. That way, students can finish their degrees without worrying about credits transferring or retaking courses elsewhere. "As students become far more mobile, their academic experience has to be as portable as the mobility they represent in their own lives," Evans said. "And that's where technology can enable that portability to happen in a far greater way than what we have today." Because academic results will matter more than how students get there, accreditors will change the way they evaluate institutions. Currently, institutions are evaluated by inputs like the size of the university library or the amount universities spend.
In the future, accreditors will evaluate universities by outputs, which include student learning, student success in the labor market and graduation rates. Along with multiple pathways and different accreditation measurements, credentials will change. Over the next five to 10 years, people will get a job solely by earning micro-credentials, demonstrating competency and showcasing their knowledge and skills on the Internet, Staton said. By placing more value on what people can do, everyone will focus on the actual work of potential employees rather than being hung up on credentials, he said. But that doesn't mean that a bachelor's degree has no place. Society may decide that a degree is important because of other signals it conveys about the individual, such as being highly socialized, capable of doing long-term projects or having a supportive family. Either way, this focus on the work rather than the diploma will undercut the skyrocketing prices of undergraduate education and potentially some types of graduate education. Depending on who casts the vision, higher education could be headed down a road that leads to technology-mediated or technology-integrated learning. Students could travel multiple paths to get to academic results. And technology could play an increasing role in making higher education accessible and affordable. "It shouldn't be [about] funding monolithic technology platforms; there will be no monolithic technology platforms," Staton said. "It will be about interoperability, not about one solution for the entire system."
<urn:uuid:8e2e5588-9cd2-4696-a4a5-a7a2dc8996a6>
CC-MAIN-2017-04
http://www.govtech.com/What-Will-Higher-Education-Look-Like-in-25-Years.html?page=3
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00390-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960493
1,033
2.609375
3
While Africa is lagging behind others in space science, the very first attempts at technology were, in fact, made by Africans. Millennia ago, Egyptians were the first to enter the agricultural age; they were also the first in stone building. Then somewhere along the way, Africans slept on the job and lost their technological focus. Yet according to Nigerian computer scientist and engineer Philip Emeagwali, Africa has the potential. African medical innovations, he argues, will raise African life expectancy from the current 50 years to 150. Those Africans currently aged 17 are likely to see life across three centuries. "A child born today could live long enough to see the middle of the twenty-second century. In a sense, African children of today will be time travellers that will live in and connect the twentieth, twenty-first and twenty-second centuries," he observed in an interview. The wealth of the future, Africa has realised, will be created largely by knowledge and technology, not by the export of natural resources or the size of a population. The World Bank's latest report [PDF] shows that net African exports are expected to make a negative contribution to real GDP growth in the near term. Furthermore, investment in family planning technologies to check overpopulation, and in foundational infrastructure for scientific and entrepreneurial innovation, is already starting to take shape in Africa. In the Information Age, millions of well-paying jobs require computer literacy, and Africa is already preparing by focusing on education and technology. So where does space technology fit into this? "Americans won the lunar space race by landing the first man on the Moon. [But] when it was discovered that the Moon is the most expensive and most useless piece of real estate in our solar system, all trips to the Moon were cancelled," Emeagwali observed on why Africa should tread carefully on the space journey and focus more on other pressing priorities such as HIV and poverty. However, he believes it is conceivable that the US cannot afford to journey to Mars alone, so "the first astronaut crew to land on the planet Mars will include an African, an Asian and a female." Does Africa need a space program? Scientists and other think tanks have credible reasons why the continent needs a space program to tackle various issues of development. As Kathryn Cave observed in a piece for IDG Connect titled Why Africa needs a space program, unlike what we see in TV dramas, space is about very prosaic applications that make life easier. And of the 40 current core African Union objectives, 35 require space technology in some form or other. Africa is therefore making its much-needed first toddler steps towards space exploration. African tech giants like South Africa, Nigeria, Ghana and Kenya always feature in everything technological. However, there are also other countries that are making important technological contributions. In 2015, Ethiopia built the first East African astronomical observatory. Privately funded to the tune of $3 million, the investment is one of the first great steps for the country towards a fully-fledged space agency. Ethiopia is the second most populous country on the continent. Its citizens grapple with malnutrition, poverty and other socioeconomic woes. The project is therefore considered in some quarters to be a misplaced priority. However, no government or donor funds have been included in the project.
Mohammed Alamoudi, an Ethiopian-Saudi magnate, funded the observatory through the Ethiopian Space Science Society (ESSS). Abinet Ezra, ESSS communications director, told AFP: "Our main priority is to inspire the young generation to be involved in science and technology." Strategically positioned on the equator, Kenya stood to enjoy space science privileges courtesy of NASA, Italian and other international space programs from the onset. Yet it is news to many people, including many Kenyans, that the country has had a space centre since 1964, from which a satellite named "Uhuru" (Kiswahili for "independence") was launched on the anniversary of Kenya's independence in 1970. What is not news is that Kenya is currently at an advanced stage in the establishment of a NASA prototype that will cost $100 million and provide Kenya with a ticket to the elite club of space-observatory owners. "Kenya's strategic outer space includes the geographic location along the equator and bordering the Indian Ocean to its East that facilitates ease of landing of space crafts, tracking of space craft's in space, and ease of access to equatorial orbits, and in particular the geostationary orbit," reads the project's policy order. What about the afronauts? The US has astronauts, Russia has cosmonauts, India has vyomanauts, and China has taikonauts. Africa is working hard to produce an afronaut. South Africa and Nigeria are the two countries that can be considered to have advanced space programs, which makes the African prospect of sending afronauts into space more and more promising. Although no afronaut has been produced with indigenous African technologies so far, serious attempts are being made across the continent, especially in Nigeria – which plans to send afronauts into space by 2030. The National Space Research and Development Agency in Nigeria, which claims to have trained 300 staff to PhD or BSc level, operates a number of multimillion-dollar satellites. These technologies have been used to monitor the oil-rich Niger Delta, track general elections and deal with extremist groups such as Boko Haram. The country's satellites played an important role in looking for the 273 schoolgirls that this terrorist group had kidnapped. "The country has a long and well-defined road map of its space program and embodies the vision to use space technology for the benefit of the Nigerian people both by providing information to help manage the country and by providing a focus for the training of engineers and scientists," Luis Gomes, head of earth observation and science at the UK-based Surrey Satellite Technology, told the BBC. South Africa launched its maiden satellite in 1999. The country's National Space Agency was formed in 2010, with the Cape Peninsula University of Technology launching a CubeSat nano-satellite in 2013. These continued efforts culminated in the launch of the Kondor-E satellite in 2015 to provide the South African military with all-weather, day-and-night radar imagery. Other countries such as Ghana, Algeria, Angola and Egypt have programs in space technology and have made attempts at solving their particular needs. However, almost all African countries have an element of space science in one way or another, mostly in their policy books. What is uncertain is whether the two African tech giants, Nigeria and South Africa (together with all the other aspiring space contenders), will join efforts in a pan-African space program and a possible association with NASA.
<urn:uuid:af76d9c5-87d3-46c1-9842-bcebfc2f9449>
CC-MAIN-2017-04
http://www.idgconnect.com/blog-abstract/19402/-afronauts-satellites-what-africa-space-program
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00298-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952141
1,442
3.5
4
C++11: C99 long long Compared with C++98, C++11 supports two additional integer types, long long and unsigned long long, which were introduced earlier in the C99 standard. The C++11 standard supports these two integer types to be compatible with the C99 standard. The long long and unsigned long long types can have different lengths on different platforms, but the C++11 standard requires that their length be at least 64 bits. When we define an integer literal, we can use the suffix LL (or ll) to indicate that the type of the literal is long long, and the suffix ULL (or ull, Ull, uLL) to indicate that the type of the literal is unsigned long long. In the following example, variable x is of type long long and its value is -950, and variable y is of type unsigned long long and its value is 9500:

long long int x = -950;
unsigned long long y = 9500;

We call this feature the C99 long long feature. The following example shows the different behaviors when the C99 long long feature is enabled and disabled:

if (3999999999 - 4000000000 < 0)
    printf("C99 long long is enabled");
else
    printf("C99 long long is disabled");

In this example, the values 3999999999 and 4000000000 are too large to fit into the 32-bit long int type, but they can fit into either the unsigned long or the long long int type. If the C99 long long feature is enabled, the two values have the long long int type, so the difference of 3999999999 and 4000000000 is negative. Otherwise, if the C99 long long feature is disabled, the two values have the unsigned long type, so the difference is positive. To strictly conform to the C++11 standard, the IBM XL C/C++ compiler introduces the extended integer safe behavior to ensure that a signed value never becomes an unsigned value after a promotion. After this behavior is enabled, if a decimal integer literal that does not have a suffix containing u or U cannot be represented by the long long int type, the compiler issues an error message to indicate that the value of the literal is out of range.
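For readers who want to experiment, here is a short, self-contained program that compiles under C++11 and exercises the suffixes described above. It is an illustrative sketch, not code from the original post.

#include <cstdio>

int main() {
    long long x = -950LL;            // LL suffix: the literal has type long long
    unsigned long long y = 9500ULL;  // ULL suffix: unsigned long long
    printf("x = %lld, y = %llu\n", x, y);  // %lld / %llu match the two types

    // On a platform where long is 32 bits, these unsuffixed literals are
    // promoted to long long, so the difference below comes out negative.
    long long diff = 3999999999 - 4000000000;
    printf("difference = %lld\n", diff);
    return 0;
}

Compiled with a C++11 compiler (for example, g++ -std=c++11), this prints x = -950, y = 9500 and difference = -1.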
<urn:uuid:f4b38ad6-ead9-4b8d-a441-cc4fd37530b7>
CC-MAIN-2017-04
https://www.ibm.com/developerworks/community/blogs/5894415f-be62-4bc0-81c5-3956e82276f3/entry/c_11_c99_long_long?lang=en
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00418-ip-10-171-10-70.ec2.internal.warc.gz
en
0.82756
445
3.515625
4
Definition: A variant of stack in which one other cactus stack may be attached to the top. An attached stack is called a "branch." When a branch becomes empty, it is removed. Pop is not allowed if there is a branch. A branch is only accessible through the original reference; it is not accessible through the stack. Formal Definition: The operations new to this variant of stack, branch(S, T) and notch(v), may be defined with axiomatic semantics. Also known as saguaro stack. Generalization (I am a kind of ...): stack. Note: A saguaro is a kind of branching cactus. Entry modified 22 August 2013. Cite this as: Paul E. Black, "cactus stack", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 22 August 2013. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/cactusstack.html
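Since the axioms for branch(S, T) and notch(v) are not reproduced in this entry, the C++ sketch below is only one plausible reading of the definition; the class name and method signatures are invented for illustration, and notch(v) is omitted because its semantics are not given here.

#include <stdexcept>
#include <vector>

// One plausible reading of the definition above: a stack that may carry
// at most one attached branch; pop is refused while a branch is attached,
// and an empty branch is removed.
class CactusStack {
    std::vector<int> items;
    CactusStack* branchPtr = nullptr;  // the single attached branch, if any
public:
    void push(int v) { items.push_back(v); }
    bool empty() const { return items.empty(); }

    int pop() {
        if (branchPtr) throw std::logic_error("pop is not allowed if there is a branch");
        if (items.empty()) throw std::logic_error("pop from empty stack");
        int v = items.back();
        items.pop_back();
        return v;
    }

    // branch(S, T): attach T to the top of S. T remains reachable only
    // through the caller's original reference, never through S itself.
    void branch(CactusStack* t) {
        if (branchPtr) throw std::logic_error("a branch is already attached");
        branchPtr = t;
    }

    // "When a branch becomes empty, it is removed": the owner calls this
    // after operating on the branch through its own reference.
    void pruneEmptyBranch() {
        if (branchPtr && branchPtr->empty()) branchPtr = nullptr;
    }
};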
<urn:uuid:ee6248d4-f869-4892-8ced-88ef4146ac38>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/cactusstack.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00354-ip-10-171-10-70.ec2.internal.warc.gz
en
0.885606
268
2.859375
3
Denial of Service (DoS) A quick guide to Denial of Service attacks – what they are and how they affect normal access to websites. What is a Denial of Service? A Denial of Service (DoS) attack is a type of assault against a program, system, network, website or online service that disrupts its normal function and prevents other users from accessing it. Usually, when people talk about DoS, it refers to an attack against a website or online service. What happens in a DoS attack? Attacked sites see a huge increase in network traffic, up to a rate of several gigabits per second – far beyond the capacity of most sites. The site becomes slow, or is completely disconnected or crashed. If the site isn't adequately defended against DoS attacks, restarting it is useless, as reconnecting to the Internet just exposes it to the same attack, causing it to crash again and again until the attack ceases. There have been numerous cases of DoS attacks being launched for personal reasons – a grudge against a user or the service, or just pure mischief. In recent years, however, there have also been cases of DoS attacks launched because of corporate or political rivalry. How a DoS attack is done Technically, a DoS attack prevents users from accessing a website, server or other targeted service by either overwhelming its physical resources or by disrupting all the network connections to it. An attack can overload a website's physical resources by sending it so many requests in such a short time that it overwhelms all the site's available memory, processing or storage space. Once those limits are reached, the site has to first clear all the pending requests before new ones can be accepted, blocking out any other users trying to reach the site – in other words, a denial of service. Similarly, an attack can disrupt all the available network connections to a site by rapidly sending invalid, malformed, or just overwhelming amounts of connection requests to it. While the site attempts to unsnarl these requests, no other users can connect to it, again resulting in a denial of service. In some instances, malware or an attacker can find and exploit a vulnerability in a program or website that also triggers incorrect use of the available resources or network connections, which likewise leads eventually to a denial of service. DoS attacks can be made by a single attacker using a simple utility program (sold in underground forums) to attack a program or site. Some malware also includes the ability to launch DoS attacks, using the resources of the infected machines or devices to perform the attack. If multiple infected machines launch attacks against the same target, it's known as a 'distributed denial of service' attack, or DDoS. The collective resources of botnets can also be used to launch and control DDoS attacks. Defending against DoS attacks Many websites targeted by DoS attacks have been slowed or crashed for periods ranging from a few hours to a couple of days. For online businesses, the forced downtime can result in significant losses. Perhaps the biggest case of DoS (or rather, DDoS) attacks was the 2007 attacks on Estonia, in which many of the online resources of the Estonian government were targeted.
<urn:uuid:b24f396d-c9ec-4331-a60a-fb7583fc8584>
CC-MAIN-2017-04
https://www.f-secure.com/en/web/labs_global/denial-of-service
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00224-ip-10-171-10-70.ec2.internal.warc.gz
en
0.95609
665
3.578125
4
Rego P.S., Federal University of Pará | Rego P.S., State University of Maranhão | Araripe J., Federal University of Pará | Silva W.A.G., Associação de Pesquisa e Preservação de Ecossistemas Aquáticos | and 7 more authors. Auk | Year: 2010 The Araripe Manakin (Passeriformes: Pipridae: Antilophia bokermanni) is the most threatened passeriform species and is classified as critically endangered. With an estimated population of only 800 individuals, this species is endemic to a small area (∼30 km2) of forest on the slopes of the Araripe Plateau in northeastern Brazil. The urgent need to implement an effective conservation program for the Araripe Manakin has stimulated intensive research into various aspects of its biology. We sequenced a segment of the mtDNA between the genes ND6 and 12S rDNA, which includes a pseudo-control region. This region was analyzed in 30 specimens of A. bokermanni with the aim of measuring intraspecific genetic diversity and population structure. Although the segment's position is the same as described in other bird species, A. bokermanni differs in some aspects, such as its length of 200 base pairs and the absence of indels or tandem repeats. Our analysis provides no evidence of population substructuring or of a history of population expansion. The species' genetic variability is slightly reduced in comparison with its sister species A. galeata, but their similarity indicates a relatively recent process of separation. © 2010 by The American Ornithologists' Union. All rights reserved.
<urn:uuid:f24de137-a358-4b8d-ae36-f749af184dd8>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/associcao-de-pesquisa-e-preservao-de-ecossistemas-aquaticos-1606634/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00135-ip-10-171-10-70.ec2.internal.warc.gz
en
0.899418
354
2.5625
3
Liquid-crystal displays (familiar to most as LCDs) rely on the light-modulating properties of liquid crystals to bring images to life on a wide variety of screens. From computer monitors to televisions to instrument panels and signage, LCDs are a pervasive element of modern life. LCDs employ high-tech films, which must be both thin and robust. The problem is that these films degrade over time as the liquid-crystal "mesogens" that make up the films redistribute to areas of lower energy in a process called dewetting. Eventually the film ruptures. Recently a team of scientists at Oak Ridge National Laboratory put the lab's Titan supercomputer – packed with 18,688 CPUs and an equal number of GPUs – to work to better understand the mechanics of this process, as reported on the OLCF website. Some of the important uses of high-tech films include protecting pills from dissolving too early, keeping metals from corroding, and reducing friction on hard drives. When the films are manufactured using liquid crystals – macromolecules with both rigid and flexible elements – the innovation potential goes through the roof. The rigid segments support interaction with electric currents, magnetic fields, ambient light, temperature and more. This has led to the material's wide prevalence in 21st-century flat-panel displays. Researchers are actively looking to expand the use of liquid-crystal thin films for nanoscale coatings, optical and photovoltaic devices, biosensors, and other innovative applications, but the tendency toward rupturing has stymied progress. By studying the dewetting process more closely, scientists are paving the way for a better generation of films. For several decades, the prevailing theory held that one of two mechanisms could account for dewetting, and that these two mechanisms were mutually exclusive. Then, about 10 years ago, experiments showed that the two mechanisms do coexist in many cases, as postdoctoral fellow Trung Nguyen of Oak Ridge National Laboratory (ORNL) explains. Nguyen, who was co-principal investigator on the project with W. Michael Brown (then at ORNL, but now working at Intel), ran large-scale molecular dynamics simulations on ORNL's Titan supercomputer detailing the beginning stages of ruptures forming in thin films on a solid substrate. The work appears as the cover story in the March 21, 2014, print edition of Nanoscale, a journal of the Royal Society of Chemistry. "This study examined a somewhat controversial argument about the mechanism of the dewetting in the thin films," stated Nguyen. The two mechanisms thought to be responsible for dewetting are thermal nucleation, a heat-mediated cause, and spinodal dewetting, a movement-induced cause. Theoretical models posited decades ago asserted that one or the other would be responsible for dewetting a thin film, depending on its initial thickness. The simulation validated that the two mechanisms do coexist, although one predominates depending on the thickness of the film – with thermal nucleation more prominent in thicker films and spinodal dewetting more common in thinner films. The impetus for the ruptures is the liquid-crystal molecules striving to recover lower-energy states. While still in the research stages, this finding is thought to hold promise for innovation in using thin films for applications such as energy production, biochemical detection, and mechanical lubrication.
The research was facilitated by a 2013 Titan Early Science program allocation of supercomputing time at the Oak Ridge Leadership Computing Facility. Nguyen's team went through ORNL's Center for Accelerated Application Readiness (CAAR) program, which gives early access to cutting-edge resources for codes that can take advantage of graphics processing units (GPUs) at scale. Under the CAAR program, Brown reworked the LAMMPS molecular dynamics code to leverage a large number of GPUs. Titan, the most powerful US supercomputer and the world's second fastest, has a maximum theoretical computing speed of 27 petaflops and a LINPACK result measured at 17.59 petaflops. The Titan Cray XK7 system is also the first major supercomputing system to utilize a hybrid architecture combining conventional 16-core AMD Opteron CPUs with NVIDIA Tesla K20 GPUs. The researchers utilized Titan to simulate 26 million mesogens on a substrate micrometers in length and width, employing 18 million core hours and harnessing up to 4,900 of Titan's nodes. The study lasted three months but would have taken about two years without the acceleration of Titan's GPUs. "We're using LAMMPS with GPU acceleration so that the speedup will be seven times relative to a comparable CPU-only architecture – for example, the Cray XE6. If someone wants to rerun the simulations without a GPU, they have to be seven times slower," Nguyen explained. "The dewetting problems are excellent candidates to use Titan for because we need to use big systems to capture the complexity of the dewetting origin of liquid-crystal thin films, both microscopically and macroscopically." This is the first study to simulate liquid-crystal thin films at experimental length- and timescales, and also the first to relate the dewetting process to the molecular-level driving force that causes the molecules to break up. The Nanoscale paper was also authored by postdoctoral fellow Jan-Michael Carrillo, who worked on the simulation model, and computational scientist Michael Matheson, who developed the software for the analysis and visualization work.
<urn:uuid:60cfb154-d599-4e47-85a4-0b42b8f93900>
CC-MAIN-2017-04
https://www.hpcwire.com/2014/04/14/titan-captures-liquid-crystal-film-complexity/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00557-ip-10-171-10-70.ec2.internal.warc.gz
en
0.924012
1,174
3.734375
4
Definition: A spatial access method which splits space with hierarchically nested boxes. Objects are indexed in each box which intersects them. The tree is height-balanced. See also R-file. Note: After [GG98]. Entry modified 17 December 2004. Cite this as: Paul E. Black, "R+-tree", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/rplustree.html
<urn:uuid:4759dcfc-752f-4b12-8813-8bf1257d58f6>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/rplustree.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00281-ip-10-171-10-70.ec2.internal.warc.gz
en
0.847389
166
2.5625
3
"In this digital age, a thriving and dynamic economy requires Internet policies that promote innovation domestically and globally while ensuring strong and sensible protections of individuals' private information and the ability of governments to meet their obligations to protect public safety," wrote Cameron Kerry, General Counsel at the Department of Commerce and Christopher Schroeder, Assistant Attorney General at the Department of Justice on the White House Blog. "The public policy direction developed by the Subcommittee will be closely synchronized to privacy practices in federal Departments and agencies, resulting in a comprehensive and forward-looking commitment to a common set of Internet policy principles across government. These core principles include facilitating transparency, promoting cooperation, empowering individuals to make informed and intelligent choices, strengthening multi-stakeholder governance models, and building trust in online environments," the blog stated. The subcommittee will include the Departments of Commerce, Justice, Education, Energy, Health and Human Services, Homeland Security, State, Transportation, Treasury and Small Business Administration. The Federal Trade Commission and the Federal Communications Commission will be included such as the National Security Council and National Security Staff and the Office of the U.S. Intellectual Property Enforcement Coordinator. In the face of numerous attacks on and leaks of private data, there has been an increase in the number of government-sponsored projects looking to protect privacy on the Internet. Recently the Defense Advanced Research Projects Agency (DARPA) issued a call for information on how it can help develop technology to best protect the rich private details that are often available on social media sites. Better anonymization algorithms and other technology to hide data seems to be a key component of what DARPA is looking to develop, though it notes: Anonymization techniques for social network data can also be more challenging than those for relational data. "Massive amounts of social network data are being collected for military, government and commercial purposes. In all three sectors, there is an ever growing need for the exchange or publication of this data for analysis and scientific research activities. However, this data is rich in private details about individuals whose privacy must be protected and great care must be taken to do so. A major technical challenge for social network data exchange and publication is the simultaneous preservation of data privacy and security on the one hand and information utility on the other," DARPA stated. Follow Michael Cooney on Twitter: nwwlayer8 Layer 8 Extra Check out these other hot stories:
<urn:uuid:a7c3218a-6f97-4e09-ad37-23f0eb9bc3ff>
CC-MAIN-2017-04
http://www.networkworld.com/article/2227558/security/white-house-group-takes-aim-at-internet-privacy.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00491-ip-10-171-10-70.ec2.internal.warc.gz
en
0.924719
483
2.5625
3
When a hard drive is installed in a computer, it must be partitioned before you can format and use it. Partitioning a drive is when you divide the total storage of a drive into different pieces. These pieces are called partitions. Once a partition is created, it can then be formatted so that it can be used on a computer. When partitions are made, you specify the total amount of storage that you would like to allocate to that partition from the total size of the drive. For example, if you have an 80 GB drive, then it would be possible to make one partition consisting of the entire 80 GB of available storage. Alternatively, you could make two partitions: a 20 GB partition that will be used for the operating system and programs, and a 60 GB partition set aside for data, music, and images. In the current IBM PC architecture, there is a partition table in the drive's Master Boot Record, or MBR (the section of the hard drive that contains the commands necessary to start the operating system), that lists information about the partitions on the hard drive. This partition table is further split into 4 partition table entries, with each entry corresponding to a partition. Because of this, it is only possible to have four partitions. These 4 partitions are typically known as primary partitions. To overcome this restriction, system developers decided to add a new type of partition called the extended partition. By replacing one of the four primary partitions with an extended partition, you can then make an additional 24 logical partitions within the extended one. The table below illustrates this.

Primary Partition #1
Primary Partition #2
Primary Partition #3
Primary Partition #4 (Extended Partition)
    Logical Partition #1
    Logical Partition #2

As you can see, this partition table is broken up into 4 primary partitions. The fourth partition, though, has been flagged as an extended partition. This allows us to make more logical partitions under that extended partition and therefore bypass the 4-partition limit. Each hard drive also has one of its possible 4 partitions flagged as an active partition. The active partition is a special flag, assigned to only one partition on a hard drive, that the Master Boot Record (MBR) uses to boot your computer into an operating system. As only one partition may be set as the active partition, you may be wondering how people can have multiple operating systems installed on different partitions and yet still be able to use them all. This is accomplished by installing a boot loader in the active partition. When the computer starts, it will read the MBR and determine the partition that is flagged as active. This partition is the one that contains the boot loader. When the computer boots off this partition, the boot loader will start and allow you to choose which operating system you would like to boot. Now that you know what a partition is, you may be wondering why you would even need to make multiple partitions instead of just making one. Though there are quite a few reasons, we will touch on some of the more important ones below:
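To make the partition-table layout described above concrete, here is a small sketch that walks the four primary entries of a classic MBR. It is illustrative only: it assumes the standard MBR layout (table at byte offset 446, 16-byte entries, 0x80 active flag, types 0x05/0x0F for extended) and a 512-byte boot sector already read into memory.

#include <cstdint>
#include <cstdio>

// Walk the four primary partition entries in a classic MBR,
// given the first 512-byte sector of a disk in `sector`.
void listPartitions(const uint8_t sector[512]) {
    if (sector[510] != 0x55 || sector[511] != 0xAA) {
        printf("No valid MBR boot signature\n");
        return;
    }
    for (int i = 0; i < 4; ++i) {
        const uint8_t* e = sector + 446 + 16 * i;   // each entry is 16 bytes
        uint8_t status = e[0];                      // 0x80 = active (bootable)
        uint8_t type   = e[4];                      // 0x05 / 0x0F = extended
        uint32_t lbaStart = e[8] | e[9] << 8 | e[10] << 16 | (uint32_t)e[11] << 24;
        uint32_t sectors  = e[12] | e[13] << 8 | e[14] << 16 | (uint32_t)e[15] << 24;
        if (type == 0) continue;                    // unused entry
        printf("Partition %d: type 0x%02X%s%s, start LBA %u, %u sectors\n",
               i + 1, type,
               (status == 0x80) ? ", active" : "",
               (type == 0x05 || type == 0x0F) ? " (extended)" : "",
               lbaStart, sectors);
    }
}

An extended entry found this way points at the first logical partition, which in turn chains to the next, which is how more than four partitions fit on one drive.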
The biggest downside of Wi-Fi for most users might be that it can really drain your smartphone or tablet battery, but a research team at the University of Washington has come up with a way to use the nearly ubiquitous wireless technology in a less taxing way. They have demonstrated a technique that uses 10,000 times less power than typical Wi-Fi (at up to 11Mbps, anyway), and next month will present a paper titled "Passive Wi-Fi: Bringing Low Power to Wi-Fi Transmissions" at the USENIX Symposium on Networked Systems Design and Implementation in Santa Clara. Not only is Passive Wi-Fi more energy-efficient than typical Wi-Fi, but the researchers say it uses 1,000 times less power than Bluetooth Low Energy and Zigbee.

"We wanted to see if we could achieve Wi-Fi transmissions using almost no power at all," said co-author Shyam Gollakota, a UW assistant professor of computer science and engineering, in a statement.

The researchers did this in part by decoupling the analog and digital operations of radio transmissions, and by making the analog part (which typically consumes hundreds of milliwatts of power) much more efficient. Their scheme involves centralizing the heavy lifting, like producing a signal at a specific frequency, in a plugged-in device, then using passive sensors that include digital switches to reflect and absorb such signals.

Such passive Wi-Fi technology could support typical Wi-Fi applications, but also pave the way for broader Internet of Things support. After all, one of the obstacles to IoT adoption is the additional power usage of all sorts of household items.

The research was funded by the National Science Foundation, the University of Washington and Qualcomm. Wi-Fi has been a ripe area for research, with Rice University, for example, exploring ways to improve propagation. In fact, Rice and MIT engineers each have presentations at the same USENIX conference next month at which the University of Washington researchers will present.
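To illustrate the reflect/absorb idea in the paragraph above, here is a deliberately simplified toy model. All numbers are assumed for illustration; real backscatter communication involves RF engineering far beyond this sketch:

```python
# Toy model: a plugged-in helper emits a constant carrier, and a passive
# device conveys bits by switching its antenna between "reflect" (1) and
# "absorb" (0). The receiver decodes by comparing amplitude to a threshold.
bits = [1, 0, 1, 1, 0, 0, 1, 0]
carrier_amplitude = 1.0
reflection_coeff = {0: 0.05, 1: 0.9}  # assumed absorb/reflect coefficients

received = [carrier_amplitude * reflection_coeff[b] for b in bits]

threshold = 0.5 * carrier_amplitude
decoded = [1 if r > threshold else 0 for r in received]
assert decoded == bits
print(decoded)
```

The energy-saving intuition is that toggling a switch costs almost nothing compared with generating the carrier, which is why the power-hungry analog work can be centralized in one plugged-in device.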
Health Benefits of Pineapple

This sticky and sweet tropical fruit is a favorite with children of all ages. Perfect as an integral part of sweet and sour sauce, you can't go past the wonderful pineapple. Pineapple is a tropical fruit that contains bromelain, a proteolytic enzyme which helps in the digestion of protein. Pineapple can also help prevent blood clot formation because of its bromelain content.

Pineapples are a member of the Bromeliaceae family and are composed of many flowers whose fruitlets are fused around a core. Each fruitlet has an eye, which is the spiny part on the pineapple's surface. Pineapples are both sweet and tart, with a beautiful, tropical yellow colour reminiscent of warm summer days at the beach.

Bromelain and Pineapples

One of the most important enzymes in pineapple is bromelain, and it is bromelain that holds the key to many of the pineapple's health benefits. Fresh pineapple is full of these sulphur-containing, protein-digesting compounds. So, what can they do for you?

Bromelain has been found to be a useful anti-inflammatory, effective in reducing swelling and assisting in the treatment of conditions such as acute sinusitis, sore throat, arthritis and gout. For increased effectiveness, pineapple should be eaten between meals without other food. This is because of another of bromelain's properties: its role as an aid to digestion. If eaten with other food, bromelain's health benefits will be used up in helping to digest that food.

Pineapple is high in anti-oxidants

A very good source of vitamin C, pineapple offers your body excellent protection against free radicals, substances that attack healthy cells. A build-up of free radicals can lead to atherosclerosis and diabetic heart disease, an increase in asthma attacks, and an increased risk of developing certain cancers, such as colon cancer. Free radicals have also been shown to accentuate the problems associated with osteoarthritis and rheumatoid arthritis. Vitamin C, your body's most important water-soluble anti-oxidant, has proven itself invaluable in fighting against and aiding treatment of these conditions. Vitamin C is, of course, also an excellent cold and flu fighter due to its importance to the proper functioning of the immune system.

Pineapple is also an excellent source of manganese, a mineral essential to some of the enzymes necessary for energy production in the body. It also has very good amounts of thiamine (vitamin B1), which is also important in these energy-producing enzymes.

Pineapple and other fruit have been shown to be important in maintaining good eye health, helping to protect against age-related eye problems. Three serves of fruit a day, in particular those high in anti-oxidants, have been shown to lower your risk of developing these potentially debilitating conditions.

There are even some beneficial molecules hidden in the stems of pineapples, Australian research has found. These molecules have been seen to act as a defence against certain types of cancer, notably ovarian, breast, lung, colon and skin cancer.

Pineapple Nutritive Value (Per 100 gm.)

- Vitamin A: 130 I.U.
- Vitamin C: 24 mg
- Calcium: 16 mg
- Phosphorus: 11 mg
- Potassium: 150 mg
- Carbohydrates: 13.7 gm
- Calories: 52

Pineapple has the following health benefits:

- It regulates the thyroid gland and has been found helpful in cases of goiter (enlargement of the thyroid gland).
- Dyspepsia (chronic digestive disturbance).
- Bronchitis (inflammation of the bronchial tubes).
- Catarrh (secretions from mucous membranes).
- High blood pressure.
- Arthritis (diseases of the joints).
- Fresh pineapple juice is also used in removing intestinal worms.
- Fresh pineapple juice has been used to combat diphtheria and other infections of the throat or other parts of the body.
- It helps prevent nausea (including morning sickness and motion sickness): take 230 cc of pineapple juice or papaya juice.

Selecting & Storing Pineapple

Pineapples should feel heavy for their size; otherwise, they could end up dry and tasteless. They should look, feel and smell clean and have no bad or mouldy marks on the outer surface. As pineapple stops ripening when picked, choose carefully and don't select one that looks immature.

Pineapples can be stored at room temperature; however, they spoil easily and should be watched carefully. To keep one longer than a day or two, wrap it in a plastic bag and store it in the fridge for up to five days. If you've cut your pineapple, store unused pieces in the fridge in an airtight container and use them as soon as possible. They can be frozen, but this will change the flavour, so be careful.
In early January, over 7 million user accounts belonging to members of the Minecraft community "Lifeboat" were hacked. Lifeboat runs servers for custom, multiplayer environments for Minecraft Pocket Edition, the smartphone version of the game, which allows players to play in different game modes.

It appears that even though the passwords in the breach were hashed, they were hashed with the MD5 algorithm, which is known to be weak. This means that many of the passwords can be deciphered with the use of online tools, a Linux command line, or even a simple hashing program which takes text as input and converts it to an MD5 hash.

Naturally, as many of us do, users may have reused that same password for more than just the one account, meaning that anyone in possession of the data now has a chance of accessing those users' other accounts as well. Examples like these show why we should not be using the same password for multiple accounts. We should really be using strong, unique passwords for each. That way, when a breach occurs on one service (and evidence shows that breaches are occurring with increasing frequency), hackers will only be able to access that specific account, reducing the area of compromise.

"I was able to easily verify people's passwords with them simply by Googling them, such is the joy of unsalted MD5," security researcher Troy Hunt said. Motherboard confirmed that one of the hashes provided by Hunt corresponded to an easily guessable password. The Lifeboat representative said that the company now uses a stronger hashing algorithm. Naturally, if victims have used the same passwords on other services, such as their email, anyone in possession of the data has a chance of accessing those accounts too.
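To see why unsalted MD5 is so weak, here is a short sketch of the kind of dictionary lookup Hunt describes, followed by a salted, deliberately slow alternative. The password and wordlist are hypothetical:

```python
import hashlib

# Unsalted MD5: the same password always yields the same hash, so
# precomputed tables (and even search engines) can reverse common ones.
leaked_hashes = {hashlib.md5(b"minecraft123").hexdigest()}

wordlist = ["password", "letmein", "minecraft123", "qwerty"]
for candidate in wordlist:
    if hashlib.md5(candidate.encode()).hexdigest() in leaked_hashes:
        print("cracked:", candidate)

# A per-user salt plus a slow key-derivation function defeats this kind
# of precomputation and makes each guess expensive:
salt = b"random-per-user-salt"  # in practice, generated with os.urandom()
strong = hashlib.pbkdf2_hmac("sha256", b"minecraft123", salt, 200_000)
print(strong.hex())
```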
Dec. 16 — Researchers with the Southern California Earthquake Center are using ORNL’s Titan supercomputer to prepare the state for its next big earthquake. When the last massive earthquake shook the San Andreas Fault in 1906, no one would hear about “plate tectonics” for another 50 years, and the Richter scale was still a generation away. Needless to say, by today’s standards, only primitive data survive to help engineers prepare southern California for an earthquake of similar magnitude. “We haven’t had a really big rupture since the city of Los Angeles existed,” said Thomas Jordan, Southern California Earthquake Center (SCEC) director. Titan is the world’s most powerful system for open scientific research, and project scientists including Yifeng Cui of the University of California, San Diego, and geophysicist Kim Olsen of San Diego State University are using it to simulate a major earthquake at high frequencies. These calculations are especially demanding, but are needed for the detailed predictions that are required by structural engineers.
(By Info-Tech analyst James Quin; printed with permission from Processor Magazine, www.processor.com)

Passwords. Just saying the word can be enough to send shivers down the spines of users and administrators alike. Users have too many to remember, and administrators have too many to reset. What was supposed to be an efficient and cost-effective way of providing secure authentication has become one of the biggest problems that enterprises face. Give everyone a break and move away from passwords.

Understanding The Nature Of The Beast

Though it is difficult to find definitive statistics on how many passwords the average corporate user has to remember, four or five is likely a reasonable (if not conservative) estimate. If this number is then multiplied by the number of times a year that these passwords must be reset, the count of passwords that need to be remembered rises significantly. Even if we assume a lackadaisical expiry rate of every 90 days, four or five passwords suddenly become 16 or 20 a year. Then, when we consider that these passwords must be a minimum number of characters in length (eight being the norm) and follow complex construction rules (upper- and lower-case letters, numbers, and special characters), is it any wonder that users forget their passwords?
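To put rough numbers on the paragraph above, here is a back-of-the-envelope sketch; the figures are the article's own illustrative estimates, not measured data:

```python
# 4-5 passwords per user, each expiring every ~90 days.
passwords_per_user = 5
resets_per_year = 365 // 90                     # ~4 forced changes a year

print(passwords_per_user * resets_per_year)     # ~20 distinct strings a year

# Complexity rules inflate the theoretical keyspace, but do nothing for a
# user who simply cannot remember 20 arbitrary strings.
printable = 26 + 26 + 10 + 32                   # upper, lower, digits, specials
print(printable ** 8)                           # ~6.1e15 possible 8-char passwords
```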
Imagine that you are trying to create a new sauce for a special dish, or the perfect adhesive for a new aircraft, or you’re flying a helicopter looking for victims of a natural disaster — and you succeed at each of these. This is wonderful news for your dinner guests, or the company that will use the new adhesive, and especially for the victims of the natural disaster. But the question is — Could you do it again and get the same results? Or, did you just get lucky the first time? At the XSEDE14 conference in Atlanta, a roomful of computational veterans from inside and outside the NSF Extreme Science and Engineering Discovery Environment (XSEDE) participated in a full-day workshop on the topic of reproducibility, and clearly, there is a lot at stake. “There is a growing awareness in the computational research community that this question of ‘can we do it again’ is becoming important for us in new ways, and the stakes are high — computational research is helping to save lives, answering policy questions, and making an impact on the world,” said Doug James, an HPC researcher at the Texas Advanced Computing Center, in his opening remarks for the workshop. People have been thinking about reproducibility for a long time – it is one thing to reproduce a small scale lab experiment, or a computation on your desktop, but it is an entirely different matter to reproduce something that the Hubble Space Telescope did over five years at the cost of hundreds of millions of dollars, for example. So, what is reproducibility? One working definition might resemble this: the ability to repeat an experiment to the degree necessary to assess the correctness and importance of the results. Practices that promote reproducibility include anything that makes a researcher more organized, provides a better audit trail, allows a researcher to track source code, and to know what data sources were used. Victoria Stodden of Columbia University, who led a roundtable on the topic of reproducibility in 2009 and an ICERM workshop on Reproducibility in Computational and Experimental Mathematics in 2012, gave the keynote address at the XSEDE14 workshop. She raised the issue of a credibility crisis. “Reproducibility has hit the popular press over the last several months,” Stodden said, citing recent coverage by The Economist (October 2013) and editorials in Nature and Science. Issues around the importance of reproducibility were catalyzed by the clinical trials scandal at Duke University in computational genomics where mistakes in the research were uncovered in 2010 in The Cancer Letter. “This really goes to the heart of how important reproducibility issues are, and how we need to reconstruct the pipeline of thinking, reasoning and observation that a scientist does, but for the computational aspects, too, where many of these decisions are being manifest.” Stodden also touched on separate discussions going on regarding different aspects of reproducibility such as statistical reproducibility, which questions the research decisions about the statistics and data analysis, and empirical reproducibility, which focuses on the reporting standards for the physical experiment, but does not focus on the computational steps. Everyone in the room agreed that computational research is now in a position where complexity and mission criticality take on new import, and the community needs to develop confidence in the results of that research. But what should our priorities be? Training? Better tools? New steps in proposals and submissions? 
NCSA Director Ed Seidel shared his view that there are three levels where things have to happen to get momentum moving in the right direction: 1) the campus level; 2) the national level; and 3) the publisher level. Seidel said that local campuses have to think about how they can begin to support local data services, not just repositories, so there is a local structure. "This is a policy issue that vice chancellors for research and provosts need to take seriously…and there are organizations in place like Internet2 and Educause that span the research universities across the country that can help," Seidel said. "It's important to frame it not just as data but more around reproducibility; scope the problem beyond data and the data infrastructure."

In addition, Seidel cited the XSEDE initiative as being a good organization for aiding the reproducibility process. XSEDE was instrumental in starting the National Data Service Consortium, aimed at organizing a number of individual data-service efforts around tools that create data collections, associate Digital Object Identifiers ('DOIs') with them, and provide linking services to publishers. While typically thought of as pointers to data collections, DOIs can also attach to code. This is a crucial part of reproducibility. Professional societies and journals can play a part as well. Many are starting to require links to the data referenced in a publication.

But reproducible practices must start in the research group. Lorena Barba of George Washington University, a leading advocate of reproducible science, said, "Conducting research reproducibly doesn't mean someone else will reproduce the results, but that you are doing it as if someone would do this. By providing full documentation, access to input data and source code, the community will have confidence in your results and will label them as reproducible even if they are, in fact, not reproduced."

Many other people added to the conversation, including Mark Fahey of the National Institute for Computational Sciences. According to Fahey, the centers need to step up and take some responsibility for providing documentation about how users build and run their codes. Fahey said, "Centers can automatically collect information for each code built and each run of the code, and this information can be made available back to the researcher for publications if desired. There are already two prototypes (ALTD and Lariat) at a variety of computing centers around the world that collect a good portion of this information, and a new, improved infrastructure called XALT, funded by NSF, is in development."

At the outset of the workshop, the group committed to a key deliverable: recommendations in the form of priorities and initiatives for organizations and communities. "It's been implicit that 'Of course, this is what people do; system administrators and researchers check to ensure that codes get the same results after system upgrades and when porting to new platforms,' but reproducibility has never been a formal enterprise," said Nancy Wilkins-Diehr of the San Diego Supercomputer Center, who summarized the workshop and helped facilitate suggestions for moving forward. "This is a good time to do this. Computational science is a respected contributor to the scientific knowledge base. Important decisions are now based on simulation. While this is gratifying, it has very real implications for our responsibilities as well," she said. The participants intend to move forward with humility, however.
“The vision for the recommendations is to honor the reality of a diverse set of viewpoints and include ideas that might be outside of the box,” James concluded. Everyone agrees that there is a need to promote confidence-building tools and methodologies that do not adversely affect performance. Recommendations will be ready in the September 2014 timeframe — please refer to xsede.org/reproducibility to read them. In addition, you can send comments and suggestions to email@example.com. The Help Desk will send any and all inquiries to the XSEDE team working on this initiative.
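As a concrete illustration of the audit-trail practices discussed above (organized records, tracked source code, known data sources), here is a minimal provenance-capture sketch in Python. It is an assumption-laden illustration of the idea, not ALTD, Lariat, or XALT themselves:

```python
import hashlib
import json
import platform
import subprocess
import sys
import time

def provenance_record(input_files):
    """Record enough about a run to later ask: could we do it again?"""
    rec = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "python": sys.version,
        "platform": platform.platform(),
        "argv": sys.argv,
        "inputs": {},
    }
    # Hash every input so the exact data sources are identifiable later.
    for path in input_files:
        with open(path, "rb") as f:
            rec["inputs"][path] = hashlib.sha256(f.read()).hexdigest()
    # If the code lives in git, capture the exact revision too.
    try:
        rec["git_commit"] = subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True).strip()
    except Exception:
        rec["git_commit"] = None
    return rec

# Hypothetical usage, written alongside the results of a run:
# with open("run_provenance.json", "w") as out:
#     json.dump(provenance_record(["input.dat"]), out, indent=2)
```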
When a hard drive motor failure occurs, it is common for the drive to stop spinning, or to make a beeping, stuttering, or chattering noise; in these cases its spindle motor may have failed. If this has happened to you, our data recovery technicians can help reunite you with your lost data. Data recovery in the event of a spindle motor failure often involves placing the user's hard drive platters into a compatible donor chassis with a functional motor.

How Does a Hard Drive Motor Failure Happen?

Under normal operation, your hard drive's platters are spinning at thousands of revolutions per minute. Modern hard drives tend to spin at 5,400 or 7,200 RPM. Enterprise-class hard drives spin at up to 15,000 RPM. The high rotational speed of your platters is what enables your hard drive to access information so quickly. A point on the edge of a 3.5-inch platter in a typical desktop hard drive can be traveling at almost 150 miles per hour! The high rotational speed also generates the thin cushion of air that keeps your hard drive's read/write heads afloat. All of this happens because the hard drive spindle motor dutifully does its job.

Modern hard drive spindle motors have been designed to spin the platters far more quietly and efficiently than you'd expect. But like most hard drive components, the spindle motor is delicate and vulnerable. A spindle motor failure can happen for several reasons. Most commonly, it is the result of physical trauma. Environmental conditions or old age can also cause the spindle motor's lubricated bearings to dry out. Without lubrication, the heat and resistance generated by friction become unbearable. This quickly burns out the motor. When the motor becomes seized, no matter how much power flows into it from the control board, it can't make the platters spin. The motor can make a quiet buzzing sound, and the hard drive may become very hot.

If you drop or mishandle your hard drive while it's running, the read/write heads can clamp down on the magnetic data storage platters. Suddenly, something is holding the platters in place and preventing them from spinning. The spindle motor tries and tries to spin the platters, but there's nothing it can do. This can also happen if you flip a hard drive over or handle it roughly while it's running. Total spindle motor failure can quickly set in if you keep trying to run the drive. Many modern hard drives have safety features built into them to prevent this situation. An accelerometer can detect when a hard drive has entered a state of free-fall and quickly move the read/write heads off of the platters. This can still render the hard drive inoperable, but the chances of the motor being seized and the platters holding your critical data becoming damaged are much lower.

Sudden Power Loss

When a hard drive gets powered off normally, the air cushion slowly dissipates as the platters slow to a halt. The read/write heads have ample time to move to their rest positions away from the platters. But if a hard drive suddenly loses power, the air cushion could vanish before the heads can move to safety as the platters come to an abrupt halt. The heads end up crashing onto the surfaces of the platters. The next time you power your hard drive on, the spindle motor will try to spin the platters. But the heads have too tight a grip on them. Trying to run the drive could cause platter damage and spindle motor failure.
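As a rough sanity check of the edge-speed figure quoted above, the sketch below computes platter edge speed from an assumed 95 mm platter diameter (actual platter sizes vary by model and spindle speed); the "almost 150 miles per hour" figure corresponds to the upper end of spindle speeds:

```python
import math

def edge_speed_mph(diameter_mm, rpm):
    """Edge speed of a spinning platter: circumference x revolutions."""
    metres_per_minute = math.pi * (diameter_mm / 1000.0) * rpm
    return metres_per_minute / 60.0 * 2.23694  # m/s -> mph

for rpm in (5400, 7200, 10000, 15000):
    print(rpm, "RPM ->", round(edge_speed_mph(95, rpm)), "mph")
# Roughly: 5400 -> ~60 mph, 7200 -> ~80 mph, 10000 -> ~111 mph, 15000 -> ~167 mph
```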
Old Age/Environmental Conditions

Hard drive spindle motor failure can also be caused by natural wear or poor environmental conditions. Lubricated bearings keep the forces of friction (an engineer's most hated enemy) at bay while the motor does its job. Over time, the lubrication will simply dry up. Foreign contaminants can also enter the hard drive and gunk up the motor. This is one of the many reasons not to open up your hard drive outside of a cleanroom environment.

The breathing hole is necessary for hard drives to work properly. Hard drives are sealed very tightly, but they are not hermetically sealed. Most notably, they have a tiny breathing hole on the faceplate. There's always a warning label telling you not to cover it up. But if foreign contaminants are so dangerous to a hard drive, why is there a hole there in the first place? That hole is there to make sure the air pressure inside and outside the drive is the same. Otherwise, the hard drive's heads might fall too close to the platters' surfaces. Behind that hole is a cloth filter. This filter can keep all but the tiniest contaminants out. Unfortunately, even the tiniest contaminants can build up over the years and compromise the spindle motor's lubricated bearings.

Spindle motor failure can also be brought on by environmental conditions. Exposing a hard drive to excessive humidity and heat for prolonged periods can cause the lubricated bearings to fail. If this happens, friction will overwhelm the motor and burn it out.

Why Choose Gillware for Hard Drive Motor Failure Data Recovery Services?

Recovering data from a hard drive with a failed motor can be difficult. In many cases, the motor is not the only component that has sustained damage. Motor failure can cause, or can be caused by, a domino effect of failures. Damage to the hard drive motor can come part and parcel with damaged read/write heads and damaged platters, which require intensive cleanroom work to repair.

This intensive cleanroom work can take the form of read/write head replacements, platter burnishing, or replacing the hard drive's spindle motor and chassis. Occasionally, a stuck hard drive motor can be unstuck. But in cases where the hard drive motor has completely failed, the hard drive platters must be removed and placed into a compatible chassis with a functional motor. These operations are always carried out by our skilled and highly trained cleanroom data recovery technicians and engineers.

At Gillware Data Recovery, we understand that data recovery is often an unplanned expense. No one can predict when they'll lose data to a hard drive failure. We also understand that while we do our best to keep the cost of hard drive recovery lower than our competitors', sometimes intensive cleanroom work is just outside of your budget. That's why we stand by our financially risk-free data recovery services for hard drive motor failure. We start with a completely free evaluation, and even offer to cover the cost of inbound shipping.

Our professional hard drive recovery technicians will assess the damage to your hard drive. We've seen just about every kind of failure in just about every brand of hard drive at least once before. One look at a hard drive's condition gives us a very good idea of the cost and potential for success of a data recovery operation. If the price quote is too high or the probability of success too low, you're free to back out right then and there.
But even if you approve the quote, we aren't ready to send you a bill just yet. We hold off on that until we've completed the recovery procedure. And even then, we only charge you for our work if we successfully recover your critical data. If we fail to meet your data recovery goals, you don't owe us a dime.

If you find your hard drive is not spinning, overheating, or making odd noises, your hard drive motor may be to blame. Don't panic! Just power the drive off and submit a case to our website. One of our recovery client advisers will contact you to walk you through the rest of our data recovery process.

Ready to Have Gillware Assist You with Your Hard Drive Spindle Motor Failure Data Recovery Needs?

Best-in-class engineering and software development staff: Gillware employs a full-time staff of electrical engineers, mechanical engineers, computer scientists and software developers to handle the most complex data recovery situations and data solutions.

Strategic partnerships with leading technology companies: Gillware is proud to be a recommended provider for Dell, Western Digital and other major hardware and software vendors. These partnerships allow us to gain unique insight into recovering from these devices.

RAID Array / NAS / SAN data recovery: Using advanced engineering techniques, we can recover data from large-capacity, enterprise-grade storage devices such as RAID arrays, network attached storage (NAS) devices and storage area network (SAN) devices.

Virtual machine data recovery: Thanks to special engineering and programming efforts, Gillware is able to recover data from virtualized environments with a high degree of success.

SOC 2 Type II audited: Gillware has been security audited to ensure data safety, meaning all our facilities, networks, policies and practices have been independently reviewed and determined to be completely secure.

Facility and staff: Gillware's facilities meet the SOC 2 Type II audit requirements for security to prevent entry by unauthorized personnel. All staff are pre-screened, background checked and fully instructed in the security protocol of the company.

We are a GSA contract holder: We meet the criteria to be approved for use by government agencies. GSA Contract No.: GS-35F-0547W. Our entire data recovery process can be handled to meet HIPAA requirements for encryption, transfer and protection of e-PHI.

No obligation, no up-front fees, free inbound shipping and no-cost evaluations: Gillware's data recovery process is 100% financially risk-free. We only charge if the data you want is successfully recovered. Our pricing is 40-50% less than our competition. By using cutting-edge engineering techniques, we are able to control costs and keep data recovery prices low.

Instant online estimates: By providing us with some basic information about your case, we can give you an idea of how much it will cost before you proceed with the recovery. We only charge for successful data recovery efforts. We work with you to define clear data recovery goals for our technicians, and only charge you upon successfully meeting these goals and recovering the data that is most important to you.

Gillware is trusted, reviewed and certified: Gillware has the seal of approval from a number of different independent review organizations, including SOC 2 Type II audit status, so our customers can be sure they're getting the best data recovery service possible. Gillware is a proud member of IDEMA and the Apple Consultants Network.
This chapter presents information about the wide variety of tools available to assist you in troubleshooting your internetwork. This includes information on using router diagnostic commands, Cisco network management tools, and third-party troubleshooting tools.

Using Router Diagnostic Commands

Cisco routers provide numerous integrated commands to assist you in monitoring and troubleshooting your internetwork. The following sections describe the basic use of these commands:

•The show commands help monitor installation behavior and normal network behavior, as well as isolate problem areas.
•The debug commands assist in the isolation of protocol and configuration problems.
•The ping commands help determine connectivity between devices on your network.
•The trace commands provide a method of determining the route by which packets reach their destination from one device to another.

Using show Commands

The show commands are powerful monitoring and troubleshooting tools. You can use the show commands to perform a variety of functions:

•Monitor router behavior during initial installation
•Monitor normal network operation
•Isolate problem interfaces, nodes, media, or applications
•Determine when a network is congested
•Determine the status of servers, clients, or other neighbors

The following are some of the most commonly used show commands:

•show version—Displays the configuration of the system hardware, the software version, the names and sources of configuration files, and the boot images.
•show running-config—Displays the router configuration currently running.
•show startup-config—Displays the router configuration stored in nonvolatile RAM (NVRAM).
•show interfaces—Displays statistics for all interfaces configured on the router or access server. The resulting output varies, depending on the network for which an interface has been configured.
•show controllers—Displays statistics for interface card controllers.
•show flash—Displays the layout and contents of Flash memory.
•show buffers—Displays statistics for the buffer pools on the router.
•show memory summary—Displays memory pool statistics and summary information about the activities of the system memory allocator, and gives a block-by-block listing of memory use.
•show process cpu—Displays information about the active processes on the router.
•show stacks—Displays information about the stack utilization of processes and interrupt routines, as well as the reason for the last system reboot.
•show cdp neighbors—Provides a degree of reachability information about directly connected Cisco devices. This is an extremely useful tool for determining the operational status of the physical and data link layers. Cisco Discovery Protocol (CDP) is a proprietary data link layer protocol.
•show debugging—Displays information about the type of debugging that is enabled for your router.

You can always type ? at the command line for a list of subcommands. Like the debug commands, some of the show commands listed previously are accessible only in the router's privileged exec mode (enable mode). This is explained further in the "Using debug Commands" section. Hundreds of other show commands are available. For details on using and interpreting the output of specific show commands, refer to the Cisco Internetwork Operating System (IOS) command references.
Using debug Commands

The debug privileged exec commands can provide a wealth of information about the traffic being seen (or not seen) on an interface, error messages generated by nodes on the network, protocol-specific diagnostic packets, and other useful troubleshooting data. To access and list the privileged exec commands, enter the enable command at the user exec prompt. Note the change in the router prompt: the # prompt (instead of the normal > prompt) indicates that you are in the privileged exec mode (enable mode).

Caution: Exercise care when using debug commands. Many debug commands are processor-intensive and can cause serious network problems (such as degraded performance or loss of connectivity) if they are enabled on an already heavily loaded router. When you finish using a debug command, remember to disable it with its specific no debug command (or use the no debug all command to turn off all debugging).

Use debug commands to isolate problems, not to monitor normal network operation. Because the high processor overhead of debug commands can disrupt router operation, you should use them only when you are looking for specific types of traffic or problems, and when you have narrowed your problem to a likely subset of causes.

Output formats vary with each debug command. Some generate a single line of output per packet, and others generate multiple lines of output per packet. Some generate large amounts of output, and others generate only occasional output. Some generate lines of text, and others generate information in field format.

To minimize the negative impact of using debug commands, follow this procedure:

Step 1 Use the no logging console global configuration command on your router. This command disables all logging to the console terminal.

Step 2 Telnet to a router port and enter the enable exec command. The enable exec command places the router in the privileged exec mode. After entering the enable password, you receive a prompt that consists of the router name with a # symbol.

Step 3 Use the terminal monitor command to copy debug command output and system error messages to your current terminal display. By redirecting output to your current terminal display, you can view debug command output remotely, without being connected through the console port.

If you use debug commands at the console port, character-by-character processor interrupts are generated, maximizing the processor load already caused by using debug. If you intend to keep the output of the debug command, spool the output to a file. The procedure for setting up such a debug output file is described in the Debug Command Reference.

This book refers to specific debug commands that are useful when troubleshooting specific problems. Complete details regarding the function and output of debug commands are provided in the Debug Command Reference. In many situations, using third-party diagnostic tools can be more useful and less intrusive than using debug commands. For more information, see the section "Third-Party Troubleshooting Tools," later in this chapter.

Using the ping Commands

To check host reachability and network connectivity, use the ping command, which can be invoked from both user exec mode and privileged exec mode. After you log in to the router or access server, you are automatically in user exec command mode. The exec commands available at the user level are a subset of those available at the privileged level.
In general, the user exec commands enable you to connect to remote devices, change terminal settings on a temporary basis, perform basic tests, and list system information.

The ping command can be used to confirm basic network connectivity on AppleTalk, ISO Connectionless Network Service (CLNS), IP, Novell, Apollo, VINES, DECnet, or XNS networks. For IP, the ping command sends Internet Control Message Protocol (ICMP) Echo messages. ICMP is the Internet protocol that reports errors and provides information relevant to IP packet addressing. If a station receives an ICMP Echo message, it sends an ICMP Echo Reply message back to the source.

The extended command mode of the ping command permits you to specify the supported IP header options. This allows the router to perform a more extensive range of test options. To enter ping extended command mode, enter yes at the extended commands prompt of the ping command.

It is a good idea to use the ping command when the network is functioning properly, to see how the command works under normal conditions and so that you have something to compare against when troubleshooting. For detailed information on using the ping and extended ping commands, refer to the Cisco IOS Configuration Fundamentals Command Reference.

Using the trace Commands

The trace user exec command discovers the routes that a router's packets follow when travelling to their destinations. The trace privileged exec command permits the supported IP header options to be specified, allowing the router to perform a more extensive range of test options.

The trace command works by using the error message generated by routers when a datagram exceeds its time-to-live (TTL) value. First, probe datagrams are sent with a TTL value of 1. This causes the first router to discard the probe datagrams and send back "time exceeded" error messages. The trace command then sends several probes and displays the round-trip time for each. After every third probe, the TTL is increased by 1.

Each outgoing packet can result in one of two error messages. A "time exceeded" error message indicates that an intermediate router has seen and discarded the probe. A "port unreachable" error message indicates that the destination node has received the probe and discarded it because it could not deliver the packet to an application. If the timer goes off before a response comes in, trace prints an asterisk (*). The trace command terminates when the destination responds, when the maximum TTL is exceeded, or when the user interrupts the trace with the escape sequence. A minimal sketch of this TTL technique appears below.

As with ping, it is a good idea to use the trace command when the network is functioning properly, to see how the command works under normal conditions and so that you have something to compare against when troubleshooting. For detailed information on using the trace and extended trace commands, refer to the Cisco IOS Configuration Fundamentals Command Reference.
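The following sketch, in Python rather than Cisco IOS, illustrates the TTL mechanism described above: send UDP probes with increasing TTL values and read the resulting ICMP errors. It is an illustration only (the helper name is hypothetical, and raw sockets generally require administrator privileges), not a replacement for the IOS trace command:

```python
import socket

def simple_trace(dest, max_ttl=30, port=33434, timeout=2.0):
    """Toy traceroute: UDP probes toward a high (likely unused) port."""
    dest_addr = socket.gethostbyname(dest)
    for ttl in range(1, max_ttl + 1):
        rx = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
        tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        rx.settimeout(timeout)
        tx.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        tx.sendto(b"", (dest_addr, port))
        hop = None
        try:
            # An intermediate hop answers with ICMP "time exceeded";
            # the destination answers with ICMP "port unreachable".
            _, (hop, _) = rx.recvfrom(512)
            print(ttl, hop)
        except socket.timeout:
            print(ttl, "*")  # no response before the timer expired
        finally:
            rx.close()
            tx.close()
        if hop == dest_addr:  # destination reached
            break

# simple_trace("192.0.2.1")  # example/documentation address
```

Using Cisco Network Management Tools

Cisco offers the CiscoWorks 2000 family of management products that provide design, monitoring, and troubleshooting tools to help you manage your internetwork. The following internetwork management tools are useful for troubleshooting internetwork problems:

•CiscoView provides dynamic monitoring and troubleshooting functions, including a graphical display of Cisco devices, statistics, and comprehensive configuration information.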
•Internetwork Performance Monitor (IPM) empowers network engineers to proactively troubleshoot network response times utilizing real-time and historical reports.
•The TrafficDirector RMON application, a remote monitoring tool, enables you to gather data, monitor activity on your network, and find potential problems.
•The VlanDirector switch management application is a management tool that provides an accurate picture of your VLANs.

CiscoView

CiscoView graphical management features provide dynamic status, statistics, and comprehensive configuration information for Cisco internetworking products (switches, routers, hubs, concentrators, and access servers). CiscoView aids network management by displaying a physical view of Cisco devices and color-coding device ports for at-a-glance port status, allowing users to quickly grasp essential information. Features include the following:

•Graphical displays of Cisco products from a central location, giving network managers a complete view of Cisco products without physically checking each device at remote sites
•A continuously updated physical view of routers, hubs, switches, or access servers in a network, regardless of physical location
•Updated real-time monitoring and tracking of key information and data relating to device performance, traffic, and usage, with metrics such as utilization percentage, frames transmitted and received, errors, and a variety of other device-specific indicators
•The capability to modify configurations such as trap, IP route, virtual LAN (VLAN), and bridge configurations

Internetwork Performance Monitor

IPM is a network management application that enables you to monitor the performance of multiprotocol networks. IPM measures the response time and availability of IP networks on a hop-by-hop (router-to-router) basis. It also measures response time between routers and the mainframe in Systems Network Architecture (SNA) networks. Use IPM to perform the following tasks:

•Troubleshoot problems by checking the network latency between devices
•Send Simple Network Management Protocol (SNMP) traps and SNA alerts when a user-configured threshold is exceeded, when a connection is lost and re-established, or when a timeout occurs
•Analyze potential problems before they occur by accumulating statistics, which are used to model and predict future network topologies
•Monitor response time between two network end points

The IPM product is composed of three parts: the IPM server, the IPM client application, and the response time reporter (RTR) feature of the Cisco IOS software.

The TrafficDirector RMON Application

The TrafficDirector advanced packet filters let users monitor all seven layers of network traffic. Using Cisco IOS embedded RMON agents and SwitchProbe standalone probes, managers can view enterprise-wide network traffic from the link, network, transport, or application layers. The TrafficDirector multilayer traffic summary provides a quick, high-level assessment of network loading and protocol distributions. Network managers then "zoom in" on a specific segment, ring, switch port, or trunk link and apply real-time analysis and diagnostic tools to view hosts, conversations, and packet captures. TrafficDirector threshold monitoring enables users to implement a proactive management environment. First, thresholds for critical Management Information Base (MIB) variables are set within the RMON agent.
When these thresholds are exceeded, traps are sent to the appropriate management station to notify the network administrator of an impending problem.

The VlanDirector Switch Management Application

The VlanDirector switch management application simplifies VLAN port assignment and offers other management capabilities for VLANs. VlanDirector offers the following features for network administrators:

•Accurate representation of the physical network for VLAN design and configuration verification
•Capability to obtain VLAN configuration information on a specific device or link interface
•Discrepancy reports on conflicting configurations
•Capability to troubleshoot and identify individual device configurations that are in error with system-level VLANs
•Quick detection of changes in VLAN status of switch ports
•User authentication and write protection security

Third-Party Troubleshooting Tools

In many situations, third-party diagnostic tools can be more useful than commands that are integrated into the router. For example, enabling a processor-intensive debug command can be disastrous in an environment experiencing excessively high traffic levels. However, attaching a network analyzer to the suspect network is less intrusive and is more likely to yield useful information without interrupting the operation of the router. The following are some typical third-party tools used for troubleshooting internetworks:

•Volt-ohm meters, digital multimeters, and cable testers are useful in testing the physical connectivity of your cable plant.
•Time domain reflectometers (TDRs) and optical time domain reflectometers (OTDRs) are devices that assist in the location of cable breaks, impedance mismatches, and other physical cable plant problems.
•Breakout boxes, fox boxes, and BERTs/BLERTs are useful for troubleshooting problems in peripheral interfaces.
•Network monitors provide an accurate picture of network activity over a period of time by continuously tracking packets crossing a network.
•Network analyzers such as sniffers decode traffic at all seven OSI layers, so problems can be identified automatically in real time, providing a clear view of network activity and categorizing problems by criticality.

Volt-Ohm Meters, Digital Multimeters, and Cable Testers

Volt-ohm meters and digital multimeters are at the lower end of the spectrum of cable-testing tools. These devices measure parameters such as AC and DC voltage, current, resistance, capacitance, and cable continuity. They are used to check physical connectivity. Cable testers (scanners) also enable you to check physical connectivity. Cable testers are available for shielded twisted-pair (STP), unshielded twisted-pair (UTP), 10BaseT, and coaxial and twinax cables. A given cable tester might be capable of performing any of the following functions:

•Test and report on cable conditions, including near-end crosstalk (NEXT), attenuation, and noise
•Perform TDR, traffic monitoring, and wire map functions
•Display Media Access Control (MAC)-layer information about LAN traffic, provide statistics such as network utilization and packet error rates, and perform limited protocol testing (for example, TCP/IP tests such as ping)

Similar testing equipment is available for fiber-optic cable. Because of the relatively high cost of this cable and its installation, fiber-optic cable should be tested both before installation (on-the-reel testing) and after installation. Continuity testing of the fiber requires either a visible light source or a reflectometer.
Light sources capable of providing light at the three predominant wavelengths—850 nanometers (nm), 1300 nm, and 1550 nm—are used with power meters that can measure the same wavelengths and test attenuation and return loss in the fiber. TDRs and OTDRs At the top end of the cable testing spectrum are TDRs. These devices can quickly locate open and short circuits, crimps, kinks, sharp bends, impedance mismatches, and other defects in metallic cables. A TDR works by bouncing a signal off the end of the cable. Opens, shorts, and other problems reflect the signal back at different amplitudes, depending on the problem. A TDR measures how much time it takes for the signal to reflect and calculates the distance to a fault in the cable. TDRs can also be used to measure the length of a cable. Some TDRs can also calculate the propagation rate based on a configured cable length. Fiber-optic measurement is performed by an OTDR. OTDRs can accurately measure the length of the fiber, locate cable breaks, measure the fiber attenuation, and measure splice or connector losses. An OTDR can be used to take the signature of a particular installation, noting attenuation and splice losses. This baseline measurement can then be compared with future signatures when a problem in the system is suspected. Breakout Boxes, Fox Boxes, and BERTs/BLERTs Breakout boxes, fox boxes, and bit/block error rate testers (BERTs/BLERTs) are digital interface testing tools used to measure the digital signals present at PCs, printers, modems, the channel service unit/digital service unit (CSU/DSU), and other peripheral interfaces. These devices can monitor data line conditions, analyze and trap data, and diagnose problems common to data communication systems. Traffic from data terminal equipment (DTE) through data communications equipment (DCE) can be examined to help isolate problems, identify bit patterns, and ensure that the proper cabling has been installed. These devices cannot test media signals such as Ethernet, Token Ring, or FDDI. Network monitors continuously track packets crossing a network, providing an accurate picture of network activity at any moment, or a historical record of network activity over a period of time. They do not decode the contents of frames. Monitors are useful for baselining, in which the activity on a network is sampled over a period of time to establish a normal performance profile, or baseline. Monitors collect information such as packet sizes, the number of packets, error packets, overall usage of a connection, the number of hosts and their MAC addresses, and details about communications between hosts and other devices. This data can be used to create profiles of LAN traffic as well as to assist in locating traffic overloads, planning for network expansion, detecting intruders, establishing baseline performance, and distributing traffic more efficiently. A network analyzer (also called a protocol analyzer) decodes the various protocol layers in a recorded frame and presents them as readable abbreviations or summaries, detailing which layer is involved (physical, data link, and so forth) and what function each byte or byte content serves. 
Most network analyzers can perform many of the following functions:

•Filter traffic that meets certain criteria so that, for example, all traffic to and from a particular device can be captured
•Time-stamp captured data
•Present protocol layers in an easily readable form
•Generate frames and transmit them onto the network
•Incorporate an "expert" system in which the analyzer uses a set of rules, combined with information about the network configuration and operation, to diagnose and solve, or offer potential solutions to, network problems.
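As a worked example of the TDR distance calculation described earlier, the sketch below assumes a velocity of propagation of 66% of the speed of light, a typical figure for coaxial cable; the real value depends on the cable and is usually printed in its datasheet:

```python
# TDR principle: distance = (signal velocity x round-trip time) / 2
C = 299_792_458          # speed of light in a vacuum, m/s
VOP = 0.66               # assumed velocity factor for the cable under test

def fault_distance_m(round_trip_seconds, velocity_factor=VOP):
    """Distance to a reflecting fault, given the echo's round-trip time."""
    return velocity_factor * C * round_trip_seconds / 2

# A reflection that returns 500 nanoseconds after the pulse was sent:
print(round(fault_distance_m(500e-9), 1), "m")  # ~49.5 m to the fault
```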
File and directory timestamps are one of the resources forensic analysts use for determining when something happened, or in what particular order a sequence of events took place. As these timestamps are usually stored in some internal format, additional software is needed to interpret them and translate them into a format an analyst can easily understand. If there are any errors in this step, the result will clearly be less reliable than expected.

My primary purpose in this article is to present a simple design of test data suitable for determining if there are errors or problems in how a particular tool performs these operations. I will also present some test results from applying the tests to different tools.

For the moment, I am concerned only with NTFS file timestamps. NTFS is probably the most common source of timestamps that an analyst will have to deal with, so it is important to ensure that timestamp translation is correct. Similar tests need to be created and performed for other timestamp formats. Also, I am ignoring time zone adjustments and daylight saving time: the translation to be examined will cover Universal Coordinated Time (UTC) only.

An NTFS file timestamp, according to the documentation of the FILETIME data structure in the Windows Software Development Kit, is a "64-bit value representing the number of 100-nanosecond intervals since January 1, 1601 (UTC)". Conversion from this internal format to a format more suitable for human interpretation is performed by the Windows system call FileTimeToSystemTime(), which extracts the year, month, day, hour, minutes, seconds and milliseconds from the timestamp data. On other platforms (e.g. Unix), or in software that is intentionally platform-independent (e.g. Perl or Java), other methods for translation are required.

The documentation of FileTimeToSystemTime(), as well as practical tests, indicates that the FILETIME value to be translated must be 0x7FFFFFFFFFFFFFFF or less. This corresponds to the time 30828-09-14 02:48:05.4775807.

File timestamps are usually determined by the system clock at the time some file activity was performed. It is, though, also possible to set file timestamps to arbitrary values. On Vista and later, the system call SetFileInformationByHandle() can be used; on earlier versions of Windows, NtSetInformationFile() may be used. No special user privileges are required. These system calls have a similar limitation in that only timestamps less than or equal to 0x7FFFFFFFFFFFFFFF will be set. Additionally, the two timestamp values 0x0 and 0xFFFFFFFFFFFFFFFF are reserved to modify the operation of the system call in different ways.

The reverse function, SystemTimeToFileTime(), performs the opposite conversion: translating a time expressed as year, month, day, hours, minutes, seconds, etc. into the 64-bit file timestamp. In this case, however, the span of time is restricted to years less than or equal to 30827.

Before any serious testing is done, some kind of baseline requirements need to be established.

- Tests will be performed mainly by humans, not by computers. The number of test points in each case must not be so large as to overwhelm the tester. A maximum limit around 100 test points seems reasonable. Tests designed to be scored by computer would allow for more comprehensive tests, but would also need to be specially adapted to each tool being tested.
- The currently known time range (0x0 to 0x7FFFFFFFFFFFFFFF) should be supported.
If the translation method does not cover the entire range, it should report out-of-range times clearly and unambiguously. That is, there must be no risk of misinterpretation, either by the analyst or by readers of any tool-produced reports. A total absence of translation is not quite acceptable on its own: it requires special information or training to interpret, and the risk of misinterpretation appears fairly high. A single '?' is better, but if there are multiple reasons why a '?' may be used, additional details should be provided.
- The translation of a timestamp must be accurate, within the limits of the chosen representation. We don't want a timestamp translated into a string to become a very different time when translated back again. The largest difference we can tolerate is related to the precision of the display format: if the translation doesn't report time to a greater precision than a second, the tolerable error is half a second (assuming rounding to nearest second) or up to one second (assuming truncation). If the precision is milliseconds, then the tolerable error is on the corresponding order.

Test 1: Coverage

The first test is a simple coverage test: what period of time is covered by the translation? The baseline is taken to be the full period covered by the system call FileTimeToSystemTime(), i.e. from 1601-01-01 up to 30828-09-14.

The first subtest checks the coverage over the entire baseline. In order to do that, and also keep the number of point tests reasonably small, each millennium is represented by a file, named after the first year of the period, whose timestamps are set to the extreme timestamps within that millennium. For example, the period 2000-2999 is tested (very roughly, admittedly) by a single file, called '02000', with timestamps representing 2000-01-01 00:00:00.0000000 and 2999-12-31 23:59:59.9999999 as the two extreme values (Tmin and Tmax for the period being tested).

The second subtest makes the same type of test, only it checks each separate century in the period 1600-8000. (There is no particular reason for choosing 8000 as the ending year.)

The third subtest makes the same type of test, only it checks each separate year in the period 1601-2399. In these tests, Tmin and Tmax are the starting and ending times of each single year.

The fourth subtest examines the behaviour of the translation function at some selected cut-off points in greater detail.

These tests could easily be extended to cover the entire baseline time period, but this makes them less suitable for manual inspection: the number of points to be checked would become unmanageable for 'manual' testing.

Test 2: Leap Years

The translation must take leap days into account. This is a small test, though not unimportant. The tests involve checking the 14-day period 'around' February 28th/29th for the presence of a leap day, as well as for discontinuities. Two leap year tests are provided: 'simple' leap years (2004: year evenly divisible by 4) and 'exceptional' leap years (2000: year evenly divisible by 400). There are four non-leap tests: three for 'normal' non-leap years (2001, 2002, 2003) and one for an 'exceptional' non-leap year (1900: year divisible by 100).

More extensive tests can easily be created, but again the number of required tests would surpass the limit of about 100 specified in the requirements.
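To make the translation step concrete, here is a minimal Python sketch (an illustration, not one of the tools under test). It also shows how easily a coverage gap arises: Python's datetime type stops at year 9999, well short of the 30828-09-14 end of the baseline, so the naive translator below would fail Test 1:

```python
from datetime import datetime, timedelta, timezone

EPOCH_1601 = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_utc(ft):
    # timedelta has microsecond resolution, so the trailing 100-ns digit
    # of the FILETIME value is truncated here (a rounding issue in itself).
    return EPOCH_1601 + timedelta(microseconds=ft // 10)

print(filetime_to_utc(0x0))                 # 1601-01-01 00:00:00+00:00
print(filetime_to_utc(125911584000000000))  # 2000-01-01 00:00:00+00:00

try:
    print(filetime_to_utc(0x7FFFFFFFFFFFFFFF))
except OverflowError:
    # datetime cannot represent years above 9999: this translator
    # covers only part of the 1601-30828 baseline.
    print("out of range")
```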
It is not entirely clear whether leap days always are/were inserted after February 28th in the UTC calendar: if they are/were inserted after February 23rd, additional tests may be required for the case where the timestamp translation includes the day of the week. Alternatively, such tests should only be performed for time zones in which this information is known.

Test 3: Rounding

This group of tests examines how the translation software handles limited precision. For example, assume that we have a timestamp corresponding to the time 00:00:00.6, and that it is translated into a textual form that does not provide sub-second precision. How is the .6 second handled? Is it chopped off (truncated), producing a time of '00:00:00'? Or is it rounded upwards to the nearest second: '00:00:01'? In the extreme case, the translated string may end up in another year (or even millennium) than the original timestamp. Consider the timestamp 1999-12-31 23:59:59.6: will the translation say '1999-12-31 23:59:59' or will it say '2000-01-01 00:00:00'? This is not an error in and of itself, but an analyst who does not expect this behaviour may be confused by it. If the analyst is working from an instruction to 'look for files modified up to the end of the year', there is a small probability that files modified at the very turn of the year may be omitted because they are presented as belonging to the following year. Whether that is a real problem or not will depend on the actual investigation, and on if and how such time-limit effects are handled by the analyst.

These tests are split into four subgroups, testing rounding to minutes, seconds, milliseconds and microseconds, respectively. For each group, two directories corresponding to the main unit are created, one for an even unit, the other for an odd unit. (The 'rounding to minutes' test uses 2001-01-01 00:00 and 00:01.) In each of these directories files are created for the full range of the test (0-60, in the case of minutes), and timestamped according to the Tmin/Tmax convention already mentioned. If the translation rounds upwards, or rounds to the nearest even or odd unit, this will be possible to identify from this test data. More complex rounding schemes may not be possible to identify.

Test 4: Sorting

These tests are somewhat related to the rounding tests, in that they examine how the limited precision of a timestamp translation affects sorting a number of timestamps into ascending order. For example, a translation scheme that includes minutes but not seconds, and that sorts events by the translated string only, will not reliably produce a sorted order that follows the actual sequence of events. Take the two file timestamps 00:00:01 (FILE1) and 00:00:31 (FILE2). If the translation truncates timestamps to minutes, both times will be shown as '00:00'. If they are then sorted into ascending order by that string, the analyst cannot tell whether FILE1 was timestamped before FILE2 or vice versa. And if such a sorted list appears in a report, a reader may draw the wrong conclusions from it.

The tests are subdivided into sorting by seconds, milliseconds, microseconds and nanoseconds, respectively. Each subtest provides 60, 100 or 10 files with timestamps arranged in four different sorting orders. The names of these files are arranged in an additional order, to avoid the situation where files already sorted by file name are not rearranged by a sorting operation. Finally, the files are created in random order. The files are named on the following pattern: <nn>_C<nn>_A<nn>_W<nn>_M<nn>, e.g.
'01_C02_A07_W01_M66'. Each letter indicates a timestamp field (C = created, A = last accessed, W = last written, M = last modified), with <nn> indicating the particular position in the sorted sequence in which that timestamp is expected to appear. The initial <nn> adds a fifth sorting order (by name), which allows the tester to 'reset' to a sorting order that is not related to timestamps. Each timestamp differs only in the corresponding subunit: the files in the 'sort by seconds' test have timestamps that are identical except for the seconds part, and the 'sort by nanoseconds' files differ only in the sub-microsecond information. (As the timestamp only accommodates 10 separate sub-microsecond values, only 10 files are provided for this test.)

The test consists of sorting each set of files by each of the timestamp fields: if sorting is done by the particular subunit (second, millisecond, etc.) the corresponding part of the file name will appear in sorted order. Thus, an attempt to sort by creation time in ascending order should produce a sequence in which the C-sequence in the file names also appears in order: C00, C01, C02, ... etc., and no other letter sequence should appear in the same ascending order. An implementation with limited precision in the translated string, but which sorts according to the timestamp values, will sort perfectly even when sorting by nanoseconds is tested. If the sort is by the translated string, sorting will be correct only down to the smallest translated unit (typically seconds), and further attempts to sort by smaller units (milliseconds or microseconds) will not produce a correct order. If an implementation that sorts by translated string also rounds timestamps, this will have additional effects on the sorting order.

Test 5: Special tests

In this part, additional timestamps are provided for testing. Some of these cannot be created by the documented system calls, and need to be created by other methods:

- timestamps that can be set by the system calls, but that may not have been exercised by the other tests;
- a timestamp that should translate to 1601-01-01 00:00:00.0000000, but that cannot be set by any of the system calls tested;
- timestamps that cannot be set by any system call, and that need to be edited into place by hand prior to testing. These values test how the translation mechanism copes with timestamps that produce error messages from the FileTimeToSystemTime() call.

TZ & DST: Time zone and daylight saving time adjustments are closely related to timestamp translation, but are notionally performed as a second step, once the UTC translation is finished. For that reason, no such tests are included here: until it is reasonably clear that the UTC translation is done correctly, there seems little point in testing additional adjustments.

Leap seconds: The NTFS timestamp convention is based on UTC, but ignores leap seconds, which are included in UTC. For a very strict test that the translation mechanism does not take leap seconds into account, additional tests are required, probably on the same pattern as the tests for leap years, but at a resolution of seconds. However, if leap seconds had been included in the translation mechanism, this should be visible in the coverage tests, where the dates from 1972 onwards would gradually drift out of synchronization (at the time of writing, 2013, the difference would be 25 seconds).

Day of week: No tests of day-of-week translation are included.

A Windows program that creates an NTFS structure corresponding to the tests described has been written, and used to create an NTFS image.
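Before turning to results, note that the rounding and sorting effects Tests 3 and 4 are designed to expose can be reproduced in a few lines of Python. This is a hedged illustration, not any tested tool's actual code, and the display formats are my assumptions.

    from datetime import datetime, timezone

    def truncated(dt):                      # truncation to whole seconds
        return dt.strftime("%Y-%m-%d %H:%M:%S")

    def rounded(dt):                        # rounding to the nearest second instead
        secs = round(dt.timestamp())
        return datetime.fromtimestamp(secs, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")

    t = datetime(1999, 12, 31, 23, 59, 59, 600000, tzinfo=timezone.utc)
    print(truncated(t))   # 1999-12-31 23:59:59
    print(rounded(t))     # 2000-01-01 00:00:00 -- a different year, even millennium

    # Test 4's point: sorting by a minute-precision string loses the real order.
    file1 = datetime(2001, 1, 1, 0, 0, 1, tzinfo=timezone.utc)    # 00:00:01
    file2 = datetime(2001, 1, 1, 0, 0, 31, tzinfo=timezone.utc)   # 00:00:31
    to_minutes = lambda d: d.strftime("%H:%M")
    print(to_minutes(file1) == to_minutes(file2))  # True: FILE1/FILE2 order is lost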
The 'Special tests' directory in the generated image has been manually altered to contain the timestamps discussed. Both the source code and the image file are (or will very shortly be) available from SourceForge as part of the 'CompForTest' project.

It must be stressed that the tests described should not be used to 'prove' that some particular timestamp translation works as it should: all the test results can show is that it does not work as expected.

As the test image was being developed, different tools for the examination of NTFS timestamps were tried out. Some of the results (such as incomplete coverage) were used to create additional tests. Below, some of the more interesting test results are described. It should be noted that there may be additional problems that affect the testing process. In one tool test (not included here), it was discovered that the tool occasionally did not report the last few files written to a directory. If this kind of problem is present in other tools as well, test results may be incomplete. Notes on rounding and sorting have been added only if rounding has been detected, or if sorting is done at a different resolution than that of the translated timestamp.

Autopsy: 1970-01-01 00:00:01 to 2106-02-07 06:28:00

1970-01-01 00:00:00.0000000 is translated as '0000-00-00 00:00:00'. Timestamps outside the specified range are translated as if they were inside the range (e.g. timestamps for some periods in 1673, 1809, 1945, 2149, 2285, etc. are translated as times in 2013). This makes it difficult for an analyst to rely only on this version of Autopsy for accurate time translation. In the screen dump below, note that the 1965-1969 timestamps are translated as if they were from 2032-2036.

EnCase Forensic 6.19.6: 1970-01-01 13:00 to 2038-01-19 03:14:06

1970-01-01 00:00 to 12:00 are translated as '' (empty). The period 12:00 to 13:00 has not been investigated further. Remaining timestamps outside the specified range are also translated as '' (empty). The screen dump below shows the hours view of the cut-off date 1970-01-01 00:00. The file names indicate the offset from the baseline timestamps, HH+12 indicating an offset of +12 hours from 00:00. It is clear that from HH+13 onwards, translation appears to work as expected, but for the first 13 hours (00 to 12), no translation is provided, at least not for these test points.

ProDiscover Basic: 1970-01-02 to 2038, 2107 to 2174, 2242 to 2310, 2378 to 2399 (all ranges examined)

Timestamps prior to 1970-01-02, and some after 3000, are uniformly translated as 1970-01-01 00:00, making it impossible to determine the actual time for these ranges. Timestamps after 2038 and outside the stated ranges are translated as '(unknown)'. Translation truncates to minutes. The following screen dump shows the uniform translation of early timestamps as 1970-01-01, as well as the '(unknown)' entries and the reappearance of translation in the 2300 period. (The directories have also been timestamped with the minimum and maximum times of the files placed in them.)

WinHex 16.6 SR-4: 1601-01-01 00:00:01 to 2286-01-09 23:30:11

1601-01-01 00:00:00.0000000 and .0000001 are translated as '' (blank). Timestamps after 2286-01-09 23:30:11 are translated partly as '?', partly as times within the specified range, the latter indicated in red. The cut-off time 30828-09-14 02:48:05 is translated as '' (blank).

Two additional tests were performed on tools not intended primarily for forensic analysis: the Windows Explorer GUI and the PowerShell command line.
Neither of these provides for additional time zone adjustment: their use is governed by the current time configuration of the operating system. In the tests below, the computer was reset to the UTC time zone prior to testing.

PowerShell 1.0: 1601-01-01 00:00:00 to 9999-12-31 23:59:59

Timestamps outside the range are translated as blank. Sorting is by the binary timestamp value. The command line used for these examinations was:

    Get-ChildItem path | Select-Object name,creationtime,lastwritetime

for each directory that was examined. Sorting was tested by using:

    Get-ChildItem path | Select-Object name,creationtime,lastwritetime,lastaccesstime | Sort timefield

The image below shows sorting by LastWriteTime and nanoseconds (or, more exactly, tenths of microseconds). Note that the Wnn specifications in the file names appear in the correct ascending order.

Windows Explorer GUI: 1980-01-01 00:00:00 to 2107-12-31 23:59:57

2107-12-31 23:59:58 and :59 are shown as '' (blank). Remaining timestamps outside the range are also translated as '' (blank). It must be noted that this timestamp range refers only to the times shown in the GUI list. When the timestamp of an individual file is examined in the file property dialog (see below), the coverage appears to be the full range of years. Additionally, the translation on at least one system appears to be off by a few seconds, as the end of the time range shows. Additional testing is required to say whether this happens on other Windows platforms as well. However, when the file '119 - SS+59' is examined via the Properties dialog, the translation is as expected. (A little too late for correction, I see that the date format here is Swedish; I hope it is clear anyway.)

Interpretation of results

In terms of coverage, none of the tools presented above is perfect: all are affected by some kind of restriction on the time period they translate correctly. The tools that come off best are, in order of the time range they support:

- PowerShell 1.0 (1601-9999)
- Windows Explorer GUI (1980-2107)
- EnCase 6.19 (1970-2038)

Each of these restricts translation to a subset of the full range, and shows the remaining timestamps as blank. PowerShell additionally sorts by the full binary timestamp value, rather than by the time string actually shown. The Windows Explorer GUI also appears to suffer from a two-second error: the last second of a minute, as well as part of the immediately preceding second, is translated as belonging to the following minute. This affects the result, but as this is not a forensic tool, it has been discounted.

The tools that come off worst are:

- ProDiscover Basic
- WinHex 16.6 SR-4

Each of these shows unacceptably large errors between all or some file timestamps and their translations. ProDiscover comes off only slightly better in that timestamps up to 1970 are all translated as 1970-01-01, and so can at least be identified as suspicious; at the other end of the spectrum, though, the translation error is approximately the same as for Autopsy: translations are more than 25,000 years out of register. WinHex suffers from similar problems: while it flags several ranges of timestamps as '?', it still translates many timestamps completely wrong.

It should be noted that there are later releases of both Autopsy and ProDiscover Basic that have not been tested. It should probably also be noted that additional tools have been tested, but that the results are not 'more interesting' than those presented here.

How to live with a non-perfect tool?
- Identify whether, and to what extent, a particular forensic tool suffers from the limitations described above: does it have any documented or otherwise discoverable restrictions on the time period it can translate, and does it indicate out-of-range timestamps clearly and unambiguously, or does it translate more than one timestamp into the same date/time string?

- Evaluate to what extent any shortcomings can affect the result of an investigation, in general as well as in particular cases, and also to what extent already existing lab practices mitigate such problems.

- Devise and implement additional safeguards or mitigating actions in cases where investigations are significantly affected.

These steps could also be important to document in investigation reports. In daily practice, the range of timestamps is likely to fall within the 1970-2038 range that most tools cover correctly; the remaining problem would be if any timestamps outside that range appeared in the material, and the extent to which they are recognized as such and handled correctly by the analyst.

The traditional advice, 'always use two different tools', turns out to be less than useful here, unless we know the strengths and weaknesses of each of the tools. If they happen to share the same timestamp range, we may not get significantly more trustworthy information from using both than we get from using only one.
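The accuracy requirement stated earlier (a translated string should not turn back into a very different time) also suggests a simple automated check, sketched below. Here 'tool_translate' is a stand-in for whatever output the tool under test produces, not a real API, and datetime_to_filetime is the helper from the earlier sketch; the string format is an assumption.

    from datetime import datetime, timezone

    def round_trip_ok(ft, tool_translate, tolerance_seconds=1.0):
        # Translate with the tool, parse the string back, measure the drift.
        shown = tool_translate(ft)                      # e.g. '2013-04-06 12:00:00'
        parsed = datetime.strptime(shown, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
        drift_ticks = abs(datetime_to_filetime(parsed) - ft)
        return drift_ticks <= tolerance_seconds * 10_000_000

A failing result flags translations of the ProDiscover/WinHex kind, where the drift is measured in centuries rather than in fractions of a second.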
The business continuity industry has heard a lot about Plan, Do, Check, Act (PDCA) recently. Nearly every emerging standard is following this approach, from BS 25999 and NFPA 1600 (2010 edition) to the new American business continuity standard being created by ASIS. However, there seems to be a lot of confusion about what PDCA is – and what it means for business continuity.

The PDCA model is the basic building block of a management system, focused on weaving management-level decision making into traditional program practices. The "traditional" business continuity program activities, like a business impact analysis and plan development, fall mostly into one of the categories ("do") – see figure 1 – but the value of PDCA is that management input and feedback wrap around these activities, thus ensuring continuous improvement. The following sections break down the components of a PDCA approach to business continuity, with a focus on which activities will provide your organization's program the most value.

The "plan" process establishes objectives, targets, controls, processes and procedures for the program to deliver results in accordance with an organization's overall policies and objectives. Related to business continuity, this involves defining the business continuity management program, including identifying standards, creating a policy statement, appointing a program sponsor and steering committee, and establishing an initial program scope and risk tolerance. One of the most important "plan" activities is for executive management to identify what they want to protect and recover with respect to their business continuity program. These are typically stated as "critical products and services". Of note, phrasing the key objectives of the business continuity program as key organizational outputs helps to position the business continuity program in management's language, thus gaining understanding, support, and involvement from management.

The "do" process implements and operates the business continuity policy, controls, processes and procedures. This includes a number of actions to understand, strategize, plan, and test the organization for business continuity events. As mentioned earlier, the "do" process is where the common business continuity tasks are performed. The first step in "do" is to perform a business impact analysis, or BIA, as well as a risk assessment. The business impact analysis maps critical products and services to individual departments and activities, and seeks to identify recovery objectives for each. The purpose of the risk assessment is to describe the outcomes of disruptive events and the suitability of current-state controls to prevent a disruptive event from occurring, as well as control recommendations to align with the organization's risk tolerance. The second step is the identification of risk mitigation, response and recovery strategy options and, once selected, the implementation of these risk treatments. The third step of "do" involves developing plan documentation, which should be written in a way that enables repeatable response and recovery performance, regardless of the experience of the person leading the effort. The last step is organization-wide training and validation of strategies and plans through exercises, and the initiation of program maintenance activities.

The "check" process monitors and reviews performance against established management system objectives and policies and reports the results to management for review.
The program should be subject to internal review to measure program performance against pre-defined policies and objectives. The results of these assessments should be presented to management via the established business continuity steering committee. You may be thinking: my organization's management will hardly meet to establish objectives, let alone meet to review them. If this is the case for your organization, implementing a management system is likely the exact solution that you need. Instead of merely presenting metrics based on BIA and plan reviews and maintenance, the "check" process allows the business continuity program to communicate program performance in a language that management understands. Presenting the organization's continuity capability in terms of alignment to pre-defined organizational outputs (critical products and services) will help management understand how the program is performing in relation to what is important to them. This also helps them better recognize how key risks would actually impact the organization, allowing them to accept each risk or take action on it. In short, the "check" process ensures that management is accountable for the program and the organization's overall business continuity capability.

The "act" process maintains and improves the program by taking preventive and corrective actions, based on the results of management review, and by re-appraising the scope, business continuity policy, and objectives. This includes updating and maintaining the corrective actions / preventive actions (CAPA) list and a post-incident review process, as well as ensuring continuous improvement of the business continuity program in order to constantly align it to management expectations.

Should My Organization Use the PDCA Model?

In many organizations, management systems concepts are already incorporated into existing programs, such as quality management. Thus, it is likely that your executive management team is already familiar with management systems concepts and understands its role in operating within a management system. Consequently, implementing a management system framework connects business continuity planning efforts to commonly understood and defined business objectives. Business continuity efforts are enhanced by management system-oriented models that avoid professional jargon and focus on business outcomes. Framing the business continuity program in terms of key organizational outputs is meant to focus on the objectives of management and to speak in the language of the organization. Because of this, the business continuity program is able to communicate real capability and output, rather than focusing on micro, business continuity-specific projects.

What are the Benefits Associated with the PDCA Model?

There are many benefits to implementing a business continuity management system, many of which have already been described in this article. However, they can be summarized as two key benefits: a business continuity management system compels management to be accountable for the outcome of the program, and it provides an accepted approach for external validation. The key to a successful business continuity management system is gaining and maintaining the interest and support of executive management.
Guidelines on how to do this have been discussed throughout this article, but the main tip is to ensure the program speaks executive management's language (key organizational outputs) and to communicate program performance and tasks in terms of alignment to the continuity capability of those organizational outputs. By communicating in this way, management understands the need for their continued interest in the program. Management will be presented with choices to accept or take action on risks that directly contribute to the success and continuity of key organizational outputs. A business continuity management system forces strong alignment between what management is thinking and what the business continuity program is doing and communicating – it actually compels management to be accountable to the program.

The second key benefit of implementing a business continuity management system is that it inherently solves the "audit a plan" problem that many organizations encounter. Has your organization been asked for a copy of your business continuity plan to prove you're prepared? We all know that a thick business continuity plan doesn't equate to organizational capability. Unfortunately, third parties often have few options other than to review the plan and evaluate it. Implementing a business continuity management system, especially one certified by a third party, changes the focus from a "check the box" viewpoint (auditing a plan) to actually making sure that your organization has a real, useful, and capable business continuity program (reviewing the performance of the management system). This enables the organization to validate and communicate program performance both internally (to executive management) and externally (to key stakeholders) – in many cases just by showing proof of certification!

How Can My Organization Implement a PDCA Model or Incorporate PDCA Into Our Existing Program?

The following guidelines can assist you in implementing a business continuity management system in your organization:

- Establish a cross-functional management steering committee
- Talk in the language that management understands – how they view the business and what is important to them
- Make business continuity relevant and easy to understand – simplicity is key!
- Focus on organizational outputs (key products and services)
- Establish downtime tolerances for key products and services
- Explain the current capability of key products and services in order for management to take action or accept risk
- Ensure objectives are realistic and management is willing to spend the resources needed to achieve them

With the growing popularity and continued success of business continuity management systems, this approach and framework is proving to be the future of business continuity. Organizations struggling with capturing and maintaining executive management's attention will realize tremendous value when implementing a business continuity management system. Input and continuous feedback will increase, as will the decisions and resources necessary to meet management's expectations. The business continuity management systems framework has one goal: to provide a business continuity program that works, is flexible, and is efficient.

Avalution Consulting: Business Continuity Consulting
Since the first personal computers started showing up in classrooms in the mid-1970s, schools have been struggling to figure out what to do with them. It wasn't uncommon to find donated, never-opened and eventually outdated computers in classroom closets because no one knew how to set them up, use them or fix them. But computers have become easier to use, less expensive and ubiquitous in everyday life. And public schools are increasingly seeing the benefits of bits and bytes.

In San Francisco, district officials have embarked on a 15-year plan to transform schools with digital curriculum, universal wireless access, a laptop for every educator, and laptops or tablets for every classroom. A big part of it is training teachers. This year, 30 of the district's middle school math and science teachers have spent hours and hours learning how to incorporate hundreds of Salesforce.com-donated iPads, already chock-full of free and purchased apps, into the learning process. Next year, 50 more will get the same training.

In one math class this year, textbook learning and solving 20 problems for homework went out the door. Instead, the teacher told the students to create a catering budget for a movie set and present their bid for the job, said Michael Bloemsma, a program administrator in the San Francisco school district's education technology department. The students were then set loose with their iPads to research the price of food and make a presentation using Skitch, Keynote, Educreations or Explain Everything software programs. The teacher didn't have to spend much time showing the students how to use the apps. Like most middle school kids with an innate sense of technology, they figured it out.

The district is partnering with 3-D design software maker Autodesk, which provides training and free software to schools. On Tuesday, the 30 middle school teachers in the first training cohort filled a conference room at the company's San Francisco office to learn about possible applications of the software. The idea is to expose students to technology used in the workforce now and likely commonplace in the future, said Tom Joseph, senior director of education at Autodesk. Creating a digital part for a broken pair of eyeglasses and printing it on a 3-D printer, for example, will be relatively simple in the near future, he said. "You don't need to be a geek," he said. "You can use this in our everyday lives."

And kids need to learn how, district officials said. Teacher Steve Temple uses the software in his science classes at San Rafael High School. The goal isn't to teach them the software, but how to use the software to solve problems. "We are in alignment with what industry and higher education were doing," he told the 30 teachers during the training. And the best part? The students to a large degree taught themselves or each other how to use the 3-D modeling program. He described himself as a facilitator who challenges students to make robots or solve engineering conundrums. "If you think you're the only avenue to knowledge as a teacher, you really have to rethink that," Temple said. "That's hubris."

Not surprisingly, Silicon Valley is paying close attention to the increased use of technology in classrooms. Education technology is an $8 billion industry in the United States, according to the Software and Information Industry Association. The number of education apps and gadgets grows every day, with venture capital pouring millions into what are being called ed-tech startups.
Yet schools are no longer easy targets when it comes to buying technology just because it's shiny and new, even if it might sit on the shelf. "Education leaders are becoming more sophisticated," the association's analysts wrote in a 2013 report. "They are not looking for companies to sell them technology products but are instead looking for partners who understand their challenges and can help provide matching solutions." In other words, schools want products that improve learning and won't go to waste in the back of a classroom closet or be used as a glorified piece of paper. "We don't want teachers to basically put their worksheets on the iPad," Bloemsma said.

Sales pitches are evaluated with a wary eye, said Michele Dawson, district supervisor of education technology. "Trust me," she said. "We get a plethora of people who want to show us their products." To be selected, a product has to meet stiff criteria, giving students and teachers the tools for critical thinking, creativity, the ability to communicate or share information, and feedback on student understanding, she said.

But the stuff is secondary. Training teachers how to teach with technology is even more important, Bloemsma said. "It's more student centered," he said. "This is scary for teachers, giving up control."

San Francisco veteran teacher Karen Clayman is among the 30 trainees this year. She has always loved computers. Her first computer was an Apple IIe, first released in 1983. But using technology in her classes at Giannini Middle School was a little intimidating. With 35 years in the classroom, she had seen the early attempts and the resulting disasters. This year, her students are using iPads to create class presentations, share documents and use other applications that allow them to be creative in their class projects. She can see what they are each doing and control their iPads from hers, or post their work on a digital whiteboard. "They love it," she said. The only downside is the reliance on power. "If I didn't have electricity, if we have a power failure," she said, shaking her head, "I'd have to remember how to use a (manual, write-with-a-pen) whiteboard."

©2014 the San Francisco Chronicle
NASA has posted an interesting group of images it says were rendered by an unnamed artist from its Jet Propulsion Laboratory that show what the space agency, or at least the artist, thought Mars might look like up close.

From NASA: "'Life on Mars' was envisioned as low to the ground, symmetrical and simple. The artist drew silicon-based life forms, probably coached by others, perhaps scientists, who had thought about such possibilities. Peculiar saucer-like shapes stood only slightly above ground level, root-like structures reached outward for growth resources; a bundle of cones faced many directions for heat, light or food. Instead of reality, the images embodied the artist's hope and anticipation of what future Martian exploration would find."

NASA says the artist's composite stems from a series of renderings from 1975 and took into account what was known about Mars that year. "Compared to Earth, Mars is further away from the light of the sun, very cold and very arid, and had a thin atmosphere rich in carbon dioxide but little nitrogen, an environment distinctly inhospitable to complex, Earth-like, carbon-based life forms," NASA stated.
In this installment, we'll look at some techniques for improving the performance of Distance-Vector (D-V) routing protocols. As previously discussed, if a routing loop occurs between two routers, the routing updates will bounce between those routers. This is sometimes referred to as "routing feedback". The metric for the prefix will count up to the maximum (as determined by the number of bits used to track the metric), roll over to zero, and then start counting upwards again (the "count to infinity"). This process, and thus the loop, will continue forever, and any data packet caught in the loop will ricochet between the routers until its IP TTL reaches zero, at which time it is discarded. This is a waste of bandwidth in the routers' backplanes, and on the link connecting them.

Defining a Maximum

One thing we can do to break the loop is to enforce a rule that if a metric exceeds a specified maximum, the route is considered unreachable and removed from the routing table. Once the metric exceeds the max, the route is removed from the routing table and is no longer advertised to any neighbors. Eventually the route times out of the neighbors' routing tables as well, and the network is converged. For IP RIP, for example, a metric of 16 or more hops is considered "infinity". With an update timer of 30 seconds and a flush timer of 4 minutes, convergence for RIP should occur within 12 minutes.

As we have seen, defining a maximum stops a routing loop from existing forever, but it could still last quite a while. What we'd like to do is prevent the loop from forming between the two routers in the first place. This can be accomplished by enforcing what is known as the "split horizon" rule. The split horizon rule says that "it is never useful to advertise information back in the direction from which it came". What this means is that if a router learns a route on one interface, and decides that's the best way to reach that destination, it won't advertise that route back out on that same interface. It will, however, advertise the route on the other interfaces. Thus, instead of simple-mindedly sending its routing table out on every interface, the router actually filters the table in accordance with the split horizon rule, on a per-interface basis. This process of sending information out on all interfaces other than the one on which it was received is called "flooding".

The split horizon rule has two effects. First, since a router can no longer advertise a "best" route back to the router from which it learned it, no loop can form between two routers. Second, the routers are not wasting each other's time and bandwidth with redundant advertisements (why tell them what they just told you?). Note that it's not a violation of split horizon for a router to advertise a better route to a prefix to neighbors from which it has learned an inferior route. In fact, it's required! For IP RIP, which has a 4-minute flush timer and a 30-second update timer, the split horizon rule should reduce the convergence interval to less than 5 minutes.

Another refinement is what's called "route poisoning". The idea is that when a router learns that a route is unreachable, instead of just removing it from the routing table and no longer advertising it, it advertises the route with the maximum metric (which the routers treat as "infinity"), and then removes it. The effect of this is that routers are now able to explicitly inform their neighbors that they can no longer get packets to a prefix, as the sketch below illustrates.
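As a rough sketch of how these rules shape the per-interface update, consider the following Python fragment. The data structures are illustrative, not any router OS's internals; the metric limit of 16 is RIP's, and poison reverse (described next) is shown as a variant of plain split horizon.

    INFINITY = 16   # RIP treats a metric of 16 as "unreachable"

    def build_update(routing_table, out_iface, poison_reverse=False):
        """Filter the routing table for advertisement out of one interface."""
        update = {}
        for prefix, (metric, learned_iface) in routing_table.items():
            if learned_iface == out_iface:
                if poison_reverse:
                    update[prefix] = INFINITY   # advertise back as unreachable
                # plain split horizon: otherwise simply stay silent
            else:
                update[prefix] = min(metric, INFINITY)
        return update

    table = {"10.1.0.0/16": (2, "eth0"), "10.2.0.0/16": (3, "eth1")}
    print(build_update(table, "eth0"))                       # {'10.2.0.0/16': 3}
    print(build_update(table, "eth0", poison_reverse=True))  # adds '10.1.0.0/16': 16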
With route poisoning, instead of waiting for the flush timer to remove the route from the routing tables, it is removed at the next update. For IP RIP, this would be within 30 seconds.

An enhancement to route poisoning goes by the name "poison reverse". What this means is that when a router poisons a route (advertises it as unreachable), a neighbor that now has no route to that destination will echo the poisoning back. This makes sure that both routers know that the route is unreachable, and there is no way that a loop between those two routers could form. Note that poison reverse overrides the split horizon rule, but because the routers are advertising "bad news", it's allowed.

Continuing in the spirit of shortening the convergence time, we could also have the routers advertise changes as they occur. In other words, rather than waiting until the next update interval to inform their neighbors that the status of a route has changed, tell them right now using what is referred to as a "triggered" update (also known as a "triggered incremental" or "flash" update). A triggered update contains information only about the route whose status has changed. This change could be that the route has become unreachable (route poisoning), that it has become reachable, or that its metric has changed. Flash updates occur in addition to the periodic flooding of the routing tables, and can reduce the convergence time to nearly zero.

Next time, we'll continue our discussion of enhancements to Distance-Vector protocols.

Author: Al Friebe
A Virtual Private Network, or VPN, is used to connect to a private network, such as a company's internal network, over public wires. In other words, your traffic appears to come from an IP address other than your own, so you appear to be somewhere other than where you actually are. Pretty nifty.

The use of VPNs started as a way for work-at-home users to access their workplace network just as if they were working in the office. The benefits now reach farther than work-from-home capabilities. It is more difficult for advanced malware to install itself through open ports, because the computer always appears to be another system somewhere else. This other machine is often a server that is more heavily protected and harder to attack. Not a surefire way to avoid attack, but certainly a viable preventative option. This presents an extra layer of protection, basically playing a little hide-and-seek with potential malware.

Increased mobile internet usage will eventually open new vulnerabilities for hackers to exploit, and VPNs could be the answer to avoiding attacks on mobile devices as well. The need for mobile phone VPNs could be the next big thing for data protection.

If you would like to educate yourself in more detail about the information presented in this blog post, please visit: You Need a VPN, or You're Screwed
Sixty years after it was built and nearly a decade after it was seriously damaged in a major earthquake, the San Francisco-Oakland Bay Bridge is about to get a major face-lift. The entire east span, a two-mile stretch from Yerba Buena Island to Oakland, will be replaced with a structurally superior single-tower suspension bridge designed to withstand an 8.5 magnitude quake on the San Andreas Fault. But unlike their predecessors six decades ago, engineers designing the bridge's new incarnation have the benefit of a wealth of computing technologies to facilitate the $1.4 billion project. All facets, from conceptual planning through detailed design and construction, rely on information technology applications covering such areas as computer-aided design (CAD), three-dimensional structural analysis, seismic modeling, animation and digital imaging. The California Department of Transportation (Caltrans), with the Bay Area Metropolitan Transportation Commission (MTC) and Bay Bridge Design Task Force, recently finalized the conceptual design of the new bridge structure. Caltrans and its consortium of design consultants, led by T.Y. Lin International, are readying hardware and software to take the project from its current partial design through scheduled completion in 2003. "We're aggressively using technology to help us manage this very large-scale project," said Denis Mulligan, Caltrans' toll bridge program chief. "It's how we're achieving our goal, which is to make the Bay Bridge seismically sound."

Technology as a Marketing Tool

Information technology made its first contribution to the project early in the planning phase, as a tool to help transportation officials sell the proposed design to citizens' groups and others concerned about the project's visual impact. Using a computer-generated 3-D model of the Bay Area, Caltrans planners were able to simulate views of the bridge from any point around the bay. Animation portrayed proposed structures as seen by motorists crossing from either end. Graphical representations of four proposed designs were placed on the MTC Web site, where visitors could vote on their favorite design.

From Concept to Construction

Long before construction crews begin laying the foundation piers for the new bridge sometime in 2000, designers will have "built" and analyzed every inch of the span in the form of computer models. Advances in desktop PC computing power now enable engineers to readily conduct complex structural analyses of all critical bridge components early in the design phase. Using proprietary software developed in the last five years by ADINA Research and Development Inc., of Watertown, Mass., structural engineers create an onscreen stick model of the new bridge span. The software subjects the bridge model to timed pulses programmed to simulate a quake of a desired magnitude. The results allow engineers to see graphically where bridge components might break apart so that structural connections can be strengthened. "The design of bridges has changed dramatically," said Mike Whiteside, Caltrans' assistant contract manager for the Bay Bridge replacement project. "The Cypress Freeway failures caused by the 1989 Loma Prieta earthquake pushed Caltrans to the forefront of new computer design techniques that are now used throughout the country." Similar computational analyses were used successfully by T.Y. Lin in designing seismic retrofits for the Bay Bridge's more famous neighbor, the Golden Gate Bridge, completed in 1937.
Structural members of the lower portions of the main tower legs were modeled and rocked with simulated earthquakes to see how the tower legs would lift off their concrete base during seismic events of magnitudes up to 50 percent stronger than those the bridge was originally designed to withstand. But unlike bridge designs of the 1930s, which consisted of large numbers of smaller steel components riveted together, newer bridge designs use larger steel and concrete members that can be readily modeled on most high-performance PCs. The models created for the Golden Gate Bridge analysis consisted of some 67,000 parts, requiring the processing power of a Cray C90 computer.

Computing power is also playing a key role below the water line, in the design of the foundations for the new bridge's 530-foot support tower. A barge serving as home to a floating soils laboratory allows geotechnical engineers to gather real-time data from borings of mud formations more than 200 feet below the bottom of the bay. The results of critical soil strength tests are transmitted by wireless link to onshore offices, where they are used by engineers designing the foundations. Sonic methods are also used to create a 3-D map of the topography of the bay floor. "This use of technology at the boring site is helping tremendously to speed up the geotechnical work and foundation design," said Steve Hulsebus, Caltrans' contract manager for civil and environmental engineering. "We can position the barge over several proposed foundation locations and compare the data very quickly. It's facilitating our aggressive schedule."

Traffic models will also be prepared, letting designers compare the bridge's vehicle capacity to known demand data. The new bridge's side-by-side roadway design will carry an estimated 280,000 motorists each day.

Linking the Design Team

With more than a dozen engineering and planning firms working on the project, Caltrans officials have established strict policies for managing the transfer of data between design groups. And while no dedicated WAN has been set up for the project, a Web site, accessible to members of the design team only, has been established to serve as a vehicle for exchanging hundreds of CAD drawing files, schedules, meeting agendas, minutes, design calculations and other project data that will be generated during the planning and design phases. The agency also uses teleconferencing to hold weekly coordination meetings between its headquarters in Sacramento, Calif., and design engineers in the San Francisco Bay Area, 85 miles away. "The coordination challenges alone are daunting," Mulligan said. "But our consultant teams are required by contract to use common platforms for exchanging information. We regulate that very carefully." Those platforms include MicroStation design and drafting software by Bentley Systems Inc., ADINA structural modeling and analysis software, an Oracle-based cost-estimating database and Primavera project-scheduling software. In late August, Caltrans agreed to purchase $2 million in Bentley MicroStation software, bringing its total of CAD workstations up to 3,000. Under the deal, Caltrans will receive software licenses for new and recently upgraded PCs, training and product support from Applied GeoDynamics, an independent value-added reseller of Bentley products.
Mulligan admits the myriad hardware and software programs used in today's bridge design would no doubt raise an eyebrow or two among the slide-rule-toting bridge designers of yesteryear, but he sees the reliance on technology as a natural progression because of the amount of information needed for design and construction. "We know more about earthquakes and their effects than they did 60 years ago," Mulligan said. "The technology allows us to model ground motion during a seismic event and see the impact on intricate bridge connections. That's a tremendous advantage."

Those advantages will be exploited during construction as well. Caltrans bridge inspectors will use portable computers and digital cameras to document each phase of construction, including demolition of the old bridge once the new structure is opened to traffic. Digital images captured during daily construction inspections will be readily distributed electronically throughout the organization, giving engineers and program managers the opportunity to see their designs evolve into concrete and steel. The digital images are also valuable tools in resolving claims with contractors, Mulligan said, because inspectors can use date-stamped digital images to identify exactly when problems occurred.

Sight of the Times

When the mammoth span opens to the public early next century, its distinctive single tower will no doubt establish a new visual landmark for the San Francisco Bay Area. But to Mulligan and his team of engineers, programmers and contractors, the modern structure will also serve as a monument to the vital role the tools of the Information Age played in making it safe for motorists.

Tom Byerly is a Sacramento, Calif.-based writer.
Ercelen N., Genetic Disease Diagnostics Center | Turtar E., Genetic Disease Diagnostics Center | Gultomruk M., Genetic Disease Diagnostics Center | Comert H., Genetic Disease Diagnostics Center | and 3 more authors. Genetics and Molecular Research | Year: 2011

Preimplantation genetic diagnosis is a preventive approach for identifying genetic abnormalities in early stages of reproduction. We used preimplantation genetic aneuploidy screening in 230 cycles of patients with indications of advanced maternal age, recurrent implantation failure, recurrent spontaneous abortions, or severe male factor. Biopsied blastomeres from embryos with six to eight blastomeres on day 3 were fixed, and fluorescence in situ hybridization was utilized on chromosomes 13, 16, 18, 21, 22, X, and Y. Among 945 morphologically normal embryos, 314 were diagnosed as chromosomally normal. Trisomy and monosomy were observed in 36% of the cases (18% each). Embryo transfer was used in 144 cycles, resulting in 41 pregnancies. Thirty-seven healthy babies were delivered, with a take-home baby rate of 24.2% and an implantation rate of 22%. We recommend preimplantation genetic aneuploidy screening as a valuable technique to select chromosomally normal embryos in order to avoid the multiple pregnancies that result from the multiple embryo transfers normally necessary to ensure pregnancy in poor-prognosis in vitro fertilization patients. © FUNPEC-RP.
Researchers at North Carolina State University have harnessed the Jaguar supercomputer at Oak Ridge National Laboratory to examine a possible link between protein misfolding and Parkinson's disease. The team behind the research initially pointed to copper as a primary ingredient responsible for the misfolding of a protein called alpha-synuclein. The researchers believe that certain metals, including copper, can create the fibrillar plaques that are known to be present in Parkinson's patients, but until recently they have not been able to ascertain how the copper binds to the protein.

As Frisco Rose, a Ph.D. candidate in physics and lead investigator behind the project, stated, the problem can best be described by harkening back to the image of a school playground. In a release today he told readers to imagine a large swing set with a number of kids swinging and holding hands, an image that parallels the protein itself. He says that "copper is the kid who wants a swing…there are a number of ways that copper could grab a swing or bind to the protein, and each of those ways would affect all the other kids on the swing set differently." In short, they wanted to find that binding process or, to continue the metaphor, the particular way the "kid" takes over a swing.

The team set to work developing a series of simulations that would look for the most likely binding scenario by examining the interactions of many thousands of atoms and the dynamics of the possible misfolding. This required runs of hundreds of thousands of CPU-hours and a new processing method, due to the size of the required calculations. According to the team, only Jaguar would have been able to tackle the problem, even with the refined computational methods. Their decision to make use of the supercomputer yielded answers to their questions, and definitive concepts that point to specific binding configurations that can lead to misfolding. These findings, which will be published this week in Nature Scientific Reports, could lead to innovative therapeutic methodologies for understanding and treating Parkinson's disease.
DELL EMC Glossary

Server SAN is software-led storage built on commodity x86 servers with direct-attached storage (DAS). It is designed to pool storage resources, consisting of more than one DAS, from multiple separate servers. Server SAN leverages standardized/commodity hardware (including hyper-converged appliances) and enables organizations to run applications on the same servers that provide compute and storage.

Who uses this technology, and why

Server SAN architectures are an alternative to traditional SAN architectures for service providers and enterprises looking to adopt software-defined storage infrastructures that are extremely scalable, flexible, cost-effective and simple to manage. It is a server-based design, but it has storage in it and, by design, should be very simple and extremely scalable.

How Server SAN works

DAS is usually internal to the server and cannot scale or be shared. Server SAN combines compute with storage and pools DAS from multiple separate servers. Communication between the servers' DAS occurs via a high-speed network connection (InfiniBand or low-latency Ethernet). The storage resources are managed by a software solution. The multi-protocol storage can utilize spinning disk and flash storage (HDD, SSD, PCIe flash).

Benefits of this technology

Server SAN offers many of the benefits of traditional SAN architectures while reducing complexity and costs. It offers potential benefits to application design, operation and performance due to the increased flexibility in how storage is mapped to applications. It removes the complexity of SAN network architectures, and its pooled resources deliver extreme scale in both performance and capacity.
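As a toy illustration of the pooling idea described under "How Server SAN works", the Python sketch below aggregates each node's DAS capacity into one logical pool. It is purely conceptual: the node names, capacities and the naive placement loop are my own assumptions, and no vendor API is implied.

    # Hypothetical per-server DAS capacities, in GB.
    nodes = {"node1": 4000, "node2": 4000, "node3": 2000}

    def pool_capacity(nodes):
        # The software layer presents the sum of all DAS as one pool.
        return sum(nodes.values())

    def allocate(nodes, gb):
        # Naive spread of an allocation across nodes, largest free space first.
        if gb > pool_capacity(nodes):
            raise ValueError("pool exhausted")
        placement = {}
        for name in sorted(nodes, key=nodes.get, reverse=True):
            take = min(nodes[name], gb)
            if take:
                placement[name] = take
                nodes[name] -= take
                gb -= take
        return placement

    print(pool_capacity(nodes))   # 10000 -- a single 10 TB logical pool
    print(allocate(nodes, 5000))  # e.g. {'node1': 4000, 'node2': 1000}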
If you have ever used Sysinternals' Process Monitor, chances are high you were a little intimidated when you looked at your first capture: it probably contained hundreds of thousands of registry and file system events, generated in a minute or less. That amount of activity must surely indicate high system load – but strangely, very often it does not. Looking at the hard disk LED you will only see an occasional flickering, even though thousands of file system events are captured per second. How is that possible? Read on to find out.

The main problem with measuring I/O operations per second (IOPS) is how to define what an I/O operation (short: IO) actually is. Depending on where you look, IOs can be entirely different things. Take a typical application. When it wants to write to a file it calls the appropriate function from the framework the developer chose to make his life easier. In the case of C++ that function might be fputs. From the application's point of view, each call to fputs constitutes an IO. But that does not mean that the IO even reaches the disk. There is still a long way to go to permanent storage. On the way, the IO could be cached, redirected, split, torn apart and put back together. Let's travel down the layers and see what happens.

To prevent applications from saturating the disk with many small IOs, the framework buffers IOs until they reach 4K in total size. Then the data is flushed, i.e. written to disk in a single operation. This happens by calling the Windows API function WriteFile. From the point of view of the framework, each call to WriteFile constitutes an IO.

File System Cache

The WriteFile call is processed by the kernel, which has no intention of hitting the disk with everything user-mode application developers manage to come up with. So it buffers the framework's data in the file system cache and spawns a background process to deal with it later. This so-called lazy writer evaluates the data in the cache and writes it to disk as it deems necessary. From the point of view of the lazy writer, each cache flush constitutes an IO.

File System Driver

Before the lazy writer's data can be written to disk, it must be processed by the file system driver (typically ntfs.sys). The driver might find it necessary not only to write the actual data to the disk but also to update the file system metadata, e.g. the master file table (MFT). When that happens, the number of IOs required to store the data increases. Writing data to a file is not a simple "each layer reduces the number of IOs by x percent" type of scenario.

Now that we have reached the hardware level, it is surely safe to assume that the IO is not manipulated any further, making this a good place to take measurements? Let's see. In the simplest case, the disk is a physical hard disk. But even with plain HDDs there is yet another cache, and there is Native Command Queuing (NCQ, IO reordering to minimize head movements), both changing the IO on its way to permanent storage. So the only way to correctly measure IOPS would be on the disk platter. But what to measure? Head movements? What about SSDs, solid-state drives that thankfully do not have moving heads? What about virtual disks? The device seen by the OS as a physical hard disk might not be physical at all. It could be a LUN in a SAN spread across many physical disks – yet another set of layers.

I hope I could make the point that there is no such thing as the IO; neither can there be a single definition of IO throughput (aka IOPS). So how do we measure IOPS?
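The gap between application-level IOs and what reaches the next layer down is easy to demonstrate. The Python sketch below is an analogy, not Windows internals: it wraps a counting "lower layer" in a 4K buffer, much as the C runtime buffers fputs calls and the file system cache absorbs WriteFile traffic.

    import io

    class CountingRaw(io.RawIOBase):
        """Stand-in for the layer below; counts the writes that reach it."""
        def __init__(self):
            self.calls = 0
        def writable(self):
            return True
        def write(self, b):
            self.calls += 1
            return len(b)

    raw = CountingRaw()
    buffered = io.BufferedWriter(raw, buffer_size=4096)   # 4K, as described above
    for _ in range(1000):
        buffered.write(b"x" * 64)     # 1,000 application-level IOs
    buffered.flush()
    print(raw.calls)                  # only a handful of coalesced writes arrive

Each layer in the real stack plays a similar trick, which is why "IOPS" means something different at every level.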
How does our monitoring tool uberAgent for Splunk measure IOPS? uberAgent for Splunk uberAgent measures IOPS right before they are handed off from the operating system to device-specific drivers. In other words: the data it collects is as accurate as it can be without interfacing with the hardware directly. The cool thing about what uberAgent does is that it is capable of mapping each IO to an originating process – and thus to a user, a session and an application. As a result, uberAgent can show you how many IOs each of your applications generate. It can do the same for each user session, too, of course. Measuring IOPS is harder than it may seem. It depends on many factors: the access pattern is very important, but so is the layer at which the measurement is taken. When you have decided how to do it and arrive at a number, you still do not have the single handle to disk performance. There are also throughput and latency to consider. uberAgent for Splunk gives you the information you need to understand what is going on in your systems. No more, no less. Download and try it yourself.
<urn:uuid:a26f6b9a-f107-4922-a01e-d80e32235670>
CC-MAIN-2017-04
https://helgeklein.com/blog/2013/03/the-impossibility-of-measuring-iops-correctly/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00080-ip-10-171-10-70.ec2.internal.warc.gz
en
0.938056
1,018
2.953125
3
"I wish battery technology followed Moores Law." This was Pebble Watch founder and CEO Eric Migicovsky’s answer to my question, “if you had one wish for a technological breakthrough what would it be?“ Moore’s law is the high-tech maxim coined by Intel that says every two years the number of transistors on a microprocessor chip become twice as dense and the microprocessor twice as powerful. It was a surprising answer from the head of the company that makes a device with a five- to seven-day battery life that leads the industry. Steady improvements in power management have compensated for lagging developments in battery technology. But the most advanced notebook battery life is still only 10 hours. The power-miserly smartphone blackens its LCD within a few seconds, but still has to be recharged every night and more frequently with heavy use. A smartwatch is different. The regular annoyance of recharging a third device would hinder consumer adoption. Smartwatch makers are not in a race to release a next-generation device with faster processors, more memory and higher-resolution displays like the smartphone makers. Sony, Qualcomm and Pebble smartwatches are all powered by a 200MHz or slower ARM Cortex M3 processor, which are unremarkable compared to the processors powering smartphones. The exception is the Samsung Gear, which is powered with an 800 MHz Exynos processor most likely to have the capacity to run Android 4.x. The use cases for wearables like smart watches are based on what designers call “microinteractions” between people and their smartphones. Everyone has used microinteractions even if they are not familiar with the term. A downward swipe to display notifications or sideways swipes to turn a page are examples of microinteractions. Well-designed microinteractions simplify apps and shorten the time to convert the user’s intent into a gratifying experience. Relocating the microinteraction to a smartwatch from a smartphone app tethered together with Bluetooth improves the user experience because the clumsy delay of pulling out a smartphone can be exchanged with a glance at the wrist. Pebble and independent developers moved microinteractions, such as email and text notifications, checking running pace and changing music tracks onto the Pebble. Smartwatch apps reduce the awkwardness of constantly turning one's attention to a smartphone. For example, one app vibrates both the smartphone and Pebble when an important call is received, eliminating deliberations in answering. Pebble is the first app-compatible smartwatch to reach 100,000 unit shipments in a year. Migicovsky started iterating designs five years ago in his dorm room. The concept evolved into the Impulse smartwatch designed to display Blackberry notifications. The first Impulse production run was only 500 units. The $10 million Pebble raised in its Kickstarter campaign funded the development and production of the current design, which works wirelessly tethered to both iOS and Android devices. Pioneer Pebble’s success is partly to blame for Qualcomm, Sony and Samsung’s smartwatch introductions and the rumored pending smartwatch announcements by Google and Apple. Although BI Intelligence forecasted the wearables market to grow to $12 billion by 2018, it is early in its evolution. These large consumer electronics companies don’t expect smartwatches to immediately effect sales results; rather, they are experimenting to learn how to build smartwatches that consumers will buy in smartphone volumes. 
The most obvious case of practical smartwatch R&D is Qualcomm, which makes mobile processors powering many popular smartphones. CNET reported that Rob Chandhok, president of Qualcomm Internet Services and Qualcomm Innovation Center, said regarding his company’s smartwatch introduction: “The company will sell only a limited number of smartwatches -- in the tens of thousands -- to show customers what its technology can do." The design that wins the lead in the smartwatch race will be more than a derivative of a smartphone strapped to the wrist. The winning designs will create the right balance between hardware performance, very long battery life and smartphone and other wearable apps that improve the user experience by distributing microinteractions to the smartwatch. The smartwatch needs to be attractive too, because it’s also about wearing a stylish accessory that creates the right image. There's a balance between experience and company size. The Pebble team has acquired creativity and experience innovating during the last five years that rang true with the consumers who backed Pebble on Kickstarter. Pebble has been constrained by limited resources relative to its much larger rivals, but myopically focused. The rest of the consumer electronics companies entering the market have greater resources but less experience and more distractions. Pebble could not succeed without competitors noticing. Samsung’s primetime advertising of the Galaxy Gear and anticipation of similar announcements from Google and Apple are more likely to increase broad consumer interest in Pebble and other wearables.
<urn:uuid:5e9f0245-9883-4504-844d-0ee3741951cb>
CC-MAIN-2017-04
http://www.networkworld.com/article/2225674/data-center/how-smartwatch-designers-should-be-designing-smartwatches.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00382-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951661
1,001
2.546875
3
Sparked by their recent findings, some MIT researchers have realized that shining some light on Graphene, a carbon sheet a single atom thick, can get those electrical juices flowing. Previously only possible "under very special circumstances," the new current generating effect could pave the way for better photodetectors, night vision systems, generating electricity from sunlight, and flux capacitors*. Postdoc Nathaniel Gabor, along with his research team, found that shining light on a sheet of Graphene, split into two regions with different electrical properties, created a temperature difference that eventually turned into an electrical current. Again, I'm not a scientist, so here's a quote on how this really works: "The material's electrons, which carry current, are heated by the light, but the lattice of carbon nuclei that forms Graphene's backbone remains cool. It's this difference in temperature within the material that produces the flow of electricity. This mechanism, dubbed a "hot-carrier" response, "is very unusual." Like we said off the top, these findings could help with photodetectors and night vision, but it's the generation of electrical current from light that has the most long term potential if you ask me. Could we find ourselves putting Graphene on house siding to generate cleaner energy? The possibilities are endless, and the potential for clearner energy based on this technology could be huge. *We added the Fluxcapacitor, we're still holding on to the dream of traveling back to the future. ↩ Like this? You might also enjoy... - Researchers Turn iPhone Into 350x Microscope on the Cheap - Monkeys Control Exoskeletons With Their Minds - Scientists Use Carbon Nanotubes to Create an Underwater Invisibility Cloak This story, "Graphene could change how we harvest solar energy" was originally published by PCWorld.
<urn:uuid:7d3efeca-2f93-48f7-a8a2-c83e8232b658>
CC-MAIN-2017-04
http://www.itworld.com/article/2735530/green-it/graphene-could-change-how-we-harvest-solar-energy.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00137-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945311
397
3.703125
4
With the number of smartphones you see everywhere, you'd think it would be safe to assume that everyone is good with computers. But state library officials say there's more to using technology than tweeting and playing Candy Crush. West Virginia state officials worked to put together the State Library Guidebook: Support for Literacy in Public Libraries, which will help state groups plan their digital literacy projects. The American Library Association defines digital literacy as "the ability to find, understand, evaluate, create and communicate digital information." According to the Guidebook, eight out of 10 Fortune 500 companies take only online job applications and using the Internet to look for a job cuts the average unemployment time by 25 percent. It also says 77 percent of jobs will use tech skills by the end of the decade. One of the library directors who helped put the study together was Raleigh County's own Amy Lilly. "There's a misconception about what the need is with digital literacy," Lilly said. "We assume because a lot of people have smartphones or know how to go on Facebook that they know how to operate a computer. What we said during the survey is that yes, they can use social media, but we have a need with our patrons to know how to fill out job applications online. "Maybe that coal miner hasn't had to apply online before and always did it with pens and paper. Here they've lost their job and haven't been job hunting in 10 to 15 years and now it has changed. Now you can't get a job without filling out an application online. "We said these are people who need basic skills like setting up an e-mail address," Lilly added. "They need to know how to fill out a job application or government forms online or how to locate government resources online. What we said was that is where the focus needs to be. They don't need to know how to get on Facebook. They know how to do that." To help fix problems like this, Lilly said Raleigh County Public Libraries are using several programs. "One of the things we've been doing is teaching computer classes," she said. "We also offer one-on-one class work. If you come in and say 'I need help. I've never used a computer and I need to fill out a job application,' then you can actually book a librarian and they'll sit down with you and do one-on-one. "We've also done classes where we teach basic computer skills. We've been teaching people how to use the genealogy resources that are online, such as ancestry.com." Lilly said they're also working with local schools to make sure students understand how to use the technology around them. "We've been doing a new initiative with Beckley-Stratton Middle School with their after-school program," she said. "We go out there every three weeks and take books for the kids. They're an after-school program and we've also been teaching them how to check out e-books. All of the children in Raleigh County have the iPads. "One of the things we said is that it's great, but we want to make sure that they can check out books and read books on those iPads. We've been working and doing this pilot program and teaching kids how to check out books from the public library onto their iPad at school." Every program the library offers is free, Lilly said. "Any time someone calls and says they've never used a computer or set up an e-mail, we actually help them set up their e-mail," she said. "We've also helped people fill out information for the Affordable Care Act. 
We also have a volunteer that comes in several times a month and will actually sit down and work one-on-one with people to fill out paperwork for that. They're usually here on Mondays. "There are no computer courses for January yet, but that's just because we typically have very poor turnout because of the weather. We usually kick everything back off in early March. The on-demand help is always there though. People can come in and say, 'I need an hour,' and we have staff who can help them. People can set up appointments for the one-on-one help with the computers every Thursday." The computers at each of Raleigh County's libraries stays pretty busy, Lilly said. "Our computer lab has 21 computers and it's usually full," she said. "We're open from 9 a.m. to 8 p.m. Monday through Thursday, so we have about 900 people who come through the door every day. That's just at this branch. "My Shady Spring branch has five public computers and my Sophia branch has four. They're generally full, too. We actually have a computer program that actually times computers up here. They're given an hour at a time because there's such a wait to get on the computers." To keep up with the digital literacy trends of the area, Lilly said they're just going to keep doing what works. "Most of it is just to continue to offer the ongoing classes," she said. "We're always investigating new technology. We base it on what the patrons have been asking for. "Right now, some libraries in other big cities have been checking out iPads, laptops and other things like that. We're not to that point yet, but I can see that as being the future." It was groups like those in Raleigh County that helped put together the Guidebook that would help people all over the state. "With the Internet, online tools and e-books the new normal, digital literacy is here to stay," West Virginia Library Commission's Secretary Karen Goff said in a press release. "Digital literacy will continue to evolve as a necessary skill set for individuals, organizations and communities to have in order to participate in our ever more connected society. West Virginia's libraries will be there to offer assistance, resources and guidance." (c)2013 The Register-Herald (Beckley, W.Va.)
<urn:uuid:e2091a84-c973-469e-873e-2e35d4918fa8>
CC-MAIN-2017-04
http://www.govtech.com/state/Libraries-Get-Help-to-Improve-Digital-Literacy.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00347-ip-10-171-10-70.ec2.internal.warc.gz
en
0.98087
1,242
2.859375
3
2.1.9 What are secret sharing schemes? Secret sharing schemes were discovered independently by Blakley [Bla79] and Shamir [Sha79]. The motivation for secret sharing is secure key management. In some situations, there is usually one secret key that provides access to many important files. If such a key is lost (for example, the person who knows the key becomes unavailable, or the computer which stores the key is destroyed), then all the important files become inaccessible. The basic idea in secret sharing is to divide the secret key into pieces and distribute the pieces to different persons in a group so that certain subsets of the group can get together to recover the key. As a very simple example, consider the following scheme that includes a group of n people. Each person is given a share si, which is a random bit string of a fixed specified length. The secret is the bit string s = s1 Ås2 żÅsn. Note that all shares are needed to recover the secret. A general secret sharing scheme specifies the minimal sets of users who are able to recover the secret by sharing their secret information. A common example of secret sharing is an m-out-of-n scheme (or (m,n)-threshold scheme) for integers 1 £ m £ n. In such a scheme, there is a sender (or dealer) and n participants. The sender divides the secret into n parts and gives each participant one part so that any m parts can be put together to recover the secret, but any m - 1 parts do not suffice to determine the secret. The pieces are usually called shares or shadows. Different choices for the values of m and n reflect the tradeoff between security and reliability. An m-out-of-n secret sharing scheme is perfect if any group of at most m-1 participants (insiders) cannot determine more information about the secret than an outsider. The simple example above is a perfect n-out-of-n secret sharing scheme. Both Shamir's scheme and Blakley's scheme (see Question 3.6.12) are m-out-of-n secret sharing schemes, and Shamir's scheme is perfect in the sense just described. They represent two different ways of constructing such schemes, based on which more advanced secret sharing schemes can be designed. The study of the combination of proactive techniques (see Question 7.16) with secret sharing schemes is an active area of research. For further information on secret sharing schemes, see [Sim92].
<urn:uuid:ab433ac2-4ef4-4c72-9400-ba32c5769726>
CC-MAIN-2017-04
https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/what-are-secret-sharing-schemes.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00347-ip-10-171-10-70.ec2.internal.warc.gz
en
0.91135
523
3.59375
4
Cooling Data Center CostsBy Samuel Greengard | Posted 2010-08-13 Email Print Roughly 40 percent of the energy consumed in data centers is a result of cooling systems, and that figure is rising. Reversing that trend is possible, and could mean big savings. One of the hottest opportunities for greening the data center lies in cooling. According to Booz & Co., roughly 40 percent of the energy consumed in data centers is a result of cooling systems, and that figure is rising. Here’s what Booz recommends to increase cooling efficiency: • Optimize airflow in the data center to reduce the mean gradient temperature and reduce cooling requirements. • Use a hot/cold aisle configuration, in which equipment racks are arranged in alternating rows of hot and cold aisles. • Rely on air handlers to better control airflow within the data center and enable more efficient cooling. • Deploy smart cooling energy-management systems with sensors to reduce energy consumption by as much as 40 percent. • Increase cooling temperature targets to slightly above the data center baseline temperature. Every added degree results in an estimated 4 percent reduction in energy consumption. • Install renewable cooling sources such as outside air during the winter—where practicable—to minimize usage of internal cooling systems.
<urn:uuid:9f92aa04-1e2a-47c6-92ab-59a39660ada4>
CC-MAIN-2017-04
http://www.baselinemag.com/c/a/IT-Management/Cooling-Data-Center-Costs-368334
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00255-ip-10-171-10-70.ec2.internal.warc.gz
en
0.913103
263
2.609375
3
Today’s computers have more than 2,000 times as much memory as the machines of yesteryear, yet programmers are still writing code as if memory is in short supply. Not only does this make programs crash annoyingly, but it also can make users vulnerable to hacker attacks, says computer scientist Emery Berger from the University of Massachusetts Amherst. With such problems in mind, Berger created a new program that prevents crashing and makes users safer, he says. Dubbed DieHard, there are versions for programs that run in Windows or Linux. DieHard is available free for non-commercial users at www.diehard-software.org. Berger developed DieHard together with Microsoft researcher Ben Zorn. Berger has received a $30,000 grant from Microsoft, a $30,000 grant from Intel, and a $300,000 grant from the National Science Foundation for his work on DieHard. Almost everything done on a computer uses some amount of memory—each graphic on an open Web page, for example—and when a program is running, it is constantly requesting small or medium chunks of memory space to hold each item, explains Berger. He likens the memory landscape to a row of houses, each with only enough square footage for a certain number of bytes. The problem, says Berger, is that sometimes when memory real estate is requested, programs can unwittingly rent out houses that are already occupied. They also might request a certain amount of square footage when they actually need more, so an item can spill over into another “house.” These mistakes can make programs suddenly crash, or worse. DieHard presents several remedies to such problems. First, it takes a compact row of memory buildings and spreads them around in the landscape. It also randomly assigns addresses—a password that has a downtown address in one session may be in the suburbs next time around. And in some versions of the program, DieHard will secretly launch two additional versions of the program the user is running—if a program starts to crash, that buggy version gets shut down and one of the other two is selected to remain open. DieHard can also tell a user the likelihood that they’ll have been affected by a particular bug.
<urn:uuid:ecbbcbd5-c5ec-4470-a7ec-3010390af5aa>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2007/01/02/diehard---a-new-software-that-prevents-crashes-and-attacks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00163-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950858
454
3.078125
3
If you can shop, make appointments or register your car via the Internet from the convenience of your home -- or from halfway around the world, for that matter -- why can't you get your education in the same manner? These days, many students can. Not only are continuing education and college students completing courses at times and places convenient for them, but in recent years, more and more high-school, middle-school and even grade-school students do so as well. At least 15 states have distance education programs for public school students, according to the U.S. Department of Education's 2004 report, Toward a New Golden Age in American Education -- How the Internet, the Law and Today's Students Are Revolutionizing Expectations. A deluge of regional and districtwide programs exist as well, and new programs are cropping up all the time. Within the next decade, the report predicted, every state and most schools will offer some form of virtual schooling. Virtual schools offer students and their families scheduling flexibility, course options, varied learning formats and experience that will benefit them beyond their schooling. Each program, however, is not for everybody, and a wide variety of virtual schools have emerged to cater to the varying needs of both students and the educational systems in which they learn. Though this next evolution in education shows no sign of slowing, some kinks must be worked out -- such as questions about funding and oversight in a school system that no longer fits within previously delineated geographic boundaries. "It's relatively new even at this point," said David Griffith, spokesman for the National Association of State Boards of Education (NASBE), comparing virtual schools -- in existence for only about 10 years -- to charter schools. "Even [with] charter schools, there's a physical location, and the geographic area where students -- the attendance -- would be pulled from is somehow limited and finite. With virtual schools, that's not the case. "You could have students living in one part of the state attending a virtual school in another," he continued. "And one of the things we've seen is it's a question of who pays, and determining which is the student's location for the purposes of making funding calculations and that sort of thing." In many cases, this situation has created friction among virtual and traditional schools, as per-pupil funds are drawn away from traditional schools along with students choosing to attend a virtual or charter school. In Ohio, many public school entities have begun offering online courses to compete with charter schools. Virtual schools have also raised questions about oversight, said Griffith. "If it's removed from the local community, then who is able to guarantee the services are being delivered and students are making the educational progress they should? The local district? The virtual school on the other side of the state? The local district in which the school is being operated? Is it the state, because it does cross district boundaries? These are all issues as we move forward that are going to have to be figured out." Virtual Learning Put to Use For years, Florida has used virtual schools to offer students and families more options. The state has several statewide programs -- including two full-time K-8 pilots aimed at reducing class sizes and the Florida Virtual School (FLVS), a supplemental program that caters to middle- and high-school students. 
Florida's Legislature has taken virtual schooling in hand and created mechanisms to fund its three statewide programs, as well as a handful of district-level programs. "What we're providing for students is really a choice of how they receive their education," said Bruce Friend, chief administrative officer for the FLVS. "The traditional classroom environment model is not the best learning environment for every student. That doesn't mean the online learning environment is the best model for every student, but I think we've been able to complement what students can get at their traditional schools with providing the option of online courses." The school does not grant diplomas, and for the most part, students take courses from the school on a supplemental basis. Students must arrange through their own school counselors to take courses for credit. Friend said the school allows students in rural districts to take courses not normally available to them, and accommodates scheduling needs for students and their families. "What we're really providing is just another option for students to earn credits they need toward high-school graduation." As one of the most established virtual schools in the nation, the FLVS serves as a model for other programs both within the state and without. The school started with a "Break the Mold" funding grant in 1997 as a pilot project between two Florida counties, which offered a minimal number of classes to a limited number of students. The state Department of Education continued funding the school as a line item until 2003, when the FLVS was deemed a public school district and the Legislature began funding the school on a variation of full-time equivalent (FTE) funding. In Florida, each school district receives a certain amount of funds, which varies by district, for each student enrolled full time. The virtual school, however, isn't funded exactly the same as a regular school district -- it receives funds based on course completions rather than on simple head count. Since most FLVS students don't take all of their classes through the program, the school's FTE funds are based on aggregate course completions (i.e., how many courses completed by various students add up to one FTE). The school does not charge fees to students who are Florida residents. Attendance has grown exponentially since the program began -- Friend said he anticipated at least 28,000 course enrollments for the 2004-2005 school year. Even as the school expands, it continues to have waiting lists for certain courses. The FLVS devised a way to partner with state school districts to keep up with in-state demand. Where demand is high in certain districts, the FLVS will set those districts up with their own programs if the districts so choose, allowing them to run their own virtual school with local staff based on the FLVS curriculum and practices. The FLVS provides training and oversight to the franchises. The arrangement helps local districts ensure that their kids can enroll in courses and allows the districts to keep the FTE funds in their district. The arrangement also benefits the FLVS, Friend said. "It just provides more opportunity for us to serve more students because now we know we have sort of a franchise that can offload some of the demand we ordinarily would have received." Broward County was the first of seven Florida districts to establish a franchise. 
MaryAnn Butler-Pearson, Broward Education Communications Network director of distance learning for Broward County Public Schools, said the district approached the FLVS with the idea because the district had so many students on waiting lists for courses. "In Broward County, one very important goal is to make sure all of our students have equitable access to quality learning, so when some students were able to get in and others weren't, it caused an inequity." Equal access is one reason an urban school district needs such a program, said Butler-Pearson, adding that people are often surprised an urban district like Broward would need a virtual school. "Our students need it just as much as suburban or rural students," she said, "just for different reasons." Reducing class size is one area in which the school, Broward Virtual Education (BVEd) has helped the district, she said. "If a class had just two or three too many students, they wouldn't be able to afford to hire a teacher and split the class in half." Now students can opt to take classes online. The school also expands offerings available to students in smaller schools with limited courses, she said. "The students they're teaching are learning the skills and attitudes they're going to need for this type of success in the workplace, as well as in higher education." One district school requires that students take at least one class online. While competition for funding has in some cases created a contentious relationship between districts and remote virtual schools, Butler-Pearson said students can take courses from either virtual school, and that the relationship between the FLVS and the Broward County franchise is a good one. "They really nurture the franchises, and they go out of their way to make sure we have everything we need," she said. "It's not a competitive relationship at all because they have so many students." She said having a model to follow made it a lot easier for the district to get the school up and running. "Not only did we use the coursework, but they also trained all of our teachers initially in both the pedagogy, the philosophy, as well as the course content." Though franchising can be beneficial for districts with a large demand for online classes, Friend said franchising isn't always a good option. "Franchising is really meant for districts that want to have a large online initiative, and maybe even larger than what we ourselves can provide to them," he said. "A district that's not going to serve many students online would frankly be better off just having the students come to us because there's a threshold of how many credits they need to get before they break even on the franchising cost." He said districts that choose to franchise must spend at least $20,000 upfront for franchising costs. "But then they're going to have to put people behind it." He said the FLVS recommends that districts have one full-time staff member oversee the initiative, and then the district must supply teachers as well. And Butler-Pearson said there are ongoing costs associated with the franchise. The in-state franchising fees, however, are not profit-making initiatives for the FLVS. "It's really a cost recovery for us," said Friend. "It's not something that we generate revenue from in our own state. It's really meant to cover the cost of setting them up and providing oversight." 
Broward County Public Schools started the franchise in September 2001 with school board funding, and schools paid for students who took courses through the school until 2003, when the Legislature allowed FTE-type funding for the franchises and the FLVS, Butler-Pearson said. There are some benefits a local program can offer that a statewide program can't. Unlike the FLVS, BVEd can issue diplomas to its own students -- some of the tracking requirements for graduation would be too unruly for a statewide program, Butler-Pearson said. "For example, it's one of the state requirements that the students do 50 hours of community service, and we have to document that just like the traditional high schools do. That would be a lot of record-keeping for Florida Virtual because their student population is huge," she said. "I think it's because we're local that we have a little bit more of a handle on it." The FLVS also sells its curriculum and training to other states or jurisdictions, and offers courses to out-of-state students for a fee. Profits, by law, must go back into course development and otherwise improving the FLVS, and the school's out-of-state efforts earned approximately $1.5 million in the 2003-2004 school year, Friend said. The FLVS partnered with Stetson University to provide courses to out-of-state students. Stetson runs the program, but the courses and practices are the same as those at the FLVS. "It's basically a mirror of our program," said Friend, adding that keeping up with demand in the state was behind the decision to work with Stetson. "We want to serve as many Florida students as we possibly can so there's not that conflict of interest of, 'Why would you have a Florida teacher teaching to Georgia if you still have Florida kids wanting to get in the courses?'" In addition to the FLVS, Florida is now piloting two K-8 virtual school programs. The state contracted with private companies -- K12 and Connections Academy -- for both schools. Unlike the FLVS, the K-8 initiatives are full-time programs, and though courses are taught by certified teachers, the classes rely heavily on parental guidance. Some question the reliance on parents, who often lack the experience necessary to teach children. But K12's senior public relations manager, Jeff Kwitowski, said parental involvement is an added benefit for any age group -- especially youngsters. "Obviously a child who is 6, 7, 8 years old is going to need more oversight with the parent." He said traditional schoolteachers are often overwhelmed with classes that are too big, and schools try to bring in other adults who can provide more individual attention. This environment, he said, allows parents to provide that attention. "That, we believe, is the incentive for a lot of parents who want to be very involved in their child's education, but it's also to the benefit of the child because it gives even one more adult along with the certified teacher." NASBE's Griffith questioned whether elementary school students are mature enough to take full advantage of virtual schooling. "It does take a certain level of sophistication to get all the benefits out of a virtual education," he said. "It's also an efficiency thing -- being able to operate the computer, being able to keep work, typing speed, that sort of thing." Lisa Gillis, operations administrator for the California Virtual Academies (CAVA), a network of six K12 programs in California, said the "virtual" is a little misleading. "Kids only spend about 20 to 30 percent of their time online. 
The rest of it is project-based instruction," she said, adding that everything -- including the computer, Internet connection, the seeds used in science experiments and the clay used for modeling -- is sent to the home so parents have the guidance and support they need to lead the lessons. The educational structure and guidance provided by virtual schooling has raised another issue as the programs have become popular with previously homeschooled students, bringing formerly unfunded students back into the public education system. To quell the demand on their budgets, some states have created requirements that only students who previously attended public schools can take advantage of public virtual schools. Though acknowledging there are costs involved, Griffith said homeschooled students should be allowed to take courses in any public school. "To the extent you're bringing homeschool students back into the system, that's a good thing," he said. "I think that's an encouraging sign that the system is offering all kinds of families the options they desire." Another issue that's been raised as students from traditional schools opt to go online is that of socialization. Griffith noted that students who choose virtual schooling must find other ways to socialize with kids in their age group. "I don't think you can overlook the intangibles that the traditional schools have now -- the school spirit, athletic opportunities, extracurricular activities and benefits students get from gathering with their peers in a nonacademic setting but still within the school," he said, noting that this concept is especially important for youngsters, and that extracurricular activities can be supplemented in other ways. Gillis said in the K12 programs, teachers organize monthly outings for their students (each class is grouped by local area), but any CAVA student can go on the outings. In contrast to Florida, where the Legislature has driven statewide initiatives, California virtual schools are governed by the state's charter school laws, which require that schools have a local educational agency (LEA) sponsor the program. Any virtual school in California, by law, can only enroll students from the county in which it operates, or from contiguous counties. Though CAVA does not operate in all counties, Gillis said the network of schools affords students some of the benefits a statewide program offers, such as going on CAVA outings that have been organized in another part of the state, and enjoying uninterrupted schooling if a student moves to another CAVA county. Other California counties and districts have implemented their own programs, including some that serve high-school students and some run by the LEAs themselves, as opposed to a private entity. Of the many ways to deliver online education, the best approach is still up in the air, said Griffith. "It's still in its infancy," he said. "States are trying to wrestle with this. We'll see what works and what doesn't. Hopefully each of those models will take the best aspect from each other and learn from each other's mistakes as well." Griffith said states not only need to settle the issues currently brought by the expanding educational boundaries, but they must plan for future possibilities as well. NASBE's Public Policy Positions urge states to form relationships with one another to further the possibilities afforded by online education, and Griffith said it's only a matter of time until states will need to consider such possibilities. 
"The Internet is bringing states and countries closer together, making these geographic artificial boundaries obsolete." Online schools are an indicator of education's path for the future, a logical next step in a society where, thanks to the Internet and technology, we expect more conveniences, he said. "It's no longer, the store [closes] at 5:00, and you have to fit to the store's hours, not have the store fit to your hours," said Griffith, adding that virtual schooling has brought about a more individualized way of learning. "Much more student-centered, based on students' time, learning ability and that sort of thing, as opposed to being more of a one-size-fits-all model we've got now. "In the long run," he continued, "it's a pretty radical change from the way schools have been operating in this country for 150 years." And with such a radical change, he said, there is some level of tolerance for the growing pains states and schools must endure to get the programs running smoothly. "You certainly don't want to impede people from trying different things, experimenting or pioneering, but at the same time, it's maturing, evolving and developing in some areas," he said. "You need to make sure that if it's going to be offered and students are taking advantage of it, that you get back to making sure it's equitable, it's working for students, they're getting something out of it, and it's available to as many as possible."
<urn:uuid:aea4a5c2-984b-48f5-8760-eb9ab4838f7d>
CC-MAIN-2017-04
http://www.govtech.com/featured/Education-Options.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00071-ip-10-171-10-70.ec2.internal.warc.gz
en
0.979012
3,797
2.859375
3
Coats, hats and gloves are essential in order for warehouse workers to function more than a few minutes in cold storage areas. In much the same way, mobile data-collection computers must be built to perform under these demanding conditions. Unless mobile computers, associated bar code readers and wireless networking equipment have been designed with features required specifically for use in cold environments, the level of their reliability will fall right along with the temperatures. Companies no longer have to compromise on functionality and information access just because of the environment. Next generation cold storage mobile computers are built not only to withstand prolonged use in the cold, but more importantly transitioning between cold and warm locations. Cold air, frost and condensation. Each of these elements creates a specific challenge for rugged mobile computing equipment. The insulation used to keep refrigerated and frozen storage areas cold also poses problems when it comes to wireless connectivity. Here's a brief overview of how these conditions impact mobile computer performance.
<urn:uuid:6fca0e5e-041a-4841-8fe7-03cb80fb778b>
CC-MAIN-2017-04
http://www.bsminfo.com/doc/the-cold-hard-facts-using-rugged-mobile-0002
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00403-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933566
189
2.515625
3
Elect IT: Technology and the Democratic Process On Election Day 2008, Americans will head en masse to the polls to cast their ballots in an undoubtedly historic election. But what the average voter might not realize is this election is historic in more ways than one. Not only has information technology enabled voters to have more access to information about the candidates and the electoral process, but it’s never been easier for people to participate in the process, as the Internet has helped break down barriers to both transparency and accessibility. “People expect to get the information they want with a quick Web search, and as more people experience the power of having information at their fingertips, it will be increasingly difficult for the government to keep any of its information behind closed doors,” said Daniel Newman, executive director and co-founder of MAPLight.org, a nonprofit, nonpartisan organization that works to show the public the connection between money and politics. IT has opened those doors, as voters have better access to candidates’ views and finances, as well as a clearer understanding of how to register, where to vote and what’s on the ballot. In prior years, public government information was held captive, available only to those who could afford to pay a fee. “Government agencies would often make their data sets available only in bulk on big computer tapes,” Newman explained. That meant it was practical only for large database companies to load these tapes onto their computers and sell access by the hour back to the public. “Citizens had to pay fees to search databases that were public information and generated by tax dollars. As we’ve moved toward the Internet era, [it’s become] possible [for] government agencies to make all this information available. And they can do it relatively inexpensively.” One such agency that has made tremendous strides in this effort is the Federal Election Commission (FEC) that discloses campaign-finance information for presidential, House and Senate candidates. Five years ago, the organization’s Web site was difficult to navigate; now, with a few clicks of the mouse, a user can find out how much money a candidate has raised, who contributed to the campaign and how the candidate spent those funds. “I look at information technology, here at the Federal Election Commission, to act as a facilitator and to make it easy for people to become educated on the campaign finance process,” said Alec Palmer, chief information officer and co-privacy officer. “The easier that we can make it for the individuals doing research, or for the general citizen from just a point of curiosity, the better off the democratic process.” MAPLight.org is another example of improved transparency. This particular nonprofit uses three databases of information to illustrate the connection between campaign contributions and how legislators vote. “The hundreds of millions of dollars that politicians raise to run their campaigns often comes from interest groups that have a stake in legislation,” Newman said. “Even though many people know this general concept, there was not much information out there on the specifics of how money influences legislation. So [we] set out to build what would be a continuous example generator of how money influences our political system.” The first component of the example generator is how each member of Congress votes on every bill. 
To get this information, MAPLight uses GovTrack.us, an automated service that polls the Library of Congress site every 15 minutes to determine whether any recent votes or changes have been made. GovTrack.us then “downloads those changes, parses it into a structured format and then MAPLight imports” that information into its MySQL database, Newman said. The second component of MAPLight’s research is the campaign money that’s given to each member of Congress, a record of which is filed with the FEC. The Center for Responsive Politics — a nonprofit research group — takes that information, processes and analyzes it, and then classifies each contribution into one of 400 industry denominations (e.g., oil company, environmental group). Once a month, after the candidates have filed their reports, MAPLight imports the data from the Center for Responsive Politics. The third and final component is collected in-house through intense manual research. MAPLight’s research team selects a bill and reads the congressional testimony about that bill. If any of the speakers came out in support or opposition of the bill, the researchers log into the back-end Web database, make a note of which organization the speaker represented, whether the speaker was a supporter or opponent and the source for that information. This process is repeated with news databases and Internet searches. In the end, these three sources work together on the site to provide information such as which organizations gave money to which politicians, how those politicians voted on specific bills and whether there’s a connection between the money given and how the politician voted. “All of this information would have taken days, if not weeks, to collect and analyze before MAPLight.org came along,” Newman said. “What [you] see that you could never see before is how money correlates with the votes.” For this upcoming presidential election, MAPLight also has downloadable, customizable widgets that can compare funds raised by Sen. Barack Obama to the funds raised by Sen. John McCain. There are similar widgets for the congressional candidates. “Often people say, ‘Well, so what if there’s more information? Do people really want more information?’” Newman said. “It’s not so much [that] people will check MAPLight as often as they check the weather report. No, rather journalists, bloggers and active citizens [who] are already the sources of information for their communities will rely on MAPLight and other cutting-edge tools to extract key pieces of information. That power to drill down quickly to get a key connection will revolutionize democracy.” Behind the Scenes IT hasn’t just had an impact on information accessibility; it’s also had an impact on the electoral process. For instance, in Minnesota, technology has affected the way information is distributed to the public, how voters cast their ballots and the way election results are reported. In terms of informing the public, Minnesota provides online information about the electoral process such as what’s on the ballot and where to vote. But this year, Minnesota went above and beyond and developed the Caucus Finder, which is based on Polling Place Finder, in preparation for caucus season. The application, the first tool of its kind, allowed residents to enter their addressees and find out where all of the political parties in Minnesota were caucusing. To obtain the caucus locations, though, the state had to coordinate with each of the parties. 
“We gave [the parties] an Excel spreadsheet,” said Ted Lautzenheiser, chief information officer for the secretary of state’s office. “They would fill it out, and then we had the ability to load that into our system. We used DTS [data transformation services] and SSIS [SQL server integration services] packages and SQL server to load the data from Excel. Sometimes, it’s not how technically eloquent the solution is, but are you using a solution that everyone can use?” The tool proved unexpectedly successful, and the state had an unbelievably large turnout for the caucuses, four times more than in 2004, according to Mark Ritchie, Minnesota’s secretary of state. “From our perspective, we already had this great thing called the Polling Place Finder that had 90 percent of the machinery that was necessary,” Lautzenheiser said. “By far, the huge value for the Caucus Finder was recognizing that we had the tools, and with just a few simple applications of the tools and some minor process development, we could really have something that would be worthwhile to the public.” While IT has changed the way information is distributed to Minnesota voters, it has not affected the actual voting process all that much, as direct recording electronics (DREs) are illegal in the state for security reasons. Instead, voters go to the polls, vote on a paper ballot and then insert that ballot into an optical scan machine that then tabulates all the votes at the precinct level. “It’s a system that Minnesotans trust deeply. One result of that trust is that we have the highest voter turnout in the nation,” said Ritchie. “Most states that are looking to find a more dependable, less expensive system find themselves moving toward our approach.” With the 2008 presidential election just around the corner, the state’s IT staff has been working at full speed since January, making changes, collecting candidate names, running tests and synchronizing the systems. When the polls close on Election Day, the optical scan machine results are brought to each county seat, and the counties can enter them either manually or electronically into the state’s election reporting system. Then the public can begin viewing the results. During the course of the night, there’s nothing quite like the climate in the IT department: It’s a mixture of optimism, anxiety, excitement, frustration and, ultimately, exhaustion, Lautzenheiser said. “As far as the environment, it’s optimistic and quiet around 8 o’clock as we unlock the system,” he said. “Then we’re anxious to see the results [come] in, just so we know all the processes of the system are working. As the first couple counties’ data comes in, we watch carefully to make sure there are no issues with the files being processed and loaded on the system. “From about 9 o’clock to midnight, there are many counties on the system, and the amount of activity increases rapidly. There are definitely issues that can arise, so we’ve got many monitors set up [and] we’re looking for anomalies.” Being involved in the development of IT tools that improve the electoral process is an exciting field and it will only become more so in the future, as states and organizations begin experimenting with the prospects and potential of online voting. “Today, if I go to my Polling Place Finder, I can see my entire ballot. That begs the question, how come I can’t vote online?” Lautzenheiser said. “It’s not a question of if; it’s more a question of when. 
[But] although [it’s] becoming more and more technically feasible, there are still many social, political, legal, financial and security hurdles to voting online.” His views, however, are not supported by Minnesota laws or the Office of the Secretary of State. E-Voting: Secure or Not? Because of the infamous hanging chads in the 2000 presidential election and the resulting uncertainty of voter intentions, electronic voting (e-voting) entered the scene. “In 2000, you were dealing with paper ballots,” said David Beirne, executive director of the Election Technology Council, a trade association that represents the voting system platforms for 90 percent of the registered voters in the United States. “Anytime you inject that type of item, it’s not a series of ones and zeroes or binary codes: It’s subject to interpretation. Electronic voting systems were pretty much the natural evolution for eliminating questions about voter intent.” But with e-voting comes the need for security, and now there’s a robust discussion nationwide on the integrity of DREs and other e-voting platforms. “A big concern is that these have lacked independently auditable voting records and that the internals are a black box,” said Micah Altman, the associate director of the Harvard-MIT Data Center and senior research scientist for the Institute for Quantitative Social Science in the faculty of arts and sciences at Harvard University. “What happens is that you touch the screen, and it records [the vote] in memory. You can go and examine the tallies, but that’s not really an effective record because there’s no trail of individual votes. Combine that with the fact that these are all fairly complex pieces of software and they are not really open to inspection, [and] it raises the possibility that either intentionally or unintentionally something could go seriously wrong with the voting system.” But any voting platform has security concerns, Beirne argued — even paper ballots can be tampered with. “From a computer science standpoint, the machines are not hooked up to external networks, so that eliminates 80 percent of your known threats right there,” he said. “Then it comes down to the procedures you use. Regardless of how good your technology is, you have to have procedures built around it to encapsulate it, and that’s true for paper systems, [too].” The development of e-voting devices makes one wonder if the Internet will become the next viable platform for voting. It’s never been used in U.S. elections, but there was an attempt by the Department of Defense to set up a system for military personnel to vote over the Internet. It was canceled, though, due to “deep security flaws,” Altman said. “Technology can make it easier [to vote],” he explained. “But a lot of the newer technology, and certainly the Internet voting technology, is just not ready for prime time. It’s unable to provide the integrity that we should expect in an advanced democracy.” Beirne himself isn’t sure what the future of voting holds. If people expect DREs to be absolutely secure, there’s no telling how the electoral process will change in the future. “If the model we’re trying to pursue is an absolute threshold, then it’s going to be very difficult for electronic voting to continue to take hold,” he explained. “It’s difficult to know where the future’s going to take us. We’re going to have to see how it plays out and see how comfortable voters continue to be with electronic voting.” – Lindsay Edmonds Wickman, firstname.lastname@example.org
<urn:uuid:e455038e-1db2-4286-8358-82e8679b1bba>
CC-MAIN-2017-04
http://certmag.com/elect-it-technology-and-the-democratic-process/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00521-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955267
3,013
2.984375
3
Those free apps, like Angry Birds, Instagram and Tiny Wings may be loads of fun but they suck the battery life out of your smartphone by tracking your geographical location, sending information about you to advertisers and downloading ads. That was the major conclusion of research done by Y. Charlie Hu, a Purdue University professor of electrical and computer engineering who said: "It turns out the free apps aren't really free because they contain the hidden cost of reduced battery life." Hu and other researchers on his team authored a paper, "Where is the energy spent inside my app?" which will be presented during the EuroSys 2012 conference on April 10-13 in Bern, Switzerland. The researchers findings show that 65% to 75% of the energy used to run free apps is spent for advertising-related functions. The free Angry Birds app for example was shown to consume about 75% of its power running "advertisement modules" in the software code and only about 25% for actually playing the game. The modules perform marketing functions such as sharing user information and downloading ads, according to the researchers. Games and applications that heavily use built-in phone features like GPS, the camera, compass and "proximity sensor" are the main culprits of inefficient power consumption, the researchers said. "A particular source of power inefficiency is a phenomenon called 'tails.' In principle, after an application sends information to the Internet, the "networking unit" that allows the phone to connect to the Internet should go to a lower power state within a fraction of a second. However, researchers found that after the advertising-related modules finish using the network, the networking unit continues draining power for about seven seconds. The tails are a phenomenon of several smartphone hardware components, including 3G, or third-generation wireless systems, GPS and WiFi, not flaws within the app software itself. However, software developers could sidestep the problem by modifying apps to minimize the effect of tails," Hu said. "Any time you use the 3G network, there will be a tail after the usage," Hu said. "The ad module in Angry Birds obviously uses 3G for network uploading and downloading, while the game itself did not, which is why we blame the ad module for the tail." The ultimate goal is to develop an "energy debugger" that automatically pinpoints flaws in software and fixes them without the intervention of a human software developer, Hu said. The researchers have developed a tool known as "Eprof" that maps how much energy comes from each software component, giving researchers or developers a way to see smartphone energy consumption without using a power meter, an expensive and cumbersome piece of laboratory equipment. "We've seen around 1 million apps written since smartphones emerged roughly five years ago, but there has been no systematic way for the developer to see how much energy the different components consume. Using this tool, you can see what should be changed to improve energy efficiency," the researchers said. The power problem is palpable when it comes to smartphones. "Despite the incredible market penetration of smartphones and exponential growth of app market, their utility has been and will remain severely limited by the battery life. Today, energy is the single most important factor plaguing smartphones. 
Modern smartphones come with faster processors, the latest sensors, incredible screen resolutions and faster network connectivity; together these factors give the smartphone the ability to consume energy at a much faster rate than the ability to produce or store it, i.e., the battery capacity. For example, CPU performance over the last 15 years has grown by 246 times, while battery energy density has only doubled during the same period," wrote Abhinav Pathak, a Purdue doctoral student who was part of the research team.
Almost every SaaS (software as a service) offering is based on REST or XML Web services. In this post, I'd like to provide a brief introduction to some typical threats and security countermeasures to protect Web services.

Malicious attacks on the message

The beauty of HTTP Web services is that traffic flows through port 80 and port 443 and uses a human-readable format (XML or JSON). This is also the key vulnerability. A typical IT/system administration approach that relies on protecting Web service providers with a firewall/IPS setup is not very effective. We will explain why.

Firewalls do a good job of port monitoring and of recognizing brute-force attacks, but they are not good at viewing the content of messages in order to detect and prevent more sophisticated security compromises. While most firewalls can recognize SOAP as well-formed HTTP traffic, they cannot inspect the actual content of the SOAP message or JSON data. Web service interfaces are much more complex than Web site interfaces, which exchange HTML pages and forms. Web service interfaces are like software APIs and expose database functionality. In addition, an attacker has more information available to them: the message is often self-describing and clearly shows the data elements. A Web service provider is a juicy, self-describing target.

Similar to denial of service, replay attacks involve copying valid messages and repeatedly sending them to a service. Similar techniques for detecting and handling denial of service can be applied to replay attacks. In some ways, replay attacks are easier to detect with Web services because payload information is more readily available. With the right tools, patterns can be detected more easily even if the same or a similar payload is being sent across multiple mediums like HTTP, HTTPS, SMTP, etc. (A minimal detection sketch appears at the end of this post.)

An attacker can send a parameter that is longer than the program can handle, causing the service to crash or the system to execute undesired code supplied by the attacker. A typical method of attack is to send an overly long request, for instance a password with many more characters than expected. Similar to buffer overflow attacks, hackers often send malformed content to produce a similar effect: sending in strings such as quotes, open parentheses and wildcards can often confuse a Web service interface.

Dictionary attacks are also common, where a hacker may either manually or programmatically guess passwords to gain entry into the system. Administrators should ensure that passwords are difficult to guess and are changed often.

Intrusion detection of attacks by malicious outsiders

Proactively securing all of the possible misuses of Web services is almost impossible. Security policies and strict access control management should help reduce the occurrence of intrusion. An IPS will detect anomalous attack behavior and, if monitored, may help the security team mitigate the threat.

Extrusion detection of attacks by trusted insiders

Attackers are usually thought to be outside of the organization. However, most security breaches occur from within the organization. With Web services, more functionality is available to more people. Access to confidential information or embezzlement of funds are just some of the possible internal security breaches that can be performed by employees or former employees. Because employees are the most familiar with internal systems, detection can be made extremely difficult. Unintentional compromises are also possible.
If an interface is unsecured, an employee may accidentally access information that they are not intended to view. Since firewalls are insufficient against data breaches, a DLP (data loss prevention) system such as Fidelis XPS or Websense DLP is required. Once a security breach is detected, being able to shut down systems and reject traffic from specific sources is important for handling a compromise. A DLP system provides real-time detection, forensics recording and the ability to drop traffic from specific IP source addresses in order to properly mitigate the threat.

Cross-posted from Israeli Software
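The replay-attack discussion above lends itself to a small illustration. The following is a minimal Python sketch, under stated assumptions, of one common countermeasure: caching a per-message nonce with its timestamp and rejecting duplicates. The class and field names are hypothetical; a real deployment would also authenticate messages and share the nonce cache across service instances.

```python
import time

class ReplayGuard:
    """Reject messages whose nonce has already been seen within the window.

    A minimal in-memory sketch; production services would persist nonces
    in a shared store and verify a signature over (nonce, timestamp, body).
    """

    def __init__(self, max_age_seconds=300):
        self.max_age = max_age_seconds
        self.seen = {}  # nonce -> time first seen

    def is_replay(self, nonce, timestamp):
        now = time.time()
        # Expire cache entries older than the acceptance window.
        self.seen = {n: t for n, t in self.seen.items()
                     if now - t < self.max_age}
        # Reject messages too old to be validated against the cache.
        if now - timestamp > self.max_age:
            return True
        # Reject any nonce already delivered once.
        if nonce in self.seen:
            return True
        self.seen[nonce] = now
        return False

guard = ReplayGuard()
print(guard.is_replay("abc123", time.time()))  # False: first delivery
print(guard.is_replay("abc123", time.time()))  # True: replayed message
```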
Over the last weeks, I have found myself in several rather intense discussions about "cloud testing": what it is, what it isn't, and what it means for testing and QA professionals. The major source of confusion in these discussions usually revolves around the definition of cloud itself; if you try to look up cloud computing on the Internet, you will find it hard to get a formal definition. Wikipedia says it outright: "Cloud computing is a jargon term without a commonly accepted non-ambiguous scientific or technical definition." Duh!

Since the primary goal of this article is to talk about cloud applications and testing thereof, I've tried to distill a number of common traits for "cloud applications" based on the above and similar articles (let me know if you disagree!). Common to "cloud applications" is that they:

- Run on a virtualized hardware and software stack that can be moved and replicated between physical machines as needed.
- Share common physical resources with other cloud applications (disk, network, data stores, etc.).
- Are built to be highly scalable in real time, meaning that they can handle high increases in load by dynamically scaling to more physical resources as needed.
- Are predominantly accessed using standard network protocols.
- Use HTML and other web technologies for providing both their front-end and management UIs.
- Provide APIs for integration and management, possibly made available to users or third-party application vendors.
- Consume third-party APIs for providing common services and functionality: things like authentication (OpenID, Facebook), storage (DropBox, Google Drive), messaging (Twitter, Gmail), geo-functionality (Google Maps), etc.
- Tend to use NoSQL data stores, primarily for managing large amounts of unstructured data.

The "cloud" itself comes down to being the infrastructure that hosts a "cloud application"; it is usually either public (Amazon, Rackspace, etc.), private, or a combination of the two, and can offer many different levels of service (IaaS, PaaS, SaaS, etc.).

So, given these basic characteristics, what should testers be thinking of when tasked with testing a "cloud application", or, more likely, a web application or API that is running "in the cloud"? Are there any specific traits of a cloud application that mandate extra consideration, as opposed to if the application were deployed on an old and dusty server in the corner of your office (running Windows 2000?).

My immediate answer to this question used to be "No." A web application or API needs to be tested in the same way no matter how it is deployed. It still has to work and perform as required, and testing is no different for different deployment scenarios. Or so I thought. Thanks to my colleagues' persistence, I have started to open up to a more nuanced answer; perhaps there are a number of aspects that a tester needs to give special consideration to when tasked with testing a "cloud application", many of which are related to the infrastructural nature of the cloud. For example:

- Performance: Applications running in a cloud run on hardware that you might not have any control over, and that they share with other applications. Therefore, ensuring both performance and required scalability is extremely important. Be sure to test performance in a cloud environment similar to the one you will be using in production.
If you know that your application shares resources with other applications under your control, run load tests on both at the same time to see if they affect each other. In production, be sure to use monitoring as a means to continuously validate both performance and functionality, ensuring that the application scales as required. Using cloud resources to scale under load can be costly, so knowing where that breakpoint is, and monitoring to see how close you are to it, can also help you budget correctly for your infrastructure needs.

- Security: Since cloud applications usually share resources and infrastructure with others, you have to give extra consideration to data privacy and access control issues. Is sensitive data encrypted when stored? Are access control mechanisms in place in all possible situations and at all levels? This is just as valid for applications hosted in a private cloud; data intrusion and "theft" could even happen "by accident" if, for example, a backup for one cloud application happens to access resources or data related to another application.

- Third-party dependencies: Cloud applications are likely to consume external APIs and services for providing some of their functionality. You should consider testing and monitoring these as if they were part of your own solution, since that's what they are from your users' point of view. You want to make sure they work as you need them to, and you want to be the first to know when they don't. (A minimal monitoring sketch appears at the end of this post.)

One common interpretation of "cloud testing" that many vendors seem to adhere to is using the cloud to run or manage the tests themselves. For example, testers can use the cloud to generate massive distributed load tests, simulate a large number of mobile devices, or run functional and performance monitors from all over the world. Although these are all extremely valuable offerings in themselves, they are not very specific to testing cloud applications. So, calling it "cloud testing" is kind of a stretch in some situations.

So, did this initial insight help me in understanding what "cloud testing" is? Well, to be honest, not really. Although I do agree that there are things to be aware of when testing an application in the cloud, I still struggle with "cloud testing" being something separate from "regular" performance, integration or security testing. All of these just need some special consideration and understanding when applied to an application running in the cloud.

I'm not sure if this has cleared up or created similar confusion regarding "cloud testing" for you, but please don't hesitate to share your view on what "cloud testing" means for you and your organization. Did you have the same dizzies as I did, or is it just me being overly technocratic?
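To make the third-party-dependency point concrete, here is a minimal sketch of the kind of external-API monitor described above, using only the Python standard library. The endpoint URLs are placeholders for whatever services your application actually depends on; a production monitor would add alerting, retries and scheduled execution.

```python
import time
import urllib.request

# Hypothetical endpoints; substitute the third-party APIs your
# application actually consumes.
ENDPOINTS = [
    "https://api.example-auth.com/health",
    "https://api.example-storage.com/health",
]

def check(url, timeout=5):
    """Return (ok, latency_seconds) for a single endpoint."""
    start = time.time()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200, time.time() - start
    except Exception:
        # Connection errors, timeouts and HTTP errors all count as failures.
        return False, time.time() - start

for url in ENDPOINTS:
    ok, latency = check(url)
    status = "OK" if ok else "FAILED"
    print(f"{status:6} {latency:5.2f}s {url}")
```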
6.2.2 What is the NSA?

NSA is the National Security Agency, a highly secretive agency of the U.S. government created by Harry S. Truman in 1952. The NSA's very existence was kept secret for many years. For a history of the NSA, see Bamford [Bam82]. The NSA has a mandate to listen to and decode all foreign communications of interest to the security of the United States. It has also used its power in various ways to slow the spread of publicly available cryptography, in order to prevent national enemies from employing encryption methods that are presumably too strong for the NSA to break.

As the premier cryptographic government agency, the NSA has huge financial and computer resources and employs a host of cryptographers. Developments in cryptography achieved at the NSA are not made public; this secrecy has led to many rumors about the NSA's ability to break popular cryptosystems like DES (see Section 3.2), as well as rumors that the NSA has secretly placed weaknesses, called "trapdoors," in government-endorsed cryptosystems. These rumors have never been proved or disproved. Also, the criteria used by the NSA in selecting cryptography standards have never been made public.

Recent advances in the computer and telecommunications industries have placed NSA actions under unprecedented scrutiny, and the agency has become the target of heavy criticism for hindering U.S. industries that wish to use or sell strong cryptographic tools. The two main reasons for this increased criticism are the collapse of the Soviet Union and the development and spread of commercially available public-key cryptographic tools. Under pressure, the NSA may be forced to change its policies.

The NSA's charter limits its activities to foreign intelligence. However, the NSA is concerned with the development of commercial cryptography, since the availability of strong encryption tools through commercial channels could impede the NSA's mission of decoding international communications. In other words, the NSA is worried that strong commercial cryptography may fall into the wrong hands. The NSA has stated that it has no objection to the use of secure cryptography by U.S. industry. It also has no objection to cryptographic tools used for authentication, as opposed to privacy. However, the NSA is widely viewed to be following policies that have the practical effect of limiting and/or weakening the cryptographic tools used by law-abiding U.S. citizens and corporations; see Barlow [Bar92] for a discussion of NSA's effect on commercial cryptography.

The NSA exerts influence over commercial cryptography in several ways. The NSA serves as an advisor to the Bureau of Export Administration (BXA) at the Commerce Department, which is the front-line agency on export determination. In the past, BXA generally has not approved export of products used for encryption unless the key size is strictly limited. It did, however, approve export of any products used for authentication purposes only, no matter how large the key size, as long as the product cannot be easily converted to be used for encryption. Today the situation is different, with dramatically relaxed export restrictions. The NSA has also blocked encryption methods from being published or patented, citing a national security threat; see [Lan88] for a discussion of this practice. Additionally, the NSA serves an "advisory" role to NIST in the evaluation and selection of official U.S. government computer security standards.
In this capacity, it has played a prominent and controversial role in the selection of DES and in the development of the group of standards known as the Capstone project. The NSA can also exert market pressure on U.S. companies to produce (or refrain from producing) cryptographic goods, since the NSA itself is often a large customer of these companies. Examples of NSA-supported goods include Fortezza (see Question 6.2.6), the Defense Messaging System (DMS), and MISSI, the Multilevel Information System Security Initiative. Cryptography is in the public eye as never before and has become the subject of national public debate. The status of cryptography, and the NSA's role in it, will probably continue to change over the next few years.
In brief, cloud storage infrastructures are the hardware and software (or both) that are either assembled or delivered turnkey to physically store data, either for an organization building a private cloud storage system or for a provider that offers public cloud storage services. These are not onramp-type systems that help get data to cloud storage; they are the services and systems that provide storage in the cloud. These systems should have some key capabilities that we will look at over the next few entries.

First, there should be a component that can be either fully or partially installed in your data center. In most cases, when we think of private cloud storage we think exclusively of on-premises equipment. But as we discuss in our recent video "What is Hybrid Cloud Storage?," some public cloud providers are now able to extend the public cloud into the data center, which in my mind now qualifies them to be an infrastructure provider. In either case, on-premises deployment is designed to reduce latency and maintain some sense of control.

Second, cloud infrastructures have to scale. When we talk about scaling the cloud infrastructure, we typically focus on how much capacity the storage environment can handle with a single management point. These systems all have to be able to scale to multiple petabytes, and all the data should be accessible from a single mount point. I purposely don't use the term "file system" since many of these systems are object based. Essentially, they should have a single container that can scale to multiple petabytes of storage. The goal for a cloud storage infrastructure product is to eliminate storage system sprawl as much as possible.

Capacity isn't the only issue when it comes to scaling, and some of those issues might force an organization to implement multiple systems. One is a limit on the number of files or objects that can be stored. This could be a problem encountered early on by service providers that hope to have thousands, if not millions, of users all storing various types of data. There is also a performance aspect when it comes to the size of a cloud storage infrastructure. If the cloud storage infrastructure is being used to support a repository function like archive, file sharing, or backup, then modest but consistent performance is needed as capacity is added. These are the systems that our project is focusing on. There is also a class of cloud storage infrastructure that is going to be used to support cloud computing applications, and in that case performance demands will outweigh capacity demands. That issue will probably be a separate project for us in the future.

Having an on-premises capability and the ability to scale are just the initial capabilities to look for in a cloud storage architecture. Another is the ability to curtail costs, which is something that we will look at in the next entry in this series.
The rise of virtualization technology, coupled with the economic downturn of the late 2000s, has resulted in a tremendous surge in the use of "the cloud" (Software, Platform, or Hardware as a Service) to reduce costs and increase business agility. However, this also means increased risk, as cloud service providers are often handling sensitive data on our behalf. Complicating the issue is the rise of regulations governing the data we are pushing to the cloud. Sarbanes-Oxley (financial), 47 state PII regulations (personally identifiable information), HIPAA (medical), and PCI (credit card) have dramatically increased our responsibility to ensure that third parties handle our data in a manner consistent with our security requirements.

This has resulted in a tremendous burden for cloud service providers to be able to "prove" they are secure and compliant, and for us, the consumers of cloud services, to make certain that they are. Interestingly, perhaps even conveniently, both problems share the same answer. Relief is spelled: I-S-O-2-7-0-0-1.

What is ISO-27001?

Simply put, ISO-27001 is an internationally recognized standard that makes it easy to know you are secure and to be able to prove it. It defines a systematic approach to managing information security risk, often referred to as an Information Security Management System (ISMS).

The ISO-27001 "story" began in 1987, when Ronald Reagan was President, CompuServe was king, and HTML was still a gleam in Tim Berners-Lee's eye. At that time the British government had the foresight to realize that the growth of digital information and its flow across networks and systems posed a newfound and significant risk. In order to address this risk they developed BS-7799, "a code of good security practice" (actually a collection of 127 good security practices), to define the "controls" necessary to keep critical government information secure. By 1995, with the internet driving new risk, BS-7799 had evolved to be the de facto guidance on information security. At that time it was formally adopted by the International Standards Organization as ISO-17799 (now referred to as ISO-27002).

The only challenge with ISO-17799/27002 was that it was a "code of practice", not a "standard", so it wasn't possible for an organization to be sure it had leveraged it optimally, or for an auditor to formally opine with a traditional pass or fail verdict. That challenge was solved by the development of BS-7799-2, which spelled out what an organization needed to do to best leverage the code of practice and what an auditor needed to do to validate that the organization was compliant with the standard. In 2005 BS-7799-2 became ISO-27001, and the world's first internationally recognized information security standard was born.

An unexpected realization of the development of BS-7799-2 / ISO-27001 is that the ISMS itself is of far greater (and more fundamental) importance than the code of practice itself. As Stephen Covey often says: "If the ladder is not leaning against the right wall, every step we take just gets us to the wrong place faster."

ISO-27001 for the Service Provider

No matter the industry (e.g., debt collection, eDiscovery, hosting) or service offering (e.g., managed services, Software as a Service, hardware), organizations processing data on behalf of their clients are experiencing the pain of proving they are secure and compliant with client standards and/or the myriad of regulations to which their clients are obligated.
Their challenge is exacerbated by their market success, as each new client has "their" security/regulatory requirements and means of assessing the same. This results in the "successful" service provider enduring dozens of penetration tests, control questionnaires, on-site client audits, and/or an independent SAS-70 (now SSAE-16). Several of our clients have small teams dedicated to addressing these "attestation" requirements year-round, a costly and time-consuming process.

The logical response to these disparate demands is to "simplify": prove you are secure to all of your clients with a single standard, ISO-27001. Once you have developed your Information Security Management System (ISMS), you undergo a "certification audit" performed by an ISO-validated registrar, who issues a certificate demonstrating that you are compliant with the standard. At that point, proving you are secure and compliant becomes as simple as providing a copy of your certificate.

Sound promising? It is. That's why worldwide organizations like Salesforce, Microsoft, and Amazon have chosen ISO-27001 to demonstrate they are secure to the clients that entrust critical data to them.

ISO-27001 for Everyone Else (Two Sides of the Same Coin)

Consumers of cloud services also feel the "pain" associated with cloud usage: How do they verify that they themselves are keeping their data secure? How do they prove the same to key stakeholders? How do they know that the third-party service providers they are leveraging are keeping their data secure? These issues are especially relevant in situations where organizations are processing Personally Identifiable Information (PII) and the cost of a third-party breach may be measured in millions of dollars.

ISO-27001 can be leveraged in two distinct ways by the "non-service provider."

Vendor Risk Management Simplified

Managing vendor risk is a problem for many:

- Determining and formally documenting the risk controls required to ensure the security of your data at a third party can be a challenging task.
- Communicating these requirements to (and adapting them for) each third party in a non-ambiguous way is even more challenging.
- Ensuring that the requirements remain up to date each time a new threat, vulnerability, or regulation emerges is virtually impossible.

ISO-27001 simplifies vendor risk management. Rather than detailing 100+ controls (across hundreds of contract pages), your ISO-27001-focused organization only needs to communicate a handful of key risks. As long as the third party incorporates these as an input into their ISMS (remember, ISO-27001 is a risk-based approach), you can be confident that your risks are being appropriately managed.

Information Security Simplified

As data becomes increasingly mobile, network borders become fuzzier, third-party handling of your data becomes more prevalent, and regulatory requirements multiply, the process of managing internal and external information security risk becomes even more challenging. These "worries" are exacerbated by the need to provide assurance to key organizational "shareholders" (e.g., CXO, audit committee, board) that these risks are under control. Therefore, the idea of leveraging a "cookbook" that has been vetted by tens of thousands of other organizations over a 15-year period is an appealing one. Better yet, this approach aligns with your existing enterprise risk management principles, and it's relatively straightforward to execute; thus, security becomes "simplified."

Looking for Information Security Relief?
If you face the challenges of proving that you and/or your key service providers are keeping your data secure and complying with key laws and regulations, join the nearly 7,500 certified companies that have chosen to spell relief: I-S-O-2-7-0-0-1.

Notes:
Tim Berners-Lee invented the World Wide Web in 1989. He wrote the first web client and server in 1990. His specifications of URIs, HTTP and HTML were refined as Web technology spread.
Stephen Richards Covey is the author of the best-selling book "The Seven Habits of Highly Effective People."
The average cost of a corporate data breach reached $7.2 million in 2010, up from $6.8 million in 2009, according to the 2010 Annual Study: U.S. Cost of a Data Breach conducted by the Ponemon Institute.
Notepad++ is a common tool used when editing files on a site. One of the most useful functions within the tool is its ability to search for strings, and even search and replace within an entire file structure. When doing this, users may find the need to search for wildcard strings: strings that contain some static information plus more information that is specific to each instance. Many users may not be aware of the wildcard function, as it is not outlined within Notepad++.

- Open Notepad++
- Select Search on the top bar, and then select Find... (for single files) or Find in Files... (for multiple files)
- In the Find What: box, place the static part of the string, the part that does not change per instance
- Immediately following the beginning of the string, place the wildcard .*
- Enter the end of the string you wish to search for immediately after the wildcard. Your Find What: box will look something like beginningofstring.*endofstring, where .* is the wildcard
- Make sure that Regular expression is checked at the bottom, then press Find All
- If your files contain the information you are searching for, it will be displayed in Notepad++

An equivalent wildcard search, expressed in code, is sketched below.
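For readers who want the same search outside Notepad++, here is a minimal Python sketch of the equivalent regular-expression match across a directory tree. The pattern shown is a placeholder; substitute your own beginning and ending strings around the .* wildcard, which matches any run of characters just as it does in Notepad++'s Regular expression mode.

```python
import re
from pathlib import Path

# Placeholder pattern: a fixed beginning, anything, a fixed ending.
pattern = re.compile(r"config_.*\.bak")

# Walk every .txt file under the current directory, like "Find in Files...".
for path in Path(".").rglob("*.txt"):
    text = path.read_text(errors="ignore")
    for lineno, line in enumerate(text.splitlines(), 1):
        if pattern.search(line):
            print(f"{path}:{lineno}: {line.strip()}")
```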
Scrolling and panning If content on a screen extends beyond the viewing area, scroll arrows appear. Users can scroll by moving a finger on the trackpad, or, on BlackBerry devices with a touch screen, users can scroll by dragging a finger vertically across the touch screen. When users scroll, the movement of the content is proportional to the input rate of the motion. The quicker the motion, the faster the content on the screen moves. If a highlighted item moves out of view, the highlighted item moves back into view as scrolling decelerates. Users can pan a picture, map, or web page by moving a finger on the trackpad, or, on BlackBerry devices with a touch screen, users can use a finger to drag the content on the screen. The content moves in the same direction and at the same rate that users move their finger.
Both approaches achieve their effect by denying access to:
1. Certain classes of instructions, and
2. Certain segments of memory and sensitive files.

Instructions are usually divided into two classes: user instructions and privileged instructions. User instructions are those that are not privileged. Instructions can be labeled as privileged for a number of reasons.

Confusion: Instructions such as input/output instructions can cause difficulties if executed directly by the user. Consider output to a shared print device.

Security: Instructions such as memory management instructions can cause severe security problems if executed by the user: I can directly read and corrupt your program memory.

Rings of Protection

The simple security models in a computer call for rings of protection. The protection rings offered by the Pentium 4 architecture (IA-32) are fairly typical. Attempts to read data at higher (less protected) rings are permitted. Attempts to read data at lower (more protected) rings are not permitted and cause traps to the operating system.

The PSW (Program Status Word)

Often called the PSL (Program Status Longword), as with the VAX-11/780, the PSW is not really a word or longword, but a collection of bits associated with the program under execution. Some bits reflect the status of the program:

N - the last arithmetic result was negative
Z - the last arithmetic result was zero
V - the last arithmetic result caused an overflow
C - the last arithmetic result had a "carry out"

The security-relevant parts of the PSW relate to the protection ring that is appropriate for the program execution. The VAX-11/780 and the Pentium 4 each offered four protection rings. The ring number was encoded in a two-bit field in the PSW. The VAX stored the rings for both the current program and the previous program in the PSW. This allowed a program running at kernel level (level 00) to determine the privilege level of the program that issued the trap for its services.

Commercialization of the Protection Rings

Early computers had operating systems tailored to the specific architecture. Examples of this are the IBM 360 and OS/360, and the VAX-11/780 and VMS. More modern operating systems, such as UNIX, are designed to run on many hardware platforms, and so use the "lowest common denominator" of protection rings. (A figure here in the original shows the IA-32 protection rings as intended and as normally implemented.) The problem is that any program that executes with more than user privileges must have access to all system resources. This is a security vulnerability.

SPOOLING: A Context for Discussing O/S Services

The term "SPOOL" stands for "System Peripheral Operation On-Line". Direct access to a shared output device can cause chaos. The spooling approach calls for all output to go to temporary files. The print manager has sole control of the shared printer and prints the files in the order in which they are closed.

Privileges: User vs. SPOOL

A user program must be able to create a file and write data to that file. It can read files in that user's directory, but usually not in other users' directories. It cannot access the shared printer directly.

The Print Manager must be able to:
1. Read a temporary file in any user's directory and delete that file when done.
2. Access a printer directly and output directly to that device.
It should not be able to create new files in any directory.

The Print Manager cannot be run with Level 3 (Application) privilege, as that would disallow direct access to the printer and read access to the users' temporary files.
Under current designs, the Print Manager must be run with "superuser privileges", which include the ability to create and delete user accounts, manage memory, etc. This violates the principle of least privilege, which states that an executing program should be given no more privileges than necessary to do its job. We need at least four fully implemented rings of privilege, as well as specific role restrictions within a privilege level.

Memory paging divides the address space into a number of equal-sized units called pages. The page sizes are fixed for convenience of addressing. Memory segmentation divides the program's address space into logical segments, into which logically related units are placed. As examples, we conventionally have code segments, data segments, stack segments, constant pool segments, etc. Each segment has a unique logical name. All accesses to data in a segment must be through a <name, offset> pair that explicitly references the segment name. For addressing convenience, segments are usually constrained to contain an integral number of memory pages, so that the more efficient paging can be used.

Memory segmentation facilitates the use of security techniques for protection. All data requiring a given level of protection can be grouped into a single segment, with protection flags specific to giving that exact level of protection. All code requiring protection can be placed into a code segment and also protected. It is not likely that a given segment will contain both code and data. For this reason, we may have a number of distinct segments with identical protection.

Segmentation and Its Support for Security

The segmentation scheme used for the MULTICS operating system is typical. Each segment has a number of pages, as indicated by the page table associated with the segment. The segment can have a number of associated security descriptors. Modern operating systems treat a segment as one of a number of general objects, each with its Access Control List (ACL) that specifies which processes can access it.

More on Memory Protection

There are two general protections that must be provided for memory. Protection against unauthorized access by software can be provided by a segmentation scheme, similar to that described above. Each memory access must go through both the segment table and its associated page table in order to generate the physical memory address.

Direct Memory Access (DMA) provides another threat to memory. In DMA, an input device (such as a disk controller) can access memory directly without the intervention of the CPU. It can issue physical addresses to the MAR and write directly to the MBR. This allows for efficient input/output operations. Unfortunately, a corrupted device controller can write directly to memory not associated with its application. We must protect memory against unauthorized DMA. Some recent proposals for secure systems provide for a "NoDMA table" that can be used to limit DMA access to specific areas of physical memory.

Securing Input and Output

Suppose that we have secured computing. How can we ensure that our input and output are secure against attacks such as key logging and screen scraping? Suppose that we want to input an "A". We press the Shift key and then the "A" key. The keyboard sends four scan codes to the keyboard handler, operating in user mode. These might be 0x36, 0x1E, 0x9E, 0xB6: pressing the Shift key, then pressing the "A" key, then releasing the "A" key, then releasing the Shift key.
This sequence is then translated to the ASCII code 0x41, which is sent to the O/S. Either the scan codes or the ASCII code can be intercepted by a key logger.

When the program outputs data, it is sent to the display buffer. The display buffer represents the bit maps to be displayed on the screen. While it does not directly contain ASCII data (just its "pictures"), it does contain an image that can be copied and interpreted. This is called "screen scraping".

Protecting the Code under Execution

We can wrap the CPU in many layers of security so that it correctly executes the code. How do we assure ourselves that the code being executed is the code that we want? More specifically, how do we ensure that the code being executed is what we think it is and has not been maliciously altered?

One method to validate the code prior to execution is called a cryptographic hash. One common hash algorithm is called "SHA-1", for "Secure Hash Algorithm 1". This takes the code to be executed, represented as a string of 8-bit bytes, and produces a 20-byte (160-bit) output associated with the input. The hardware can have another mechanism that stores what the 20-byte hash should be. The hardware loads the object code, computes its SHA-1 hash, and then compares it to the stored value. If the two values match, the code is accepted as valid. (A code sketch of this check appears at the end of these notes.)

What Is a Cryptographic Hash?

First, we begin with the definition of a hash function. It is a many-to-one function that produces a short binary number that characterizes a longer string of bytes. Consider the two characters "A" and "C", with ASCII codes 0100 0001 and 0100 0011. One hash function would be the parity of each 8-bit number: 0 and 1 (even and odd). Another would be the exclusive OR of the sequence of 8-bit bytes:

A 0100 0001
C 0100 0011
⊕ 0000 0010

The hash function must be easy to compute for any given input. A cryptographic hash function has a number of additional properties:

1. A change of any single bit in the input being processed changes the output in a very noticeable way. For the 160-bit SHA-1, it changes about 80 of the bits.
2. While it is easy to compute the SHA-1 hash for a given input, it is computationally infeasible to produce another input with the identical 20-byte hash.

Thus, if a code image has the correct hash output, it is extremely probable that it is the correct code image and not some counterfeit.

Precise Definition of Virtual Memory

Virtual memory is a mechanism for translating logical addresses (as issued by an executing program) into actual physical memory addresses. (This is not the common definition of "virtual machine" as used by authors of books on computer architecture.) This definition alone provides a great advantage to an operating system, which can then allocate processes to distinct physical memory locations according to some optimization. Although this is a precise definition, virtual memory has always been implemented by pairing a fast DRAM main memory with a bigger, slower "backing store". Originally, this was magnetic drum memory, but it soon became magnetic disk memory. The invention of time-sharing operating systems introduced another variant of VM, now part of the common definition: a program and its data could be "swapped out" to the disk to allow another program to run, and then "swapped in" later to resume.

Spoolers can be written with security rules to prevent disclosure of sensitive data.
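As a concrete illustration of the hash check described above, here is a minimal Python sketch that computes the SHA-1 digest of a code image and compares it to a stored value. The file name and expected digest are placeholders; real systems keep the reference hash in tamper-resistant hardware rather than in an ordinary variable, and since practical SHA-1 collisions have since been demonstrated, modern designs prefer SHA-256.

```python
import hashlib

def sha1_of_file(path):
    """Compute the 20-byte SHA-1 digest of a code image, read as raw bytes."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        # Hash in chunks so large images do not need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical stored reference value (this one is the SHA-1 of an
# empty input, used purely as a placeholder).
EXPECTED_SHA1 = "da39a3ee5e6b4b0d3255bfef95601890afd80709"

digest = sha1_of_file("program.bin")  # hypothetical code image
if digest == EXPECTED_SHA1:
    print("code image accepted")
else:
    print("hash mismatch -- refusing to execute")
```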
Air Pollution Control

Under a joint project with schools in the local community, Hanwha Techwin is implementing measures to continuously enhance the working environment at its production sites and reduce emissions of air pollutants by, for instance, improving the hoods used for ammonia gas released during plating processes and installing dual air cleaning units.

Previously, chemical storage tanks at our production sites had exposed vents that were placed indoors. During the feed of chemicals, acid mist and other pollutants were released into the atmosphere, undermining the safety of the workplace. Chemical storage tanks at Hanwha Techwin now have a dual structure, and chemical fumes are released through a scrubber so as to reduce human exposure to toxic gases and more efficiently eliminate air pollutants. As a result, the concentration of air pollutant emissions has been significantly lowered.

Noise produced by machine rooms and air vent lines is a serious form of pollution affecting not only workers onsite but also those at neighboring workplaces. At Hanwha Techwin, we have effectively cut down the level of noise by installing silencers at six areas, including machine rooms and air vent lines located outside job sites.
Securing America's future in science, technology, engineering and math fields requires more than expanding opportunities for women. Promoting interest and opportunities for minorities also should be a national imperative, particularly as more than half of children born in the United States today are of minority descent.

That was the topic of a symposium at the National Academy of Sciences on Tuesday that sought to find solutions for providing minorities and women with proven pathways for obtaining good jobs and a higher standard of living through STEM education. The event, hosted by the Leadership Conference on Civil and Human Rights, highlighted that now, 60 years after the landmark Supreme Court decision in Brown v. Board of Education, education in the United States remains separate and unequal for many minorities, children with disabilities and those living in high-poverty areas. STEM is one area that has great potential to reverse that trend and help the United States maintain a competitive edge, experts noted.

"The era of pick-and-shovel jobs is gone," said Wade Henderson, president and chief executive officer of the Leadership Conference on Civil and Human Rights. "Those who would support themselves in the 21st century need a high school diploma and more -- career training, an associates degree, or ideally, a four-year college degree."

The symposium explored the need to pique girls' and minority children's interest in science and math; the importance of expanding access to Advanced Placement courses and broadband access; and the need for more technology-competent teachers. The Obama administration's five-year strategic plan for STEM education shows that only 2.2 percent of Hispanics and Latinos, 2.7 percent of African Americans and 3.3 percent of Native Americans and Alaska Natives have earned a first university degree in natural sciences or engineering by age 24. Women represent less than 20 percent of bachelor's degree recipients in areas like computer science and engineering, and hold less than 25 percent of STEM jobs.

"We have only 21 percent of students in high school STEM programs who are girls, and we know girls are about half of the kids in high school," said Catherine Lhamon, assistant secretary of the Education Department's Office for Civil Rights. "We are not serving our girls period in STEM."

David Johns, executive director of the White House Initiative on Educational Excellence for African Americans, spoke about programs like My Brother's Keeper that aim to unlock opportunities for boys and young men of color. Lhamon urged symposium participants as well as government agencies, policymakers, educators and the public to visit OCRdata.gov, the Education Department's civil rights data collection website, to analyze student equity and opportunity. She also stressed the need for continued funding for programs like Race to the Top, which provides competitive grants to states willing to innovate and reform K-12 education, in helping open up opportunities in STEM to minority students and women.

"We should be asking questions about whether disparities present in the data warrant further action and warrant changes at our schools and districts in our state," Lhamon said. "We should be doing better than offering calculus to a few said students; we should be doing better than offering physics to a few said students. We need to be sure we have access to teachers that are prepared for them and schools that are prepared for them.
… We should be one joined community in demanding civil rights for all of our kids.”
Steve Wallach, a supercomputing legend and recipient of the 2008 IEEE Seymour Cray Award, has participated in all 22 supercomputing shows. He is known for his contributions to high performance computing through the design of innovative vector and parallel computing systems. He is co-founder and chief science officer for Convey Computer Corp., a new company with a hybrid-core computer that marries the low cost and simple programming model of a commodity system with the performance of a customized hardware architecture. Never short on opinions, especially when it comes to high performance computing, Steve Wallach talked to HPCwire about the future of HPC and how lessons from the past can point the way for the future.

HPCwire: There's been a lot of talk about how recent architecture advancements will bring GPU computing into the mainstream for high performance computing, with significant speedups and energy savings. You disagree. Why?

Steve Wallach: GPUs are an interesting technology and some applications will probably see significant speed-up, but I don't see them in the mainstream. Here's why: programmers will have to put in a lot of effort to get the speed-up. Real-world applications consist of millions of lines of code, and organizations have invested too much money in those programs. If you tell them they have to modify those programs to use your technology, you lose. And it's not just the software that has to be changed; it is the entire programming eco-structure: debuggers, profilers, and programming memory. Anything that disturbs those underlying realities is destined to become a niche player.

This is the biggest difference between an accelerator and a coprocessor. A coprocessor is an extension to the instruction set and is part of the same environment. GPUs are not. A GPU consists of two different programming environments, and you have to move the data back and forth between them to get the benefits. The host cannot see the memory of the GPU; there are two separate address spaces. It's similar to what we saw with attached array processors in the 80s. What we saw back then was that you had to explicitly move and manage the data, which reduced programmer productivity, raised the actual cost of ownership and ultimately reduced performance. Like back then, the GPU programming model is different from that of its host.

GPUs initially did not have ECC-correctable memory; now they do. This, however, demonstrates their lack of general-purpose computing requirements. You have to work hard to make it work, and not every application is amenable. The memory structure of a GPU is meant to be optimal for sequential access, but many programs require non-unity stride, which will reduce performance for those applications. Classical supercomputers from Cray, Convex, NEC, and Fujitsu had very high bandwidth, highly interleaved main memory.

A GPU is not going to be a general-purpose or a widespread solution, for technical and software reasons. You can only execute the "hot spot" on the GPU, for example, and you still need a classical host like the x86. It is not an integrated system. And, as of now, GPUs do not support virtual memory. The GPU is really just a contemporary version of an attached array processor.

If you look at the last 30 years, the architectures that have succeeded in the long term have been the ones that are easiest to program and that fit into the current environment. New languages take time to be learned and adopted. Organizations can't hire the right people to program the machines.
Each new full-time-equivalent programmer who has to be hired can easily add $200,000 to $300,000 to the costs of the new system per year. This is not a new phenomenon; it has been true for the past few decades. The time to reconfigure is really expensive.

HPCwire: You've said that "software is the 'Trojan Horse' of high-performance computing." What do you mean by that?

Wallach: As an organization, you accept the hardware, the horse, and then the next day the software warriors pour out and devour your IT department. As technology enthusiasts, we get excited by new technologies based on peak-performance micro-architecture, and the software questions come later, as well as questions about "how do I fit it into my environment?" and "will I be able to achieve this level of performance with my applications?" This has been true for the last 30 years and will be true for the next 30 years.

If you go back to the 80s, you had all kinds of interesting technologies like array processors and others, but the ones that had the best software succeeded, such as Convex, Cray and Alliant. They succeeded because the programmer could leverage the technology for their FORTRAN and C environments. Integrated solutions like these succeeded, and companies like CDC failed because their software was part of an anemic development environment. As another example from the past, the Japanese (Fujitsu and NEC) had exceptional software environments.

Fast forward to today. It's like déjà vu all over again. A lot of new technologies are evolving but are not dealing with the software environment. Previous FPGA vendors had this problem: they were not integrated with the host environment. Vector processors, such as ClearSpeed, have this problem, and this is true of all accelerators and GPUs. The GPUs have some great technologies for visualization, for example, but are not integrated. You have to learn how to program in new languages like CUDA, and there aren't a lot of major applications written in CUDA. Programmers have to re-code or set up source translators that facilitate FORTRAN to CUDA, and from a technical perspective it is much more efficient to go from FORTRAN to assembly code. Source-to-source translators are NOT as efficient as compilation to assembly code.

HPCwire: You talk about Convey's hybrid-core computer as being an application-specific, low-power node. What is the significance of this description to the market?

Wallach: In the past decade, every generation has added new, specific instructions to general-purpose computers to speed performance. For example, the current x86 system enhances image processing, and new instructions have been developed to enhance vector processing. Since clock cycles are basically flat, you will see the trend toward specific instructions built into the microprocessors increasing. If one instruction can replace 10 instructions, you will have reduced power for that application. Our view is that it is now time to step up and increase the functionality of this approach. We advocate having one instruction replace 100 instructions. Now you don't have to rely on clock cycles to increase performance. You are relying instead on data and control paths. This approach is extremely useful for Convey and allows us to significantly increase performance while reducing power requirements, footprint and overall facility costs for a data center.
HPCwire: In order to be successful, do you think new computing paradigms need to leverage existing eco-structures like Linux and Windows?

Wallach: Absolutely. As I said before, new languages mean higher costs and lower productivity. In VC deals, whenever I hear that you have to program in a new language to make it work, I turn it down.

With new computing paradigms, you get several benefits when they leverage existing eco-structures like Linux and Windows. First off, they are more easily acceptable in the marketplace. If I'm the data center manager, I don't have to hire anyone new or run training for a new eco-structure. No need to program in OCCAM, for example. I call programs that don't take legacy systems into consideration and that are obscenely difficult to integrate "pornographic" programs: you can't always describe them exactly, but you know them when you see them. In 1984, I converted a FORTRAN program from CDC to ANSI FORTRAN to see what they were doing, and it was awful. In the contemporary world, CUDA is the new pornographic programming language.

In addition, Windows and Linux allow for the adoption of related technologies from other industries without changing the programming environment. Industry innovators such as the researchers at Lawrence Berkeley National Laboratory believe, for example, that future supercomputers will use the processors found in cell phones and other hand-held devices. Why? Because they use so little energy and have proven that they can handle sophisticated tasks (October 2009, IEEE Spectrum: "Low-Power Supercomputers"). It is easy for the manufacturers to build chips designed for specific HPC applications, just like they build different chips for each smartphone brand. Chip manufacturers will also provide the software (compilers, debuggers, profiling tools, even complete Linux operating systems) tailored to each specific chip they sell, which will make the new systems easy to integrate into a current environment.

HPCwire: Last year in HPCwire you said the future of HPC involves improved software, in particular more widespread use of PGAS languages and optical interconnects. Is this still the case?

Wallach: Yes. I believe the need for optical interconnects increases as we build larger systems. The efficiency of scaling in parallel processing has to do with bandwidth and latency. Optical interconnects are much more efficient in terms of speed and power compared to copper. PGAS (partitioned global address space) languages allow programmers a global view of their dataset and are much more efficient. PGAS languages also make it much easier to program highly parallel systems; they are much better than MPI.

HPCwire: Speaking of software, where is Convey on its development of different software personalities?

Wallach: We are on track with our development of personalities. Convey's personalities are application architectures and instruction sets that support a wide array of application-specific solutions. Rather than develop hundreds of unique applications, we are creating a manageable number of personalities that can be leveraged in hundreds of different ways. We've shipped a range of different personalities for different customers, and we've got several others in development. In the end, we anticipate developing around a dozen different core personalities. This is consistent with what leading researchers have determined, also.
For example, in the study published by the University of California at Berkeley, “The Landscape of Parallel Computing Research: A View from Berkeley,” researchers define what they call MOTIFs, or computer application structures for HPC. They describe 13 of these structures on the Y axis, with the X axis representing a particular application and how it uses each structure. Berkeley’s view is consistent with ours: there are approximately a dozen different personalities that cover the full spectrum of computing.

In our development, we add a third element to the equation — the memory system — and see this as a three-dimensional grid. Depending on the case, optimal performance requires unit-stride memory (accessing sequential elements — dense data); highly interleaved memory (accessing non-sequential elements across multiple independently accessible memory banks — sparse data); or “smart” memory (PIM, which performs specific operations in the memory system — thread-based). A small sketch contrasting the dense and sparse access patterns follows this interview. We are on track to have personalities whose memory structures and instruction sets match these MOTIFs, which is where we believe computing is going. For the HC-1, we ultimately anticipate 13 MOTIFs, though some will share the same personality.

HPCwire: Convey has just started shipping production units. Can you tell us about the company’s early customers and how they’re using the HC-1?

Wallach: Early applications for the HC-1 follow the classic profile of HPC applications: signal and image processing, computer simulations, bioinformatics, and other applications we can’t discuss at this time. We have HC-1s going into the world’s leading research labs, all of which we will talk about during SC09 at our booth.

You can catch up with Steve Wallach during SC09, where he is participating in a talk on “HPC Architectures: Future Technologies and Systems” from 1:30-2:00 p.m. on Thursday (Rm. E143-144); or at Convey’s booth (#2589).
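To make the dense/sparse distinction above concrete, here is a small illustrative sketch (ours, not Convey's) contrasting unit-stride access with a gather over random indices. The sparse, gathered pattern is the one that benefits from highly interleaved, independently accessible memory banks; the "smart memory" case would push the reduction into the memory system itself.

```python
import time
import numpy as np

rng = np.random.default_rng(42)
n = 10_000_000
data = rng.standard_normal(n)

# Unit stride (dense): consecutive addresses -- the pattern that
# caches, prefetchers and vector units handle best.
t0 = time.perf_counter()
s_dense = data.sum()
t_dense = time.perf_counter() - t0

# Gather (sparse): random indices defeat prefetching; hardware with
# many independently accessible memory banks handles this far better
# than a conventional cache hierarchy does.
idx = rng.integers(0, n, size=n)
t0 = time.perf_counter()
s_sparse = data[idx].sum()
t_sparse = time.perf_counter() - t0

print(f"unit stride: {t_dense:.3f}s   gather: {t_sparse:.3f}s")
```

On commodity hardware the gather typically runs several times slower than the unit-stride sum, even though both touch the same number of elements; that gap is exactly what the memory-system axis of the personality grid is meant to address.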
DNS tunneling — the ability to encode the data of other programs or protocols in DNS queries and responses — has been a concern since the late 1990s. If you don't follow DNS closely, however, DNS tunneling likely isn't an issue you would be familiar with. Originally, DNS tunneling was designed simply to bypass the captive portals of Wi-Fi providers, but as with many things on the Web it can be used for nefarious purposes. For many organizations, tunneling isn't even a known suspect, and is therefore a significant security risk. When most organizations think of DNS security, they tend to overlook the possibility of critical data or systems being compromised by covert outbound DNS traffic inside their networks. To add some perspective, over the past several years there have been at least two large-scale security breaches using tunneling, affecting millions of accounts.

In part one of this blog, we will provide some background on DNS tunneling and discuss how it can be used to infiltrate your internal infrastructure. In part two, we will discuss how to detect whether your organization is affected by DNS tunneling, along with some proactive tips you can use to better protect your network.

What is DNS tunneling?

Through DNS tunneling, an organization's DNS can be used as a channel for command and control and/or data exfiltration. The basic method of tunneling requires that a client be compromised in some way. This follows the norm: malware via email attachment, a compromised site, social engineering, etc. While all of those delivery methods typically require the compromised client to have external connectivity at the time of infection, interestingly, the compromised machine doesn't need direct external connectivity afterward. It simply requires access to an internal DNS server that can resolve external names, which enables the machine to send and receive DNS responses. In addition to compromising the target organization, the attacker must also control a domain and a server that can act as the authoritative server for that domain, in order to run the server-side tunneling and decoding programs.

Hackers use a variety of DNS tunneling utilities, as well as several known malware families that use DNS as their communication channel. While each utility varies in the specifics of how it works, they all transmit data encoded in the payload using Base32, Base64, binary, NetBIOS, or hex encoding. Hackers also use a wide variety of DNS record types, from A records to CNAME, MX and TXT records, all of which can be combined with EDNS to increase the payload. (TXT records are the most common because they offer the largest and most flexible payload structure.) A minimal sketch of how data can be smuggled into ordinary-looking query names appears at the end of this post.

How can DNS tunneling be used in a network breach?

Because DNS is rarely monitored and analyzed, hackers are able to use DNS tunneling to slip under the radar — until something else draws attention to the breach. Here is the usual sequence of activities:

- A client inside the network is compromised (via attachment, compromised site, social engineering, and so on).
- The tunneling client encodes data, or requests for instructions, into DNS queries for the attacker's domain.
- The internal DNS server, doing its job, forwards those queries out to the attacker's authoritative name server.
- The attacker's server decodes the queries and returns commands or acknowledgements inside ordinary-looking DNS responses.

Join us next week for part two in this series. Learn steps you can take to identify whether your network has been compromised and some tips for preventing a breach.

By Chris Beauregard, Senior Professional Services Engineer at Neustar
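To make the encoding mechanics described above concrete, here is a minimal illustrative sketch (not taken from any real tunneling utility) of how a chunk of data can be hidden in the hostname labels of a DNS query. The domain name, the sequence-number prefix and the choice of Base32 are assumptions chosen for readability; the real tools mentioned above vary in these details.

```python
import base64

# Hypothetical attacker-controlled domain; its authoritative name
# server would run the decoder. This name is illustrative only.
TUNNEL_DOMAIN = "t.example.com"

# DNS limits: each label may be at most 63 bytes and a full name at
# most 253 bytes, so payloads must be chunked across many queries.
MAX_LABEL = 63

def encode_chunk(data: bytes, seq: int) -> str:
    """Pack one chunk of data into a DNS query name.

    Base32 is used because DNS names are case-insensitive and limited
    to letters, digits and hyphens. The sequence number lets the
    server reassemble chunks in order.
    """
    b32 = base64.b32encode(data).decode("ascii").rstrip("=").lower()
    labels = [b32[i:i + MAX_LABEL] for i in range(0, len(b32), MAX_LABEL)]
    return ".".join([f"s{seq}"] + labels + [TUNNEL_DOMAIN])

def decode_query(qname: str) -> tuple[int, bytes]:
    """Reverse of encode_chunk, as the authoritative server would run it."""
    labels = qname.split(".")
    seq = int(labels[0][1:])              # strip the 's' prefix
    b32 = "".join(labels[1:-3]).upper()   # drop seq label + 3 domain labels
    b32 += "=" * (-len(b32) % 8)          # restore Base32 padding
    return seq, base64.b32decode(b32)

if __name__ == "__main__":
    secret = b"user=admin;pass=hunter2"
    qname = encode_chunk(secret, seq=0)
    print(qname)                # looks like an ordinary (if long) lookup
    print(decode_query(qname))  # (0, b'user=admin;pass=hunter2')
```

On the wire, each such query looks like a routine lookup for a subdomain, which is exactly why tunneling evades casual inspection; only unusual name lengths, high-entropy labels and query volume give it away, which is the subject of part two.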
Availability: How many 9s are enough?

By: Bob Landstrom

In this post we'll discuss the term "availability" as it applies to the data centre world and give some perspective on the notion of the number of "nines."

Availability vs. Reliability

Let's first talk about this term, "availability," and how it is different from the better understood term, "reliability."

Availability is a probability (though a conditional one) that the system delivers the required service at the time it's called upon to do so. Availability considers that when the system fails, it can be repaired and restored for service. That is, it includes the time to repair, and this is a critical aspect of availability. Availability is often expressed in terms of some number of nines: "three 9s," for example, is the same as saying "99.9% availability."

This is different from reliability, which is simply the probability that the system will perform its function for a given period of time, under certain conditions. Reliability is often expressed in terms of Mean Time Between Failures (MTBF) or failure rate. Unlike availability, reliability is not a conditional probability, and does not include maintenance or repair.

Availability in Real Life

Let's use an example of 99.9% (three 9s) availability to see what availability could mean in more tangible terms. 99.9% availability means, for example:

- 44 minutes of unsafe drinking water per month
- 3 crash-landings per week at Heathrow
- 3,000 letters lost by the Postal Service every hour
- 2,000 surgical mistakes in the NHS every week
- 9,000 incorrect banking debits per hour
- 36,000 missed heartbeats per year (9 hours)

These are all different scenarios, and all unacceptable, but perhaps surprisingly, all pertain to the same availability value.

Let's look at this from a slightly different angle. The table below shows the amount of time annually that a system is down (or unavailable), given the number of nines of availability that system has. We see that even a 5-nines system has, on average, over five minutes of downtime annually. (A short worked example of this arithmetic follows at the end of this post.)

| % Availability | Amount of time unavailable, annually |
|----------------|--------------------------------------|
| 99%            | 3.65 days                            |
| 99.9%          | 8.76 hours                           |
| 99.99%         | 52.6 minutes                         |
| 99.999%        | 5.26 minutes                         |

It's important to remember that the true availability performance of a data centre is not based solely upon engineering and certifications; it also requires superior operational processes and discipline. A skilled and well-trained data centre operations team, mature MOPs and SOPs, disciplined security processes, and superior engineering combine to ensure superior availability performance.

In today's always-on digital world, downtime translates into lost revenue. Hypothetical tier ratings are interesting, but a demonstrated track record of strong availability performance, along with evidence of mature and disciplined data centre operations, is important for minimizing risk to your business.
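As a quick check on the table above: annual downtime is simply the unavailable fraction multiplied by the minutes in a year, and steady-state availability of a repairable system can be estimated from MTBF and time to repair (MTTR). The sketch below, purely illustrative, shows both calculations.

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def annual_downtime_minutes(availability_pct: float) -> float:
    """Expected unavailable minutes per year for a given availability %."""
    return (1 - availability_pct / 100) * MINUTES_PER_YEAR

def availability_from_mtbf(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability = MTBF / (MTBF + MTTR), as a percentage.

    Unlike reliability, this explicitly accounts for the time to
    repair, which is why availability and MTBF tell different stories.
    """
    return 100 * mtbf_hours / (mtbf_hours + mttr_hours)

if __name__ == "__main__":
    for nines in (99.0, 99.9, 99.99, 99.999):
        print(f"{nines}% -> {annual_downtime_minutes(nines):,.1f} min/year")
    # e.g. a system failing every 2,000 hours and taking 2 hours to repair:
    print(f"MTBF 2000h, MTTR 2h -> {availability_from_mtbf(2000, 2):.3f}%")
```

Running this reproduces the table: 99.9% gives 525.6 minutes (8.76 hours) per year, and 99.999% gives 5.3 minutes, matching the "over five minutes" figure above. The MTBF example also shows why repair time matters: halving MTTR improves availability as much as doubling MTBF does.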