Simoes K., Federal University of Goiás | Magosso R.F., Centro Universitário de Rio Preto (UNIRP) | Lagoeiro C.G., Centro Universitário de Rio Preto (UNIRP) | Castellan V.T., Centro Universitário de Rio Preto (UNIRP) | and 6 more authors. Revista Brasileira de Medicina do Esporte | Year: 2014

Introduction: Free radicals produced during exercise may exceed the antioxidant defense system, causing oxidative damage to specific biomolecules. The damage that free radicals cause in cells can be prevented or reduced by natural antioxidants, which are found in many foods. Lycopene is one of the most potent carotenoids with antioxidant properties; it is used to prevent carcinogenesis and atherogenesis because it protects molecules such as lipids, low-density lipoproteins (LDL), proteins and DNA.

Objective: To investigate the role of lycopene as a potential protector of cardiac and skeletal muscle fibers against the oxidative stress of strenuous exercise, which would otherwise cause morphological changes in these tissues.

Methods: The experiments used 32 adult male rats divided into four groups: two control groups and two trained groups, with and without lycopene supplementation (6 mg per animal). The animals of the trained groups were subjected to 42 swimming sessions over a nine-week period (daily sessions, five days a week), with overload produced by increasing the training time. The morphological analysis was performed on histological slides of cardiac and skeletal muscle tissue.

Results: Modifications were observed in the cardiac and skeletal muscle tissue of the trained group that did not receive lycopene supplementation, while the trained group supplemented with lycopene showed muscle tissue with a normal morphological appearance. The tissues of both the supplemented and non-supplemented sedentary control groups showed no change in their histological characteristics.

Conclusion: Lycopene exerted a protective effect on cardiac and skeletal muscles against oxidative stress induced by strenuous exercise, besides promoting cardiac neovascularization, and can be used efficiently by athletes and physically active individuals.
Source: https://www.linknovate.com/affiliation/centro-universitario-of-rio-preto-da-university-unirp-2399013/all/
The latest tools from Apple have the potential to drive the educational digital shift and transform classroom practices. But with any advancement in education technology come concerns about misuse or the propagation of poor teaching habits. When technology like Apple's Classroom app can improve the way teachers teach and students learn, these concerns should be mitigated rather than used as an excuse not to implement the technology. Here are a few benefits to share with the skeptics out there, benefits that also give teachers more freedom.

Apple's Classroom app comes with the ability to view a student's screen while in the classroom. This functionality allows teachers to be mobile while still being able to check in on students' progress. Untethering teachers from their desk, whiteboard, or podium enables them to meet students' learning needs: teachers are free to move about the room, working one-on-one or with small groups of students.

With screen view, the possibilities are endless. In addition to increased mobility, the Classroom app comes with a variety of features that promote positive and effective teaching practices:

- Real-time checks for understanding. Acting as a student response system, the app lets teachers see student progress, notes, or answers to questions displayed on their iPads in real time. Instead of waiting until test day, teachers can check for understanding multiple times throughout a lesson to ensure students are on track.
- Academic achievement for every student. Seeing the progress of individual students through screen view helps teachers recognize which students are progressing adequately and which may need more assistance. Being able to identify earlier in a lesson which students may fall behind increases the likelihood that they'll get the help they need.
- More student-to-student engagement. With the ability to AirPlay screens, teachers who observe students' screens can recognize opportunities where student work can be spontaneously shared with the class, setting students up to be better lifelong contributors and collaborators.
- Fewer interruptions and less conflict when students get off task. If a teacher suspects a student is off task, they can quickly and unobtrusively check in and pause the student's screen if necessary, reducing the escalation, frustration, and conflict that could arise from a negative encounter. This streamlined interaction ultimately leads to more active learning and a better experience for students and teachers.

Fear, uncertainty, and doubt should not drive decisions. Identify concerns and risks and put plans in place to mitigate them, especially when tools enable a more engaged environment that supports student learning. While these tools bring a change to the classroom environment, they support a teacher's need to check for understanding and ensure students are progressing as expected, all while minimizing interruptions and distractions. While the fear of new technology or of the unknown may come into play, schools and teachers should consider how the benefits of a more engaged environment for students far outweigh any uncertainty.
Source: https://www.jamf.com/blog/dont-fear-the-screen-how-screen-sharing-benefits-teachers/
Have you ever done a Google news search on the term "water shortages"? If you haven't, you may be in for a rude awakening. Based on the number of stories, it seems as if there is always news about a municipality, county or entire region of the country dealing with an emergency-level water shortfall. And it stems from all sorts of causes: not merely the well-documented drought conditions in the West and Southwest, but also frigid temperatures brought on by the influx of polar vortexes, as well as prolonged hot and dry cycles. The bottom line is that readily available potable water is not something we can take for granted anymore.

So what, you may ask, does M2M/IoT have to do with water shortages? While M2M/IoT may not be able to change weather patterns or solve acute, unexpected events such as water main breaks that lead to an outage, there are some very specific ways it can play a role, and already is playing one, in everyday water management and conservation. In short, this is a post about doing the little things; the things that are ideally suited for M2M/IoT.

The first is in making sure the water assets we have stay clean. To do this, solid waste management and wastewater treatment facilities must maintain constant guard to ensure there are no adverse events in which contaminated water leaches from these facilities into groundwater supplies, causing a widespread environmental and health hazard. The real challenge here is that the times when overflows are most likely to occur coincide eerily with the times when mechanical systems are most likely to fail (i.e., during severe weather events). Every storm is, quite literally, the perfect storm for things to go haywire.

It's in these extreme conditions that KORE customer OmniSite is at its best. The OmniSite pump station and landfill runoff systems are trusted by more than 1,300 government, municipal and private waste processing organizations across the US and Canada. By providing cellular-based M2M monitoring and control for these high-priority systems, OmniSite acts as a reliable (and coffer-conscious, btw) mechanism to keep personnel in the loop about all kinds of functional data, such as pump run time, pump cycles, drawdown times and inflow. It also monitors each pump's power current and is able to take control of the pump automatically if the main controller fails. Like the sophisticated computers that keep airplanes in the air and on course, M2M has become the called-upon technology to ensure dirty water stays well away from clean water in our highly engineered waste treatment infrastructure.

But water is for more than just drinking; we need it for another critical need – food. And here's where another KORE customer shines. PureSense deals in extremely precise water management for the farming industry. Sensors in the field gather ongoing data about a host of factors such as leaf wetness, soil dampness, ambient air, plant response and expected rainfall, then feed that data into the irrigation management system. From there, the system delivers the correct amount, and only the correct amount, of water directly to the roots of crops to optimize each plant's production throughout its growth cycle. PureSense is used by more than 1,400 farms and, on average, these users report a 16 percent decrease in water consumption. But here's the kicker: even with that decrease in water use, they see a 20 percent increase in crop yield.

Clearly, irrigation is not merely about making sure plants "have enough" water; there is much more precision to it. Overall, PureSense estimates it has saved about 87 billion gallons of water from California growers alone. That's enough to supply an average suburban California community with water for an entire year!

Soon, however, we may see M2M's long-standing prowess in asset security come into play, in this case to prevent the "theft" of scarce water resources. Feel free to file this under news of the bizarre, but it appears California authorities are having to deal with "water bandits" who have been tapping unmetered fire hydrants to get around having to pay for their water. This is actually an understandable by-product of the extended drought, since water commands such a high premium in those areas; oh, the lengths to which humans will go when the laws of supply and demand get stretched to their limits. It would be a relatively simple process to retrofit fire hydrants in "high crime" regions with M2M-enabled tamper alerts. Such an application in fact demonstrates M2M's extreme versatility for connecting highly dispersed assets to a central control center, even when those assets were not built with such controls in mind. We may be calling upon the California water district managers shortly with the suggestion!

It is easy to get excited by the futuristic implications of M2M and the Internet of Things, but it is just as important to keep in mind that right now, the technology is out there working to safeguard the most basic of human resources. It's all about the little things.
Source: http://www.koretelematics.com/blog/can-m2m/iot-safeguard-the-worlds-most-sacred-resource-water
In a passive optical network (PON), the fiber optic splitter is a key component: it divides the power of the light across multiple fibers, allowing a single PON interface to be shared among many subscribers. When choosing a splitter, we often feel puzzled by some similar concepts. Today, we are going to unravel the mystery of the fiber optic splitter through three groups of concept comparisons.

There are two splitters most commonly deployed in the market: the FBT (Fused Biconical Taper) splitter and the PLC (Planar Lightwave Circuit) splitter. The FBT splitter is made out of materials that are easily available, for example steel, fiber and heat-shrink materials. All of these materials are low-priced, which keeps the cost of the device itself low, and the manufacturing technology is relatively simple, which lowers the price further. PLC splitter manufacturing technology is more complex: it uses semiconductor processes (lithography, etching and developing), so it is more difficult to manufacture and the device is more expensive. Although the cost of a PLC splitter is higher than that of an FBT splitter, the PLC splitter is more reliable. Other differences between FBT and PLC splitters are shown in the following table:

| Comparison | FBT Splitter | PLC Splitter |
|---|---|---|
| Operating wavelength | 1310 nm, 1550 nm, 850 nm | Full wavelength range (1260-1650 nm) |
| Inputs/outputs | One or two inputs, with an output maximum of 32 fibers | One or two inputs, with an output maximum of 64 fibers |
| Split ratio | Customisable; special types such as 1:3, 1:7 and 1:11 are available | Non-customisable; only standard versions such as 1:2, 1:4, 1:8 and so on |
| Size | Much bigger; cannot easily fit in every cabinet | Much smaller; easily fits in a cabinet and saves space |
| Attenuation per branch | Customised (asymmetric) attenuation split possible | Attenuation split evenly |

A uniform power splitter with a 1xN or 2xN splitting-ratio configuration is most commonly deployed in a PON system, so that the optical input power is distributed uniformly across all output ports. Splitters with non-uniform power distribution are also available, but they are usually custom made and command a premium, so they are not commonly used. Here, the letter "N" refers to the number of output ports. It is because of this 1xN or 2xN configuration that the splitter makes it possible to deploy a point-to-multipoint (P2MP) physical fiber network with a single OLT port serving multiple ONTs.

A 2xN splitter has one more input than a 1xN splitter. So when should it be used? Some operators prefer to include a certain level of redundancy in their network to ensure service even when a fiber is accidentally cut. Because the metro network is often constructed in a ring configuration, it makes sense to connect both ends of the fiber to the inputs of the splitter. In that case, the second splitter input leg will still be accessible from the other side if the connection to the first input leg fails. Thus, in general, 1:N splitters are usually deployed in networks with a star configuration, while 2:N splitters are usually deployed in networks with a ring configuration to provide physical network redundancy.

Splitters can be deployed in a centralized splitting or a cascaded splitting (also called distributed splitting) configuration, depending on the desired network topology. A centralized split typically entails 1×16 and 1×32 split ratios. It provides the best optical budget in a single access point.
The more centralized the split, the higher the port aggregation, which makes network testing, troubleshooting and maintenance easier and more efficient in one location. It can also improve the utilization of splitter output ports.

A cascaded split is possible in 1:4 and 1:8, or 1:2 and 1:16, split combinations. A cascaded split offers the advantage of a lower fiber count in smaller access points, which enables PON port economization in sparser zones. It also reduces the cost of horizontal optical fiber, and the optical fiber enclosures are much smaller thanks to the lower port count. Thus, this arrangement is not only optimal in rural areas due to the low fiber count, but can also be the preferred solution in cities due to the smaller size of the network elements. In fact, neither choice is absolute: the optimal solution also depends on the expected take rate, since centralized splitters make more efficient use of the splitter ports when the take rate is low.

Fiber optic splitters provide capabilities that help users maximize the functionality of optical network circuits. You can choose an FBT or PLC splitter according to your budget and expected performance; a 1xN or 2xN configuration depending on your network architecture; and, depending on customer distribution, a centralized or cascaded split for deployment. In addition, splitters in different package types, such as bare fiber, LGX box and ABS box, are available on the market for different application environments. Fiberstore offers a variety of fiber optic splitters, among which there will always be a right one for you. Visit FS.COM or send an e-mail to email@example.com to get more information.
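As a rough, back-of-the-envelope illustration of the optical-budget arithmetic behind the centralized-versus-cascaded choice above, here is a small Python sketch. The excess-loss, PON-budget and fiber-loss figures are assumed typical values chosen for illustration; they are not taken from this article:

```python
import math

def splitter_loss_db(n_outputs, excess_db=1.0):
    """Ideal 1xN split loss plus an assumed per-device excess loss."""
    return 10 * math.log10(n_outputs) + excess_db

# Centralized 1x32 split vs. a cascaded 1x4 followed by a 1x8:
centralized = splitter_loss_db(32)
cascaded = splitter_loss_db(4) + splitter_loss_db(8)  # two devices, two excess losses

budget_db = 28.0                      # assumed OLT-to-ONT optical budget
fiber_km, loss_per_km = 10.0, 0.35    # assumed 1310 nm single-mode fiber loss

for name, split in [("centralized 1x32", centralized), ("cascaded 1x4 + 1x8", cascaded)]:
    margin = budget_db - split - fiber_km * loss_per_km
    print(f"{name}: split loss {split:.1f} dB, remaining margin {margin:.1f} dB")
```

Under these assumptions, the comparison shows the main budget penalty of cascading: each additional splitter stage adds its own excess loss on top of the ideal 10·log10(N) split loss.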
Source: http://www.fs.com/blog/three-groups-of-concept-comparisons-help-you-better-understand-fiber-optic-splitters.html
You know those pesky but necessary CAPTCHA boxes whose squiggly letters and digits you need to retype to make use of certain parts of sites such as Yahoo, Wikipedia and PayPal? A computer scientist from Carnegie Mellon is looking to replace many of those boxes with anti-spam boxes of his own, for the purpose of helping to digitize and make searchable the text from books and other printed materials. To boot, the system could help companies better secure their Web sites.

The idea is somewhat along the lines of projects like the famous SETI@Home grid supercomputer project for detecting signs of extraterrestrial life from deep space. Organizers of SETI@Home convinced computer users all over the world to allow their computers' CPU cycles to be used to process information for the ET hunt when the systems weren't otherwise being used.

In the case of Luis von Ahn's project, though, he and his team are convincing organizations to replace the CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) security boxes on their Web sites with what the assistant professor of computer science calls reCAPTCHA boxes. Instead of requiring visitors to retype random numbers and letters, they would retype text that is otherwise difficult for optical character recognition systems to decipher when digitizing books and other printed materials. The transcribed text would then go toward the digitization of the printed material on behalf of the Internet Archive project.

"I think it's a brilliant idea — using the Internet to correct OCR mistakes," said Brewster Kahle, director of the Internet Archive, in a statement. "This is an example of why having open collections in the public domain is important. People are working together to build a good, open system."

Von Ahn says it is estimated that people solve 60 million-plus CAPTCHAs a day, amounting to 150,000 or more man-hours of work that can be put to use for the digitization effort. His team is working with Intel to offer a Web-based service enabling webmasters to adopt reCAPTCHAs to secure their sites. An audio version, which could be used by blind Web users, is in the works for transcribing radio programs.
Source: http://www.networkworld.com/article/2347361/data-center/you-might-be-digitzing-books-on-the-web-without-knowing-it-thanks-to-this-stealthy-anti-.html
The fiber optic modem (or FOM) connects an electronic device such as a computer to a network or the Internet; it provides electrical-to-optical conversion of electronic communication and data signals for transmission over tactical fiber optic cable assemblies.

How Does a Fiber Optic Modem Work?

Fiber optic modems receive incoming optical signals over fiber optic cables and convert them back to their original electronic form for full-duplex transmission. Together with the tactical fiber optic cables, the FOM provides a rugged, secure and easily deployable optical link.

What is the maximum distance a fiber optic modem can go?

The maximum distance a modem can cover is the difference between the transmit power and the receiver sensitivity of the fiber optic modem, divided by the transmission loss of the fiber used. For example, a basic single-mode OSD815 digital video system's transmitter power is greater than -10 dBm and its receiver sensitivity is better than -29 dBm, so the difference of 19 dB at 1310 nm allows operation over at least 45 km. Note that designing to this limit would be very poor practice, because it leaves no allowance for a link margin.

What Is a Fiber Optic Modem Used For?

Fiber optic modems are often used in data communication systems to bridge long distances at high data rates. Fiber optic systems are particularly immune to electromagnetic interference and are therefore very suitable for harsh industrial environments. They can transmit data at up to 12 Mbit/s over distances of up to 80 km, depending on the fiber type. They range from simple devices with just a few ports to multiplexers capable of handling large-scale communication networks. For example, some networks use fiber optic cables for the server-to-building connection, while Cat 7 cable is used for the wiring within the building; to bridge these two types of cable you need a fiber optic modem.

What Are the Available Types of Fiber Optic Modems?

Fiber optic modems are ideal when working with large amounts of data, because fiber optics allows data to be transferred quickly and efficiently. They are available in single-mode and multimode models, and it's important to choose the best one for your needs.

The E1 fiber optic modem modulates a framed or unframed E1 data signal directly onto single-mode or multimode optical fiber for transmission over a fiber optic cable line. At the other end of the optical cable, the optical signal is demodulated back into a framed or unframed E1 data signal. The E1 interface may be connected directly to the E1 interfaces of image and data terminals, or to the WAN ports of a MUX, switch or router, for a dedicated network setup or a LAN connection.

The V.35 fiber optic modem converts V.35 electrical signals into an optical data stream for transport over single-mode or multimode fiber optic cables. At the opposite end of the fiber, the optical stream is converted back into electrical signals of the appropriate interface, such as G.703 and V.35. This fiber optic modem extends the transmission distance to more than 100 km. The frame format of the MOD-V.35 is framed/unframed E1, so it can be used to transmit framed or unframed E1 video signals.

RS232, RS485 and RS422 converters (RS means "recommended standard") follow standards introduced by the Electronic Industries Association to ensure compatibility of data transmission between equipment made by different manufacturers. The RS232 modem is for single-ended data transmission from one transmitter to one receiver at relatively slow data rates (up to 20 kbit/s) and short distances.
RS232 Ethernet converters are widely used; the RS232 ports they serve can be seen on common desktop computer cases. The RS422 converter follows the standard for longer transmission distances and higher baud rates than RS232. The RS485 standard meets the requirements for a truly multi-point communications network, specifying up to 32 drivers and 32 receivers on a single (2-wire) bus.
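To make the link-distance arithmetic described above concrete, here is a minimal Python sketch. The transmit power and receiver sensitivity are the OSD815 figures quoted earlier; the 0.35 dB/km fiber loss and the 3 dB margin are assumed typical values, not figures from this article:

```python
def max_link_distance_km(tx_power_dbm, rx_sensitivity_dbm,
                         fiber_loss_db_per_km, link_margin_db=3.0):
    """Distance at which received power still meets the sensitivity.

    The optical budget is transmit power minus receiver sensitivity;
    a link margin is reserved for aging, splices and repairs.
    """
    budget_db = tx_power_dbm - rx_sensitivity_dbm - link_margin_db
    return budget_db / fiber_loss_db_per_km

# OSD815 example figures: -10 dBm transmit, -29 dBm receiver sensitivity.
# 0.35 dB/km is an assumed typical single-mode loss at 1310 nm.
print(max_link_distance_km(-10, -29, 0.35, link_margin_db=0))  # ~54 km, no margin
print(max_link_distance_km(-10, -29, 0.35, link_margin_db=3))  # ~46 km with margin
```

With no margin, the 19 dB budget gives roughly 54 km; reserving a 3 dB margin brings the result down to about 46 km, in line with the article's conservative "at least 45 km".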
Source: http://www.fs.com/blog/things-you-should-know-about-fiber-optic-modem.html
No self-respecting Fibre Channel blog would be complete unless we included a detailed discussion of zoning and the best practices concerning it. This is precisely the topic of the final blog in the series: we will define what zoning is, the need for zoning, some potential strategies and the best practices surrounding zoning.

What is Zoning?

Fibre Channel zoning is a means to segregate a Fibre Channel switch in order to provide security and minimise interruption between devices. In practice, the concept is no more difficult than the simple Venn diagrams that you probably drew in primary/elementary school. If two objects are in the same zone, then they are able to communicate with one another; if they are not, then they are unable to communicate. This forms a basic level of security, managed at the Fibre Channel switch, segregating the devices that are allowed to communicate with one another. As we have seen in the previous blogs, another level of security is LUN masking, where the volume/LUN is mapped only to the initiator hosts that are allowed to communicate, managed at the storage array. The two methods are independent, and both are highly recommended in all Fibre Channel storage deployments.

A fabric can function and operate with little or no zoning implemented; however, as the SAN grows over time and devices are added, contention and interaction with fabric elements can cause issues (devices failing or intermittent problems, and in certain cases devices being inappropriately accessed). Zoning provides an effective method to maintain data integrity and heighten data security.

Why is Zoning Important?

As mentioned above, there are a number of reasons why zoning is implemented; management, security and segregation are all primary reasons why zones should always be used. For large enterprise SANs, however, there are also a number of technical reasons why the zoning policy should be considered.

Registered State Change Notification (RSCN)

All devices within a SAN need to be informed of changes to the environment (i.e., when nodes log in and out of the fabric). Such changes are referred to as topology changes, and each host requires notification of the nature of the change, as it may reflect access to a newly available device, or to a device that no longer exists. The process of communicating topology changes to each device is accomplished by the Registered State Change Notification (RSCN). Each time an event occurs on the SAN, an RSCN is issued, and each device must pause its processing, receive the RSCN, reply, and then continue. In busy fabrics, or fabrics with misbehaving nodes, processing the RSCNs can introduce an unwanted overhead. In some severe cases, this can cause a device to report a 'busy' status back to the originator of the RSCN, creating problems which then need to be managed or corrected, and potentially creating a cascading effect as nodes log in and out of the fabric in an attempt to recover.

In fabrics where no zoning is employed, any topology change causes RSCNs to be processed by every device within the fabric. In a zoned fabric, only devices that are in the same zone as the changed device receive the RSCN, limiting the impact of RSCNs and the exposure to intermittent failures and issues.

Device Discovery

During device discovery, a host will query the fabric as to which targets are available and which targets should be probed for available devices.
In an un-zoned fabric, this leads to longer discovery times as each target needs to be queried, resulting in increased SAN traffic (each host will poll each of the available targets, regardless of whether the target has available devices associated with that host). Zoning limits this impact, as only targets in the same zone as the initiator are probed, reducing SAN traffic and speeding device discovery at boot time.

Security

Zoning provides a logical method of dividing and restricting access between a number of nodes. By introducing zoning, security is increased, as nodes can only communicate with the defined set of nodes that are in the same zone.

Various Zoning Strategies

When considering appropriate zoning strategies, a number of elements should be considered: port versus worldwide name zoning, hard or soft zoning, zone granularity and effective management of the zones. Each of these considerations is explored further below.

Zones are easily defined. They are simply a collection of World Wide Names (WWNs) or port addresses that are grouped together to allow or revoke access between the collated devices. A collection of zones is called a zoneset. Each zoneset can contain many zones, which can be segregated from or overlapped with one another. The granularity with which the zones are applied depends on what is trying to be achieved; common strategies involve zoning by application, operating system or even business unit. Another common approach (especially in disk fabrics) is to have a separate zone for each initiator with its set of targets. There is obviously a trade-off between the benefits that zoning brings and the management of the zones.

It is recommended that the SAN administrator create a zone for each of the server's initiators. In addition to the initiator, the set of targets which that host accesses should be grouped into the same zone (i.e., one zone for each server HBA, containing all the disk devices it will communicate with); this is often referred to as single initiator zoning. This is recommended practice for Nimble devices (and the majority of storage vendors). The original post illustrated single initiator zones across two fabrics with a diagram, which is not preserved here.

Smaller, more granular zones are more difficult to manage; however, they limit the exposure to RSCNs and shorten device discovery. Methods for effectively managing large zone configurations are discussed later.

Port v WWPN Zoning

There are two types of zoning: WWPN zoning and port zoning. WWPN zoning uses the name server database located in the Fibre Channel switch. The name server database stores the port numbers and World Wide Port Names (WWPNs) used to identify devices during the zoning and login process. When a zone change is made, the devices in the database receive a Registered State Change Notification (RSCN). Each device must correctly address the RSCN to change the related communication paths. Any device that does not correctly address the RSCN, yet continues to transfer data to a specific device after a zoning change, will be blocked from communicating with its targeted device.

As its definition suggests, WWPN zoning consists simply of creating the zone using all the WWPNs that communicate with one another. In the original post's example (shown as a screenshot that is not preserved here), the Emulex WWPN is one of Node3's HBA ports, and each of the four Nimble Storage target WWPNs is grouped with it into the WWPNZone_Node3 zone. This is an example of a single initiator zone.
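Since the original screenshots of these zone examples are not preserved, here is a minimal Python sketch of the underlying logic: zones are just named sets of WWPNs, and two ports may communicate only if some zone in the active zoneset contains both. The WWPN values are hypothetical placeholders, not values from the original post:

```python
# Zones modeled as named sets of WWPNs; a zoneset is a collection of zones.
NODE3_HBA = "10:00:00:00:c9:00:00:03"            # Emulex initiator port (placeholder)
NIMBLE_TGTS = {                                  # four array target ports (placeholders)
    "56:c9:ce:90:00:00:00:01", "56:c9:ce:90:00:00:00:02",
    "56:c9:ce:90:00:00:00:03", "56:c9:ce:90:00:00:00:04",
}

# A single initiator zone: one host port plus all of its targets.
zoneset = {
    "WWPNZone_Node3": {NODE3_HBA} | NIMBLE_TGTS,
}

def can_communicate(wwpn_a: str, wwpn_b: str) -> bool:
    """True if any zone in the active zoneset contains both ports."""
    return any(wwpn_a in zone and wwpn_b in zone
               for zone in zoneset.values())

print(can_communicate(NODE3_HBA, "56:c9:ce:90:00:00:00:01"))  # True
print(can_communicate("10:00:00:00:c9:00:00:04", NODE3_HBA))  # False: unzoned port
```

Remember that in a dual-fabric deployment each fabric carries its own independent zoneset, so a check like the one above applies separately per fabric.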
Note: in many environments there are typically dual redundant fabrics, so a second set of switches with an independent zone configuration will need to be configured.

Port zoning requires each device to pass through the switch's route table so that the switch can regulate the data transfers. For example, if two ports are not authorized to communicate with each other, the route table for those ports is disabled and the communication between them is blocked. Port zoning does not require the WWPN to be specified; only the physical switch ports to which a host and its related devices attach are defined. In the original post's port zone example (a screenshot not preserved here), no WWPNs are defined at all; merely the physical switch ports that are allowed to communicate with one another are listed.

There are pros and cons to each method. As WWPN zoning is defined via the WWPN, each time an HBA is changed the zoning configuration needs to be updated to reflect the change. In addition, redeploying HBAs can sometimes allow unauthorised access to devices unless the zones are kept up to date. Port zoning requires that servers and devices are physically constrained to access the fabric only via the ports to which they are allocated. Port zoning also assumes that the datacentre is secure, as access to a switch port would give a host access to the associated devices.

Many large environments implement a port zoning strategy, as the fabric changes associated with HBA/device failures would otherwise cause additional management and exposure. In addition, port binding promotes a stable and consistent approach to allocating switch ports to the associated devices on the SAN. Clearly, there is a balance between SAN flexibility and the management overhead that flexibility brings; neither method is incorrect, and both can be intermixed if required.

With Nimble, there are a couple of considerations to think about when it comes to zoning:

What happens when a controller fails? This has no real impact: should a controller (or a target adaptor) fail, as soon as it is replaced, Nimble OS will assign the same WWPN/WWNN that was previously assigned. There is no impact on the zoning configuration, regardless of which type of zoning is used.

Planning an installation? Until the Nimble array is set up, you will not know what the Nimble's target WWPNs are. It may therefore be prudent to use port zoning, so that any zone changes can be made in isolation from the specific WWPNs. Note: this is likely to change in future releases of Nimble OS, where we expect the user to be able to set a specific WWPN for the port zones. This will be handy when zone changes are made in advance of implementing a new array.

Hard v Soft Zoning

Hard and soft zoning are often confused with port (hard) and WWPN (soft) zoning. They are in fact completely separate discussions. As mentioned above, port/WWPN zoning defines what is referenced when zones are created and enforced; hard and soft zoning describe how communication between ports within the switch is limited and enforced.

Soft zoning was the first method of zoning implemented by switch manufacturers. In basic terms, it works on the notion that "if the initiator can't see or know about the target, then how can it communicate with it?" It is completely analogous to having an ex-directory phone number: if the number isn't listed, then I can't communicate, because I don't know the number.
This approach is flawed, as hosts can access the devices by executing commands directed at an unlisted address; such access could happen by mistake or via hacking. There is physically nothing stopping the host from accessing the device once it knows the address. Returning to the telephone analogy: once I know the ex-directory number, nothing stops me from dialling it and having a conversation.

Hard zoning is a function latterly put into switch hardware to close the soft zoning security hole. Hard zoning physically blocks access to a zone from any device outside the zone; in the telephone analogy, it's the same as call barring. Hard zoning is often confused with port zoning, but they are fundamentally different concepts. Hard zoning is enforced by default on the majority of switch technology today.

Best Practices for Zone Management

The following section describes methods that can assist the storage administrator with zone management; a zoning strategy should include each of the following best practices.

Aliases provide an effective method of defining and associating WWPNs with a human-readable name. The use of aliases not only cuts down on administration, but also reduces the likelihood of errors being introduced by mistyped WWPNs. Aliases also allow groups of devices to be associated under one name. In addition, aliases can promote a naming standard which can be defined and adhered to, making administration more effective. Compare the WWPN zoning example above with the original post's alias-based example (a screenshot not preserved here): the alias uses a far more descriptive term, and if a standard is kept, it will assist with day-to-day management.

A standard naming convention eases administration and provides clarity when performing management tasks on a SAN. A consistent naming convention should be defined for each of the zone elements.

Many customers implement specific times or days when zones can be updated; that is really a decision for each organisation and its change management procedures. One recommendation is to give yourself a little 'air gap'. Administrators will often change the two fabrics at the same time, but the redundancy is there for a reason: change the first fabric, wait a little while, perform some checks, and only if all is good change the second fabric. A little air gap gives you time to spot an error before making the same mistake on your second, redundant fabric. It's a simple tip, but it is surprising how few people use it!

Most switches provide the ability to store several zone configurations at the same time (only one is ever in effect at any one time). As this is the case, a naming mechanism for configurations that includes a timestamp will allow quick and easy rollback to the last known good configuration should any errors be introduced.

Always define zones, members and aliases to the documented standards; adherence will vastly improve the readability and manageability of zone configurations.

Configurations can be uploaded and downloaded to FTP servers. Although zone information is propagated to each switch in the fabric, it is still good practice to download the configuration to a host after each change.

That's it! Hopefully, if you have been following the blog, you will now feel well versed in the Nimble Fibre Channel implementation. Of course, if there are any further questions, hints, tips and tricks, we'd really like you to comment or post a new discussion on Nimble:Connect!
We will post some additional hints and tips over the coming weeks!
Source: https://connect.nimblestorage.com/community/configuration-and-networking/blog/2014/11/24/nimble-fibre-channel-in-the-zone-a-discussion-around-best-practice-with-fibre-channel-zones
Definition: A tree where each node is split along all d dimensions, leading to 2^d children.

See also octree, quadtree complexity theorem, linear quadtree, BSP-tree.

Note: It was originally designed for 2-dimensional data (pictures), so each node had four children, hence quadtree. After [GG98].

Entry modified 16 November 2009.

Cite this as: Paul E. Black, "quadtree", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 16 November 2009. Available from: http://www.nist.gov/dads/HTML/quadtree.html
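To make the 2^d-children idea concrete, here is a minimal sketch of a region quadtree in Python (d = 2, so four children per node). The class and method names are illustrative, not from the dictionary entry:

```python
class QuadtreeNode:
    """Region quadtree over the square [x, x+size) x [y, y+size).

    Each internal node splits its square along both dimensions,
    giving 2**d = 4 children for d = 2 (an octree is the d = 3
    case, with eight children per node).
    """
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size
        self.children = None          # None until the node is subdivided
        self.points = []              # payload held at a leaf

    def subdivide(self):
        half = self.size / 2
        self.children = [
            QuadtreeNode(self.x,        self.y,        half),  # SW
            QuadtreeNode(self.x + half, self.y,        half),  # SE
            QuadtreeNode(self.x,        self.y + half, half),  # NW
            QuadtreeNode(self.x + half, self.y + half, half),  # NE
        ]

    def insert(self, px, py, capacity=4):
        if self.children is None:
            self.points.append((px, py))
            if len(self.points) > capacity:      # split when the leaf overflows
                self.subdivide()
                for p in self.points:
                    self._child_for(*p).insert(*p, capacity=capacity)
                self.points = []
        else:
            self._child_for(px, py).insert(px, py, capacity=capacity)

    def _child_for(self, px, py):
        half = self.size / 2
        index = (1 if px >= self.x + half else 0) + (2 if py >= self.y + half else 0)
        return self.children[index]
```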
Source: http://www.darkridge.com/~jpr5/mirror/dads/HTML/quadtree.html
PowerShell 2.0 embraces and extends an old idea in command-line administration: the pipeline. Understanding what can be done with this powerful feature is key to being effective in the bold new world of command-line administration of Windows systems.

The pipeline is not a new idea. Douglas McIlroy is a mathematician and programmer who noticed, in the early days of Unix, that the output of one command often became the input for the next. He put forward the idea of a "pipe" to send the output of one program directly to the input of another, skipping the step of saving that output in a file of some sort. In 1973, Unix co-creator Ken Thompson added the feature to Unix.

Pipeline Limits = Tool Development

One key limitation of the Unix pipeline (and of the CMD.EXE pipeline that eventually followed in DOS and Windows) was that it only passed text from one command to the other. In other words, it wasn't much more sophisticated than writing all the output to a file and reading it back in to give it to the next application. What the receiving program received was a tsunami of text. This led to the development of tools like grep, awk, sed, and Perl as ways to manipulate that big block of text to produce meaningful output to feed to the next program. This has been a workable approach for Unix users for generations now.

Microsoft has taken a totally different approach with PowerShell, however. PowerShell's pipeline isn't passing plain text; it's passing objects.

Objects and You

Objects, in programmer-speak, are a software representation of a real-world thing. Real objects have properties: my pen has a certain quantity of ink in it; my computer has a screen with a certain number of inches in diagonal size. Real objects also usually support tasks we can perform on them: my car has a GoFaster and a TurnLeft method (hey, I could be a NASCAR driver!). Programming objects represent a real-world thing using a set of properties and a set of methods that replicate the information and tasks that are relevant to using that thing in the real world.

Applying that to computer technology, consider the idea of a Windows service. It's a piece of software that runs in the background and allows your computer to perform some sort of functionality. What kinds of information do you want to know about a service? Things like the name of the service, the name of the executable it's running, whether it's stopped or started... things like that. What kinds of tasks do I need to perform to administer services? Well, I'd need to stop and start services, mostly. I might need to change the start mode from Automatic to Manual.

PowerShell represents the idea of a Windows service using an object. When I call on the Get-Service cmdlet to give me a list of services, what I get is actually a list of objects that represent those services. How do I know what kind of objects? I can pipe those objects to Get-Member, and it will tell me all about them (the original post showed this as a screenshot; a reconstruction appears below).

So a service is an instance of a System.ServiceProcess.ServiceController object. What's that? Remember that PowerShell is built on top of the .Net Framework, a large base of code for Windows application developers. .Net provides a vast list of types of objects that can be used to assemble programs, and they all have lengthy names like that one. Long story short, "ServiceController" objects are .Net's way (and therefore PowerShell's way) of describing services.
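The screenshots from the original post are not preserved. Here is the command the text describes, with a heavily abridged, hand-reconstructed sketch of the kind of output Get-Member produces; the exact member list on your system will be much longer:

```powershell
PS C:\> Get-Service | Get-Member

   TypeName: System.ServiceProcess.ServiceController

Name        MemberType Definition
----        ---------- ----------
Start       Method     void Start(), void Start(string[] args)
Stop        Method     void Stop()
DisplayName Property   string DisplayName {get;set;}
Status      Property   System.ServiceProcess.ServiceControllerStatus Status {get;}
...
```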
Source: http://blog.globalknowledge.com/2011/08/25/powershell-2-0-fits-square-pegs-in-round-holes/
It started with Web-based social media. We became enamored with sharing and connecting via Facebook, Twitter, even LinkedIn. Those platforms and others migrated as apps to our now-indispensable smartphones and tablets. This was the beginning of our becoming "sensor platforms," as the data being culled from the websites and apps became ever more sophisticated and deep. And we've all gone along willingly, enthusiastically, in exchange for the reach and convenience these systems provide us. The evolution is continuing with two key technological trends: the addition of physical-world sensors to smartphones, and specifically designed sensing devices, or wearables.

Phones have become rich sensing devices

It turns out that newer smartphone models, such as those from Apple and Samsung, are bristling with sensors capable of capturing much information from you and the surrounding world. These include sensors for fingerprint/touch and movement and, soon, air temperature, barometric pressure and more. But even without these, virtually all phones can provide geolocation, audio and imagery -- all forms of sensing. For example, it's straightforward to use the quantity of phones "moving" down a sidewalk as a proxy for the number of people there. There are even crowdsourced data mining initiatives that use the cameras in phones to discern traffic movements.

This is only the beginning. As the instrumentation embedded within our personal communications devices becomes more and more powerful, coupled with improvements in location accuracy, phones-as-sensors will take on an increasingly important role as data sources for a variety of applications.

March of the wearables

At the recent Consumer Electronics Show (CES) in Las Vegas, so many companies came to present their offerings in the wearables category that an entire hall was filled with them. Frankly, it was difficult to discern what differentiated one from another; the current crop are all similar in monitoring pulse rate, body temperature and sometimes a few extras. They all offer a means to get the data onto your laptop, onto your phone and/or posted in cloud databases. The takeaway is that these, and more advanced devices such as the Apple Watch, are gaining in popularity and bring an enhanced set of sensing capabilities to individuals.

The fascinating attribute of this is that, much as with Facebook and smartphones, the data being generated by wearables can be mined for applications that have nothing to do with their original intent. Applying data collected for one purpose to another usage is typically referred to as cross-correlative analysis. My favorite example is the measuring of an earthquake's intensity by analyzing the jump in heart rates collected by FitBit and Jawbone wearables, mapped via geolocation. In essence, humans became seismographs. An idea along these lines would be to use footfalls over time (i.e., walking pace) as a means to determine how rapidly people are moving around malls, train stations, airports and even theme parks. Many of the latest fitness trackers, and now "smart shoes," will make this possible. The more of these devices we add to our ensemble, coupled with new features of our smartphones, the larger the range of cross-correlative uses for the data.

If I'm a sensor platform, do I own my data?

A looming issue facing the Internet of Things surrounds the ownership of sensor data. It's a very big deal, and potentially has implications for business, government policies and individual rights.

In the realm of social media, and most Web-based systems that collect rich user data, the de facto mode is that we accept a user agreement which essentially barters our usage of the service for the data collected on us. The value of such data has become staggering, and mobile device data is adding to it exponentially. Bring in physical-world sensors, and yet more data is generated; there again, we're happy to trade the data for convenience and usability. Therefore, in the current modus operandi of the industry, we are giving away our data. Some challenges to this notion are popping up around the world, most notably in Europe. It remains to be seen how this will shake out, but in the meantime, particularly in the U.S., our personal data, regardless of how it is generated, is being utilized and often vended by the companies who sell or offer us the products or services that generate it.

It will get exotic

As sensors become smaller and mobile processors more powerful, coupled with advances in wireless technology, we'll doubtless see the amount of data being generated from our bodies, our movements and the environment growing incessantly. Measurement of attributes such as perspiration, breathing, digestion, exposure to air pollution, hydration and more are all coming. As machine learning, particularly as applied to cross-correlative analysis, gets smarter and broader and taps into the burgeoning trove of physical-world data and other sources, improvements in healthcare, transportation, leisure activity and even supply chain management will result.

So we're sensor platforms; what of it?

It's worth keeping an eye on the regulatory matters pertaining to all of this, in particular the usage of data generated by individuals and/or devices carried or worn by individuals. For the time being, the benefits of the devices and apps that are streaming data from one's person outweigh concerns over privacy, ownership, or even getting paid. That could change in the future, and it will possibly depend on the city, state or country one calls home.

On the business front, most companies haven't yet tapped into this mega-source of new information. Depending on what your organization offers as a product or service, this physical-world data could be a powerful new resource, leading to improvements in personalized marketing, public safety, logistics, recreation, transportation and the workplace itself. Fitness trackers and other wearables, increasingly sensor-laden smartphones and the power of big data analytics are sure to change the landscape forever. With all of us becoming sensor platforms, plentiful opportunities are -- well -- within arm's reach.
Source: http://www.computerworld.com/article/3020536/internet-of-things/are-you-a-human-sensor-platform.html
Journalists use temperature monitoring to better understand cicadas

Wednesday, Mar 20th 2013

Scientific researchers understand that in order to come to a definitive conclusion on any particular matter, a large amount of verifiable information needs to be collected and analyzed. While scientists have developed theories and hunches based on observation over the years, those hypotheses remain unproven unless quality data can back up the claim. According to the Nieman Journalism Lab, a project run by Harvard University's Nieman Foundation, one such claim is that cicadas have an internal temperature sensor that tells them when to emerge from the ground.

Cicadas are unique in that once every 17 years, the insects pop out of the ground en masse. Although the bugs do not bite or otherwise harm humans, their periodic emergence is often considered a nuisance, given that cicadas inundate their environments and make a lot of noise. Scientific conventional wisdom holds that the insects only emerge when soil temperatures reach approximately 64 degrees Fahrenheit; however, this hypothesis remains unproven.

The reasons researchers are not certain what triggers the swarm to emerge are twofold. For one, cicadas only come out once every 17 years, meaning that scientists must wait a long time before being able to even begin testing the hypothesis. Plus, cicadas are only around for a short period when they do finally appear, which limits research efforts even further. Another major issue is that the cicadas' natural habitat is incredibly vast: according to the Nieman Journalism Lab, cicadas along the Eastern Seaboard can be spotted from Virginia to Connecticut. For scientists to come to a definite conclusion, data would need to be collected and analyzed across the insect's entire home range.

"While the periodic emergences are hard to miss because of the noise and the overwhelming numbers, it is hard to predict where and how many will actually come out because of habitat degradation," public radio station WNYC said in a recent blog post.

How temperature monitoring and the masses can solve this dilemma

To finally figure out how warm the soil really needs to be, a team of journalists at WNYC is hoping to arm as many people as possible with temperature monitoring equipment. The purpose of the Cicada Tracker project is to install temperature sensors in as many locations as possible. That way, when the insects finally emerge, the team will have enough data from across the cicada's population zone to accurately determine whether the previous hypothesis about ground temperatures is correct.

"WNYC for a long time has been a leader in experimenting with crowdsourcing and data news. This is almost like experimenting with crowd hardware hacking," WNYC's John Keefe, who is spearheading the project, told the Nieman Journalism Lab. "We're trying to go into this arena of independent sensors built and run by a crowd to collect information that might not otherwise be available. We're creating our own data set."

One potential issue with this project is that the WNYC team is encouraging those interested in participating to build their own sensors, and the data this home-built temperature monitoring equipment gathers could be inaccurate. Typically, researchers opt for professional environmental monitoring solutions to ensure the success of their efforts.
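As a toy illustration of what such crowdsourced soil-temperature readings would enable, here is a minimal Python sketch that checks reports against the hypothesized 64-degree threshold. The data format, sample values and threshold handling are invented for illustration; this is not WNYC's actual pipeline:

```python
EMERGENCE_THRESHOLD_F = 64.0  # hypothesized soil-temperature trigger

# Each report: (latitude, longitude, soil temperature in F) -- made-up sample data.
reports = [
    (38.9, -77.0, 65.2),   # Virginia area
    (40.7, -74.0, 61.8),   # New Jersey area
    (41.3, -72.9, 58.4),   # Connecticut area
]

def sites_at_threshold(readings, threshold=EMERGENCE_THRESHOLD_F):
    """Return the locations whose soil has warmed past the threshold."""
    return [(lat, lon) for lat, lon, temp_f in readings if temp_f >= threshold]

print(sites_at_threshold(reports))  # [(38.9, -77.0)]
```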
Source: http://www.itwatchdogs.com/environmental-monitoring-news/research-labs/journalists-use-temperature-monitoring-to-better-understand-cicadas-407571
Hardee K. (Reproductive Health Program, Population Council) | Gay J. (What Works Association; Center for Policy and Advocacy) | Croce-Galis M. (What Works Association) | and 2 more authors. Journal of Acquired Immune Deficiency Syndromes | Year: 2014

BACKGROUND: Adolescent girls face unique challenges in reducing their risk of acquiring HIV because of gender inequalities, but much of HIV programming and evaluation lacks a specific focus on female adolescents.

METHODS: This article, based on a review of 150 studies and evaluations from 2001 to June 2013, reviews evidence on programming for adolescents that is effective for girls or could be adapted to be effective for girls.

RESULTS: The evidence suggests specific interventions for adolescent girls across 3 critical areas: (1) an enabling environment, including keeping girls in school, promoting gender equity, strengthening protective legal norms, and reducing gender-based violence; (2) information and service needs, including provision of age-appropriate comprehensive sex education, increasing knowledge about and access to information and services, and expanding harm reduction programs for adolescent girls who inject drugs; and (3) social support, including promoting caring relationships with adults and providing support for adolescent female orphans and vulnerable children.

DISCUSSION: Numerous gaps remain in evidence-based programming for adolescent girls, including a lack of sex- and age-disaggregated data and the fact that many programs are not explicitly designed or evaluated with adolescents in mind. However, evidence reinforces bolstering critical areas such as education, services, and support for adolescent girls.

CONCLUSIONS: This article contributes to the growing body of literature on HIV and adolescent girls and reviews the vulnerabilities of girls, articulates the challenges of programming, develops a framework for addressing the needs of girls, and reviews the evidence for successful programming for adolescent girls. Copyright © 2014 by Lippincott Williams & Wilkins.

Alkenbrack S. (Center for Policy and Advocacy; The World Bank) | Chaitkin M. (Center for Policy and Advocacy; Results for Development Institute) | and 3 more authors. PLoS ONE | Year: 2015

Introduction: Despite widespread gains toward the 5th Millennium Development Goal (MDG), pro-rich inequalities in reproductive health (RH) and maternal health (MH) are pervasive throughout the world. As countries enter the post-MDG era and strive toward UHC, it will be important to monitor the extent to which countries are achieving equity of RH and MH service coverage. This study explores how equity of service coverage differs across countries, and explores what policy factors are associated with a country's progress, or lack thereof, toward more equitable RH and MH service coverage.

Methods: We used RH and MH service coverage data from Demographic and Health Surveys (DHS) for 74 countries to examine trends in equity between countries and over time from 1990 to 2014. We examined trends in both relative and absolute equity, and measured relative equity using a concentration index of coverage data grouped by wealth quintile. Through multivariate analysis we examined the relative importance of policy factors, such as political commitment to health, governance, and the level of prepayment, in determining countries' progress toward greater equity in RH and MH service coverage.

Results: Relative equity in the coverage of RH and MH services has continually increased across all countries over the past quarter century; however, inequities in coverage persist, in some countries more than others. Multivariate analysis shows that higher education and greater political commitment (measured as the share of government spending allocated to health) were significantly associated with higher equity of service coverage. Neither country income, i.e., GDP per capita, nor better governance was significantly associated with equity.

Conclusion: Equity in RH and MH service coverage has improved but varies considerably across countries and over time. Even among the subset of countries that are close to achieving the MDGs, progress made on equity varies considerably. Enduring disparities in access and outcomes underpin mounting support for targeted reforms within the broader context of universal health coverage (UHC). © 2015 Alkenbrack et al.
<urn:uuid:06fe1100-7883-42cc-9da5-624a4d2ef2e6>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/center-for-policy-and-advocacy-2368865/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00474-ip-10-171-10-70.ec2.internal.warc.gz
en
0.932804
849
2.90625
3
Women in IT Still Fall Behind in Salaries

According to a report from the American Association of University Women (AAUW) Educational Foundation, women may be pursuing more education, landing more employment and finding jobs at higher levels, but they are still landing "pink-collar" jobs, for the most part. The report, called "Women at Work," showed that women have caught up with men in obtaining four-year degrees. They are also more likely to work in management and professional jobs now than they were two decades ago. Still, women are not prepared to move into higher-paid, fast-growing job roles like systems analysis, software design and engineering.

Mary Ellen Smyth, president of the AAUW Educational Foundation, said the new high-tech economy is leaving women behind. "It's not that women are hitting a glass ceiling in the high-tech sector," she explained. "It's that they don't have the keys to open the door."

The wage gap is still a problem as well. In CertMag's most recent annual salary survey, only 8 percent of respondents were women, and women reported earning 3.25 percent less than their male counterparts. (See www.certmag.com/salaries.) And according to Brainbench, "the virtual salary parity women had achieved in most wage categories in 2001 was strongly eroded in 2002 and continues to reflect that loss in 2003." Only in large corporations have women been able to overcome the wage gap, Brainbench reports.

The AAUW report shows that more women need to pursue advanced education in computer and IT fields, or the gender gap in these fields will grow wider. Jacqueline Woods, AAUW's executive director, said that only 28 percent of women are studying in fields that will enable them to work in science, engineering or IT. The AAUW report recommends increasing access and opportunities for education for women and girls in underrepresented racial-ethnic communities. It also suggests increased promotion of the benefits of an education in computer science, engineering, math and technology for women and girls.

For an overview of the AAUW report, visit http://www.aauw.org/research/index.cfm and go to "Women at Work."

Emily Hollis is managing editor for Certification Magazine. She can be reached at email@example.com.
<urn:uuid:8a881aad-1f28-4b77-ba9a-0f3ca5434596>
CC-MAIN-2017-04
http://certmag.com/women-in-it-still-fall-behind-in-salaries/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285001.96/warc/CC-MAIN-20170116095125-00500-ip-10-171-10-70.ec2.internal.warc.gz
en
0.967161
510
2.5625
3
High-wireless act: Can high-frequency WiFi be practical?

- By Greg Crowe - Oct 16, 2012

This article has been updated to correct an inaccurate reference to the 60 GHz band as 60 MHz.

As agencies continue to build out wireless networks and extend the use of mobile devices, a new spectrum specification, which will add a lot of speed and capacity, could become a key factor in how well those networks function.

In a recent article explaining the wireless spectrum allocation situation, we touched on the 60 GHz band that the Wireless Gigabit Alliance (WiGig) proposes to use for the next generation of wireless networking, IEEE 802.11ac. But although the bandwidths are quite roomy up there in the 60 GHz band -- more than 50 times as wide as in the current Institute of Electrical and Electronics Engineers (IEEE) 802.11n specification, enough to allow streaming of uncompressed video -- there is an innate hurdle to working with higher frequencies: signal propagation loss over distance.

Because the air is made up of randomly arranged molecules of matter, any waveform signal sent through it has a chance of bouncing off whatever it runs into. Higher-frequency waves are more susceptible to this kind of signal loss than lower-frequency ones.

A good example exists in everyday nature. Light on the blue end of the visible spectrum has a higher frequency than light on the red end. When the sun's light hits the atmosphere, the blue light scatters more, so the sky looks blue. When the sun is low on the horizon, its light has to pass through even more atmosphere to reach you, making the sun look more reddish. So now if your kids ask why the sky is blue, you'll know what to tell them.

WiGig proposes several ways to combat this innate problem. For one, the alliance is continuing to refine the multiple-input multiple-output (MIMO) antenna configuration that was first implemented in 802.11n. With MIMO, several antennae are all talking to each other to determine the best path to the other station. The 802.11ac standard doubled the MIMO streams used in 'n' from 4 to 8, so what WiGig is working on will likely have at least that many.

But the area that will likely have the greatest impact on the practical range of WiGig-designed devices is the precoding stage of the MIMO process. This is when the device uses what is called "beamforming" to focus the signal. This process, an example of which is illustrated in the accompanying graphic by Stephane Dedieu, uses the multiple antennae to combine into a phased array. The signal produced experiences constructive interference in one direction and destructive interference in the others, so it travels farther in the desired direction. This type of transmission is sometimes called "unidirectional" because the energy is concentrated along one path: a signal with beamforming will go much farther in its intended direction than it would without it.

How far will depend on the technological improvements that happen between now and when WiGig and the IEEE come out with the new specification based on WiGig's current research.

Greg Crowe is a former GCN staff writer who covered mobile technology.
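To make the phased-array idea concrete, here is a small illustrative C sketch (not from the article; the 8-element, half-wavelength-spaced array and the 20-degree steering angle are assumptions). It computes the normalized array factor of a uniform linear array: at the steered angle every element's contribution adds in phase and the gain peaks at 1.0, while away from it the contributions partially cancel.

#include <stdio.h>
#include <math.h>
#include <complex.h>

double array_factor(int n, double spacing_wl, double steer_deg, double look_deg)
{
    double steer = steer_deg * M_PI / 180.0;
    double look  = look_deg  * M_PI / 180.0;
    double complex sum = 0.0;

    for (int i = 0; i < n; i++) {
        /* The per-element phase offset steers the beam toward steer_deg;
         * when look_deg == steer_deg every term adds in phase
         * (constructive interference), elsewhere the terms partly cancel. */
        double phase = 2.0 * M_PI * spacing_wl * i * (sin(look) - sin(steer));
        sum += cexp(I * phase);
    }
    return cabs(sum) / n;   /* normalized gain: 1.0 at the steered angle */
}

int main(void)
{
    /* 8 elements (matching 802.11ac's 8 streams), half-wave spacing,
     * beam steered to 20 degrees. */
    for (int deg = -90; deg <= 90; deg += 15)
        printf("%4d deg: gain %.2f\n", deg, array_factor(8, 0.5, 20.0, deg));
    return 0;
}

Compiled with cc beam.c -lm, the printout shows the gain peaking near 20 degrees and falling off elsewhere, which is the focusing effect described above.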
<urn:uuid:1d8a308f-6f76-46ec-9218-56c0a8b4f088>
CC-MAIN-2017-04
https://gcn.com/articles/2012/10/16/explainer-60-mhz-wifi-band-signal-propagation-loss.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00227-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943038
678
2.953125
3
Can the world be rid of software bugs and vulnerabilities that are open to exploitation? Despite all the advancements in software development, Tenable Network Security Asia Pacific principal architect, Dick Bussiere, characterises bug-free software as a pipe-dream. "It's virtually impossible, as no technology is ever completely perfect," he said. What is achievable is creating higher quality software, and Bussiere said automated vulnerability assessment tools help analyse code for potential vulnerabilities. "It is possible to create a better world with fewer bugs, but the bugs are always going to be there," he said. "The issues are not necessarily introduced by coding errors, but also through misinterpretation of the initial customer requirements." Bussiere adds that anything complex designed by humans, such as computers and networks, is subject to flaws.

Looking at the source

There are more software developers these days than ever before, with www.numberof.net putting the number at 17 million worldwide and 4.16 million in the United States alone. While the large number of coders enables better software to be written, Bussiere said it also translates to more exploits by malicious individuals. Bussiere references the recent Heartbleed vulnerability and how its open source roots allowed it to spread undetected. "It was a piece of code that was written by a relatively small team and then utilised in hundreds of other products," he said. The vulnerability could have been discovered by looking at the open source code, but Bussiere said the cost of doing so meant it was not an option for many. "Because of the cost and time involved, they didn't invest the time and money in appraising the open source software and doing a vulnerability assessment of it," he said.

Almost perfect

There may never be a point where people will generate bug-free software, though Bussiere said some software is close to perfect. One example of software perfection he points to is avionics systems in aircraft. Not only does it cost an "inordinate amount of money" to develop, Bussiere said, it also takes many years to develop a system like that. "There are commercial pressures to get normal software out, as by the time you finish the testing process to achieve so-called perfection, the product would be obsolete before it was released," he said. "So the expense and time required to develop perfect software makes it commercially unviable." Bussiere said the avionics industry is not under the same time pressures, so it has the leeway to ensure software is as bug-free as possible.

Patrick Budmar covers consumer and enterprise technology breaking news for IDG Communications. Follow Patrick on Twitter at @patrick_budmar. This story, "Is it possible to create bug-free software?" was originally published by ARNnet.
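For a flavor of what those automated assessment tools look for, here is a classic C defect pattern (illustrative only, not drawn from the article):

#include <string.h>

/* Defect: strcpy() performs no bounds checking, so any name longer
 * than 15 characters overflows buf. This is exactly the kind of
 * latent flaw an automated vulnerability assessment tool flags. */
void greet(const char *name)
{
    char buf[16];
    strcpy(buf, name);              /* flagged: possible buffer overflow */
}

/* The bounded rewrite such a tool would typically suggest. */
void greet_safe(const char *name)
{
    char buf[16];
    strncpy(buf, name, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';    /* strncpy may not NUL-terminate */
}

Neither function misbehaves on short input, which is why bugs like the first one routinely survive testing and ship in otherwise working code.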
<urn:uuid:3a2f2caa-7285-4ab8-91f5-b36babc3b057>
CC-MAIN-2017-04
http://www.csoonline.com/article/2451594/data-protection/is-it-possible-to-create-bug-free-software.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00345-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955416
583
2.578125
3
It's always depressing to read about a species being endangered and headed toward extinction. So I'm happy to report the opposite news: In Europe, many species previously dwindling in number have made comebacks and are no longer considered endangered, according to a study commissioned by the conservation group Rewilding Europe.

From the Zoological Society of London: The results show that a wide-ranging comeback of iconic species has taken place in many regions across Europe over the past 50 years. Legal protection of species and sites emerged as one of the main reasons behind this recovery, while active reintroductions and re-stockings have also been important factors.

Among the species reversing previous declines in population are the "European bison, the Eurasian beaver, the white-headed duck, some populations of the pink-footed goose and the barnacle goose," the BBC reports. "These had all increased by more than 3,000% during the past five decades." Brown bears doubled in numbers, while the gray wolf's population increased by 30%. Other species showing gains include eagles and vultures. The Iberian lynx was the only one of the 18 mammal species studied not to register a gain in population. But good news overall.
<urn:uuid:ad17d1e4-2741-4a1b-a342-c00e54a1c1d7>
CC-MAIN-2017-04
http://www.itworld.com/article/2704718/enterprise-software/study-credits-conservation-for-comeback-of-some-endangered-european-species.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00255-ip-10-171-10-70.ec2.internal.warc.gz
en
0.957508
251
3.0625
3
I came across an astonishing and worrisome statistic the other day: The weight of electronic waste worldwide is expected to jump by a third, to over 60 million tons annually by 2017, according to a new report. The report, based on data compiled by the "Solving the E-Waste Problem" (StEP) Initiative, a partnership of UN organizations, industry, governments, nongovernmental organizations, and science organizations, shows that the United States produces the most "e-waste," at more than 9 million tons a year. China, though, is not far behind, producing more than 7 million tons a year. E-waste includes end-of-life refrigerators, TVs, mobile phones, computers, monitors, e-toys and other products with a battery or electrical cord.

Old electronic junk creates multiple problems. Because electronic devices are loaded with heavy metals -- things like lead, cadmium and mercury -- it's a problem when they end up in a landfill, where the toxic elements will leach into the ground and, sometimes, the groundwater. Even when recycled, a significant fraction of those old devices are sent to developing nations, where people endanger their health by picking them apart so the various components can be resold.

Still, it's much better to recycle old gear than to throw it away. It's very easy to find a site that accepts recycled electronic gear, but before you recycle your digital gadgets, there are a few things to remember.

- If you're recycling a computer, remove the hard drive. Simply erasing data doesn't really remove it; it can easily be retrieved. Beat on it with a hammer (kind of fun, actually) or leave it in a pail of water for a while, and then recycle it. There's no need to spend money on a program to wipe the disk, unless you can't figure out how to get it out of your computer.
- The same is true for your camera. Remove the memory card and smash it before it goes to the recyclers.
- Before you toss the smartphone, do a reset that wipes out all of your data -- after backing it up, of course.
- If you've been using iTunes, you need to deauthorize the device. Simply follow the directions in iTunes. You may need to deauthorize all of your devices, but don't worry. You can simply reauthorize the ones you're keeping.
<urn:uuid:11cf0ddb-9fb7-4939-a414-c53b9a966fed>
CC-MAIN-2017-04
http://www.cio.com/article/2370173/consumer-technology/electronic-waste-gets-piled-higher-and-higher-and-higher.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00163-ip-10-171-10-70.ec2.internal.warc.gz
en
0.911955
521
2.96875
3
MISSION, KS--(Marketwire - Dec 17, 2012) - (Family Features) It beats about 100,000 times a day, 35 million times a year. It pumps blood through the body three times every minute, taking that blood on the equivalent of a 12,000 mile trek every 24 hours. Even at rest, it works twice as hard as the leg muscles of a person running. The heart is a remarkable, vital muscle that warrants great care and maintenance. Yet 1 in every 4 deaths is due to heart disease. While there are some inherent risk factors such as aging or family history, poor lifestyle choices are often to blame for the onset of heart disease. The good news is that making better lifestyle choices reduces your risk of heart disease -- and it's not as hard as you might think.

Heart-Healthy Living Works

A study published in the Journal of the American Medical Association found that people who most closely followed the diet and lifestyle recommendations of the American Heart Association (AHA) had a 76 percent lower risk of dying from heart disease, and a 51 percent lower risk of all-cause deaths than those who didn't follow recommendations as closely. The study also found that only a small number of people follow all or most of the AHA guidelines for heart health. So it's not surprising that heart disease is still the leading cause of death for men and women in the United States. But it doesn't have to be that way. You can start making changes today that will help make your heart healthier in the long run.

Three Changes You Can Make

1. Eat Better

One of your best weapons against cardiovascular disease is a healthy diet. Eating a wide variety of foods that are low in fat, cholesterol and salt, but rich in nutrients, can help protect your heart. Instead of thinking about a healthy diet in terms of what you can't eat, think about it in terms of what you can eat. Add more:

- Fruits and vegetables -- about 4 1/2 cups a day
- Whole grain foods -- at least three 1-ounce servings a day
- Fish -- at least two 3 1/2-ounce servings a week
- Nuts, legumes and seeds -- at least four servings a week

About 25 percent of the cholesterol in your blood comes from the foods you eat. Eating healthy foods low in cholesterol, trans fats and saturated fats, as well as foods that are high in fiber, can help keep cholesterol levels in check. Another way to help control cholesterol levels is by incorporating soy protein into your healthy diet. An extensive body of research has shown that soy-based diets can reduce LDL cholesterol (bad cholesterol) and triglycerides, and raise HDL cholesterol (good cholesterol). One of the key components in soy's cholesterol lowering properties is something called lunasin, a naturally occurring soy peptide. It was found to work at the earlier stage of cholesterol production in the body, or at what's known as the epigenetic level. This indicated that heart disease and other hereditary conditions might be controllable by adding lunasin to your diet. Research on lunasin was so promising that scientists found a way to extract lunasin from soybeans so that it could be made available in a pure form. Lunasin content in soy-based foods varies by product and by brand. For example, LunaRich soy powder delivers the lunasin equivalent of 25 grams of soy protein. To get that same amount from other foods, you would need to drink approximately 32 ounces of soy milk, or eat approximately 12 ounces of tofu. Learn more about lunasin at www.reliv.com/lunasin.

2. Get Moving

According to the AHA, nearly 70 percent of Americans don't get the physical activity they need. But daily physical activity can increase your quality and length of life. Moderate exercise can help you lose weight, reduce your chances of stroke, diabetes and heart disease complications, lower your blood pressure and prevent other serious medical complications. Aim for at least 30 minutes of moderate activity a day, five times per week. Here are some easy ways to get moving:

- Start walking -- Walk just fast enough to get your heart rate up. Try taking brisk, 10-minute walks throughout the day. Park farther away from your destination. Take the stairs instead of the elevator. Walk the dog after dinner or walk to a neighborhood destination instead of driving.
- Do chores -- Outdoor chores like gardening, raking leaves and washing the car are good ways to get moving. Cleaning house does it, too. Try turning on some music and dancing while doing chores.

Even small changes like these can give you health benefits, but you'll see bigger benefits when you increase the duration, frequency and intensity of your activities. Always talk with your doctor to find out if there are any activities that you should not be doing.

3. Lose Weight

Being overweight is a risk factor for heart disease all on its own. Extra weight puts more burden on your heart, lungs, blood vessels and bones. Being overweight increases the risk of high blood pressure, high cholesterol and diabetes, as well. Losing even 10 pounds can produce a significant reduction in blood pressure.

- Talk to your doctor -- Find out your body mass index (BMI), which is your body weight relative to your height. Find out what your BMI should be, and find out what your calorie intake should be for someone of your age, gender and level of physical activity.
- Keep track of what you eat -- This will tell you a lot about your eating habits and help you make smart decisions, like controlling portion sizes and choosing nutrient-rich foods.
- Set reasonable goals -- Don't go for fad diets that claim you'll lose 10 pounds in a week. Slow and steady weight loss is more likely to stay off, and you'll be healthier in the long run.

The good news is, if you put steps one and two into place -- eating healthier foods and getting more active -- step three should be a natural by-product of your efforts. Your heart works hard for you -- start taking better care of it today so that it can keep working for you for a long time.

The Food and Drug Administration approved the health claim that "25 grams of soy protein per day as part of a diet low in saturated fat and cholesterol may reduce the risk of heart disease." Additional research over the last decade indicates that soy, and a peptide within soy called lunasin, could work to prevent a variety of other hereditary health conditions.

About Family Features Editorial Syndicate

This and other food and lifestyle content can be found at www.editors.familyfeatures.com. Family Features is a leading provider of free food and lifestyle content for use in print and online publications. Register with no obligation to access a variety of formatted and unformatted features, accompanying photos, and automatically updating Web content solutions.
<urn:uuid:77f4cf01-65cb-4606-9958-f3938ce2080c>
CC-MAIN-2017-04
http://www.marketwired.com/press-release/how-to-love-your-heart-1738469.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00281-ip-10-171-10-70.ec2.internal.warc.gz
en
0.940904
1,408
2.59375
3
The Earth is the "Blue Planet" because more than 70 percent of its surface is covered in water. But what does the Blue Planet look like without the blue? How would Earth appear as ... the Green Planet? Something like this, apparently. The Suomi NPP satellite, NASA's Earth-observing research satellite, has been gathering data about the world's vegetation from its delightfully lofty perch. The data themselves come from Suomi NPP's Visible-Infrared Imager/Radiometer Suite, or VIIRS, instrument. VIIRS, as its name (sort of) suggests, is able to detect changes in the reflection of light -- which allows it, in turn, to capture images that measure vegetation changes over time. (In order to power photosynthesis, vegetation absorbs visible light, and leaf cells strongly reflect near-infrared light -- which means that lush areas of the planet show less visible light and more near-infrared light than their relatively barren counterparts.) The Suomi satellite operates as a partnership between NASA and the National Oceanic and Atmospheric Administration. So NOAA did something great: It compiled the past year's worth of Suomi/VIIRS data into a series of striking images. The composite mosaics, stripped of everything but plant life, depict the world as, in a very real sense, a greenhouse.
<urn:uuid:73e48f9b-c6bb-4b3b-8083-18b044508d7c>
CC-MAIN-2017-04
http://www.nextgov.com/emerging-tech/2013/06/going-really-really-green-earths-plant-life-seen-space/65564/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00401-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936637
274
3.6875
4
Instruct the camera of the current device orientation.

camera_error_t camera_set_device_orientation(camera_handle_t handle, uint32_t val)

handle: The handle returned by a call to the camera_open() function.

val: The orientation value in degrees: 0, 90, 180, or 270. Zero represents the default orientation of the device (landscape or portrait), 90 represents rotated to the right, and 180 represents upside down, based on the marking on the device.

It is the responsibility of an application to update the camera when the device orientation changes. Use this function to let the camera on the system know how the user is holding the device. This allows the camera to adjust internal parameters, such as exposure weighting, face detection, or other orientation-dependent features, to match the orientation of the device. This function has no effect on the output image rotation; it simply informs the camera hardware that the orientation of the scene has changed in order to optimize internal algorithms, such as metering and face detection.

Returns: CAMERA_OK when the function successfully completes, otherwise another camera_error_t value that provides the reason that the call failed.
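A minimal usage sketch (not from the reference page): only camera_set_device_orientation() and the handle obtained from camera_open() come from the documentation above; the callback name, the include path, and the error-reporting style are illustrative assumptions.

#include <stdio.h>
#include <camera/camera_api.h>   /* header path is an assumption */

/* Hypothetical hook: call this whenever the device's orientation
 * sensor reports a rotation. The handle must come from an earlier
 * successful call to camera_open(), as described above. */
void on_device_rotated(camera_handle_t handle, uint32_t degrees)
{
    /* degrees must be one of 0, 90, 180, or 270 */
    camera_error_t err = camera_set_device_orientation(handle, degrees);
    if (err != CAMERA_OK)
        fprintf(stderr, "camera_set_device_orientation failed: %d\n", (int)err);
}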
<urn:uuid:161ef2ef-5755-42ff-af50-ac12f2ea1a8d>
CC-MAIN-2017-04
https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.camera.lib_ref/topic/camera_set_device_orientation.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00217-ip-10-171-10-70.ec2.internal.warc.gz
en
0.802997
270
2.578125
3
Fewer Women and Minorities Entering IT Workforce

The Information Technology Association of America (ITAA) found in a recent study that racial minorities and women have made few inroads into high-tech careers between 1996 and 2002. The study is based on data found in the U.S. Bureau of Labor Statistics' Current Population Surveys.

According to the report, the percentage of women in the overall IT workforce fell from 41 percent to 34.9 percent between 1996 and 2002. The percentage of African Americans fell from 9.1 percent to 8.2 percent in the same period. The ITAA findings show that these groups are still underrepresented in the IT workforce as compared to their representation in the overall U.S. workforce. Women represented 46.6 percent of the U.S. workforce in 2002, and African Americans made up 10.9 percent of the U.S. workforce in 2002.

According to Harris N. Miller, president of ITAA, the findings are easily explained. "Women and minorities earn significantly fewer undergraduate degrees in computer science and engineering than their representation in the U.S. population," he said. Miller added that the percentages of women and minorities in the IT workforce are not likely to change significantly until the education system produces more qualified candidates. In fact, the study shows that women earned only 22 percent of computer science and engineering undergraduate degrees in 2000. African Americans earned only 7 percent, Hispanic Americans earned 5 percent, and Native Americans earned 1 percent of these degrees in 2000.

The report shows that Hispanic Americans, Native Americans and Asian Americans made gains in IT. Hispanic Americans went from 5.4 percent to 6.3 percent of the IT workforce between 1996 and 2002, Native Americans went from 0.2 percent to 0.6 percent in the same time period, and Asian Americans jumped from 8.9 percent to 11.8 percent of the IT workforce between 1996 and 2002. Like African Americans and women, Hispanic Americans and Native Americans are underrepresented in the IT workforce compared to their representation in the overall U.S. workforce. Hispanic Americans comprised 12.2 percent of the U.S. workforce in 2002, and Native Americans made up 0.9 percent of the U.S. workforce in 2002. Asian Americans, on the other hand, are almost three times as prevalent in the IT workforce as in the overall U.S. workforce.

For more information on the study, see http://www.itaa.org.

Emily Hollis is associate editor for Chief Learning Officer Magazine. She can be reached at firstname.lastname@example.org.
<urn:uuid:f931f6ed-1556-43ea-aad1-375f57d7dd67>
CC-MAIN-2017-04
http://certmag.com/itaa-fewer-women-and-minorities-entering-it-workforce/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00429-ip-10-171-10-70.ec2.internal.warc.gz
en
0.956852
534
2.65625
3
The sendmail command implements a general purpose internetwork mail routing facility under the UNICOS/mp operating system. It is not tied to any one transport protocol -- its function may be likened to a crossbar switch, relaying messages from one domain into another. In the process, it can do a limited amount of message header editing to put the message into a format that is appropriate for the receiving domain. All of this is done under the control of a configuration file. Due to the requirements of flexibility for sendmail, the configuration file can seem somewhat unapproachable. However, there are only a few basic configurations for most sites, for which standard configuration files have been supplied. Most other configurations can be built by adjusting an existing configuration file incrementally.

The main daemon, /usr/libexec/sendmail, performs two functions: it listens on the SMTP socket for connections (to receive mail from a remote system), and it processes the queue periodically to ensure that mail gets delivered when hosts come up. sendmail is normally started at system boot time from the /etc/init.d/mail script. For a complete list of options, see the sendmail(1) man page. The default command in the startup script is:

sendmail -bd -q15m

The -bd flag causes sendmail to run as a daemon and the -q flag specifies the mail queue processing interval, in this example every 15 minutes. See the sendmail(8) man page.

The /etc/init.d/mail script can be used to start or stop the sendmail daemon. For example, to implement changes to the configuration file, you must stop all running sendmail processes, refreeze the configuration file, and restart the sendmail daemon before the new configuration file will take effect. This script takes a single argument, either start or stop, which starts or stops the sendmail daemon respectively. You must be superuser (root) to use this script.

The /etc/mail/sendmail.cf file is the main configuration file for sendmail. It is an ASCII file that contains most of the configuration information and is read at run time. This file is designed to be generated from the m4 processor and should NOT be edited directly. The preinstalled sendmail.cf file is configured for a generic system and may suit your needs. You may want to personalize this file, for instance, if you want to configure the sendmail daemon to relay all messages to a central mail server on your network. You should not edit sendmail.cf directly. See Section 7.4 for information regarding using the m4 macro processor to create a new sendmail.cf file.

The unicosmp.m4 file contains definitions for the sendmail environment under the UNICOS/mp operating system. It should not be modified. The .mc file used to generate sendmail.cf under UNICOS/mp includes these definitions with the line:

OSTYPE(unicosmp)

The aliases file contains the text form of the alias database used by the sendmail program. The alias database contains aliases for local mail recipients. For example, the following alias delivers mail addressed to jd on the local system to email@example.com:

jd:firstname.lastname@example.org

When sendmail starts up, it automatically processes the aliases file into the files /etc/mail/aliases.dir and /etc/mail/aliases.pag. The aliases.dir and aliases.pag files are NDBM versions of the aliases database. The NDBM format improves sendmail performance.

Note: The newaliases program must be run after modifying the alias database file. The newaliases program is used to rebuild the NDBM version of the aliases database.
This program must be run any time the text version of the aliases file is modified. If not, the changes are not incorporated into the NDBM alias database and are not seen by the sendmail program. To rebuild the NDBM version of the database without restarting sendmail, enter the newaliases command. Executing this command is equivalent to executing sendmail with the -bi flag:

% /usr/lib/sendmail -bi

The mail queue, /var/spool/mqueue, is the directory where the mail queue and temporary files reside. The messages are stored in various queue files that exist under the /var/spool/mqueue directory. Queue files take these forms:

- qf* -- control (queue) files for messages
- nf* -- a file used when a unique ID is created
- xf* -- transcript file of the current session

Normally, a sendmail subdaemon processes the messages in this queue periodically, attempting to deliver each message. (The /etc/init.d/mail script starts the sendmail daemon so that it forks a subdaemon every 15 minutes to process the mail queue.) Each time sendmail processes the queue, it reads and sorts the queue, then attempts to run all jobs in order.
<urn:uuid:3a760baa-ee2f-455d-9033-fcff1b21d240>
CC-MAIN-2017-04
http://docs.cray.com/books/S-2341-22/html-S-2341-22/z1028736068smg.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00365-ip-10-171-10-70.ec2.internal.warc.gz
en
0.83471
1,041
2.953125
3
Data center water usage and conservation is a critical aspect of green building design and environmental sustainability. Most data centers use large amounts of water for cooling purposes in order to maintain an ideal operating temperature for servers, hardware and equipment. But how do water conservation efforts affect the cost and operational efficiencies of a data center?

While 70% of the earth's surface is covered in water, only 2.5 percent is fresh water, most of which is currently ice. Historically, the demand for fresh water has increased with population growth, and the average price has risen around 10-12 percent per year since 1995. In contrast, the price of gold has increased only 6.8 percent and real estate 9.4 percent during this same period.

So how much water do data centers use? While the average U.S. household uses 254 gallons per day, a 15 MW data center consumes up to 360,000 gallons of water per day. As the cost of water continues to rise with demand, the issue becomes one of both economics and sustainability.

How are data centers addressing the problem? In order to control costs in the long term, data center operators are finding creative ways to manage water usage. Options include using less freshwater and finding alternative water sources for cooling systems.

- Reduced water usage – Designing cooling systems with better water management, resulting in less water use.
- Recycled water – Developing systems that run on recycled or undrinkable water (i.e., greywater from sinks, showers, tubs and washing machines). Internap's Santa Clara facility was the first commercial data center in California to use reclaimed water to help cool the building.
- No water – In some regions, air economizers that do not require water can be used year round.

While using less freshwater provides long-term cost and environmental benefits, alternative solutions also create new challenges. The use of recycled water can have negative effects on the materials used in cooling systems, such as mild steel, galvanized iron, stainless steel, copper alloys and plastic. Water hardness (a measure of combined calcium and magnesium concentrations), alkalinity, total suspended solids (TSS – e.g. sand and fine clay), ammonia and chloride can cause corrosion, scale deposits and biofilm growth. Data center operators must proactively identify susceptible components and determine a proper water treatment system. Implementing a water quality monitoring system can provide advanced warning of operational issues caused by water quality parameters.

With the rising cost and demand for freshwater, conservation measures are essential to the long-term operations of a data center. Internap is committed to achieving the highest levels of efficiency and sustainability across our data center footprint, with a mix of LEED, Green Globes and ENERGY STAR certifications at our facilities in Dallas, Los Angeles, Atlanta, New Jersey, and Santa Clara. To learn more, download the ebook, Choosing a Green Colocation Provider.
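To put those figures in perspective, here is a quick back-of-the-envelope check (an illustrative C sketch using only the numbers quoted above; the annual total assumes the facility draws its maximum every day):

#include <stdio.h>

int main(void)
{
    const double household_gal_per_day  = 254.0;     /* average U.S. household */
    const double datacenter_gal_per_day = 360000.0;  /* 15 MW facility, upper bound */

    /* One large data center draws as much water as roughly 1,400 homes... */
    printf("household equivalents: %.0f\n",
           datacenter_gal_per_day / household_gal_per_day);   /* ~1417 */

    /* ...and about 131 million gallons over a year at that rate. */
    printf("gallons per year: %.0f\n", datacenter_gal_per_day * 365.0);
    return 0;
}

At the quoted rates, a single such facility uses each day what more than a thousand average households do, which is why the economics and the sustainability argument point in the same direction.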
<urn:uuid:8e1e4892-882f-4f68-8d01-2ea4bd852bbd>
CC-MAIN-2017-04
http://www.internap.com/2014/07/11/data-center-water-conservation/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00209-ip-10-171-10-70.ec2.internal.warc.gz
en
0.917089
601
3.59375
4
The Dangers of XSS Attacks

Two recent incidents highlighted the severity of this vulnerability:

- Apple's Developer Website was recently hacked, and the hacker used XSS vulnerabilities to achieve this goal. Tens of thousands of customer data records were at risk as a result of the attack, and the developer website was non-functional for more than a week.
- Canonical's Ubuntu Forums were also hacked using an XSS vulnerability: the attacker sent private messages to three administrators claiming that there was a server error on the announcement page and asking the forum administrators to take a look. The private message contained an XSS exploit and the attacker managed to steal their cookies, gaining access to the administrator control panel. 1.82 million logins and email addresses were stolen.

As seen in the examples above, XSS vulnerabilities can be very dangerous and should be fixed as soon as possible. Acunetix Web Vulnerability Scanner is the market leader at detecting XSS vulnerabilities, and in version 9 we make it even better with improvements in the detection of DOM-based XSS vulnerabilities.

While a traditional cross-site scripting vulnerability exploits server-side code, document object model (DOM) based cross-site scripting is a type of vulnerability which affects the script code being executed in the client's browser. DOM-based XSS vulnerabilities are much harder to detect than classic XSS vulnerabilities because they reside in the script code of the website. An automated scanner needs to be able to execute the script code without errors and to monitor the execution of this code to detect such vulnerabilities. Very few web vulnerability scanners can really accomplish this. In comparison, classic XSS vulnerabilities are easier to detect, as the detection process doesn't require the capability of executing and monitoring script code. Most web vulnerability scanners can only detect the classic XSS vulnerabilities.

How Acunetix detects DOM XSS vulnerabilities

Acunetix uses DeepScan technology, which drastically improves the automatic detection of DOM-based XSS by tracing the execution of the script code from the scanned website. Acunetix can monitor a list of sources such as document location, referrer, and window.name, and trace the data flow until it reaches various sinks that can cause an XSS vulnerability. Examples of such sinks are the eval function, document.write, location change and so on.

Here is an example of a DOM-based XSS vulnerability discovered in our testhtml5 website. In the alert details, the data from the window.name source reaches an 'evaluate code' sink (such as the functions eval and setTimeout) and therefore the script code is deemed to be vulnerable. However, other DOM-based XSS vulnerabilities are much harder to detect than this one.

Nowadays, more and more modern HTML5 websites are using the location hash to store custom data. A good example of such web applications are Single Page Applications (SPA). A single-page application (SPA) is a web application or web site that fits on a single web page, with the goal of providing a more fluid user experience akin to a desktop application. In an SPA, the appropriate resources are dynamically loaded and added to the page as necessary, usually in response to user actions. Our new test HTML5 website was built as an SPA web application. Its URLs are designed to look like:

/#/latest/page/1
/#/redir?url=http://pwnies.com/nominations/index.html

All the URLs shown above are using the location hash to determine the target page.
There is only one real page (/), and this page loads the various sections of the website using the value of the location hash. The web server doesn't see any of the URLs above; everything happens only in the client's browser, and the page is not reloaded (making navigation faster). However, even this type of web application can have vulnerabilities. For example, the number after the /#/latest/page hash sequence can be manipulated to see how this data is being parsed.

Acunetix is capable of automatically finding such complex vulnerabilities. It gathers a list of all the location hash URIs and analyzes them, trying to identify patterns. It then splits them into fragments (as it does with URL path fragments) and manipulates each fragment individually. For example, the URI /#/latest/page/1 is split into 3 fragments (based on the / character) and each fragment is tested. DeepScan manipulates each fragment and monitors the script execution in order to identify whether the execution reaches any DOM XSS sinks. In this case, the page parameter is indeed vulnerable, and Acunetix issued an alert for it. Using the location hash reported in the alert, it is possible to exploit the DOM-based XSS vulnerability.

Acunetix goes even further. Another interesting URI is /#/redir?url=http://pwnies.com/nominations/index.html. This URI is using the query string notation of specifying parameters, but inside the location hash. Acunetix can handle this situation by understanding that the URL is a query string parameter, manipulating it accordingly, and issuing an alert. The URL parameter from the /#/redir hash is used to redirect to a certain URL. The code looks like this:

<script>
var redirUrl = decodeURIComponent(window.location.hash.slice(window.location.hash.indexOf("?url=") + 5));
if (redirUrl)
    window.location = redirUrl;
</script>

The code looks for ?url= in the location hash and, if found, assigns what follows to the window.location property. This, of course, causes an XSS vulnerability.

Detecting DOM-based XSS vulnerabilities manually is very laborious. The situation is not going to improve, since DOM XSS vulnerabilities are expected to become more widespread in modern HTML5 web sites. Acunetix can detect such vulnerabilities automatically, thereby reducing the resource-intensive task of finding them by hand.
<urn:uuid:81b36453-df05-422f-9d2e-8a89253ac000>
CC-MAIN-2017-04
https://www.acunetix.com/websitesecurity/improving-dom-xss-vulnerabilities-detection/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00117-ip-10-171-10-70.ec2.internal.warc.gz
en
0.892721
1,236
2.859375
3
Everyone creates a password with one thing in mind: keeping data secure. But creating a password and keeping up with it is never easy. Many people average ten or more passwords for various home or office accounts, social media sites, and e-mail. Passwords are the first defense against intrusion and are created for the safekeeping of personal information. Use these tips to improve password management and make passwords stronger and easier to remember.

1. Refuse to reuse

There is an increase in personal information available on the internet. Therefore it is crucial to increase the number of passwords used for different accounts. Reusing passwords maximizes the risk of using a compromised password.

2. Strength in length

Short passwords make it easier for a hacker to crack. The more characters there are in a password, the more difficult it is for a hacker to get in and retrieve information. You want to have eight or more characters in any password.

3. Mix it up

Spice up your password; do not stick to standard letters. Create a passphrase and replace letters with numbers or symbols. You want to complicate your password and make it impossible to access. The key is password balance: easy for you to remember but tricky for others to guess.

4. Getting too personal

Avoid personal information that is near and dear to you, such as a pet's name, birthdays, anniversaries, and family names. Your personal information should be just that: personal. Personal information makes it too easy for a hacker to intrude.

5. Shhh, it's a secret

Your password is just that: your password! You want to keep it secure and protected, and be sure not to share it with anyone.

It is nearly impossible to memorize all of your passwords. Avoid the old stand-by of writing them down on a sticky note. There are plenty of password management tools online for you to use to keep track of your passwords.
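As a footnote to tip 2, here is a small illustrative sketch of the arithmetic behind "strength in length" (the 95-character printable-ASCII alphabet is an assumption): every character you add multiplies the brute-force search space by the size of the alphabet.

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double alphabet = 95.0;   /* printable ASCII characters */

    /* Each added character multiplies the search space by 95. */
    for (int length = 6; length <= 14; length += 2)
        printf("%2d characters: %.2e possible passwords\n",
               length, pow(alphabet, length));
    return 0;
}

Going from 8 to 14 characters multiplies the search space by 95 to the 6th power, roughly 7.4e11, which is why added length helps more than any single character substitution.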
<urn:uuid:64856233-ddaa-4a94-98c8-f992635f87c7>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2012/11/26/hinder-the-hacker-5-password-protection-tips/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00235-ip-10-171-10-70.ec2.internal.warc.gz
en
0.919494
390
3.15625
3
When I first began my IT career the only computer link we had to the outside world was a modem hooked up to the telephone, which plinked away when dialling and broadcast a bunch of white noise around the room when connected. At that point we knew we had a live link to a local bulletin board and could upload some files. Quite frankly, the only intrusion detection system we ever needed was a rather loud receptionist who controlled access to our building.

Inevitably, as the internet took hold and business realised the benefits of being online with email and the worldwide web, modem connectivity was quickly replaced by ISDN lines and finally broadband. Switching connections on and off just didn't figure anymore, and from the early 1990s onwards organisations were hooked up to the internet 24x7. With this nascent "always on" computing it was soon apparent that the door to businesses' computer networks was open for all and sundry to enter and steal or damage data. Something clearly had to be done, and quickly, so the IT security experts turned their minds to systems that could prevent and detect intruders.

Intrusion Prevention, Detection and Unified Threat Management

Intrusion, in the context of IT security, is the attempted or actual entry into a computer system by an unauthorised person. Occasionally this would be an attempt to steal data or, more often, a way of causing damage or propagating a virus or other malware. Sometimes this may be a denial of service (DoS) attack designed, in most cases, to overwhelm an IT infrastructure. In practice, most intrusions are self-propagating malware that search the worldwide web looking for vulnerable systems.

There is evidence that some system intrusions are now being initiated by organised criminals. Some have blackmailed online service providers, such as betting operations, with a threat to launch a DoS attack on busy sporting days. There are even indications that some governments are actively targeting systems belonging to countries they consider a threat—a sort of online cold war. In an effort to prevent or defeat such attacks we have Intrusion Prevention Systems, Intrusion Detection Systems and now Unified Threat Management.

Introducing Intrusion Prevention Systems (IPS)

IPS works on the principle that prevention is better than cure. In fact, many intrusion prevention activities can be undertaken without investing in expensive hardware or software. Creating and adhering to a good IT security policy is a great way of preventing intruders, as is running up-to-date and well configured anti-malware software on each client endpoint in your organisation. Of course you do need to have technology in place, of which a firewall, well configured, will be the mainstay.

Introducing Intrusion Detection Systems (IDS)

Intrusion Detection Systems are normally technology-based and used to detect if a system is being targeted. The system will monitor network traffic as it enters and leaves an organisation with a view to sounding an alarm if an unusual event occurs, which, in turn, may indicate a potential intruder. Often an IDS will have a pre-set action to take when an intruder is detected to minimise any possible damage from an attack.

Unified Threat Management (UTM)

Clearly the notion of having separate systems to detect and protect against intruders can be inefficient, as is having separate systems to manage anti-virus or other anti-malware activities. To that end there is a considerable move in the market away from pure IPS/IDS to Unified Threat Management (UTM).
With UTM, defence systems are aggregated in single management consoles and the overall control of threats is coordinated from one place. That way system duplication is eliminated and the ever-important cost of ownership reduced as much as possible. Over the coming years the differentiation between intrusion detection and prevention will become less important and we will be using the terms less and less. Instead, unified threat management will become the catchall phrase.

Intrusion Detection Systems in Detail

The simplest way of thinking of an IDS is to think of a burglar alarm. As a burglar enters a property an alarm is sounded so that the police can be summoned. With IDS, an alarm is raised (often via email or pager) and administrators are informed of an intruder.

Associated with such a system are false negatives and false positives. The worst-case scenario is a false negative, when your expensive IDS fails to trigger an alarm when an event occurs. The first you may know about it is users complaining about offline websites that have been nobbled in a denial of service attack. False positives, on the other hand, may be more irritating but are less problematic. These occur when the IDS believes that an attack has happened. On investigation it transpires that the event was not an intruder, rather an unusual business activity, but nothing to worry about.

More advanced Intrusion Detection Systems will raise an alarm along with a confidence factor based on an immediate assessment of the problem. This will be determined by the system logic and may be based on heuristics or learned behaviour once the system has monitored routine business traffic. Alert thresholds can be set; for example, alerts with a 90%+ confidence factor might page an administrator, while those with a lesser confidence factor go out via email. Clearly there is a lot of responsibility on the security team to ensure that the IDS has been correctly set up.

In practice, Intrusion Detection Systems work to protect the network, a host server or an application. Each requires a different approach to protect it, which led to the evolution of Network IDS (NIDS), Host-based IDS (HIDS) and Application IDS (AppIDS) systems. In reality, vendors soon realised there were benefits and drawbacks to each approach, and current best-of-breed solutions, under the Unified Threat Management banner, will monitor all three areas using a single product. For the sake of simplicity we will look at each of these areas in isolation to understand how IDS works in practice.

Network-Based Intrusion Detection Systems (NIDS)

A NIDS will often be an appliance solution and will be connected to a network segment with the job of monitoring network traffic as it passes up and down the wire. Packets are analysed to determine if there is any odd or out-of-character behaviour which may indicate an attack. An example may be a sudden influx of packets that appear to be related, which, in turn, could indicate an imminent denial of service attack. Other packet patterns could indicate a port scan in progress, where common ports are explored to see if common network services are running which could be exploited. We cover this in more detail later.

Generally, network-based intrusion detection systems can detect a lot more attacks than host-based intrusion detection systems, as they are closer to the network traffic and can see more of what is happening from minute to minute. The downside is they require additional, and often complicated, setup and maintenance.
The type of monitoring a NIDS undertakes depends on the network topology and the type of attack you are trying to test for. Often a system will be used to monitor a group of computers or a specific network segment. Before the widespread adoption of network switches, intrusion detection systems could be connected to a network hub and be guaranteed to be able to monitor all network traffic that passed through. Unfortunately the downside was that hubs represented a security risk as, once compromised, it was easy to monitor all traffic that was being processed. Network switches create a more secure network as they create point-to-point links between their ports, but this in turn makes traffic interception far more difficult. To overcome this, network intrusion detection systems are normally attached to a monitoring socket, the SPAN (switched port analyzer) port, to capture passing traffic.

NIDS use a number of techniques to determine if an attack is underway or not. Signature matching looks for attack patterns by comparing activity on the network with known signatures in their databases. This uses clever techniques to reassemble packets: protocol stack verification, where packets are examined for their structural integrity, and application protocol verification, where packets are examined for their specific use. Protocol stack verification will monitor for malformed packets that do not meet the standard rules for the TCP/IP protocol. This can be useful in preventing denial of service attacks, which often rely on the creation of malformed packets that take advantage of weaknesses in the operating system or application. With application protocol verification, protocols such as HTTP or FTP can be monitored to check for strange packet behaviour, as some attacks can take the guise of valid protocol packets but in very large numbers.

Like most IT solutions, network intrusion detection systems have advantages and disadvantages:

- Few devices can be used to monitor large networks
- Little disruption when deployed, as NIDS are passive devices
- Some NIDS may be overwhelmed by the volume of network traffic
- NIDS cannot analyse encrypted packets
- NIDS have to use monitoring ports which are not present on all switches

If network intrusion detection is not suitable then there is an alternative—host-based intrusion detection, or HIDS.

Host-Based Intrusion Detection Systems (HIDS)

HIDS are host-based as they sit on a specific computer or server being monitored, rather than at the segment level found in NIDS. Their role is to monitor a host and detect if an intruder is attempting to make changes to system files or attempts to change specific monitored parts of the system, such as the Windows registry. HIDS use a change-based approach to security. Monitored files are initially checked as to their size, creation dates and any other measurable attribute. Any subsequent change to one of these files will create an alert to the systems administrator. Likewise, system logs will be monitored to determine who is accessing which of these files, and appropriate alerts raised. Often system logs themselves will be attacked by more sophisticated hackers trying to hide their activities. To overcome this, most HIDS will create their own, well hidden, log files for monitoring. HIDS will also monitor system directory files on a server and their own file structure in case there is an attempt to disable the HIDS as a precursor to a coordinated attack.
A major advantage of host-based intrusion detection is that it can often be configured to sit on a host computer and access information that would otherwise have been encrypted as it travelled over the network. How the network data actually reached the host computer is irrelevant; all a HIDS worries about is the integrity of its host system. To improve manageability, some HIDS can be deployed across multiple hosts and monitored from a central location, with data being reported back to a single console. Criteria can be set up to determine what events trigger an alert and the way in which the alert should be communicated, normally via email or pager/SMS.

On the average host computer there may be thousands of files. Some of these will need active monitoring whilst others are not so important. During setup the administrator needs to determine which files are vital and therefore need constant monitoring; for example, system files. To assist with this, some HIDS allow files to be triaged using a red, yellow and green colour code. Red files are the most actively monitored, yellow files may be less so, and green files are not monitored by the HIDS. Other HIDS allow a numerical ranking of files according to their system importance.

Similar to network intrusion detection systems, host-based systems have advantages and disadvantages:

- As HIDS work on a host computer they can be used to monitor previously encrypted traffic
- The network architecture is irrelevant to a HIDS as they do not need to monitor ports on switches
- HIDS often need more management than NIDS as they monitor individual files and logs, each of which is capable of raising an alarm
- HIDS have a poorer ability to deal with some denial of service attacks
- HIDS can consume large amounts of disk space with their monitoring services and logs

Application-Based Intrusion Detection Systems (AppIDS)

AppIDS take the notion of host-based intrusion detection one step further. Instead of monitoring an entire host system they will monitor a specific application that may be running on the host. During this monitoring, the AppIDS will be looking for any out-of-course activity or other anomalous behaviour that could indicate an attack. An AppIDS can be tuned to monitor specific user activity and determine who is doing what on a system. Similar to a HIDS, AppIDS sit above any encryption that may be in place. Typically an AppIDS will monitor file reads and writes, configuration settings and the use of application execution space in the system memory.

Advantages and disadvantages of AppIDS include:

- An AppIDS can be finely tuned to monitor specific application attributes
- AppIDS sit above any encryption algorithms being used
- AppIDS will work irrespective of the network topology
- AppIDS are more susceptible to attack as they sit at the application layer
- AppIDS can be fooled by some forms of spoofing and Trojan Horses

Common Intrusion Threats—Port Scanning

This is the computer security equivalent of a burglar checking to see what doors or windows may be left unlocked in your house. With the TCP/IP protocol there are around 65,000 ports that can be used for services, applications or for programs to communicate on. The first 1024 TCP ports are referred to as the well-known ports and host services such as FTP and HTTP. A port scan is a process of automatically scanning each of a system's ports to determine which ones may have been left open either deliberately or accidentally.
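In its crudest form, such a scan is nothing more than a loop of TCP connection attempts. A minimal illustrative C sketch (the target is the reserved documentation address 192.0.2.10; a real scanner adds timeouts, parallelism and stealthier probe types, and should only ever be pointed at systems you own):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in target;
    memset(&target, 0, sizeof(target));
    target.sin_family = AF_INET;
    inet_pton(AF_INET, "192.0.2.10", &target.sin_addr);  /* reserved test address */

    /* Walk the well-known ports; an open one completes the handshake. */
    for (int port = 1; port < 1024; port++) {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        target.sin_port = htons(port);
        if (connect(s, (struct sockaddr *)&target, sizeof(target)) == 0)
            printf("port %d open\n", port);
        close(s);
    }
    return 0;
}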
An open port can then be probed further to see if there is an underlying weakness in the system waiting to be abused. As can be seen, port scans are a crude way of checking for vulnerabilities, and they are one of the first attack vectors that a decent intrusion detection system will prevent.

Common Intrusion Threats—Denial of Service (DoS) Attack

This is another reasonably crude way of attacking a computer system, and it can come in a number of forms, each designed to slow down or stop a computer operating. In fact, the simplest denial of service attack may be someone locking your office door: if you can't get in, you can't do any work. One of the most common technical DoS attacks tries to prevent users accessing a system by overwhelming it with data. A ping flood overloads a computer with ping packets, which in legitimate circumstances are used to see whether a computer is present on a network. If the sending computer has more bandwidth than the computer under attack, an unprotected machine is likely to collapse under the volume of ping packets.

A SYN flood abuses the way TCP/IP establishes connections between computers on a network. A message is sent from one computer (often hijacked using malware) to the computer under attack asking for a connection. The computer under attack responds to say it is ready to communicate but never receives confirmation of the connection request, so it sits waiting with half-open connections. With enough of these hanging half-connections, the attacked system becomes unable to respond to legitimate connection requests and cannot work normally. The good news is that both port scans and denial of service attacks can be prevented by using intrusion detection systems.
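Both attacks leave simple statistical fingerprints, which is why even a basic IDS catches them. The following toy Python sketch shows the kind of heuristics involved; the addresses, thresholds and event format are all invented for illustration, and a real NIDS would work from live packet captures on the SPAN port.

from collections import defaultdict

# Toy connection events: (source IP, destination port, TCP flags seen).
events = [
    ("10.0.0.5", port, "S") for port in range(20, 1045)   # port sweep
] + [("10.0.0.9", 80, "S")] * 500                          # repeated SYNs

SCAN_THRESHOLD = 100       # distinct ports probed from one source
SYN_FLOOD_THRESHOLD = 200  # SYNs never completed with a handshake

ports_by_source = defaultdict(set)
half_open_by_source = defaultdict(int)

for src, port, flags in events:
    ports_by_source[src].add(port)
    if flags == "S":       # SYN seen with no completing handshake
        half_open_by_source[src] += 1

for src, ports in ports_by_source.items():
    if len(ports) > SCAN_THRESHOLD:
        print(f"possible port scan from {src}: {len(ports)} ports probed")

# Note: a scanner also leaves many half-open connections, so it may
# trip this heuristic as well - which is fine for alerting purposes.
for src, count in half_open_by_source.items():
    if count > SYN_FLOOD_THRESHOLD:
        print(f"possible SYN flood from {src}: {count} half-open connections")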
Strengths and Weaknesses of Using Intrusion Detection Systems

Few would doubt that adding an intrusion detection system to your security portfolio is a good idea, but there are some drawbacks as well as advantages. An intrusion detection system is a very useful adjunct to good security practices and policies. There is no point in having an expensive IDS if you allow your users to download malware or copy files from USB thumb drives. Education is vital, as is leadership to demonstrate that the business takes security seriously. Any violation of a well-communicated policy needs to be taken seriously. An IDS is a useful way of creating a security baseline and then detecting any deviations from it that may indicate an attack. In this way an IDS allows you to act before any damage is done and prevent loss to the business.

On the downside, installing and configuring any form of IDS takes time and effort. During the learning phase there may well be a lot of false positives and false negatives while the administration team gets to fully understand what the system can do and how to tune it. No IDS can guarantee that all attackers will be deterred; a determined, educated attacker will probably succeed whether you have an IDS or not. That said, a very large percentage of untargeted attacks will be prevented by even the most basic intrusion detection system.

IT security is a tough gig. It needs administrators to work with developers, database administrators and the business to get the correct balance of security across the organisation. The simplest option can at times appear to be the easiest: unplug your systems from the internet and you will no longer need to worry about intrusion detection systems. We all know the reality is very different, and in today's modern, connected world internet connectivity is mission critical for most businesses. To that end it is important that we get security right from the start. Putting in place an intrusion detection system, as part of your unified threat management strategy, is now as vital as installing office productivity software on users' PCs. We can no longer rely on receptionists with shrill voices to protect our organisations from intruders.
Cybercriminals have a new attack vector that security watchdogs are worried about: the growing number of devices that routinely use the Internet to function. Machine-to-machine (M2M) security is closely connected with what's known as "The Internet of Things" and involves a host of devices that use mobile modules to connect to the Internet. There's the vending machine, for example, that communicates with a distributor when supplies get low, or the E-ZPass toll-paying system.

"It's not a huge target yet, but it's potentially a huge target," said Lawrence M. Walsh, president and CEO of the 2112 Group, a research and consulting firm in New York City. "It's projected that there will be 50 billion embedded mobile devices worldwide over the next 15 to 20 years," he noted.

[See also: Fortinet: Top 6 threat predictions for 2013.]

Until now, M2M has dodged the attention of the larger hacker community. It's still difficult to crack the devices because of a lack of intelligence in the networks they're connected to, said Anthony Cox, an associate analyst with Juniper Research in Basingstoke, UK. Cox recently blogged about the prospect of M2M attacks. That's changing, though, he said. "As systems offer more complete communication from the module to the management platform that looks after these modules, the chance of creating avenues where hacking may be successful will increase," he said in an email.

M2M systems that rely on richer communication already exist, such as the transponders used for automatic toll payments. "There's an M2M device at the toll booth reading your sensor, transmitting the data to another M2M device which is aggregating it and sending it to a server," Walsh explained. "Is it possible to hack that system to give everybody free tolls for the day or, conversely, double everybody's tolls for the day?" he asked. "Absolutely."

M2M is used, for instance, in a variety of energy, public utility, transportation and security systems, he said. "You have the potential to disrupt infrastructure, disrupt economic activity and steal money," he said. "M2M systems have a similar concern to SCADA systems," he said. "SCADA systems were once thought to be impervious to conventional Internet attacks because most of them weren't connected to the Internet and were operating on dedicated operating systems."

"Now, they're not," Walsh continued. "They're integrated with Windows and Linux and they have connections to networks connected to the Internet that makes them vulnerable."

Making matters worse is that M2M devices, unlike smartphones, tablets and PCs, have limited power, processing and storage. That could make them harder to secure from hackers. "Putting active security devices on them is going to be challenging," Walsh said. "The security of M2M systems [does] worry me."

Adding to M2M's security problems is an over-reliance on encryption to secure systems, said Tom Kellermann, vice president of Cyber Security for Trend Micro in Cupertino, Calif. For attackers, breaking encryption itself is less of a problem than compromising the credentials and private keys that the encryption depends on. "We need better ways of protecting private key management," Kellermann said, "but even if that problem were solved, the over-reliance on encryption to solve The Internet of Things security problem is highly problematic. ... Hackers know how to bypass encryption and use that encryption to hide their forensic trails."
This is the first of several articles discussing the various technologies and design criteria used for HPC systems.

Building computer systems of any sort, but especially very large systems, is somewhat akin to the process an apartment real-estate developer goes through. The developer has to have an idea of what the final product will look like, its compelling features, and the go-to-market strategy. Do they build each unit the same, or provide some level of heterogeneity with different floor plans? Do they construct one monolithic building or a village with walkways? What level of customization, if any, should be permitted?

In contemporary HPC design we face similar decision-making. Do we build tightly coupled systems, emphasizing floating point and internode bandwidth, or do we build nodes with extensive multi-threading that can randomly reference data sets? In either case, we always need to scale out as much as possible. And finally we have the marketing picture of the system under a beautiful clouded-blue sky with mountains or a lake in the background. Since we intend to market to international buyers, we have to figure out which languages to support on our marketing web site. Almost forgot: do we sell these systems outright, or base our financial model on a timeshare condo arrangement?

Is programming an HPC system equivalent to the above? For example, there's a choice to be made between extending existing languages or creating new ones. Are the languages domain-specific, unique to a particular application space, like HTML, Verilog or SQL; or do we add new features to existing languages, such as the global address space primitives of UPC?

For this initial piece, we will discuss these design issues in the context of "big data." It seems reasonable to suggest that building an exaOPS system for big data is different from building an exaFLOPS machine for technical applications. But is it? While clearly the applications are different, that doesn't necessarily mean the underlying architecture has to be as well. The following table compares some of the characteristics of OPS versus FLOPS at the node level.

Examining the attributes listed above would initially lead one to the observation that there are substantive differences between the two. However, looking at a hardware logic design reveals a somewhat different perspective. Both systems need as much physical memory as can be directly supported, subject to cooling and power constraints. Both systems also want as much real memory bandwidth as possible. For both systems, the logic used by the ALUs tends to be minimal; the die area used for a custom-designed floating point ALU is relatively small. This is especially true when one considers that 64x64 integer multiplication is an often-used primitive for address calculation in both big data and HPC applications, and that integer multiplication is in many cases part of the design of an IEEE floating point ALU.

If we dig a little deeper, we come to the conclusion that the major gating item is sustained memory bandwidth and latency. We have to determine how long it takes to access an operand and how many operands can be accessed at once. Given a specific memory architecture, we need to figure out the best machine-state model for computation: do we use compiler-managed registers built from the RAM that would normally be devoted to an L3 cache, or do we keep scaling the conventional cache-hierarchy floor plan? The overriding issue is efficiency. We can argue incessantly about this.
As the datasets get bigger, the locality of references — temporal and spatial — decreases and the randomness of references increases. What are the solutions?

In HPC classic, programmers (and some compilers) generate code that explicitly blocks the data sets into cache, typically the private L2 or shared L3 cache. This technique tends to work quite well for traditional HPC applications. Its major deficiencies are the extra coding work and the lack of performance portability among different cache architectures. Several techniques, especially those supported by the auto-tuning capabilities built around LAPACK, work quite well for many applications that manipulate dense matrices. Consequently, the memory systems are block-oriented, and support is inherent in the memory controllers of all contemporary microprocessors.

For big data, however, accesses are relatively random, and this block approach tends not to work. As a function of the data structure — a tree, a graph, a string — different approaches are used to make memory references more efficient. Additionally, for big data work, performance is measured in throughput of transactions or queries per second, not FLOPS. Coincidentally, perhaps, the optimal memory structure is HPC classic: highly interleaved, word-scatter/gather-oriented main memory. This was the approach used in Cray, Convex, Fujitsu, NEC and Hitachi machines.

There is another interesting dynamic of cache- or register-based internal processor storage: power consumption and design complexity. While not immediately obvious, for a given amount of user-visible machine state, a cache has additional transistors for maintaining its transparency, which translates into additional power consumption. For example, there is storage for tags, logic for comparing generated address tags with stored cache tags, and additional logic for controlling the cache. It is difficult to quantify the incremental power required, but it is incremental. Another aspect of cache versus internal state, especially for big data, is the reference pattern. Random references have poor cache hit characteristics, but if the data can be blocked, the hit rate increases substantially.

The efficiency of managing large amounts of internal machine state is proportional to the thread architecture. We have to determine whether we want lots of threads with reasonably sized register sets, or a smaller number of threads, like a vector machine, with a large amount of machine state. The latter approach places a burden on physical memory design. Attaching private L1 and L2 caches per core is relatively straightforward and scales as the number of cores increases. A shared L3 cache increases the complexity of the internal design: we need to trade off bandwidth, simultaneous accesses, latency and cache coherency. The question that needs to be asked is whether we would be better off using that internal static RAM for compiler-managed data registers per core/thread.

Obviously both memory structures have their own cost/performance tradeoffs. A cache-based memory system tends to be more cost-effective, but of lower performance. The design of the memory subsystem is easier, given that off-the-shelf DRAM DIMMs are commercially available. The HPC classic architecture results in higher performance and is applicable to a wider range of applications. The available memory bandwidth is used more effectively, and operands are loaded and stored only when needed; there is no block size to deal with.
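The blocking technique mentioned above is worth making concrete. The following is a schematic Python rendering of a cache-blocked matrix multiply; production codes express the same loop structure in compiled languages, and the block size of 64 is an illustrative choice that would in practice be tuned to the cache in question.

import numpy as np

def blocked_matmul(A, B, block=64):
    """Cache-blocked matrix multiply: C = A @ B.

    Each (block x block) tile of A and B is reused many times while it
    is resident in cache, instead of streaming whole rows and columns
    through memory on every pass.
    """
    n = A.shape[0]
    C = np.zeros((n, n))
    for i0 in range(0, n, block):
        for j0 in range(0, n, block):
            for k0 in range(0, n, block):
                # Multiply one pair of tiles; these sub-arrays are small
                # enough to stay resident in cache between uses.
                C[i0:i0+block, j0:j0+block] += (
                    A[i0:i0+block, k0:k0+block] @ B[k0:k0+block, j0:j0+block]
                )
    return C

A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
assert np.allclose(blocked_matmul(A, B), A @ B)

For dense matrices with good locality, this restructuring pays off handsomely; for the random, pointer-chasing references typical of big data, there is no tile to reuse, which is exactly the point made above.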
In summary, this article has discussed single-node processor architecture for data-centric and conventional high performance computing. There are many similarities and many differences. The major divergence is in the main memory reference model and interface. Data caches were created decades ago, and it's not clear that this architecture is still optimal. Will Hybrid Memory Cube (HMC) and Processor in Memory (PIM) architectures make tradeoffs for newer designs that move away from traditional memory designs? Time will tell. The next article will discuss design approaches for global interconnects.
Next month, with the help of a variety of high-tech gear, researchers will begin a wide-ranging project to better understand the origin, structure and evolution of tornadoes, with the ultimate goal of being able to better predict when the destructive storms will happen and get people out of harm's way faster.

The National Science Foundation has given $9.1 million to the project known as Verification Of Rotation in Tornadoes EXperiment 2, or more simply, VORTEX2, which will take place from May 10 to June 13. Researchers say VORTEX2 is the largest attempt in history to study tornadoes, and will involve more than 50 scientists and 40 research vehicles, including 10 mobile radars, covering 900 square miles of ground in southern South Dakota, western Iowa, eastern Colorado, Nebraska, Kansas, the Texas panhandle and western Oklahoma.

A variety of high-tech applications will be part of the project, including:

- Collaborative Adaptive Sensing of the Atmosphere (CASA) will use a numerical computer model that, when seeded with one hour of assimilated data (at 5-minute intervals) from the National Weather Service and its own radar system, has successfully predicted a tornado near Minco, Oklahoma, two hours ahead of time and at almost the exact time it was observed. The predicted location was only about 8 km from the observed tornado. The CASA radar network is a series of X-band radars mounted on cell phone towers.

- The Center for Severe Weather Research will deploy its Radar Observations of Tornadoes And Thunderstorms Experiment (ROTATE), which uses three Doppler On Wheels (DOW) mobile radars to observe the process of tornado formation, tornado structure, tornado lifecycle, and tornado death. ROTATE uses what researchers call a Rapid-DOW system that sends 6-10 simultaneous beams to scan the sky every 10 seconds. The idea is to catch the very beginnings of tornado formation.

- Other mobile networks, sensor networks and radar will also be used, as well as unmanned aircraft. Researchers said that while unmanned aircraft technology is developing rapidly and has great potential, its use for meteorological purposes is still in its infancy. For VORTEX2, unmanned aircraft will be tethered to vehicles on the ground.

VORTEX2 is a follow-on to the ground-breaking research gathered from the initial VORTEX experiments in 1995. Researchers said technological advances that have occurred since VORTEX1 (e.g., advances in ground-based mobile radar technology and improvements in the ability to obtain thermodynamic and microphysical observations) will let investigators explore aspects of tornadoes and their formation that they could not pursue in the original experiment.

"An important finding from the original VORTEX experiment was that tornadoes happen on smaller time and space scales than scientists had thought," said Stephan Nelson, NSF program director for physical and dynamic meteorology. "New advances from VORTEX2 will allow for a more detailed sampling of a storm's wind, temperature and moisture environment, and lead to a better understanding of why tornadoes form--and how they can be more accurately predicted."
Your data. Your wallet. Your identity. In the darker corners of the internet, it's all fair game, and disaster could be one unwitting click away. Protect yourself by learning about the web's most common dangers and how to avoid them. Also, take some basic steps to make your PC more resistant to harm.

Install security software

No, seriously. Go do it now. If you're running a Windows-based PC, security software is an absolute must. PCWorld recently tested 10 premium security suites--pick one! And even if you can't or won't pay for protection, you can bolster your defenses by building a comprehensive free security suite. Good security software stops web-based dangers in their tracks and can prevent malware infection before it happens.

Viruses and malware

A lot of people see viruses and malware as something that malicious hackers slip onto your computer. In reality, the vast majority of infections happen because of something the user does. Specifically, downloading and running files from websites or email attachments that you don't trust is a great way to wreck your computer. Hackers especially like to serve up viruses on seedy websites, such as those claiming to offer movies, music, commercial software, and porn for free. Steer clear of these sites altogether, and you'll greatly reduce your chance of getting a virus.

To fend off malware, download programs only from trusted websites. Deny all others! Be sure to perform a virus scan on any software you download before you install it, as well. Sometimes, shady websites disguise malware downloads as fake update or error warnings. If you encounter a prompt like that, just close the tab or window--don't click the warnings shown on-screen. Instead, browse to the official website of the software that's allegedly out of date and look for updates there.

Most viruses come from files downloaded off the internet, but an insidious variant called a "drive-by" virus can infect you if you simply visit certain websites. Drive-by viruses exploit vulnerabilities in your operating system, browser, or other software, so the key to avoiding them is to keep everything updated--your browser and any plug-ins like Flash and Adobe Reader. Make sure you have Windows updates turned on, and if you ever get a notice that one of your plug-ins is out of date, take care of it right away. Microsoft will stop supporting Windows XP this April, so XP holdouts really need to upgrade to a new OS by then--even if it means switching to Linux. Connecting to the internet with an unsupported operating system puts you at a very big risk of being infected with a virus.

You can prevent some infections by disabling Java in your browser--to do this, just search in the Start Screen for "Configure Java." In the Java preferences screen you'll find a security tab, which allows you to disable Java in web browsers. Or you could just delete the notoriously leaky Java from your PC altogether--you won't miss it unless you truly need it.

To avoid all possible risk, or to take a protected walk on the wild side--like visiting a sketchy download site or installing a program that you're not sure about--there's a way to make sure you don't do any damage to your computer. It's called Sandboxie. Sandboxie acts like a latex glove for your computer, dumping programs into a walled-off "sandbox." (Hence the name.)
When it's activated, you can run programs or surf the web and be sure that those programs can't actually have any effect on your file system. That means they can't install software, or deposit or delete files on your machine. Instead, Sandboxie intercepts any changes, and you can decide whether you want to let any of them affect your PC proper. Once you've downloaded and run the installer, a quick tutorial will run, telling you to open up a browser window in sandboxed mode. To do that, just find the desktop shortcut for your browser, right-click on it, and choose Run Sandboxed. You can use this procedure to get your browser or any other program to run with Sandboxie's protection.

If you actually do want the sandboxed browser to be able to download a file, just download it as you normally would, and choose to save it in your Documents or Downloads folder. Sandboxie will display a dialog box asking whether you want to transfer the file out of your sandbox and onto your real hard drive. Sandboxie isn't an excuse to throw caution to the wind, but it does give you a safe way to try out software that you're unsure about. Use it responsibly!

Thanks to sophisticated spam filtering on popular webmail services like Gmail, email isn't quite the wasteland it once was. Still, it's one of the easiest ways to get infected with a virus or to have your identity stolen. Just follow these two rules, and you'll be fine--doubly so if you have a security suite installed, as most premium options protect against email-based risks.

- Email rule #1: Don't open attachments that you weren't expecting. If you get an email from someone you don't know that comes with any sort of attachment--even something that sounds totally normal like a JPG or PDF--don't open it. Simple as that. If the email is from someone you do know, but you weren't expecting to receive a file from them, don't open it. Send them a text message or something and ask if they meant to send it to you. One of the most common ways viruses propagate themselves is by taking over your address book and sending a copy of themselves to all your friends.

- Email rule #2: Don't log in to sites you visit via email. You've probably heard of phishing by now, but just to refresh your memory, it's the name for the class of email scams that work by tricking you into going to a fake version of a popular website. Once you're on the fake website, you're prompted to log in. When you enter your account name and password, that precious data goes straight to identity thieves. The simple way to avoid getting phished is to avoid sites that require logging in through an email link. If you get an email from Amazon or eBay that says you need to make some change to your account, just visit the site like you always would--type it into your browser's address bar, click on a browser bookmark, or search for it in Google--and take care of the problem there.

But wait, there's more

These basic steps should keep you safe in the deep, dark corners of the web. If you're interested in learning how to safeguard yourself against more specific security woes, however--like hackers, zero-day attacks, spearphishers, and more--check out PCWorld's guide to protecting your PC against devious security traps. Keeping your PC safe and secure is simple enough...but only if you know what to look for.

This story, "Protect your PC in the web's worst neighborhoods" was originally published by PCWorld.
It's time for smartphone users to give up the fear of smartphone tracking in favor of letting mobile apps use location for relevancy. The challenge with using location to improve context in mobile apps is engineering a balance of power consumption, accuracy, and coverage. Android has unique advantages here, giving app developers tools for power conservation and both coarse- and fine-grained control over location, including improvements to both indoor and outdoor accuracy.

A practical example of why a smartphone user would want an app to know his or her location is Llama, which lets the user set up location-based profiles that turn off loud or inappropriate ringtones at work, or prevent work-related messaging and social notifications from buzzing in the bedroom. Similarly, task-list maker Any.do can recognize when users are in a mall and remind them to buy specific items.

In addition to being suspicious of smartphone tracking, users are wary of the battery drain of GPS. But contextual use of location does not drain battery life the way turn-by-turn GPS-based directions do. For example, an app that searches for the theaters showing a particular movie can use the very low-power cellular position to list the theaters by distance. Cellular location is not as accurate as GPS or Wi-Fi location methods, but when searching for movie theaters, it's good enough.

Most Android smartphones have built-in accelerometers, gyroscopes, compasses and barometers. These sensors can determine movement in three dimensions, direction, speed and even mode of travel (walking, biking or driving). These lower-power sensors can be used to enhance the accuracy of location-based apps while reducing power consumption. An example of zero-power-consumption location is where, after the user's location is determined, the sensors indicate that he or she has not moved since then. There would be no reason to incur the power-consumption expense of GPS and/or Wi-Fi to determine location when the last known position could be used.

Google recently released Fused Location Services, which simplifies development of location-aware contextual apps while increasing accuracy and reducing power consumption. In creating its Fused Location Provider, Google combined GPS, Wi-Fi, cellular and sensor-based location determination methods. GPS is very accurate, but it consumes the most battery power; Wi-Fi is less accurate, but also less power-hungry. GPS works well outside but, as Google's Jaikumar Ganesh said at Google I/O, it "drops dead at the door" when the smartphone user enters a building. The opposite is true of Wi-Fi positioning. Sensors can smooth out accuracy and algorithmically determine when to spend the power budget on GPS and Wi-Fi: moving from GPS to Wi-Fi during an indoor transition becomes more accurate, with direction and speed supplied by the sensors.

Because the location services are fused with algorithms developed by the Android team, the developer can use a straightforward application programming interface (API) based on the priority of an app's required degree of accuracy. With the Fused Location Provider, an app does not have to be running all the time: the location process can wake up the app when a location event of interest occurs. For instance, an application that turns on the lights at home would not run while the smartphone user was away in another city, but would reactivate when the user returned home. A geofence is a virtual boundary around a physical area whose crossing an application can recognize when the smartphone enters or leaves; Android apps can register up to 100 geofences.
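At heart, a geofence check is a distance test against a circle. The Python sketch below illustrates the idea with the haversine formula; the coordinates and radius are invented, and a real app would register geofences with the platform's geofencing API rather than polling position fixes itself.

import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical geofence around a shopping mall: a center plus a radius.
MALL = (45.4215, -75.6972)
RADIUS_M = 200

def crossed_geofence(prev_fix, new_fix):
    """Return 'enter' or 'exit' if the boundary was crossed, else None."""
    was_inside = haversine_m(*prev_fix, *MALL) <= RADIUS_M
    is_inside = haversine_m(*new_fix, *MALL) <= RADIUS_M
    if is_inside and not was_inside:
        return "enter"
    if was_inside and not is_inside:
        return "exit"
    return None

print(crossed_geofence((45.4300, -75.6972), (45.4216, -75.6973)))  # 'enter'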
The earlier example of being reminded of a shopping list when entering a mall is a geofence, as is the Llama profile change when a geofence around the bedroom or office is crossed.

The famed computer scientist Alan Kay once said, "Context is worth 80 IQ points." In the context of app relevancy and location, users should allow smartphones to add those IQ points by coping with the mundane and creating efficiency.
You may not know the term "CAPTCHA" (Completely Automated Public Turing Test to Tell Computers and Humans Apart), but you've used it. You may not, however, be using it for much longer. Every time you've had to puzzle out the letters and numbers from a distorted, scrambled jumble before you can sign up for a new Web services account, such as Live Hotmail, Yahoo Mail, or Gmail, or post a story on an online discussion system like Digg, you've used CAPTCHA. It's meant to make sure that you're a real person and not a bot seeking to spread malware and spam.

For a while, CAPTCHA worked. If you're like me, you found it annoying, because there were times when you couldn't tell the difference between 's' and 'S' either. Still, even though it was, and is, a pain, I was willing to put up with it since it actually did help block spammers.

The key word above is 'did.' In late 2007, hackers started having some success against CAPTCHA schemes. By January 2008, Yahoo Mail was cracked; Hotmail was crunched in early April; and Gmail was cut open in April. None of the CAPTCHA-cracking programs really seems to be that good. But then, they don't have to be. Web security firm Websense's resident CAPTCHA expert Sumeet Prasad explained in a blog posting that while only 10% to 15% of attempts on Hotmail are successful, a CAPTCHA-cracking system needs only six seconds for every attack. At a 10% success rate and six seconds per attempt, that works out to roughly one freshly minted spam account per minute, per attacking machine. I think we can safely presume that there are other CAPTCHA crackers for the other major free e-mail systems with about the same level of efficiency.

Since no ISP or spam-blocking service in its right mind is about to try to blacklist Gmail, Hotmail, or Yahoo e-mail accounts, it looks to me like CAPTCHA will soon be in the security junkyard of obsolete technology. Or maybe not. Developers at Penn State have applied for a patent on a novel kind of CAPTCHA that they're calling IMAGINATION. It, in turn, is based on ALIPR (Automatic Linguistic Indexing of Pictures).

This is an image-based system. In it, you're first required to pick out the geometric center of a distorted image from a page that's filled with similar overlapping pictures. Then, if you get that right, you're presented with another carefully distorted image and asked to pick a word to describe what you're seeing. Frankly, when I first read about the idea, I wasn't impressed. Then I tried it. Now, I am impressed. You can give it a try too at their sample ALIPR page.

It's a radical retake on the CAPTCHA idea. The core idea, as the developers explain on their site, is that the "IMAGINATION System requires solving a harder AI problem, that of image recognition, in order to break. Therefore, in principle, the system is more secure than text-based CAPTCHAs, with image recognition being a harder problem, and the 'space' of images being much larger." In other words, as they explain on the results page once you've passed the test, "If you think a robot can also pass our test, give it a try and we'd love to know how far your robot can get."

That's mighty darn confident of them, to throw down a challenge that way, but they have reason to feel sure about this system. I don't see the IMAGINATION CAPTCHA system being broken for at least a couple of years. For now, my only worry about this system prolonging CAPTCHA's usefulness for security isn't whether today's hackers can break it (I doubt they can) but how people with color-blindness will do with it. If color-blindness isn't a problem, I think IMAGINATION has the potential to become the new online security system of choice.
And that's a good thing: the old-line CAPTCHA still in use today is useless and needs to be retired as soon as possible.
Let's say that we have a router with several interfaces, as shown in Figure 1. Now, imagine that we want to manage our router remotely via Telnet, SSH, SNMP, SDM or some other IP utility. To accomplish this, we'll have to supply one of our router's IP addresses to the management software. Let's say that we choose 172.16.1.1, the address of Serial1/0. Assuming that the interface is "up/up" and advertised by the routing protocol so that our management host can find it, we should be fine ... but what if it's not? In that case, we'd need to specify another one of our router's addresses for management purposes.

Okay ... but suppose that this router has twenty interfaces (each with an IP address), and we have hundreds (or thousands) of routers? That's a lot of IP addresses to keep track of. We'd have to carry around a book (or a netbook!) listing which routers had which IP addresses, and for each router try its various addresses until we found one that worked. The bottom line is that managing large numbers of routers using the addresses on the physical interfaces or subinterfaces is not scalable.

Instead, let's create a virtual interface (called a "loopback"), give it an IP address, and configure the router to advertise that address. Assuming that the loopback's address is reachable via at least one physical path, we should be able to successfully connect to the router and manage it remotely. With Cisco IOS, we create a loopback interface and assign it an IP address like this:

Router(config)#interface loopback 0
Router(config-if)#ip address 192.168.1.1 255.255.255.255

Note that the mask in use on the loopback interface is a "/32" (making the loopback's address a host route). This is commonly done with management loopbacks to conserve IP address space, so that we're not tying up a large subnet (or an entire classful network) for one loopback address. Our router now appears as shown in Figure 2.

Note that the loopback interface does not physically exist (it's a software emulation of an interface, similar to a VLAN interface on an Ethernet switch), and it appears as a "C" (connected) route in the router's IP routing table. At this point the loopback would be reachable by the router itself, but perhaps not from other routers. We'll deal with this issue in the next installment.
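In the meantime, the management problem that loopbacks solve can be made concrete with a short sketch. The Python below is purely illustrative, with invented addresses: without a loopback, management software has to hunt through each physical interface address in turn; with one, there is a single, stable target.

import socket

# Without a loopback: every physical interface address is a candidate,
# and any of them may be down at a given moment.
candidate_addresses = ["172.16.1.1", "172.16.2.1", "10.1.1.1"]

# With a loopback: one stable management address per router.
loopback_address = "192.168.1.1"

def reachable(ip, port=22, timeout=2.0):
    """Return True if a TCP connection (e.g., SSH) can be opened."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def find_management_address(addresses):
    """Try each candidate until one answers - the manual, unscalable way."""
    for ip in addresses:
        if reachable(ip):
            return ip
    return None

print(find_management_address(candidate_addresses))
print(reachable(loopback_address))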
The OTDR (Optical Time Domain Reflectometer), one of the most important fiber optic testers, is most commonly used by technicians and installers to certify the performance of new fiber optic links and to detect issues on existing fiber links. Several specifications of an OTDR affect its performance, and understanding them helps users get maximum performance from their OTDRs. Today, one of the key specifications—the dead zone—will be introduced here.

The OTDR dead zone refers to the distance (or time) over which the OTDR cannot detect or precisely localize any event or artifact on the fiber link. It is most prominent at the very beginning of a trace and at any other high-reflectance event. In simple terms, an OTDR dead zone is caused by a Fresnel reflection (mainly from the air gap at the OTDR connection) and the subsequent recovery time of the OTDR detector. When a strong reflection occurs, the power received by the photodiode can be more than 4,000 times higher than the backscattered power, which causes the detector inside the OTDR to become saturated with reflected light. The detector then needs time to recover from its saturated condition; during this recovery time it cannot detect the backscattered signal accurately, which results in a corresponding dead zone on the OTDR trace. This is like when your eyes need to recover from looking at the bright sun or the flash of a camera. In general, the higher the reflectance, the longer the dead zone. The dead zone is also influenced by the pulse width: a longer pulse width increases the dynamic range, but results in a longer dead zone.

There are two types of dead zones on an OTDR trace—the event dead zone (EDZ) and the attenuation dead zone (ADZ). The event dead zone is the minimum distance between the beginning of one reflective event and the point where a consecutive reflective event can be detected. According to the Telcordia definition, the event dead zone ends where the falling edge of the first reflection is 1.5 dB down from the top of that reflection. The attenuation dead zone is the minimum distance after which a consecutive non-reflective event can be detected and measured. According to the Telcordia definition, it ends where the signal is within 0.5 dB above or below the backscatter line that follows the first pulse. For this reason, the attenuation dead zone specification is always larger than the event dead zone specification.

Note: In general, to avoid problems caused by the dead zone, a launch cable of sufficient length is used when testing, which allows the OTDR trace to settle down after the test pulse is sent into the fiber so that users can analyze the beginning of the cable under test. There is always at least one dead zone in every fiber—where it is connected to the OTDR.

The existence of dead zones is an important drawback of OTDRs, especially in short-haul applications with a large number of fiber optic components, so it is important to minimize their effects wherever possible. As mentioned above, dead zones can be reduced by using a shorter pulse width, but this decreases the dynamic range. Thus, it is important to select the right pulse width for the link under test when characterizing a network or a fiber.
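The link between pulse width and dead zone follows directly from pulse geometry: a pulse of duration t occupies (c/n)*t of fiber, and because the OTDR measures a round trip, events closer together than roughly (c/n)*t/2 cannot be separated. The short Python sketch below computes this lower bound; it deliberately ignores detector recovery time, which in practice makes real dead zones longer than these figures.

# Spatial extent of an OTDR pulse and the resulting lower bound on the
# dead zone. Detector recovery time is ignored, so real dead zones are
# longer than these figures.
C = 3.0e8          # speed of light in vacuum, m/s
GROUP_INDEX = 1.5  # typical group index of silica fiber

def min_dead_zone_m(pulse_width_ns):
    pulse_width_s = pulse_width_ns * 1e-9
    pulse_length_m = (C / GROUP_INDEX) * pulse_width_s
    return pulse_length_m / 2  # the round trip halves the resolution

for pw in (5, 10, 100, 1000):  # pulse widths in nanoseconds
    print(f"{pw:>5} ns pulse -> at least {min_dead_zone_m(pw):8.1f} m dead zone")

A 10 ns pulse, for example, yields a floor of about 1 m, while a 1 microsecond pulse cannot resolve events closer than about 100 m apart, which is why pulse width selection matters so much in the scenarios below.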
In general, a short pulse width, short dead zone and low power are used for premises fiber testing and troubleshooting, where links are short and events closely spaced, while a long pulse width, long dead zone and high power are used for long-haul fiber testing, to reach further on long or high-loss networks. The shortest possible event dead zone allows the OTDR to detect closely spaced events in the link. For instance, testing fibers in premises networks (particularly in data centers) requires an OTDR with short event dead zones, since the patch cords in the fiber link are often very short. If the dead zones are too long, some connectors may be missed and will not be identified by the technicians, which makes it harder to locate a potential problem. Short attenuation dead zones enable the OTDR not only to detect a consecutive event but also to return the loss of closely spaced events. For instance, the loss of a short patch cord within a network can then be measured, which helps technicians form a clear picture of what is actually inside the link.

The OTDR is one of the most versatile and widely used pieces of fiber optic test equipment; it offers users a quick, accurate way to measure insertion loss and shows an overview of the whole system under test. The dead zone, with its two general types, is an important specification of an OTDR. Users need to understand the dead zone and select the right configuration in order to get maximum OTDR performance during a test. In addition, OTDRs of different brands are designed with different minimum dead zone parameters, since manufacturers use different testing conditions to measure them. Users should choose a suitable model according to their requirements and pay particular attention to the pulse width and the reflection value. Fiberstore offers various OTDRs from the major brands, such as JDSU, EXFO and Yokogawa, as well as other portable and handheld OTDRs with a wide range of options. For more information, please contact us via firstname.lastname@example.org.
Architecting Linux High-Availability Clusters - Part 1

By Tau Leng; Jenwei Hsieh, Ph.D.; and Edward Yardumian (Issue 4, 2000)

This article, the first in a series on Linux high-availability (HA) clusters, provides an overview of the Linux HA cluster. It describes the two common types of clusters: the Linux Virtual Server for IP HA and load balancing, and HA application clusters. Future articles in the series will cover product-specific implementations and features.

Linux is known as a stable operating system. However, a Linux client/server configuration can have several points of failure, including the server hardware, the networking components, and the server-based applications. As more administrators choose Linux for critical applications, the demand for high-availability (HA) clustering on the Linux platform is increasing. In response, a number of Linux distributors have designed and implemented bundled HA solutions in their products, and numerous third-party add-ons are now available. However, several aspects of these technologies are not always clear, such as how the technologies work, the types of applications for which they are suitable, and the kinds of hardware required.

HA Clustering Provides Availability, Performance, and Capacity

For networked computers, clustering is the process of connecting multiple systems together to provide greater overall system availability, performance, capacity, or some combination of these. Because the term clustering itself is so broad, other terms—such as load balancing, failover, parallel, and Beowulf—are used to describe specific cluster implementations. For example, Beowulf clusters are designed to provide scalability and parallel processing for computational functions. HA clustering solutions, however, seek to provide enhanced availability for a service or application.

Common Types of HA Clusters in Linux

Two common types of HA clusters are emerging in the Linux environment: HA IP clusters and HA application clusters. HA IP clusters ensure availability for network access points, which typically are IP addresses that clients use to access network services. HA IP clusters on the Linux platform achieve high availability using the Linux Virtual Server (LVS) mechanism. By using this mechanism to support virtual IP addresses, some HA IP cluster implementations can also load-balance certain types of applications if their contents are completely replicated on a pool of application servers. Applications that an HA IP cluster can load-balance include static Web and File Transfer Protocol (FTP) servers and video streaming servers. HA application clusters, on the other hand, are more suitable for stateful, transactional applications, such as database servers, Web application servers, file servers, and print servers. HA application clusters ensure availability through the failover of applications, along with all of the resources that the applications need (such as disks, IP addresses, and software modules), to the remaining servers.

HA IP Clusters: The LVS Presents a Single System

Most HA IP cluster implementations, such as Piranha, Red Hat High Availability Server 1.0, TurboCluster from TurboLinux, and Ultra Monkey (supported by VA Linux), use an LVS mechanism together with a group, or pool, of cloned application servers. The LVS presents the pool of application servers to network clients as if it were a single system.
The LVS is represented by the virtual IP address or addresses clients use to access the clustered services, including the specific port and protocol—either UDP/IP or TCP/IP. An LVS server maintains the LVS identity and dispatches client requests to a physically separate application server or pool of application servers. Network clients are kept unaware of the unique physical IP addresses used by the LVS server nodes or any of the application nodes; the clients access only a virtual IP address managed by the LVS.

The LVS server is responsible for routing client requests to the cloned application servers. To accomplish this task, the LVS is configured with scheduling policies that allocate and forward incoming connections to the application servers. High availability is achieved by having multiple destinations capable of processing requests: if one of the application servers fails, one or more application servers remain available to continue service through the same virtual IP address.

Heartbeats Monitor Server Health

To provide uninterrupted service, the LVS continuously monitors the health of the application servers (see Figure 1). Health monitoring between the LVS and application servers ensures timely failure detection and up-to-date cluster membership status. This monitoring is performed via a heartbeat mechanism managed by the LVS. Heartbeat packets are sent between cluster nodes at regular intervals (on the order of seconds). If a heartbeat is not received after a predefined period of time—typically a few heartbeat intervals—the absent machine is presumed failed. If this machine is an application node, the LVS server stops routing client requests to it until it is restored. Depending on the implementation, the heartbeat protocol may run over a TTY serial port, UDP/IP over Ethernet, or even shared storage connectivity.

Figure 1. Monitoring the Health of Application Servers

Providing continuous service requires that the applications be installed locally on each of the application servers. For this reason, the application servers are often referred to as "clones." Any data, such as Web or FTP content, must be completely replicated to all of the application servers to ensure a consistent response from the application server pool. In the event of a server failure, the LVS heartbeat mechanism detects the failure, makes the necessary changes to the cluster's membership, and continues forwarding requests to the remaining server or servers.

HA IP Clusters Support Load Balancing

HA IP clusters not only provide high availability for an IP address; the manner in which client requests are forwarded from the LVS server to the cloned application servers also supports load balancing. In fact, load balancing—simply by adding application nodes when demand increases—can help the appropriate applications achieve tremendous scalability. To help spread client requests, or workload, across the pool of application servers, each LVS implementation uses a set of basic scheduling policies. The most commonly used policies are round robin and least connections. Round robin simply forwards requests to each application server one at a time and perpetually repeats the process in the same order. With least connections, the LVS assesses the current number of connections on each application server and forwards the request to the server with the fewest.
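The two basic policies are easy to express in code. The sketch below is an illustrative Python model of round-robin and least-connections dispatching, not the actual LVS implementation (which lives in the kernel); the server names and connection counts are invented.

import itertools

servers = ["app1", "app2", "app3"]          # cloned application servers
active_connections = {s: 0 for s in servers}

# Round robin: hand out servers in a fixed, repeating order.
rr_cycle = itertools.cycle(servers)

def round_robin():
    return next(rr_cycle)

# Least connections: pick the server currently handling the fewest requests.
def least_connections():
    return min(servers, key=lambda s: active_connections[s])

def dispatch(policy):
    server = policy()
    active_connections[server] += 1   # decremented when the request completes
    return server

for _ in range(5):
    print("round robin ->", dispatch(round_robin))

active_connections.update({"app1": 12, "app2": 3, "app3": 7})
print("least connections ->", dispatch(least_connections))  # picks app2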
Some Linux distributions or products also use more advanced algorithms that can examine the load on each application server and distribute the incoming requests accordingly. For Web sites that do not maintain state information outside the Web server itself, most LVS implementations have a persistency mode that redirects clients to the appropriate servers throughout a session.

At a minimum, an HA IP cluster can be implemented by using one of the application servers for the LVS mechanism (see Figure 2). All requests are first handled by the server running the LVS, which initially determines whether the request will be handled locally or shipped to another application server. In this configuration, however, the LVS mechanism can become a bottleneck, because one server handles all routing functions as well as application requests.

Figure 2. Hosting the LVS Mechanism on an Application Server

An Active/Passive LVS Helps to Prevent Bottlenecks

Assigning smaller loads to the server running the LVS can mitigate the risk of a bottleneck. Ideally, the LVS should be built with a pair of dedicated servers, one actively functioning as the LVS, the other acting as its hot standby (see Figure 3). This configuration is often referred to as an active/passive LVS, because one server actively serves as the LVS and routes requests to the application servers, while the passive LVS server waits to assume control only if the active server fails. In a failure scenario, the standby server assumes the virtual IP address of the LVS, while retaining its own unique physical IP address, through a process referred to as IP failover. Clients are automatically reconnected to the LVS running on the other server without reconfiguration.

Figure 3. Redundant LVS Server Mechanism

Currently, most HA IP cluster distributions do not support active/active configurations. An active/active configuration would include two or more LVS servers, each active and responsible for a different LVS, in addition to being available in the case of a failover. Furthermore, no LVS implementations currently support load-balancing configurations in which multiple LVS servers share routing responsibilities for the same virtual IP address. As traffic to the site grows and the LVS server routes an increasing number of requests, the LVS server may have to be upgraded or replaced to ensure that CPU, memory, and network bandwidth remain adequate. To ease the load on an LVS server, which typically routes all application server responses back to clients using network address translation (NAT), responses from application servers can be sent directly to clients by creating another physical route and using IP tunneling or direct routing techniques.

Data Replication is a Challenge

Data replication often is the greatest challenge when implementing HA IP clusters with a pool of application servers. If just two application servers are required, a shared storage system can be built using Dell's cluster-ready PERC 2/DC RAID controllers and PowerVault 200S or 210S storage systems with Enclosure Services Expander Module (ESEM) or SCSI Enclosure Management Module (SEMM) cluster modules (see Figure 4). In shared storage configurations, both servers can access the same set of files using a global file system to share files between application servers. If more than two nodes are required, or if shared storage is not desired, Intermezzo's distributed file system enables directory tree replication and can be used to replicate files to the application servers' internal disks.

Figure 4. Two-Node Shared Storage Configuration
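For clusters that replicate to internal disks rather than sharing storage, the synchronization work is usually scripted. The following toy Python sketch illustrates checksum-based replication from a master copy to the clones; real deployments use tools such as rsync or a distributed file system like Intermezzo, and the paths here are invented.

import hashlib
import pathlib
import shutil

MASTER = pathlib.Path("/srv/master/htdocs")     # authoritative content
CLONES = [pathlib.Path("/srv/clone1/htdocs"),   # application servers' copies
          pathlib.Path("/srv/clone2/htdocs")]

def digest(path):
    return hashlib.md5(path.read_bytes()).hexdigest()

def replicate():
    """Copy any file whose checksum differs from the master's copy."""
    for src in MASTER.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(MASTER)
        for clone in CLONES:
            dst = clone / rel
            if not dst.exists() or digest(dst) != digest(src):
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dst)
                print(f"updated {dst}")

replicate()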
When transactions are involved, or when mirroring must be nearly instantaneous, complex distributed locking techniques often are required to maintain data integrity and consistency. While replication and mirroring may work for some applications that involve writes, the replication can incur substantial overhead. In the case of databases, messaging, and most application services, it is difficult to implement HA IP clustering because of their read/write, transactional nature and the complexity of replicating or synchronizing their content. These applications are therefore better suited to Linux HA application clusters.

HA Application Clusters

HA application clusters, such as LifeKeeper from SteelEye, Convolo Cluster from Mission Critical Linux, RSF-1 from High-Availability.com, and VERITAS Cluster Server, are appropriate for transactional applications, such as databases, groupware, file systems, and other applications containing business logic. While the LVS mechanism is the enabling technology for HA IP clusters, HA application clusters take the concept of the LVS a step further to ensure the availability of applications. HA application clusters achieve this availability by continuously monitoring the health of an application and the resources the application depends on for normal operation, including the server it is running on. Should any of the application resources fail, the HA application cluster will restart, or fail over, the application on one of the remaining servers.

To ensure that all of the resources an application needs are closely monitored and will fail over to one of the remaining servers, they are grouped together in a resource hierarchy. In resource hierarchies, the resources are grouped and arranged in dependency trees so that they can be moved (between physical resources) or restarted on different servers. A dependency tree ensures that the resources come online in the right order: lower-level resources such as disks and IP addresses are brought online first, and application modules are brought up last, after all the resources they depend on are ready. For example, the resource group hierarchy for a file share could include a disk where the files are stored, an IP address, a server name, and file share resources (see Figure 5). The disk resource and the IP address have no dependencies and are brought online first. The disk is among the first resources to come online because it would be futile to connect to the file shares before the disk they are stored on is online and mounted. Likewise, it is important to bring up the IP address before bringing up the server's name. Finally, after all of the required dependencies are brought online, the application services are made available on the network.

Figure 5. Example Resource Group Hierarchy

Key elements that assist application failover are the resource manager and application recovery kits. The resource manager enables the user to define the resource hierarchy for applications and specify the dependencies among resources. Application recovery kits are tools, or sets of scripts, that provide the mechanism to automatically restart an application and all of its resources, in the proper order, on one of the remaining servers should a failure occur.
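The dependency-ordered bring-up that a recovery kit performs can be sketched in a few lines of Python. Everything here is illustrative: the resource names and the dependency tree mirror the file-share example above, but real recovery kits are vendor-supplied scripts with health checks and rollback.

# Resource hierarchy for a file-share service, as a dependency tree:
# each resource lists the resources that must be online before it.
DEPENDS_ON = {
    "disk":        [],
    "ip_address":  [],
    "server_name": ["ip_address"],
    "file_share":  ["disk", "server_name"],
}

def bring_online(resource, online=None):
    """Start a resource after recursively starting its dependencies."""
    if online is None:
        online = set()
    if resource in online:
        return online
    for dep in DEPENDS_ON[resource]:
        bring_online(dep, online)
    print(f"starting {resource}")   # a real kit would run a start script here
    online.add(resource)
    return online

# Failover: restart the whole hierarchy, lowest-level resources first.
bring_online("file_share")

Running this prints disk, ip_address, server_name, and file_share in that order, which is exactly the bottom-up sequence the dependency tree guarantees.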
Application recovery kits are usually provided by vendors of HA application clusters for packaged software, including databases and the most commonly used applications on Linux servers, such as the Apache Web server, sendmail, and print services. Moreover, recovery kits at the system level for the Linux file system are becoming more widely available. For applications that have no associated recovery kit, users can create custom scripts by employing application programming interface (API) commands and utilities.

Generally, three approaches exist to ensure that application data or storage remain available to the remaining servers after failover. Figure 6 summarizes the advantages and disadvantages of these approaches.

Figure 6. Approaches to Ensure Data or Storage Availability After Failover

Passive Standby and Active/Active Modes Ensure High Availability

Similar to the LVS concepts of active/passive and active/active, HA application clusters use the passive standby mode and active/active mode terminology. A straightforward approach to achieve high availability is the passive standby mode, in which one server acts as the primary server, while a secondary server remains available for use should the primary server fail. In this passive backup mode, the secondary server is not used for any other processing; it simply stands by to take over if the primary server fails. This configuration enables maximum resources to be available to the application in the event of a failure. However, this configuration is expensive to implement because it requires twice the amount of hardware to be purchased.

For all but the most mission-critical applications, active/active configurations can be highly effective. In active/active configurations, each server performs useful processing, while still possessing the ability to take over for another server in the event of a failure. Drawbacks of active/active configurations include increased design complexity and the potential introduction of performance issues upon failover.

Multinode Solutions Are Now Available

Although the most common HA application cluster configurations are currently for two nodes, several multinode (more than two nodes) failover solutions for Linux have recently become available. With multinode clusters, configurations such as N+1 and cascaded failover help administrators meet high-availability needs in a complex environment while also providing better resource utilization. For example, in an N+1 configuration, a single dedicated server runs in passive mode while the rest of the servers actively process requests for their applications. If a server completely fails, the passive node provides all the resources of an unused server, rather than squeezing the application onto another server already responsible for several applications.

Multinode configurations can be implemented either by mirroring content locally to each server or through a switched storage fabric. Mirroring often requires complex replication techniques and network overhead to push and pull the content to all of the servers. The technology is now widely available to build switched fabrics by using storage area networks (SANs). SANs (see Figure 7) provide multinode clusters with excellent server-to-storage performance, even as additional servers are added to the SAN, as well as the ability to effectively scale the amount of storage the cluster nodes use.
Figure 7. A Four-Node Switched SAN Configuration

Combine Both HA Cluster Types for a Multitier Solution

High availability and scalability are equally important to the construction of an e-commerce or a business-critical system. Providing continuous service for a distributed, multitier application is possible by deploying HA IP and HA application clusters together. Figure 8 shows an e-commerce configuration in which a pair of LVS servers is responsible for two Web-based applications, one for static content running on a pair of servers, and one for commerce applications running on three servers. Both of these sites are load balanced through HA IP clustering and the active LVS server. High availability for the database component of the site is achieved with an active/active HA application clustering solution. One node of the cluster runs a database used for the site's catalog and inventory, while the other node runs the orders database. Although the two servers cannot coprocess the same database, each server can run all of the databases simultaneously if one of the two servers fails.

Figure 8. Using HA IP and HA Application Clusters Together

Support is Growing

Support for high availability and scalable services under Linux is growing. As these technologies mature, we will cover more specific implementations and features in future articles. Presently, Linux is more widely used as the front end of distributed, multitier configurations for stateless mode operations, such as load-balanced Web serving. While the need for high availability and scalability expands far beyond Web farms, the technologies mentioned in this article provide a good starting point for advanced solutions to come. As HA implementations continue their migration from UNIX to Linux, the number of proven options developed in the UNIX space will continue to expand for Linux.

Tau Leng (firstname.lastname@example.org) is a system engineer in the Scale Out Systems Group. His product development responsibilities include cluster product solutions from Dell, including Linux high-performance and high-availability clusters. Tau earned an M.S. in Computer Science from Utah State University. Currently he is a Ph.D. candidate in Computer Science at the University of Houston.

Jenwei Hsieh, Ph.D. (email@example.com) is a member of the Scale Out Systems Group at Dell. He has published extensively in the areas of multimedia computing and communications, high-speed networking, serial storage interfaces, and distributed network computing. Jenwei has a Ph.D. in Computer Science from the University of Minnesota.

Edward Yardumian (firstname.lastname@example.org) is a technologist specializing in distributed systems, cluster computing, and Internet infrastructure in the Scale Out Systems Group in the Enterprise Server Products division at Dell. Previously, Ed was a lead engineer for Dell PowerEdge Clusters.
<urn:uuid:fd3162a2-9ea4-4b70-b9cb-ee07606b310c>
CC-MAIN-2017-04
http://www.dell.com/content/topics/global.aspx/power/en/ps4q00_linux?c=us&l=en
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00118-ip-10-171-10-70.ec2.internal.warc.gz
en
0.906746
3,782
2.640625
3
The world's first Linux Internet-worm has been reported "in-the-wild"

Cambridge, UK, January 25, 2001 - Kaspersky Lab, an international data-security software-development company, warns users about the real threat posed by the Ramen Internet-worm. According to recent reports, the worm has already defaced Web sites in several parts of the world; Ramen has therefore become the first malicious code for Linux to be detected "in-the-wild."

Ramen was originally discovered in the middle of January 2001. It spreads via the Internet and penetrates systems running Red Hat Linux versions 6.2 and 7.0. To gain access to a computer, the worm exploits three known security breaches in these operating systems. These breaches allow Ramen to take over root access rights and, unbeknownst to the user, execute its code on the target systems.

During the past several days, Kaspersky Lab has received confirmation of Ramen penetrating several corporate networks. Among them are the National Aeronautics and Space Administration (NASA), Texas A&M University, and the Taiwan-based computer hardware manufacturer Supermicro. These organizations' Web sites were attacked by the worm, which replaced their title pages with its own defacement message.

"The discovery of the Ramen worm 'in-the-wild' is a very significant moment in computer history. Previously considered an absolutely secure operating system, Linux has now become yet another victim of computer malware," said Denis Zenkin, Head of Corporate Communications for Kaspersky Lab.

In the roughly eight years since Linux was first developed, about 50 malicious programs have been discovered for this operating system, but until now none had been sighted "in-the-wild." It is important to emphasize that the aforementioned security breaches were discovered more than half a year ago, and Red Hat developers immediately released corresponding security patches eliminating the problem.

"The fact that Ramen penetrated several respected organizations, including NASA, shows that even the most professional network engineers don't pay enough attention to the timely installation of security patches to protect their systems. This worries us most, as neglecting basic enterprise security rules can stimulate hackers to develop malicious code for Linux," adds Denis Zenkin.

Kaspersky Lab recommends users immediately install all the available security patches for the Linux operating system, regardless of the Linux distribution currently in use. You can download the patches and read what Red Hat officials have said about the Ramen worm at the following address: http://www.redhat.com/support/alerts/ramen_worm.html. More detailed technical information about the Ramen Internet-worm can be found in the Kaspersky Virus Encyclopedia at www.viruslist.com. Kaspersky Anti-Virus, including a version for Linux, can be purchased in the Kaspersky Lab online store or from your nearest Kaspersky Anti-Virus distributor.
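For context, the holes most often cited in public analyses of Ramen were in wu-ftpd, rpc.statd (part of nfs-utils), and the LPRng print spooler. On a Red Hat 6.2 or 7.0 system, checking and freshening those packages would look roughly like the following; the errata file names are placeholders for the actual packages published by Red Hat:

    # Check which versions of the commonly cited packages are installed
    rpm -q wu-ftpd nfs-utils LPRng

    # Freshen them from Red Hat's errata packages (file names are illustrative)
    rpm -Fvh wu-ftpd-*.i386.rpm nfs-utils-*.i386.rpm LPRng-*.i386.rpm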
<urn:uuid:2ec07344-f415-4171-a197-4b01cb88ff38>
CC-MAIN-2017-04
http://www.kaspersky.com/au/about/news/virus/2001/Ramen_Has_Broken_Free_
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00236-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948661
628
2.765625
3
NOAA tracks toxic Great Lakes algae from space

By David Hubler - Jul 24, 2012

Algae forming in lakes and rivers not only harms the water, it also can kill indigenous marine life and the aquatic vegetation needed to sustain a healthy ecosystem. A new pilot program is taking a close-up look at the problem from afar.

The program, involving the National Oceanic and Atmospheric Administration, the University of Toledo and Blue Water Satellite Inc., is using satellite imagery of Lake Erie's western basin to monitor the harmful algal blooms (HABs) that have been increasingly threatening the Great Lake for the past several years.

The harmful algal blooms in Lake Erie commonly contain cyanobacteria, also known as blue-green algae. Many cyanobacteria release toxins that are known to cause liver and nerve damage in humans, and kill pets and other animals. If proven successful, the monitoring project could become an ongoing service during HAB outbreak season, roughly April through October each year.

"This experimental research project uses a collaboration between public and private entities to push the state of the art," said Dr. Marie Colton, director of NOAA's Great Lakes Environmental Research Laboratory in Ann Arbor, Mich. Each entity brings its unique knowledge and experience to the collaboration, she said. "This public-private sector collaboration can pave the way to new knowledge creation, and processes that may ultimately lead to job growth as the project transfers from research to commercial production."

Researchers at the university and Blue Water Satellite, of Bowling Green, Ohio, will combine data from the NASA MODIS satellite, the U.S. Geological Survey's Landsat 7 satellite and the DigitalGlobe WorldView 2 satellite. The data produced could give governmental agencies the ability to see early bloom-formation conditions of the toxic algae across the entire western Lake Erie region within 24 hours of each satellite overpass.

Blue Water Satellite will process low-resolution satellite data daily by using algorithms developed by Dr. Richard Becker, assistant professor in Toledo's Department of Environmental Sciences. The company also will process high-resolution satellite imaging every 16 days, or on demand, using algorithms developed by Blue Water Satellite and Dr. Robert Vincent at Bowling Green State University.

"The fusion of this low-resolution and high-resolution satellite data can provide additional insights into early HAB formation never before possible," Becker said.

In addition to the HAB imagery and data, Blue Water Satellite will provide measurements of total phosphorus for the entire area, having developed the only algorithm in the world that performs this total phosphorus detection and measurement function using satellite data. Increasing levels of total phosphorus have contributed to the severe HAB outbreaks in Lake Erie in recent years.

David Hubler is the former print managing editor for GCN and senior editor for Washington Technology. He is a freelance writer living in Annandale, Va.
<urn:uuid:b0f3db38-a298-4181-a755-b38edc349d4b>
CC-MAIN-2017-04
https://gcn.com/articles/2012/07/24/noaa-satellite-lake-erie-project.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00540-ip-10-171-10-70.ec2.internal.warc.gz
en
0.916086
619
3.234375
3
Here is a collection of highlights from this week's news stream as reported by HPCwire.

Researcher Advances Understanding of Alzheimer's Disease

As the most common neurodegenerative disorder, Alzheimer's disease affects over 5 million people in America alone, according to the National Institutes of Health (NIH). University of Akron researcher Jie Zheng came a step closer this week to finding a cure for this devastating disease. Using supercomputing resources at the Ohio Supercomputer Center, Zheng created computer simulations that show how "misfolded" proteins in the brain contribute to degenerative disorders like Alzheimer's.

The announcement includes an explanatory paragraph on the science involved: In the nucleus of nearly every human cell, long strands of DNA are packed tightly together to form chromosomes, which contain all the instructions a cell needs to function. To deliver these instructions to various other cellular structures, the chromosomes dispatch very small protein fibers, called oligomers, that fold into three-dimensional shapes. Misfolded proteins, called amyloid fibrils, cannot function properly and tend to accumulate into tangles and clumps of waxy plaque, robbing brain cells of their ability to operate and communicate with each other, according to NIH.

Zheng explains that computer simulations allow scientists to "see" the amyloid oligomers at the molecular level, enabling them to determine the exact mechanism of the amyloid formation and the origin of its toxicity. This degree of understanding was simply not possible using traditional experimental techniques. Says Zheng: "Molecular simulations...allow one to study the three-dimensional structure and its kinetic pathway of amyloid oligomers at full atomic resolution."

Adding credibility to the potential of Zheng's work with amyloid proteins is the fact that he recently received a CAREER Award from the National Science Foundation (NSF). This five-year award is one of the NSF's most prestigious recognitions and comes with $400,000 in research funding.

Takeaway: This research is vital for understanding how plaque forms and accumulates in the brain, how it contributes to the breakdown of cells and, ultimately, how the process might be prevented.

Cray System Selected for Brazil's National Institute for Space Research

This week, Cray was chosen by the Foundation for Space Technology, Applications and Science (FUNCATE) to outfit the National Institute for Space Research (INPE) with a Cray XT6 supercomputer. FUNCATE is the Brazilian agency responsible for the procurement of high performance computers in Brazil. The new Cray system will be used for weather forecasting and climate studies. Once the Cray system is in place, Brazil will be home to one of the largest numerical weather prediction and climate research centers in the world.

Haroldo Fraga de Campos Velho, associate director for space and environment at INPE, explains what this means for his agency: "The INPE scientific team asserts that continued increase in supercomputing capacity is paramount to the advancement of simulation capabilities and improvement in forecast quality. The Cray XT6 supercomputer is designed to support the most challenging high performance computing workloads in demanding operational environments, and INPE scientists are looking forward to applying the system's computational resources in their simulations of atmospheric phenomena."

The contract, which is valued at more than $20 million, includes the supercomputer and multi-year services.
The system is expected to be production ready later this year. The Cray XT6 supercomputer, announced during SC09, is Cray's highest-performing system and the follow-on to the XT5, which occupies several of the top spots on the current TOP500 list, including the number one and number three spots. The XT6 features AMD Opteron 6100 Series processors and Cray's SeaStar interconnect, and the compute blades can be configured with up to 96 processor cores per blade or more than 2,300 processor cores per cabinet.

This deal marks Cray's first foray into Brazil and comes on the heels of another big announcement for a contract with the National Nuclear Security Administration (NNSA) worth more than $45 million. That one is for Cray's forthcoming next-generation "Baker" system, which builds on Cray's XT architecture.
<urn:uuid:c426907b-f46e-4ce8-93f5-15e9d0bba033>
CC-MAIN-2017-04
https://www.hpcwire.com/2010/04/22/the_week_in_review/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00137-ip-10-171-10-70.ec2.internal.warc.gz
en
0.922907
901
2.984375
3
At the Internet Engineering Task Force (IETF) 85 conference, there was a Birds of a Feather (BoF) meeting on Web PKI operations. The plan is to start an IETF working group on Web PKI operations, with a mandate to document how the Web PKI currently works.

So, why do we need to do this? The Web PKI has been around since about 1995. The IETF has been documenting PKI for about the same time period. That's enough time to consider this all to be history. Unfortunately, it is not.

The Web PKI is probably the largest and most used PKI in the world. Many certification authorities (CAs) contribute their roots to the Web PKI, and these roots are distributed by many browsers and operating systems. The CA industry issues certificates to secure servers; a server's certificate is then trusted when it validates up to a root certificate trusted by the software doing the validation.

The issue is that the Web PKI deployment did not follow the IETF standards. Just to be clear, most of the Web PKI does match the standards, but along the way a few items got sidetracked. I won't go into the reasons why. I'm sure there are a lot of them. However, now with so much history behind us, we may run into the issue of "breaking the Web" if we try to fix anything.

So, what's the big deal? Why worry about this now? Well, why not? What we have seen in the past are issues that come up that should work, but they don't. An instance from 2011 was when fraudulent certificates were issued under Comodo and DigiNotar. If you follow the standards, a simple revocation action should have taken place and the problem been fixed. Not the case. Revocation does not really work in the Web PKI. Many browsers do not have certificate status checking turned on by default. Those that do have it turned on react to revocation in different ways. Some browsers allow the user to proceed to the website even though the software knows that the certificate has been revoked. The solution for Comodo and DigiNotar was to make changes to the browsers or operating systems: the bad certificates or bad issuing CA certificates had to be blacklisted within the software itself, so the trust decision could be made without the revocation information.

Another issue is the implementation of domain name constraints (nameConstraints). RFC 5280 states that this extension must be marked as critical. If the client application does not understand a critical extension, then it should fail the connection (in other words, break the Web). As such, the name constraints extension is not used, because many browsers and operating systems do not know what to do with it. If we simply made the extension non-critical, then client software that understands it would comply, software that does not would ignore it, and the Web would not be broken.

So, the goal of the working group is to document the Web PKI and look for the problems. As the problem list grows, the problems will be analyzed. In the future, the group's mandate may be changed to find solutions to the problems, or the problems may be left for another working group to address.
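To make the name-constraints example concrete, here is a sketch of how the extension can be expressed in an OpenSSL extensions file when issuing a constrained intermediate CA certificate. The domain is a placeholder, and the commented-out line shows the non-critical variant discussed above:

    [ v3_constrained_ca ]
    basicConstraints = critical, CA:TRUE, pathlen:0
    # Marked critical, as RFC 5280 requires:
    nameConstraints = critical, permitted;DNS:example.com
    # The non-critical variant would simply be ignored by clients
    # that do not implement the extension:
    # nameConstraints = permitted;DNS:example.com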
<urn:uuid:34254660-0bdb-4217-a67d-9eacff000db3>
CC-MAIN-2017-04
https://www.entrust.com/web-pki-birds-of-a-feather/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00101-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947251
671
2.515625
3
Google, which has been testing balloon-powered Internet access in underdeveloped areas, said one of its balloons circled the Earth in 22 days.

Google has been taking its Internet-delivering balloons to new heights. Well, at least to much greater distances. Google, which has been testing balloon-powered Internet access, announced Thursday that one of its balloons circled the Earth in 22 days. Now beginning its second lap around the world, the balloon marked the project's 500,000th kilometer.

"It enjoyed a few loop-de-loops over the Pacific ocean before heading east on the winds toward Chile and Argentina, and then made its way back around near Australia and New Zealand," wrote Google's Project Loon team on its Google+ page. "Since last June, we've been using the wind data we've collected during flights to refine our prediction models and are now able to forecast balloon trajectories twice as far in advance."

Google released this image of the flight path that one of its Project Loon balloons took as it traveled around the world. (Image: Google)

Google has been testing high-altitude balloons since last year in an attempt to bring Internet connectivity to the two out of three people around the world who don't have access to a fast and affordable connection. Last June, Google X, the company's research arm, launched 30 high-altitude balloons above the Canterbury area of New Zealand as part of a pilot test with 50 users trying to connect to the Internet via the balloons.

The goal is to build a ring of balloons, flying around the globe on stratospheric winds about 12.4 miles high, bringing Internet access to remote and underserved areas. The balloons communicate with specially designed antennas on the ground, which in turn connect to ground stations that connect to local Internet service providers.

The company isn't alone in its efforts to bring Internet connectivity to underserved areas. News circulated last month that Facebook is reportedly in talks to acquire Titan Aerospace, a New Mexico-based company known for making solar-powered drones. Analysts speculated that Facebook would use the drones similarly to Google's balloons, for Internet connectivity. Titan Aerospace builds lightweight, high-flying drones that can remain aloft for five years.

According to Google, the balloon's trips around the world have provided researchers with more information on how to fly the balloons efficiently, even maneuvering them away from the powerful wind currents of the polar vortex. "We can spend hours and hours running computer simulations, but nothing teaches us as much as actually sending the balloons up into the stratosphere during all four seasons of the year," the company wrote.

This article, Google's Internet-delivering balloon circles the Earth, was originally published at Computerworld.com.

Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is email@example.com.

Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center.

This story, "Google gets practice flying its high-altitude balloons in Project Loon" was originally published by Computerworld.
<urn:uuid:3dabb751-5324-48a4-9bee-6e32a70513e5>
CC-MAIN-2017-04
http://www.networkworld.com/article/2175857/wireless/google-gets-practice-flying-its-high-altitude-balloons-in-project-loon.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00495-ip-10-171-10-70.ec2.internal.warc.gz
en
0.956637
667
2.9375
3
When a network falls victim to a hacker, an investigation may be needed in order to document and report the crime that has been committed. Computer Hacking Forensic Investigators extract evidence of the incident in order to do so. They also carry out audits in order to prevent future attacks on their organisation's networks.

The Computer Hacking Forensic Investigator certification will teach students the following:

- The investigation process
- Roles of a first responder
- Recovery of deleted files
- Password cracking
- E-mail tracking
- Investigating web and wireless attacks
<urn:uuid:52ea9cef-550b-48ad-93be-afc895c706c3>
CC-MAIN-2017-04
https://www.itonlinelearning.com/course/computer-hacking-forensic-investigator/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00339-ip-10-171-10-70.ec2.internal.warc.gz
en
0.926852
119
2.59375
3
7.6 What is a group signature?

A group signature, introduced by Chaum and van Heijst [CV91], allows any member of a group to digitally sign a document in a manner such that a verifier can confirm that it came from the group, but does not know which individual in the group signed the document. The protocol allows for the identity of the signer to be discovered, in case of disputes, by a designated group authority that has some auxiliary information. Unfortunately, each time a member of the group signs a document, a new key pair has to be generated for the signer. The generation of new key pairs causes the length of both the group members' secret keys and the designated authority's auxiliary information to grow. This tends to cause the scheme to become unwieldy when used by a group to sign numerous messages or when used for an extended period of time. Some improvements [CP94] [CP95] have been made in the efficiency of this scheme.
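Informally, and using notation that is common in the literature rather than drawn from the papers cited above, a group signature scheme can be summarized by four algorithms. A setup procedure produces a group public key gpk, a secret signing key gsk_i for each member i, and the auxiliary information gmsk held by the designated group authority. Then:

    sigma = Sign(gsk_i, m)                  (member i signs message m)
    Verify(gpk, m, sigma) = accept/reject   (anyone can check that some group member signed m)
    Open(gmsk, m, sigma) = i                (the authority identifies the signer in a dispute)

Verification reveals only that the signer belongs to the group; identifying the individual requires the authority's auxiliary information gmsk.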
<urn:uuid:d8e978b6-9b9a-49c0-9845-148b045238b9>
CC-MAIN-2017-04
https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/what-is-a-group-signature.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00247-ip-10-171-10-70.ec2.internal.warc.gz
en
0.908881
499
3.140625
3
Planning for natural disasters can save years of research materials

Friday, Aug 30th 2013

Research labs are highly sensitive to environmental change, and in a natural disaster a severe alteration to that environment could damage or even destroy all of their work. Having a plan in place to preserve data and ongoing research projects is necessary for research labs to continue their studies without worrying about losing their samples.

At Mississippi State University, there is always the potential for tornadoes, hurricanes and floods to strike; however, the university has created a strategy to better protect research projects. Emergency power generators are always at the ready for power outages, and the cryogenic freezer has a tank of liquid nitrogen to help maintain its temperature, according to the university. Many of the samples are critical and can't be replaced, making it necessary for the lab to be prepared for all eventualities.

These systems were activated during the 2011 tornadoes in the area that knocked out the power to multiple facilities. Through regular temperature monitoring, the researchers were able to ensure that the samples remained in the appropriate conditions throughout the outage.

"Quite often, researchers have several months or even years invested in their work," said Dr. Stephen Pruett, head of basic sciences at MSU's College of Veterinary Medicine. "It would be devastating if something were to happen to their data or samples."

Keeping projects under control

While natural disasters are unpredictable, there are ways to prepare for them. However, if a freezer malfunctions, specimens can be lost. This was the situation that affected McLean Hospital in 2012 and damaged autism brain samples. The freezer is typically kept at minus 80 degrees Celsius, but due to a malfunction the external thermostat on the appliance read a consistent temperature while the interior had actually risen to 7 degrees, according to The Boston Globe. The samples thawed out and were then unusable.

Implementing a professional temperature sensor can ensure that research specimens are always stored in the appropriate conditions; such a sensor will send constant updates detailing the freezer's environment.

"Research represents a fabulous investment of time, money and resources into our future and livelihood," said Ryan Akers, MSU assistant professor for community preparation and disaster management. "These contingency plans must be carefully thought out, properly designed prior to the onset of any project, no matter the size -- and meticulously followed. As with most any emergency situation, not allowing for the proper prevention and preparation protocols to protect valuable work could be most unfortunate in the event of a disaster."
<urn:uuid:218bc62f-a645-44bc-ab27-efd45d157b51>
CC-MAIN-2017-04
http://www.itwatchdogs.com/environmental-monitoring-news/research-labs/planning-for-natural-disasters-can-save-years-of-research-materials-500167
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00329-ip-10-171-10-70.ec2.internal.warc.gz
en
0.96055
501
2.78125
3
Working with the NETSTAT Command

When Windows has trouble communicating over a TCP/IP network, the common fix in the past was to use TCP/IP utilities such as PING and TRACERT to diagnose the problem. However, Windows XP takes network troubleshooting to a new level. While all of the standard TCP/IP troubleshooting utilities still exist in Windows XP, Microsoft has been kind enough to throw in several new troubleshooting utilities. In this article, we'll discuss one such utility -- the NETSTAT command.

The NETSTAT command is designed to help you quickly determine whether or not TCP/IP is working correctly. If TCP/IP is having problems, then NETSTAT can help you to determine where the problem is.

NETSTAT is a command line utility. To use this utility in its most basic form, you need only open a command prompt window and enter the command. When you do, NETSTAT will display a list of the current TCP/IP connections. The information presented on this screen includes the protocol (usually TCP), the local address and port, the foreign address and port, and the connection state.

Entering the NETSTAT command with the -a switch causes the program to display all connections and listening ports. The result is a list that tells you which TCP and UDP ports the machine is aware of, and which of those ports the computer is presently listening to.

Sometimes, when you're troubleshooting a network problem, you may have questions as to whether any packets are flowing in or out of the machine at all. The NETSTAT command lets you quickly make such determinations when you enter the NETSTAT command with the -e parameter. The -e parameter tells NETSTAT to report the system's Ethernet statistics. You'll see information such as the number of bytes, unicast packets, non-unicast packets, discards, and errors that have been sent and received.

The cool thing about this utility is that it doesn't force you to treat TCP/IP as a single entity. TCP/IP is made up of many sub-protocols. If you enter the NETSTAT command with the -e and -s parameters, you can see a list of Ethernet statistics based on protocol. This means that you'll see the same list of sent and received bytes, unicast packets, etc., but this time the list will be subdivided into categories such as IPv4, ICMPv4, TCP, and UDP.

The NETSTAT command even allows you to examine a single sub-protocol by using the -p switch. Simply append the -p switch and the protocol name to any of the other command line switches, and the results will be based solely on the protocol that you specified. Your choices are TCP, UDP, TCPv6, and UDPv6. When the -p switch is used in conjunction with the -s switch, you may also specify the IP, IPv6, ICMP, ICMPv6, TCP, TCPv6, UDP, and UDPv6 protocols.

One of the biggest concepts in TCP/IP networking is routing. NETSTAT allows you to examine a computer's routing tables by following the NETSTAT command with the -r parameter. For each active route, NETSTAT will display the destination address, the net mask, the gateway, the interface, and the metric. Beneath this information, NETSTAT will display persistent routes separately. NETSTAT also differentiates between the routes associated with each network interface on multihomed machines.

One other noteworthy thing that NETSTAT can do is to use an interval. Earlier, we looked at using this utility to look at the number of bytes that had been sent and received. When used in this manner, you see a static display of a value that's very dynamic. The sample commands below recap the switches covered so far.
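All of these can be run from an ordinary command prompt window; the switch letters are shown in lowercase, as they appear in the Windows XP documentation:

    netstat              (list current TCP/IP connections)
    netstat -a           (all connections and listening ports)
    netstat -e           (Ethernet statistics)
    netstat -e -s        (Ethernet statistics broken down by sub-protocol)
    netstat -s -p tcp    (statistics for the TCP protocol only)
    netstat -r           (display the routing table)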
To watch the Ethernet counters update in place rather than as a one-time snapshot, you can append an interval value, in seconds, to the command (for example, netstat -e 5), which tells NETSTAT how often to generate a new report. When you specify an interval, NETSTAT will loop continuously, producing a fresh report each period, until you press CTRL+C.

As you can see, NETSTAT is a great utility for helping you to diagnose and repair TCP/IP problems.

Brien M. Posey is an MCSE who works as a freelance writer. His past experience includes working as the director of information systems for a national chain of health care facilities and as a network engineer for the Department of Defense.
<urn:uuid:1a241bc0-3908-4426-a4c9-154f2453c1d5>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/print/netsysm/article.php/937411/Working-with-the-NETSTAT-Command.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00357-ip-10-171-10-70.ec2.internal.warc.gz
en
0.924976
891
2.671875
3
VDI and BYOD Help Teachers and Students Succeed

By Maggie O'Neill | Posted 2013-09-24

A Virginia school district uses VDI and BYOD to provide online access to all the applications, files and information available in a classroom computer or lab.

Business leaders have known for years the value of virtual desktop infrastructure (VDI) and bring-your-own-device (BYOD) initiatives, but school districts are just starting to offer similar technologies in the classroom. One of those school districts is the York County School Division in Virginia, which first built its VDI infrastructure and then implemented a BYOD program.

Douglas Meade, the district's director of IT, points out that "some people put the cart before the horse." Without VDI, he explains, BYOD is basically just linking more computing devices to the Internet. With VDI, mobile devices brought in by students and teachers can not only log on to the Internet, but also can access all the applications, files and information that are available in a classroom computer or computer lab.

"VDI levels the playing field and makes all devices--whether school-owned or BYOD--equal when it comes to accessing information," Meade says.

Of course, this initiative took planning. The school district began small, implementing VDI at a district high school and middle school, using money available through federal revitalization funds. "By the fall of 2009, our sleeves were rolled up and we were in knee deep testing different technologies," Meade says.

In less than two years, all the schools in the district had VDI, provided through a Citrix-based XenApp and XenDesktop environment via a Windows platform. To accomplish that, the district placed servers and filers at every school site, using NetApp's fabric-attached storage (FAS) 3140 and 2040 filers.

"We chose NetApp at the time of the VDI project," Meade reports. "We had a couple of small storage systems before, but with virtualization you need a lot of storage. NetApp was the only storage solution at the time to do NFS [network file system] out of the box."

The school district now has 218 physical servers and 349 virtual servers, but not all of them are used for VDI. "Because we were not sure of the bandwidth demands, we did not centralize the initial deployment," Meade continues. "When we start to rethink this in the next year or two, we'll look at centralizing a lot more than we did originally."

At the York County School Division, virtualization has meant breaking down barriers for its 900 teachers and 12,300 students. After logging in, students can access applications, such as Adobe Creative Suite 5 for the graphics class, that they might never purchase on their own. It's a change for teachers, too. "They no longer have to lug anything home," Meade says. "They never have to remember to copy a file onto a thumb drive or carry home a notebook computer. They can go home and log in from any computer at their house and have access to everything they have at school."

Bringing on BYOD

After the VDI deployment was in place, the school district implemented its BYOD program. (They call it BYOT, for bring your own technology.) It was prototyped in the spring of 2011 and is now in its third year. The district's BYOT program allows students, teachers and other staff with any type of mobile device (smartphone, tablet or laptop) to access the district's wireless service and get online. It sized its wireless IP address scheme based on an average of three devices per person.
Additionally, the district uses a metropolitan network service available through Verizon. The metropolitan network provides 100Mbps service at all of its schools and 1Gbps service at one school and the school board office.

One of the greatest challenges to the district's VDI and BYOD programs is the need for teachers to learn how to use these technologies effectively. Meade chalks that up to "the normal growing pains of bringing something new and exciting into the classroom." And those efforts are being recognized: The York County School Division received the Pinnacle of Excellence award for its work from the Association of School Business Officials International.

"In students and teachers, we have a ready-made audience who immediately benefit from having this access," Meade says. "Students have homework and projects to complete, and teachers often need to take work home with them.

"The teachers and students appreciate the access--anywhere, anytime and on any device."
<urn:uuid:678b138d-65e8-41b3-aeff-8e93628d7dcd>
CC-MAIN-2017-04
http://www.baselinemag.com/virtualization/vdi-and-byod-help-teachers-and-students-succeed.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00265-ip-10-171-10-70.ec2.internal.warc.gz
en
0.963402
957
2.625
3
Building, Installing, and Configuring a RADIUS Server

The benefits of a RADIUS server are many. In addition to speed, you receive heightened security with user access monitoring, reporting and tracking functions and personalized restrictions. Setting it up costs less than $60, and this white paper walks you through each of the steps, settings and configurations, as well as the equipment you will need.

I work often with a variety of networking devices from different manufacturers. In many cases the equipment is simply being evaluated, configured for demonstration purposes, or incorporated into a lab for classroom use. RADIUS is supported by a lot of these gadgets, but I have never found a handy-dandy, inexpensive and easy-to-use RADIUS server. Well, let's build one.

RADIUS is an acronym for Remote Authentication Dial-In User Service. It is an AAA tool intended to be useful in instances where the user would like to centralize management of authentication, authorization, and accounting functions (hence the AAA). Authentication, or proving that users are who they claim to be, is the primary reason that most people are driven to want a RADIUS server. But the authorization (limiting what a user is allowed to do once they have been authenticated) and accounting (logging and billing functions) will be of interest to some people, too.

Where I most often like to demonstrate the use of RADIUS is in the configuration of Ethernet switches and IEEE 802.11 access points. For switches, RADIUS is most often used in conjunction with IEEE 802.1X port-based network access controls, which can in turn be used to control the identity of users who are allowed access to specific ports. For access points the same mechanism is actually in play, but it is used to limit who can associate with the wireless network. You might have bumped into this feature if you ever clicked on the "enterprise" settings for WPA or WPA2 while configuring your AP.

Raspberry Pi is a handy platform for building a simple RADIUS server. I chose the Raspberry Pi 2, which has a multicore ARM processor and 1GB of RAM. For the OS I decided on Raspbian, which is basically Debian Linux for the Pi. A case and heatsinks create a nice package about the size of a pack of cigarettes. To build and configure the server you will also require a display that supports HDMI and, initially, a USB keyboard and mouse. Once the appliance is built you will no longer need the latter items, as you can manage it via the network.

The cost is minimal. The Raspberry Pi 2 costs $35, and the case and heatsinks add less than $15. You do need a class 10 micro SD card with at least 16GB, so add another $10. For less than $60 you can have a RADIUS server. My advice, though, would be to spend a little more and get a 32GB SD card; the 16GB option does not leave as much room for growth and experimentation. That bumps the total to about $75. By the way, if you have an older Raspberry Pi Model B or B+ lying around, it will work. The minimum SD card size is 8GB.

There are quite a few steps involved in getting things up and running, as you will see in the details later in this paper. But the big picture is not all that tough to understand. Here are the major tasks:

- Acquire and build your Raspberry Pi and assemble it with mouse, keyboard, display, and Internet connection. All of the connections are standard off-the-shelf cables.
- Download Raspbian and install it on the SD card. Boot it up and go through the initial OS installation and customization.
- Install and configure Apache, MySQL, and PHP, turning this into a LAMP server.
- Install and configure FreeRADIUS and the daloRADIUS GUI.
- Test, tune, experiment, and enjoy.

Let's get started.

Building the hardware platform

As a starting point I suggest that you visit raspberrypi.org. There are links there to various distributors. I bought my Raspberry Pi 2 Model B from MCM Electronics. I purchased the case and heatsinks from Amazon. Be careful: make sure your case is for the Raspberry Pi 2, not its predecessors. The class 10 micro SD card came from Walmart. I already had the other things lying around my house. For the display I used a small 1080p TV. The mouse and keyboard have to be USB devices. Note that the Pi is powered by a USB connection, so that requires a separate USB power supply.
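Looking ahead to the software tasks in the list above, the LAMP and FreeRADIUS steps boil down to a handful of commands on Raspbian. The sketch below reflects package names from Raspbian releases of that era (php5, mysql-server, freeradius); the daloRADIUS download location is a placeholder, since it is fetched separately and unpacked under the web root:

    sudo apt-get update
    sudo apt-get install apache2 mysql-server php5 php5-mysql
    sudo apt-get install freeradius freeradius-mysql
    # daloRADIUS is downloaded separately; the URL and paths are illustrative
    wget http://example.org/daloradius.tar.gz
    sudo tar -xzf daloradius.tar.gz -C /var/www/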
<urn:uuid:ce034845-d365-43fe-a0de-9551837b4e99>
CC-MAIN-2017-04
https://www.globalknowledge.com/ca-en/resources/resource-library/white-paper/building-installing-and-configuring-a-radius-server/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00265-ip-10-171-10-70.ec2.internal.warc.gz
en
0.937621
950
2.53125
3
Get 'em while they're young seems to be the MPAA and RIAA's game plan; when it comes to education, U.S. students rank behind international peers in reading, math and science, but the MPAA, RIAA and others intend to take time away from those studies by adding a curriculum for kindergarten through high school students that teaches the evils of online piracy.

The Center for Copyright Information, which is supported by music labels, Hollywood studios and major Internet service providers, has commissioned a curriculum about copyright infringement sins. The Center for Copyright Information also manages Six Strikes, officially called the Copyright Alert System.

Why teach kids about creative content online? The Center for Copyright Information replied, "Because students need it." For grades K-6, "lessons introduce age-appropriate (non-legal) concepts of sharing and ownership." The curriculum for grades 9-12 will "explore copyright as a legal concept" and "important issues like fair use....The goal of the curriculum is to introduce age-appropriate concepts to children about artistic creations, including that children can be creators and innovators just like their favorite musicians, actors and artists."

The Los Angeles Times reported:

Called "Be a Creator," the proposed copyright curriculum is for students in kindergarten through sixth grade. It includes lesson plans, videos and activities for teachers and parents to help educate students about the "importance of being creative and protecting creativity," with topics such as "Respect the Person: Give Credit," "It's Great to Create," and "Copyright Matters."

According to the second grade curriculum [pdf], a teacher is prompted to say, "We're going to talk about an interesting, grown up idea for a minute - PERMISSION." It's about a bike and whether or not to share it; there's also a video. The lesson for second graders is only supposed to take 20 minutes, but that doesn't jibe with the suggested idea of also showing the first grade video and taking photos throughout the day with an iPad. If time permitted for taking those photos, then the mind game kicks in as the teacher is prompted to have students look through the "photograph collection to decide which photographs he wants to give to friends, post online, sell to neighbors, or keep for his family."

The teacher is supposed to then say:

You're not old enough yet to be selling your pictures online, but pretty soon you will be. And you'll appreciate if the rest of us respect your work by not copying it and doing whatever we want with it.

The evils-of-copyright-infringement video, embedded below, is an example for sixth graders. This lesson is slated to take 30 minutes.

In the version for sixth graders [pdf], as reported by Wired's David Kravets, "teachers are asked to engage students with the question: 'In school, if we copy a friend's answers on a test or homework assignment, what happens?'" The answer is, you can be suspended from school or flunk the test. The teachers are directed to tell their students that there are worse consequences if they commit a copyright violation.

"This thinly disguised corporate propaganda is inaccurate and inappropriate," Mitch Stoltz, an intellectual property attorney with the Electronic Frontier Foundation, told Kravets. "It suggests, falsely, that ideas are property and that building on others' ideas always requires permission.
The overriding message of this curriculum is that students' time should be consumed not in creating but in worrying about their impact on corporate profits."

While the program is still being drafted by iKeepSafe and the California School Library Association, some in the California Teachers Association are not thrilled about a curriculum that "promotes the biased agenda of Hollywood studios and music labels." According to spokesman Frank Wells, "Some teachers would have a concern that adding anything of any real length to an already packed school day would take away from the basic curriculum that they're trying to get through now."

Even though most kindergarten children are younger than age 7, the age of reason when a child's conscience matures enough to guide his or her actions, it's apparently not too young to push Hollywood's agenda in the classroom. But since most parents allow unsupervised Internet access to children at age eight, where's the curriculum to teach online safety? After all, 51% of parents suggested that teaching online safety is the responsibility of teachers. How about teaching online security; is that not more important than Hollywood's message? How about teaching the evils of over-sharing and online privacy wisdom? Perhaps the proposed copyright brainwashing curriculum starts in kindergarten, as opposed to online security or privacy lesson plans, because by the time those kids are adults, there won't be any privacy, just a police state of surveillance?

Of course, if the Trans-Pacific Partnership (TPP) secret trade agreement isn't stopped in its current form, the Internet will soon be a completely different beast.

Follow me on Twitter @PrivacyFanatic
<urn:uuid:de1903b3-3c21-405b-b183-81f892dfd597>
CC-MAIN-2017-04
http://www.networkworld.com/article/2225837/microsoft-subnet/hollywood-s-anti-piracy-propaganda-turned-into-k-12-curriculum-in-california.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00173-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94848
1,238
3.171875
3
A year ago the National Oceanic and Atmospheric Administration (NOAA) conducted a series of experiments in the Arctic Circle. The program, called Aerosol, Radiation and Cloud Processes affecting Arctic Climate (ARCPAC), was conceived to measure changes in the Arctic atmosphere. ARCPAC scientists also studied how radiation, cloud formation and airborne aerosols are affecting climate.

For the experiments, NOAA fitted a Lockheed WP-3D Orion aircraft's fuselage and wings with cloud probes, aerosol spectrometers and other sophisticated equipment. In this image, the WP-3D is preparing to take off from an airfield in Fairbanks, Alaska.

Photo credit: NOAA
<urn:uuid:8623c52a-8559-40b2-967b-31b6bc09af33>
CC-MAIN-2017-04
http://www.govtech.com/transportation/NOAA-Studies-Changes-in-Arctic-Atmosphere.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285001.96/warc/CC-MAIN-20170116095125-00503-ip-10-171-10-70.ec2.internal.warc.gz
en
0.915897
137
3.75
4
A committee of the Institute of Electrical and Electronics Engineers (IEEE), the body responsible for Wired Equivalent Privacy (WEP) and a clutch of other wireless LAN standards, approved the fix. RSA Security and Hifn, two of the companies represented on the committee, have stated that the technique can be applied to existing equipment.

The fix for the WEP encryption standard uses a technique called fast-packet keying to rapidly generate unique encryption keys for each data packet transmitted. According to RSA and Hifn, equipment suppliers can distribute the fix either as a software or firmware patch, allowing users to update vulnerable devices.

Traffic on wireless LANs can be overheard by anyone with an appropriate radio receiver, so the WEP standard was adopted by the IEEE 802.11 standards committee as a way of encrypting this traffic to make it as secure as traffic on wired LANs. However, flaws in the encryption scheme meant that it was relatively simple to guess the keys with which successive packets of data were encrypted on WEP wireless LANs, because the keys were too closely related to one another.

Current implementations of the WEP standard use RSA Security's RC4 algorithm for encryption. RSA Security defended its encryption algorithm, saying the successful attacks against WEP were not a result of any weakness in RC4, but rather of how WEP created encryption keys for each data packet based on a code known only to the base station and the remote terminal in the wireless LAN. The keys for different packets were too similar, RSA said, meaning that hackers could exploit the similarity to deduce the secret code and, with it, the content of all network traffic.

RSA Security said the fast-packet keying method could be used to reduce the similarity between keys used to encrypt successive data packets, making it harder for hackers to guess the secret code known to the network terminals.
<urn:uuid:85b53430-54d1-4519-9aa3-1b62d8566b71>
CC-MAIN-2017-04
http://www.computerweekly.com/news/2240043617/Broken-wireless-LAN-gets-fix
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00560-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960235
411
2.640625
3
The X' (ML) Files - Finding Data in the Deep Web

With so much data being created daily (161 exabytes in 2006 alone, according to IDC; an exabyte is a 1 followed by 18 zeros), finding complete answers to any search requires the ability to query textual and database information anywhere, internally or on the Web. The problem is that most of that data isn't contained in a webpage, but within a database that only gets displayed in response to user action.

"In 2000/2001 we did some analysis and realized that the quantity of documents from these deep web databases was far bigger than what everyone was calling the Internet," said Jerry Tardif, vice-president of BrightPlanet Corp., a search firm headquartered in Sioux Falls, S.D.

You can't just plug some search terms into Google to access all this data. It requires the use of a federated search tool. "Google makes search look simple, but in fact search is not simple, particularly when completeness is important," said David Fuess, a computer scientist in Lawrence Livermore National Laboratory's Nonproliferation, Homeland and International Security (NHI) directorate. His team uses BrightPlanet's Deep Query Manager (DQM) to look for information on end users of export-controlled goods that might have military uses. "To be effective you must strike a proper balance that maximizes the probability that the information you seek is in the results and that the results can be reviewed within the response time allowed."

Traditional Web searches, or searches against an organization's own content, consist of creating a database of words in those documents and then running a query against that database. Federated search is the ability to execute queries against multiple databases at the same time. Internally, for example, a user could run a query on a customer that would turn up both invoices contained in the finance system and that customer's contract contained in the document management system. Expanding its scope to outside sources, the federated search engine could also pull up the most recent stock quote from Dow Jones, the bond rating from Moody's and the customer's latest filings with the Securities and Exchange Commission.

Creating a federated search engine is more than just a matter of installing some software. "IT staff needs to understand that this is not a trivial undertaking," said Abe Lederman, president of Deep Web Technologies, which develops the Explorit federated search software. "It is very unlikely that this is something an IT person can just purchase a copy of, set it up and run it."

The first step is surveying what resources are available to be searched. This is relatively simple when dealing with data the organization owns, but gets far more complex when locating outside sources. No one knows exactly the number of publicly available databases on the Internet, but the CompletePlanet directory has a searchable and browsable list of more than 70,000 online databases and specialty search engines. "If an agency is federating search on their own databases, they generally know what they have, where it is, and the type of information that is in there," said BrightPlanet's Tardif. "But if they are doing something on the outside, they need subject matter expertise on what public sources are available."

Once you have selected the databases to include in the search, there is the matter of creating links and writing the code needed to execute the query on each of those databases. This can include writing appropriate login scripts.
These scripts need to be checked regularly and updated whenever the underlying database structure changes. A final step is to refine the user interface that aggregates the data from these different sources and presents it to the end user. Roy Tennant, the User Services architect for the California Digital Library (the group that provides centralized digital access to the collections of all University of California campuses, as well as hundreds of other databases), found that an off-the-shelf product didn't provide the needed functions without extensive customization. "Since the user interface of the commercial product was not as flexible as we required, we needed to build our own user interface layer and use the application program interface (API) of the commercial application to handle the connections to multiple sources, the searching, merging of search results, de-duplication, and ranking," said Tennant. "This also required us to work with the vendor and the product user community to create a prioritized list of enhancements to the vendor's API and wait for those enhancements to be provided (which they were)." The work doesn't stop once a site is up and running. The U.S. Department of Energy's Office of Scientific and Technical Information (OSTI) maintains the science.gov site, which provides a common public search interface for thirty scientific databases of a dozen federal agencies, as well as the newly launched worldwidescience.org site, which searches the scientific databases of ten countries. In February, OSTI released the 4.0 version of science.gov (created and maintained by Deep Web Technologies), which added relevance ranking based on the full text of documents, rather than just the metadata and summary. "Adding full-text relevance ranking was the most significant improvement, but there were others," said OSTI director Walt Warnick. "We also added alert services where you can put a query in and each week you get an email about anything new that has turned up in any of the thirty databases, without repeating what you found previously." And that, as Fuess said, is the key to developing an effective federated search engine: including all the relevant data sources without burying the user in more hits than he can possibly review in the time available.
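In outline, a federated search engine fans a query out to many sources at once, then merges, de-duplicates and ranks whatever comes back within the response-time budget. A minimal sketch of that pattern follows; the connector callables and result fields are hypothetical, not taken from any product named above:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed, TimeoutError

def federated_search(query, connectors, timeout_s=10.0):
    """Fan `query` out to several source connectors and merge the results.

    Each connector is assumed to be a callable that takes a query string and
    returns a list of dicts with at least 'url' and 'score' keys.
    """
    hits = []
    with ThreadPoolExecutor(max_workers=len(connectors)) as pool:
        futures = [pool.submit(c, query) for c in connectors]
        try:
            for future in as_completed(futures, timeout=timeout_s):
                try:
                    hits.extend(future.result())
                except Exception:
                    pass  # one broken source should not sink the whole search
        except TimeoutError:
            pass  # keep whatever arrived within the response-time budget
        # A production engine would also cancel the stragglers here.

    # De-duplicate by URL, keeping the highest-scoring copy of each hit.
    best = {}
    for hit in hits:
        url = hit["url"]
        if url not in best or hit["score"] > best[url]["score"]:
            best[url] = hit

    # Rank the merged result set before presenting it to the user.
    return sorted(best.values(), key=lambda h: h["score"], reverse=True)
```

The timeout is the practical expression of Fuess's balance: completeness is traded for results the user can actually review in the time allowed.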
<urn:uuid:33349543-4e6b-401d-9d68-adfde7104ae5>
CC-MAIN-2017-04
http://www.cioupdate.com/print/trends/article.php/11047_3697526_2/The-145X146-ML-Files-150-Finding-Data-in-the-Deep-Web.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00376-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946788
1,200
2.640625
3
The U.S. House of Representatives overwhelmingly passed legislation to improve communication and tracking capabilities within our nation's mines. Over the past few years, several mine tragedies resulting in multiple deaths have brought nationwide attention to the significance of mine and miner safety. The Sago mine explosion in West Virginia in 2006, followed by the 2007 disaster at the Crandall Canyon mine in Utah, has underscored the need for such improvements. "Mining families who watch their loved ones go down into the mine every day need to know that we will be able to communicate in the event of an emergency. As we learned in Utah's Crandall Canyon mine tragedy, there's a gap in our ability to find these miners in deep underground mines. We need next-generation technology in order to be able to track and communicate with mine workers, and now is the time to jump-start that effort," said Rep. Jim Matheson (D-UT). Between 2000 and 2006, mine-related accidents led to nearly 40,000 injuries and 234 deaths. H.R. 3877, the Mine Communications Technology Innovation Act, authored by Rep. Matheson, is designed to accelerate efforts to develop new technologies and adapt existing technology to improve communication services in mines, especially in emergency situations, in an effort to protect the life and health of miners. It requires the Director of the National Institute of Standards and Technology (NIST) to create a proposal to promote the research, development, and demonstration of miner tracking and communications systems. NIST has a proven track record for such work. The agency has previously provided groups such as the National Institute for Occupational Safety and Health (NIOSH) and the Mine Safety and Health Administration (MSHA) with mine safety assistance.
<urn:uuid:321c02f2-3d51-40e7-98ed-5b99e3220f23>
CC-MAIN-2017-04
http://www.govtech.com/geospatial/House-Endorses-Bill-to-Improve-Mine.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00010-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951537
360
2.671875
3
This course provides UNEM administrators with the knowledge to ensure the correct and reliable operation of the UNEM and the underlying UNIX/LINUX platform. The capabilities for in-depth analysis of the UNEM operational environment using script files and other process-checking techniques are introduced, and the comprehensive use of the in-built UNEM utility, NEMTool, is explained. Administrator knowledge is required to create new users, check log files and perform other maintenance tasks. To do these tasks, the administrator uses basic UNIX/LINUX commands and must understand the principles of a distributed network. The participant will learn how to maintain the UNEM database, how to back up the database to tape, to file or to a remote workstation, and then how to restore the database.

Target Group
The participant will be able to use the administration tools and, with a selected set of UNIX/LINUX commands, will be able to work in a terminal session in UNIX/LINUX. He will learn how to control the size of the log files and alarm files and how to remove and reopen these files. Finally, the participant will learn to create a new user with the relevant access rights.

Key Topics
Each participant receives a printed copy of the course documentation.
<urn:uuid:51615777-88a5-467e-b801-2e0e9d12add9>
CC-MAIN-2017-04
https://www.keymile.com/en/services/schulungen/kursbeschreibungen/unem_courses/unem_administration
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00496-ip-10-171-10-70.ec2.internal.warc.gz
en
0.847914
260
2.75
3
As high-performance computing (HPC) enters the petascale age, the scientific challenges facing researchers have never been greater. Nor has the might of today’s production petascale machines. The recent exponential growth in the power of modern supercomputers has gone hand-in-hand with an increased demand on resources — as machines have gotten bigger and faster, the amount of resources required for their operation has likewise increased. As a result, HPC centers now face unprecedented power demands from the very machines they rely on to tackle today’s most daunting scientific challenges, from climate change to the modeling of biological processes. However, recent energy-saving innovations at ORNL are setting a new standard for resource-responsible HPC research. The laboratory has taken an all-angles approach, seeking energy savings from a suite of different areas. ORNL’s leadership system, a Cray XT known as Jaguar, is now the fastest computer in the world for open science with a maximum speed of 1.6 petaflops. With this great power comes great responsibility, especially when it comes to energy consumption. “We take energy utilization very seriously,” said ORNL’s Leadership Computing Facility Project Director Buddy Bland. “The scale of this machine is just phenomenal. There are very few places in the world where this computer could have been built.” Needless to say, feeding this animal is no small task: simulation at the petascale requires robust power and cooling networks to ensure maximum production from these machines. But now those necessary support networks, and the system itself, have been designed with unprecedented efficiency, responsibly satisfying Jaguar’s energy appetite. These advances make ORNL among the most energy-efficient locations for HPC, enabling groundbreaking research with minimal resource impact. It all starts with the building. ORNL’s Computational Sciences Building (CSB) was among the first Leadership in Energy and Environmental Design (LEED)-certified computing facilities in the country, meaning that its design satisfies criteria used by the U.S. Green Building Council to measure the efficiency and sustainability of a building. Take the computer room for example: it’s sealed off from the rest of the building by a vapor barrier to reduce the infiltration of humidity. The air pressure inside the computer room is slightly higher than the surrounding area so air will flow out of the computer room without the air outside flowing in. Because ORNL is located in an area of the country with high humidity, keeping moisture out of the air is a high priority said Bland, one that the building was designed to tackle as efficiently as possible. Too much moisture in the air can lead to water condensation on equipment, while too little moisture can cause static electricity to build up — both of which can be problematic for a room filled with expensive electronics. Both removing moisture from or adding it to the air uses a lot of power, so keeping the humidity stable is a great tool for reducing energy consumption. Another computing building on the ORNL campus adjacent to the CSB was recently certified LEED Gold, and Bland points out that the laboratory plans on an equal rating for future HPC facilities. But the innovation doesn’t stop with the building — there is plenty more under the roof. Jaguar requires huge amounts of chilled water to keep the machine cool. 
To accomplish this as efficiently as possible, the laboratory uses high-efficiency chillers, which are the first step in a multifaceted, efficient cooling design. A newly introduced Cray cooling system for Jaguar, dubbed ECOphlex, complements the chillers and the CSB's efficiency. Using a common refrigerant and a series of heat exchangers, ECOphlex efficiently removes the heat generated by Jaguar to keep the computer room cool. The combination of air- and refrigerant-based cooling is much more efficient than traditional systems, which rely almost solely on air for temperature control. Without ECOphlex, the number of air-based units needed would not fit into the CSB's computer room; this high-efficiency cooling system makes Jaguar possible. ECOphlex also allows ORNL to reduce the amount of chilled water used to cool Jaguar by accommodating a broader inlet temperature range for the cooling water. Considering that thousands of gallons of water per minute are necessary to keep Jaguar cool, a reduction in the volume of necessary chilled water means a proportionate reduction in the energy used to cool it. Simply put, warmer water can mean big energy savings for the laboratory and the taxpayer. Whereas most centers use 0.8 watts of power for cooling for every watt of power used for computing, ORNL enjoys a far more efficient ratio of 0.3 to 1, one of the lowest of all data centers measured. Another important innovation is one that ORNL has been working on with Cray for several years. Instead of using the more common 208-volt power supply that Jaguar used in the past, the system now runs directly on 480-volt power. This seemingly "minor" change is saving the laboratory $1 million in the cost of copper used in the power cords for the cabinets. Furthermore, keeping the voltage high allows a lower current, which means lower resistive losses and less power turned into heat as the current travels down the wires. The reduction in electrical losses will reduce energy costs by as much as half a million dollars. Finally, ORNL gets a little help from history. The power grid for the city of Oak Ridge was designed when the work conducted during the Manhattan Project used one-seventh of all the electricity in the country. The grid was constructed with every protection possible out of the fear that any interruption in supply would drastically set back development. The result: an extremely resilient local power grid. Because of this grid, said Bland, Oak Ridge doesn't need huge uninterruptible power supply (UPS) systems, which generally consume lots of electricity. However, the laboratory does have flywheel-based UPSs in case of an emergency. If there is a problem, the flywheel keeps generating power, which is a much more efficient process than conventional UPSs and therefore a greener method of supplying backup power. Because the flywheel-based UPS is mechanical as opposed to battery-operated, it also generates less waste in the long term, as battery replacement is not a concern. While all of these steps are important, taken together they are greater than the sum of their parts. "There is no silver bullet," said Bland. By tackling energy efficiency from multiple angles, ORNL is helping to ensure that the groundbreaking research taking place on its petascale machines is conducted as responsibly as possible, setting new standards in both HPC and energy responsibility.
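As a back-of-the-envelope illustration of what those cooling ratios mean, consider the arithmetic below; the IT load figure is an invented example, not an ORNL number:

```python
it_load_mw = 5.0       # hypothetical compute (IT) load in megawatts

typical_ratio = 0.8    # watts of cooling per watt of compute at a typical center
ornl_ratio = 0.3       # watts of cooling per watt of compute at ORNL

typical_total = it_load_mw * (1 + typical_ratio)  # 9.0 MW total facility draw
ornl_total = it_load_mw * (1 + ornl_ratio)        # 6.5 MW total facility draw

saved = typical_total - ornl_total
print(f"Facility draw: {typical_total:.1f} MW typical vs {ornl_total:.1f} MW ORNL-style")
print(f"Cooling power saved: {saved:.1f} MW, i.e. "
      f"{(typical_ratio - ornl_ratio) / typical_ratio:.0%} less energy spent on cooling")
```

For the same compute load, the lower ratio cuts the cooling overhead by more than half, which is where the taxpayer savings come from.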
<urn:uuid:4926d2bc-a388-4a95-a1b3-2b2f281ae467>
CC-MAIN-2017-04
https://www.hpcwire.com/2009/03/05/supercomputing_seeks_energy_savings/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00404-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951176
1,381
3.484375
3
Information Security and Risk Management Questions derived from the CISSP ISC2 Self-Test Software Practice Test. Objective: Information Security and Risk Management SubObjective: Understand professional ethics Item Number: CISSP.5.18.2 Single Answer, Multiple Choice Which statement is true of trade secret law?
- Trade secret law involves protection of an idea's expression.
- Trade secret law protects information that is vital to a company's survival and profitability.
- Trade secret law involves protection of either a word or a symbol that is used to represent the company.
- Trade secret law promotes the use of information between different companies to ensure a homogeneous environment.
Answer: B. Trade secret law protects information that is vital to a company's survival and profitability. Trade secret law preserves the proprietary information pertaining to a company's business. Trade secrets provide a company with a competitive advantage. Special skill and talent are required to develop trade secrets. The Trade Secret Act qualifies company information as a trade secret only if the information fulfills the following conditions:
- The information must not be easily accessible.
- The information must have economic value for the company's competitors.
- The information must be protected by the company using all reasonable means.
The following are examples of company trade secrets:
- Customer identities and preferences
- Product pricing
- Marketing strategies
- Company finances
- Manufacturing processes
- Other competitively valuable information
Unlike a copyright, a trade secret does not protect either an idea or an expression. Copyright law protects an idea's expression rather than the idea itself. Ideas are protected by the use of patents, and the corresponding expression is controlled by copyrights. A trademark refers either to a word or to a symbol that is used to represent a company to the world. Trademarks are protected because each trademark is a unique symbol to represent the company, and the organization has spent time and effort to develop it. Trade secret law prevents unauthorized disclosure of a company's confidential information and does not ensure a homogeneous environment. Many companies require their employees to sign nondisclosure agreements (NDAs) to ensure trade secret protection. A resource can be protected by trade secret law if it is not generally known and if it requires special expertise, creativity, or expense and effort to develop it. Reference: CISSP All-in-One Exam Guide, Chapter 10: Law, Investigation and Ethics, Trade Secret, p. 774.
<urn:uuid:601954e8-d52b-4b6f-b85b-ac80d33b079a>
CC-MAIN-2017-04
http://certmag.com/information-security-and-risk-management/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00064-ip-10-171-10-70.ec2.internal.warc.gz
en
0.871297
539
2.875
3
All session tokens (independent of the state mechanisms) should be unique per user, non-predictable, and resistant to reverse engineering. A trusted source of randomness should be used to create the token (such as a cryptographically secure pseudo-random number generator, e.g., Yarrow or EGADS). Additionally, for more security, session tokens should be tied in some way to a specific HTTP client instance to prevent hijacking and replay attacks. Examples of mechanisms for enforcing this restriction may be the use of page tokens, which are unique for any generated page and may be tied to session tokens on the server. In general, a session token algorithm should never be based on, or use as variables, any user personal information (user name, password, home address, etc.). Even the most cryptographically strong algorithm still allows an active session token to be easily determined if the keyspace of the token is not sufficiently large. Attackers can essentially "grind" through most possibilities in the token's keyspace with automated brute-force scripts. A token's keyspace should be large enough to prevent these types of brute-force attacks, keeping in mind that increases in computation and bandwidth capacity will make today's numbers insufficient over time.
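A minimal sketch of token generation that satisfies these properties, using Python's standard library; the client-binding inputs are an assumption for illustration, and this is not a complete session layer:

```python
import hashlib
import secrets

def new_session_token(nbytes: int = 32) -> str:
    """Return a URL-safe token drawn from a CSPRNG.

    32 random bytes give a 256-bit keyspace, well beyond brute-force range,
    and the value is independent of any user personal information.
    """
    return secrets.token_urlsafe(nbytes)

def bind_token_to_client(token: str, client_fingerprint: str) -> str:
    """Derive a per-client check value so a stolen token is harder to replay.

    The fingerprint might combine, e.g., a TLS session identifier and the
    User-Agent header (the exact inputs are an assumption here); the server
    stores this digest alongside the session and re-verifies it per request.
    """
    return hashlib.sha256((token + client_fingerprint).encode()).hexdigest()
```

Using `secrets` rather than the general-purpose `random` module matters: the latter is predictable by design, which is exactly the weakness the passage above warns against.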
<urn:uuid:c120da72-e4d8-42a4-be20-0652f0a7d08a>
CC-MAIN-2017-04
http://www.cgisecurity.com/owasp/html/ch07s02.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00184-ip-10-171-10-70.ec2.internal.warc.gz
en
0.861045
242
2.578125
3
This week, NASA announced it would soon be launching a new HPC and data facility that will give earth scientists access to four decades of satellite imagery and other datasets. Known as the NASA Earth Exchange (NEX), the facility is being promoted as a "virtual laboratory" for researchers interested in applying supercomputing resources to studying areas like climate change, soil and vegetation patterns, and other environmental topics. Much of the work will be based on high-resolution images of Earth that NASA has been accumulating since the early 70s, when the agency began collecting the data in earnest. The program, originally known as the Earth Resources Technology Satellite (ERTS) program and later renamed Landsat, had a mission to serve up images of Earth, allowing scientists to observe changes to our planet over time. This includes tracking forest fires, urban sprawl, climate change and a host of other valuable phenomena. Data generated by these satellites has been extremely popular in the global science community. In the last 10 years, more than 500 universities around the globe have used Landsat data to support their research. Over time though, the program's growth created a logistical problem. Multiple datasets eventually spanned facilities around the US, which presented challenges for researchers looking to retrieve satellite imagery. Recognizing the issue, NASA created the NEX program with the goal of increasing access to the three-petabyte library of Landsat data. NEX will house all data generated by Landsat satellites and related datasets, as well as offering analysis tools powered by the agency's HPC resources. We spoke with NASA Ames Earth scientist Ramakrishna Nemani, who explained the purpose behind the NEX facility and how it has been implemented. "The main driver is really big data," he told HPCwire. "Over the past 25 years we have accumulated so much data about the Earth, but the access to all this data hasn't been that easy." Prior to NEX, he said, researchers would be tasked with locating, ordering and downloading relevant data. The process could be time consuming because the satellite imagery they wanted could be housed at one or more locations. Even after locating the desired images, data transfer times would often be prohibitive. NASA set out to solve the problem, leveraging one of their strongest assets: supercomputing. The agency decided to take all of the disparate datasets and migrate them to the Ames Research Center. "We said 'let's do an experiment.' We already have a supercomputer here at Ames, so we can bring all these datasets together and locate them next to the supercomputer," said Nemani. That system, known as Pleiades, is the world's largest SGI Altix ICE cluster and the agency's most powerful supercomputer. Pleiades has been upgraded over time, accumulating several generations of Intel Xeon processors: Harpertown, Nehalem, Westmere, and, most recently, Sandy Bridge. For extra computational horsepower, the Westmere nodes are equipped with NVIDIA Tesla GPUs. Linpack performance is 1.24 petaflops, which earned it the number 11 spot on the June 2012 TOP500 list. The system also includes 9.3 petabytes of DataDirect storage. Given that, Ames is now able to host the three petabytes of image data at a single location. But NEX was created to do more than hold all the satellite imagery under one roof. A collection of tools was developed to help researchers analyze the data using the Pleiades cluster.
For example, a scientist could create vegetation patterns with the toolset, piecing together images like a jigsaw puzzle. The program estimates that processing a scene containing 500 billion pixels would take under 10 hours. Without the NEX toolset, scientists would have to create their own computational methods to perform similar research. While making Pleiades' compute resources available was beneficial for researchers, it posed somewhat of a challenge for the NEX project team, since a certain level of virtualization is required to support concurrent access. The marriage of virtualization and supercomputing can be "tricky business," according to Nemani, but the program had a unique plan in this regard. "We have two sandboxes that sit outside of the supercomputing enclave," he said. "We bring in people and have them do all the testing on the sandboxes. After they get the kinks worked out and they're ready to deploy, we send them inside." Eventually, the program would like to have scientists run their own sandbox program and upload it to the supercomputer as a virtual machine. While NEX has some cloud elements to it, NASA could not feasibly run the project on a public cloud infrastructure. "We are trying to collocate the computing and the data together, just like clouds are doing. I would not say this is typical cloud because we have a lot of data. I cannot do this on Amazon because it would cost me a lot of money," said Nemani. The NEX program also features a unique social networking element, which allows researchers to share their findings. It's not uncommon for scientists to move on after working on a particular topic, which reduces access to the codes and algorithms used in their research. The social networking tools provided by NEX allow peers to go back and verify the results of previous experiments. Combined with access to HPC and the legacy datasets, the facility provides what may be the most complete set of resources of its kind in the world. "Basically, we are trying to create a one-stop shop for earth sciences," said Nemani.
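A quick arithmetic check of the throughput that figure implies (plain arithmetic, not code from the NEX toolset):

```python
pixels = 500e9          # pixels in the mosaic scene quoted above
hours = 10              # stated upper bound on processing time

pixels_per_second = pixels / (hours * 3600)
print(f"Sustained rate needed: {pixels_per_second / 1e6:.1f} million pixels/s")
# -> roughly 13.9 million pixels per second, sustained across I/O,
#    reprojection and compositing -- a hint of why colocating the data
#    with a shared supercomputer beats downloading scenes to a workstation.
```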
<urn:uuid:7a4e9fdf-340d-45ce-9759-602ab8d746c7>
CC-MAIN-2017-04
https://www.hpcwire.com/2012/07/25/nasa_builds_supercomputing_lab_for_earth_scientists/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00394-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955226
1,180
3.671875
4
Agriculture is the backbone of Zimbabwe's economy inasmuch as Zimbabweans remain largely a rural people who derive their livelihood from agriculture and other related rural economic activities. It provides employment and income for 60-70 percent of the population, supplies 60 percent of the raw materials required by the industrial sector and contributes 40 percent of total export earnings. Despite the high level of employment in the sector, it directly contributes only 15-19 percent to annual GDP, depending on the rainfall pattern (Government of Zimbabwe, 1995), a statistic that understates the true importance and dominance of the agricultural industry. It is generally accepted that when agriculture performs poorly, the rest of the economy suffers. Three main policy frameworks have affected the performance of agriculture in Zimbabwe in the past two decades. First, there was the "growth with equity" programme pursued by the government between 1980 and 1990. It sought to redress the colonial legacy in favour of communal farmers. Second, there were the structural adjustment market-oriented reforms of the Economic Structural Adjustment Program (ESAP), adopted in 1991. Finally, with more profound implications for the sector, there was the programme of fast-track land resettlement and redistribution, started in 2000 and currently in progress. Zimbabwe's agriculture market is developing thanks to rapidly increasing tourism, growing consumption, advanced technologies, the low water requirement of horticulture and government initiatives. Zimbabwe has an abundance of arable land, and horticulture, the new trend in agricultural production, has given a boost to output. A moderate population and high GDP also support Zimbabwe's agro sector. High competition in exports and the low contribution of the agro sector to GDP are the biggest constraints on the sector in Zimbabwe. Post-production management is also poor in the country, which makes it difficult to transport fresh food on time. Hyperinflation has damaged the country's economy, resulting in a weak agro industry. What the report offers: The study assesses the situation in Zimbabwe and forecasts the growth of its agriculture market. The report covers agricultural production, consumption, imports and exports, with prices and market trends, government regulations, growth forecasts, major companies, upcoming companies and projects, etc. It also discusses the country's economic conditions and outlook, along with the reasons for, and implications of, its recent policy changes on growth. Lastly, the report is segmented by major imports and exports and by importing and exporting partners.
<urn:uuid:5e1b174b-5ecc-457b-9f2c-1376f23026b1>
CC-MAIN-2017-04
https://www.mordorintelligence.com/industry-reports/agriculture-in-zimbabwe-industry
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00358-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933713
505
3
3
A malicious program that secretly integrates itself into program or data files. It spreads by integrating itself into more files each time the host program is run. To salvage the infected Excel document for further use, the user may instruct F-Secure Anti-Virus to disinfect the file. Alternatively, if desired, the user may instruct the antivirus program to simply delete the document. For more general information on disinfection, please see Removal Instructions. Virus:X97M/Laroux is the first real Microsoft Excel macro virus and was found in July 1996. Laroux was written in Visual Basic for Applications (VBA), a macro language based on Visual Basic. This virus is able to operate under Excel 5.x and 7.x on Windows 3.x, Windows 95 and Windows NT. It also works under localized versions of Excel (for example, versions of Excel translated to French or German). This virus does not work under any version of Excel for Macintosh, or under Excel 3.x or 4.x for Windows. X97M/Laroux is not intentionally destructive and contains no payload; it just replicates. At the time, Laroux was one of the most common viruses. Laroux consists of two macros, auto_open and check_files. The auto_open macro executes whenever an infected spreadsheet is opened, followed by the check_files macro, which determines the startup path of Excel. If there is no file named PERSONAL.XLS in the startup path, the virus creates one. This file contains a module called "laroux". Once the Excel environment has been infected by this virus, the virus will always be active when Excel is loaded and will infect any new Excel workbooks that are created, as well as old workbooks when they are accessed. If an infected workbook resides on a write-protected floppy, an error will occur when Excel tries to open it and the virus will not be able to replicate. PERSONAL.XLS is the default filename for any macros recorded under Excel, so you might have PERSONAL.XLS on your system even though you are not infected by this virus. The startup path is by default \MSOFFICE\EXCEL\XLSTART, but it can be changed from Excel's Tools/Options/General/Alternate Startup File menu option. Some Laroux variants use PLDT.XLS instead of PERSONAL.XLS and are therefore sometimes called the XM/PLDT virus. See also: Concept. Description Created: 2005-12-15 15:37:00.0 Description Last Modified: 2012-01-15 10:00:00.0
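For illustration, a hedged detection sketch in Python: it merely flags a PERSONAL.XLS in the default startup path for closer inspection, since, as noted above, the file's presence alone is not proof of infection (legitimate recorded macros use the same name, and the path shown is the documented default, which may have been changed):

```python
import os

DEFAULT_STARTUP = r"C:\MSOFFICE\EXCEL\XLSTART"  # documented default path

def find_candidate(startup_path: str = DEFAULT_STARTUP):
    """Return the path to PERSONAL.XLS in the startup folder, if present."""
    candidate = os.path.join(startup_path, "PERSONAL.XLS")
    return candidate if os.path.isfile(candidate) else None

if __name__ == "__main__":
    hit = find_candidate()
    if hit:
        # Presence alone proves nothing; an antivirus scan must confirm
        # whether the workbook actually contains a module named "laroux".
        print(f"Found {hit}; scan it for a module named 'laroux'.")
    else:
        print("No PERSONAL.XLS in the startup path.")
```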
<urn:uuid:a388686a-d933-484c-bfdd-cde532b24ab0>
CC-MAIN-2017-04
https://www.f-secure.com/v-descs/virus_x97m_laroux.shtml
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00386-ip-10-171-10-70.ec2.internal.warc.gz
en
0.91463
560
2.546875
3
The use of technology in local government has become increasingly mainstream over the last decade. According to a recent survey conducted by the International City/County Management Association (ICMA) and Public Technology, Inc. (PTI), 97 percent of U.S. cities use computers to support city operations. There has been only a slight increase in the number of cities using computers since the last survey conducted by these two organizations in 1993; however, the methods, types and uses of technology have changed quite a bit. Cities are expanding their uses of technology to help foster economic development, communicate better with their residents and increase the means by which services are delivered throughout their jurisdiction. The survey, conducted in the fall/winter of 1997, was mailed to all cities with populations 2,500 and greater and to cities with populations under 2,500 that are recognized by ICMA as having the position of a professional manager. ICMA/PTI received 3,673 responses -- a response rate of 49.7 percent. The survey results serve as a snapshot into the everyday use of information technology in U.S. cities. What are the most common types of technology? How prevalent are cities on the Web? What types of technology applications are being used on a daily basis, and in what manner? The most popular computer systems used by the cities surveyed were:
* PCs (97.5 percent),
* Laptop PCs (79.6 percent),
* Workstations (78.4 percent), and
* Microcomputers (67 percent).
It is not surprising that PCs and laptop PCs are so popular in cities. Recent price drops in the computer industry and the increase in PC capability have made the personal computer a valuable work tool. Many cities are taking advantage of the laptop's mobility and installing them in police cars and fire engines to provide realtime access to criminal profiles, sketches and fire/crime site statistics. Sixty-five percent of the responding municipalities indicated they budgeted under $50,000 for IT expenditures for fiscal-year 1998. The majority of jurisdictions with populations 250,000 and greater budgeted for more than $150,000. Sixty-six percent of municipalities will keep their data processing budget the same for fiscal-year 1999, with only 5.2 percent of responding cities expecting an increase in the budget, and 4 percent expecting a significant decrease in IT expenditures. Of those expecting a decrease, 64.3 percent responded that the prospect of contracting out data-processing services is very unlikely during the next fiscal year. Management Policy and Computer Use The survey results indicate that the key decision-maker for IT acquisition in cities varies based on a jurisdiction's size. Overall survey results indicate that decision-makers fall into the following rank:
* Manager/CAO (55.1 percent),
* Department heads (44.8 percent),
* Council members (33.9 percent), and
* IS/DP directors (25.7 percent).
A closer look at the results shows that the larger the population, the more frequently the IS/DP director is the decision-maker in the acquisition process as opposed to the manager/CAO. At least 41 percent of municipalities with populations 50,000 and greater give the IS/DP director the responsibility for making IT decisions. This trend is not surprising, as many smaller jurisdictions not only lack the IS/DP director position, but many do not have data processing staff.
The survey results indicate that larger jurisdictions -- populations greater than 50,000 -- were more likely to have their jurisdiction's data processing structure as an independent department, while smaller jurisdictions tend to merge those responsibilities with part of the city administration. As in many work environments, the most frequent users of computers are administrative staff, such as administrative assistants, secretaries and clerks (94.3 percent). The most significant increase in computer use for citywide staff since the 1993 survey was among department heads. In 1993, only 11 percent of department heads used computers; in 1997, the number climbed to 88.5 percent. The city manager/CAO position also saw an increase in computer use, albeit not as drastic, with 76.4 percent of managers using computers as compared to 57 percent of users in 1993. These results indicate that computers are more common in the city workplace among higher-level positions of staff. While the numbers for who uses technology have changed drastically over the last four years, the problems that occur due to technology have remained the same. In 1993, personnel training and underutilization of computer capacity were cited as the greatest technology problems encountered by municipalities. Four years later, even with the advance of technology, the same problems remain. Personnel training (60.1 percent), underutilization of computer capacity (48.3 percent), resistance to organizational change (40.2 percent) and resistance to use (39.2 percent) were the top four problems encountered by cities regarding computer use. Internet Use and Online Communication Internet use is still relatively new to the majority of cities across the country. While larger jurisdictions have ways and means to "get connected," smaller jurisdictions, especially those in rural areas, have yet to come online. This is most evident when dealing with the Internet and the establishment of Web sites. Twenty-six percent of cities responded that they did not have Internet e-mail. Ninety-four percent of those respondents have a population under 25,000. Forty-eight percent of all cities responding indicated that 1 percent to 10 percent of their employees had Internet e-mail, while only 5.4 percent have more than 70 percent of employees on Internet e-mail. Almost 40 percent of cities have a local-government Web site separate from any sites maintained by the chamber of commerce or department of tourism. Not surprisingly, the highest concentration of Web sites occurs in cities with large populations, as 100 percent of responding cities with populations over 1,000,000 have Web sites. Encouragingly, 22 percent of municipalities with populations under 2,500 have Web sites. The primary purpose of these Web sites is for:
* Information dissemination (87.7 percent),
* Resident education/information (73.7 percent), and
* Economic development (45.1 percent).
In order to further enhance Web sites, local governments offer a variety of services on their sites, including council meeting information, e-mail access to local government staff, parks and recreation information, and local government employment information. Many local governments count on the Web to enhance community participation and economic development within their community. The Web, in many cases, allows for realtime access to city services, such as tax/bill payment information and property assessment.
Long Range IT Planning and Y2K Two of the most disturbing survey results were that 71.2 percent of cities did not have a long-range IT plan, and 55 percent of respondents claimed their local government computers would not be affected by the year-2000 problem. Although long-range planning is more important in larger jurisdictions with more computers, smaller cities that wish to enhance services by using technology can benefit from long-range planning. In an interesting aside, 24 percent of the cities that do not have long-range planning consider their jurisdictions to be very progressive from a technology perspective; 40 percent consider themselves moderately progressive. The survey specifically questioned whether local government computers would be affected by the Y2K problem; it is a cause for concern that such a large number of municipalities indicate they won't be affected by the Y2K bug. Keeping in mind that year-2000 compliant computers have been on the market for only a few years, and that the majority of responding cities believe the average useful life of a computer is more than four years, it is highly probable that many cities have noncompliant computers. Many cities that have realized they will be affected by the computer "glitch" are trying to solve the problem by:
* Working with an outside vendor (32.5 percent),
* Using in-house staff (28 percent), and
* Working with a consultant (24.7 percent).
Twenty-four percent of cities that responded they would be affected by the year-2000 problem have not begun to solve the problem yet. Lack of financial resources, staff and awareness of the problem may be a few reasons why these local governments have not yet attempted to solve this labor-intensive problem. Applications of Technology The geographical information system (GIS) was the most popular technology application used in municipal government. Although GIS is still one of the most frequently used applications -- 51.4 percent of cities responded that they are currently using GIS -- other technologies are becoming popular as well. For example, wireless service (87.3 percent), including mobile radios and cellular phones, was the application most frequently chosen by respondents this year. Fax back/reply service (55.2 percent) and fiber optics (54.5 percent) are also technologies becoming more popular. All of these technologies can help cities deliver goods and services more effectively and efficiently to residents. Video arraignment, which frees up courtrooms for trials and eliminates the security risks of taking prisoners out of jail, is another technology application increasing in popularity. Thirty-eight percent of cities, including Houston, Texas, are now taking advantage of video arraignment. In Houston, realtime, full-motion color video, and full-duplex audio with echo/feedback elimination, are used to remotely arraign prisoners. Touch-screen kiosks, used by 15.3 percent of cities, have been a success in Springfield, Mo. The kiosks, which contain an interactive program that provides users with material ranging from recycling to election information, are located in public places throughout the city. In the first two quarters of 1997, the kiosks had nearly 8,000 users, and the cost savings have been tremendous. As U.S. cities aim to provide better service to their residents in a more timely manner, they have begun to use technology in a more effective way.
The use of technology by city staff is becoming more common, as is the use of technology applications to deliver goods and services throughout communities. What does the future hold for local governments in relation to technology? "Technology will be embedded in every service local government provides. Technology will not be thought of as a separate piece, but an important part of the business process," said Dinah Neff, chief information officer of Bellevue, Wash. ICMA is the professional and educational organization of more than 8,200 appointed administrators and assistant administrators serving cities, counties, other local governments and regional entities around the world. PTI is the nonprofit technology organization of ICMA, the National League of Cities and the National Association of Counties. PTI's mission is to bring technology to local and state governments. For the past decade, leaders in business, government and academia in Austin, Texas, have collaborated to implement a vision of the city's future that embraces science and technology advances. To a great extent, the city has staked its future success on technology. As a result, Austin has emerged as a thriving high-tech center characterized by high energy, a collaborative spirit and high expectations of the city government's continuing leadership. In late 1994, as the Internet and World Wide Web worked their way into the mainstream vocabulary, City Manager Jesus Garza asked, "Why don't we put the city online to allow everyone direct, 24-hour access to information and services?" Simultaneously, a team of employees from business and technology disciplines proposed a three-pronged strategy for using the Internet to improve service and communication:
* Content and services -- Put useful information and, ultimately, transactions on the Web. Local and global customers who use city services are the primary audience.
* Employee access -- Give all city employees the tools and training they need to serve via the Internet and to continually increase the online offerings.
* Public access -- Collaborate with business and public partners, such as the Austin Free-Net, Metropolitan Austin Interactive Network, the University of Texas, schools and governments to ensure everyone in Austin has access to online opportunities.
Thus was born the Austin City Connection, a commitment to use Internet technologies to connect people with useful information, services and people. The next step was to walk the talk online. In January 1995, the city manager challenged a team of 12 employees with technical and business expertise to build a Web site in 30 days. Residents and fellow employees jumped in to help. At a Feb. 21, 1995, community celebration, the Austin City Connection went live on the Web, with over 300 files of information as basic as the city charter and weekly council agenda and as comprehensive as a database of community services for youth, weekly job openings and the purchasing office's bidding opportunities list. Over time, usage has greatly increased, going from an average of 14,000 files a week in February 1995, to an average of 75,000 files a week in June 1997. Content has grown from 300 to 2,000-plus files/databases maintained by city employees. A graphical interface to the public-library catalog went online in April 1997, due, in part, to an MCI grant. Interactive services continue to grow with the addition of the online traffic suggestion/complaints form, neighborhood association database and live election results in spring 1997.
In response to customer requests, the Austin City Connection now offers a text-only version of every page, allowing users with text-only browsers or visually impaired users to navigate the site easily. Over 2,000 of 8,000 city employees have access to Internet e-mail and the Web. More are coming online daily and learn to add information to the Connection via a Web-based publishing system developed by employees for employees. For more information, contact Chris Kelley, webmaster, Austin City Connection, 625 East 10th St., 9th floor, Austin, Texas 78701. Call 512/499-6550. Fax: 512/499-2091.
<urn:uuid:3bab4b25-a5fd-453f-8cce-51ad7b208459>
CC-MAIN-2017-04
http://www.govtech.com/magazines/gt/More-Cities-Utilizing-Technology.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00533-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954173
2,862
2.859375
3
How SOA and the Cloud Relate
The terminology used to discuss SOA and Cloud computing, and the breadth of what each covers, is often interpreted to suit the need at hand. One often sees a paradigm stereotyped as a single manifestation or implementation: SOA as Web services, for example, or Cloud as compute units. The definitions below attempt to generalize the terms and thereby create broad applicability for the technologies, their patterns of usage and their deployment scenarios. SOA - Service oriented architecture is an architectural approach for constructing software systems from a set of building blocks, called services. Services differ from components in that services are autonomous, defined by their interface, loosely coupled, and often support multiple technologies in integration. SOA adoption requires a methodology where these services are developed in tandem by both IT and the business. SOA adoption also requires a governance process where services are defined, changed, modified, combined, versioned, reused and orchestrated to support ever-changing business. Cloud computing - As defined by Gartner, Cloud is a style of computing where massively scalable IT-related capabilities are provided as a service using Internet technologies to multiple external customers. IBM describes it as an emerging computing paradigm where data and services reside in massively scalable data centers and can be ubiquitously accessed from any connected devices over the Internet. With SOA and Cloud computing, cost reduction and business agility are common drivers, yet the benefits are achieved in different ways. Cost savings is a medium- to long-term driver for SOA, as savings occur only when reuse of services reaches levels where the cost of building the service can be offset. Don't make the mistake of expecting cost savings in your first SOA project; your costs will in fact be higher. On the other hand, cost savings are immediate when a Cloud-based infrastructure is leveraged appropriately. While rationalization is also a common need of SOA and Cloud computing, the targets vary: business functionality in the case of the former, and the infrastructure used to deliver it in the case of the latter. The chart below outlines the similarities and differences of the drivers for SOA and Cloud. Specific stakeholders of an organization are common to both SOA and the Cloud and may have significant interests. This refers to select members of the IT organization: the CIO, delivery teams and data center personnel who work to implement services and may host them on the Cloud infrastructure. It's worth mentioning that failed initiatives under both these paradigms are mostly people related, as both require strong processes and governance frameworks to be put in place. While technology-related challenges do occur, they are usually overcome eventually. An organization's maturity in using IT, and prior demonstration of agility in adopting IT-related changes, is a good measure of appetite and a good indicator of the outcome of such initiatives.
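To make the definition above concrete (services as autonomous building blocks, defined by their interface and loosely coupled), a minimal sketch of such a contract and one swappable implementation might look like this; the names and stub data are illustrative only, not drawn from the article:

```python
from abc import ABC, abstractmethod

class CustomerLookupService(ABC):
    """Service contract: the only thing consumers are allowed to depend on."""

    @abstractmethod
    def find_invoices(self, customer_id: str) -> list[dict]: ...

    @abstractmethod
    def find_contract(self, customer_id: str) -> dict: ...

class FinanceSystemAdapter(CustomerLookupService):
    """One interchangeable implementation behind the contract.

    It could be replaced by a Cloud-hosted implementation without consumers
    noticing, which is exactly where the two paradigms meet.
    """

    def find_invoices(self, customer_id: str) -> list[dict]:
        return [{"customer": customer_id, "invoice": "INV-001"}]  # stub data

    def find_contract(self, customer_id: str) -> dict:
        return {"customer": customer_id, "contract": "CTR-001"}   # stub data
```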
Stakeholders of SOA vs. stakeholders of Cloud computing:
- SOA: Sponsored by the business and implemented by the CIO organization, sometimes both by the latter. Cloud: Sponsored and implemented by the CIO organization.
- SOA: Requires active participation from the business, IT and data center operations. Cloud: Requires active participation from IT and data center operations.
- SOA: Often flows from strategic initiatives and mandates in the organization. Cloud: Presently more often used to meet tactical and very specific needs.

Provisioning and Lifecycle
Services are often created to meet tactical needs and evolve into enterprise class as use and reuse grow. Service lifecycle management governs this evolution and is also essential to provide a roadmap and sustenance for enterprise services. On the other hand, it's simpler to procure and use Cloud services. Lifecycle challenges for Cloud users are mostly limited to compatibility between upgrades, which again is often addressed through SLA guarantees from the service provider. When it comes to managing the two classes of assets, there are differences. However, synergies exist in the way one is used to manage the other and vice versa, akin to a process running on an operating system that may itself be a collection of running process instances. The ubiquity of technologies like HTTP, XML and the Web services standards that emerged from them has led to their use as preferred implementation platforms both for SOA and for the administration of Cloud infrastructures.

Provisioning of services vs. provisioning of Cloud computing infrastructure:
- Services: Initiated by one project/program with the intention of being used by others. Cloud: Often initiated to meet specific project/program needs; others may follow suit.
- Services: A service may be created using components from diverse platforms and technologies, or from other services. Cloud: Infrastructure may be created and administered through Web services.
- Services: Strong governance is needed throughout the service lifecycle. Cloud: No concept of lifecycle; governance is required to regulate the use of Cloud-based infrastructure.
- Services: A service may be deployed on a Cloud-based platform. Cloud: The Cloud leverages specific software packages and hardware deployment configurations.

Paradigm Focus
In order to determine how to meet internal vs. external needs, enterprises commonly focus on SOA and Cloud computing as paradigms, including the business opportunities each enables for the other. Security concerns are often overlooked when deploying SOA services and overstated when using the Cloud. The reluctance to move critical systems to the Cloud is the biggest limiting factor for its adoption, and industry-wide mindset shifts are needed to create a wave of Cloud adoption.

SOA paradigm vs. Cloud computing paradigm:
- SOA: Used predominantly to expose services within the enterprise and with partners. Cloud: May be used to further the reach of services through infrastructure available from the Cloud vendor.
- SOA: Often used to improve efficiency and reduce cost within the enterprise. Cloud: May also be used to realize new business models for service delivery and consumption, like SaaS.
- SOA: Used to design systems handling sensitive information, as the security concern exists only within the enterprise. Cloud: Enterprises shy away from deploying sensitive data and systems on the Cloud, as infrastructure is shared among Cloud customers.

SOA is off the hype curve. It is becoming a viable approach for implementing enterprise applications, and there are many lessons learned from successful and failed initiatives. Cloud, on the other hand, is still up on the hype curve and can go either way.
The synergies between the two weigh slightly in favor of SOA, as the Cloud can greatly extend the reach of services beyond the enterprise and open up new business opportunities. Many of the usual concerns, such as whether SLAs will be met as the customer base grows rapidly, or whether upfront infrastructure investments make sense in an unproven market for the services offered, are becoming less significant. Regunath Balasubramanian leads the Architecture Services group at MindTree and is a practicing architect. He is an advocate of open source, both in use and in contribution. He blogs frequently at http://regumindtrail.wordpress.com.
<urn:uuid:0274ab55-329e-4a7f-b1e4-7be3f590c6f8>
CC-MAIN-2017-04
http://www.cioupdate.com/print/reports/article.php/3853076/How-SOA-and-the-Cloud-Relate.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00349-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946585
1,307
2.765625
3
Lack of Binary Protection is the last risk in the OWASP Mobile Top 10. Android applications are delivered as .apk files, which an adversary can reverse engineer to see all the code they contain. Below are some scenarios involving reverse engineering of an application:
- An adversary can analyze the app to determine which defensive measures are implemented, and then find ways to bypass those mechanisms.
- An adversary can also insert malicious code, recompile the app and deliver it to unsuspecting users.
- For example, gaming apps with paid features unlocked are widely downloaded by youngsters from insecure sources (sometimes through the Google Play Store as well). Many of those modified apps contain malware, and some carry advertising so the distributor can profit from those users.
An adversary is the only threat agent in this case. Below is a demonstration of reverse engineering an app, using the deliberately vulnerable Sieve app. First, use dex2jar to convert the .apk file into a .jar file. Then open the .jar file in JD-GUI. If you followed the above steps, you will be able to see the code of the Sieve app.
How To Fix
Application code can be obfuscated with the help of ProGuard, but obfuscation can only slow an adversary down; it does not prevent reverse engineering of an Android application. You can learn more about ProGuard here. For security-conscious applications, DexGuard, a commercial version of ProGuard, can be used. Besides encrypting classes, strings and native libraries, it also adds tamper detection so your application can react accordingly if a hacker has tried to modify it or is accessing it illegitimately.
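For readers who want to script the apk-to-jar step demonstrated above, a small sketch follows; it assumes the dex2jar distribution's d2j-dex2jar script is on the PATH, and the file name is just an example:

```python
import pathlib
import subprocess

def apk_to_jar(apk: str, out_dir: str = "decompiled") -> pathlib.Path:
    """Convert an .apk to a .jar with dex2jar, ready to open in JD-GUI."""
    out = pathlib.Path(out_dir)
    out.mkdir(exist_ok=True)
    jar = out / (pathlib.Path(apk).stem + ".jar")
    # d2j-dex2jar writes the converted jar to the path given with -o.
    subprocess.run(["d2j-dex2jar.sh", apk, "-o", str(jar)], check=True)
    return jar

# Usage (hypothetical file name):
#   jar = apk_to_jar("sieve.apk")
#   ...then open `jar` in JD-GUI to browse the recovered source.
```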
<urn:uuid:8f1079f6-6296-42a3-beb7-66df4855dd37>
CC-MAIN-2017-04
https://manifestsecurity.com/android-application-security-part-9/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00525-ip-10-171-10-70.ec2.internal.warc.gz
en
0.921229
350
2.640625
3
Many of us like to think that we "live off" of caffeine, but our caffeine-addicted culture just doesn't compare to the bacterial cultures growing in Texas laboratories, where scientists from the University of Texas and the University of Iowa have created synthetic bacteria that need caffeine to live and reproduce. "The major application is that whenever you are processing coffee berries, you are left with a lot of nutrient-rich plant material," says Jeffrey Barrick, a biochemist at the University of Texas who worked on the experiment. "But because of the plants' high caffeine levels, they are also really toxic; the bacteria could help turn that plant material into nutritious livestock feed," which could make a big difference for farmers in developing countries. Coffee's energy-boosting properties sometimes come across as miraculous, but caffeine is basically just a bunch of oxygen, nitrogen, carbon and hydrogen atoms. Of these, hydrogen and carbon combine to form three methyl groups, which stick out from caffeine's central ring structure. The genetically-engineered bacteria target these methyl groups: their enzymes break them down so they can reach the ring and make the DNA bases they need to survive. So, theoretically, you could add the bacteria to a can of Red Bull and watch them grow. In the short time it takes for the enzymes to break down the caffeine to its chemical precursor, xanthine, you could obtain a perfectly decaffeinated energy drink, with lots of bacterial growth for added flavor.
<urn:uuid:0c9f8a3b-89c7-4a2a-bbb6-5dcd75eb2846>
CC-MAIN-2017-04
http://www.nextgov.com/health/2013/03/scientists-just-made-bacteria-love-coffee-much-you-do/62183/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00001-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948319
314
3.59375
4
It may be difficult to attribute the following points to a specific source, but here are all of the guidelines I can remember off the top of my head about bar charts vs. line charts, mostly learned from Edward Tufte and Stephen Few. It's a bit of an art, and how you represent your data depends on what exactly you intend to find in it, so it's difficult to write finite rules that dictate what to do. If you want to learn more, check out their books on our Reading page.
- When to use them: Line charts should be used only for time series (chronological) or when there is some other sequence to the dimensions on the x-axis, e.g. dates, months, stages of a project, or a sequence of meters along a gas pipeline, and they should be used to detect trends and patterns, not to give people exact quantitative readings.
- Scale: As line charts are not really intended to give people exact numbers, forcing zero scaling is not necessary and can make it considerably more difficult to detect said trends and patterns.
- When to use them: Bar charts should be used for comparing specific x-axis values, though they can certainly be used for time series, like line charts. They can also be used to display parts of a whole in place of pie charts, in which case the space between the bars should be reduced.
- Orientation: Do not use vertical or diagonal text to label the axis of a bar chart. If the x-axis has longer text descriptions, use a horizontal bar chart, so the text can read left-to-right, horizontally (the way we normally read).
- Scale: As the area of bars implies volume, it can be deceptive to use dynamic scaling with bar charts (see: Lie Factor). If the differences between the data points are difficult to distinguish with forced-zero scaling, use symbols/points instead of bars and use dynamic scaling.
Applies to both
- Dimension order: There should be some logical order to the dimensions on the x-axis. In the case of a line chart, it should follow the chronological, process, or stage order that caused you to select a line chart in the first place. In the case of bar charts, the order should have some rhyme and reason to it: sorted by y-axis value, alphabetical, etc., depending on the content of the chart and what its intended use is, e.g. ranking, distribution.
- Scale labels: If the numbers are already being displayed on the data points, it is redundant to label the axis with numbers, too.
- Axis labels: If you can incorporate the metric names and dimension names into the chart title or legend, do not waste space on axis labels.
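A short matplotlib sketch applying several of these guidelines side by side (the data is invented for illustration):

```python
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
revenue = [102, 98, 107, 111, 109, 118]                # a time series -> line chart

regions = {"North": 42, "South": 61, "East": 35, "West": 53}  # categories -> bars

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Line chart: sequential x-axis, dynamic y-scale (no forced zero) so the
# trend stands out; exact values are not the point.
ax1.plot(months, revenue, marker="o")
ax1.set_title("Trend over time (line)")

# Horizontal bar chart: zero-based scale (bars imply volume), sorted by
# value, category labels reading left to right.
names = sorted(regions, key=regions.get)
ax2.barh(names, [regions[n] for n in names])
ax2.set_xlim(left=0)
ax2.set_title("Comparison by category (bars)")

plt.tight_layout()
plt.show()
```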
<urn:uuid:1a48ae14-78ec-43ad-9b1b-0f9e9a2ab1b6>
CC-MAIN-2017-04
http://www.axisgroup.com/bar-charts-vs-line-charts/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00029-ip-10-171-10-70.ec2.internal.warc.gz
en
0.898871
572
3.1875
3
Definition: A computation which takes some inputs and yields an output. Any particular input may yield different outputs at different times. Formally, a mapping from each element in the domain to one or more elements in the range.

Specialization (... is a kind of me.) function, binary relation.

Note: Cosine is a function, since every angle has a specific cosine. Its inverse cos⁻¹(x) is a relation, since a cosine value maps to many (for this relation, infinitely many) angles.

If you have suggestions, corrections, or comments, please get in touch with Paul Black. Entry modified 18 December 2003. HTML page formatted Mon Feb 2 13:10:40 2015.

Cite this as: Paul E. Black, "relation", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 18 December 2003. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/relation.html
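A small Python illustration of the distinction (the code and its function names are mine, not part of the dictionary entry): cosine maps each angle to exactly one value, while its inverse maps one value to many angles.

```python
import math

def cosine(angle):
    """A function: every input yields exactly one output."""
    return math.cos(angle)

def inverse_cosine(value, periods=2):
    """A relation: one input maps to many outputs. Returns several of the
    infinitely many angles whose cosine equals `value`."""
    base = math.acos(value)                    # principal value in [0, pi]
    angles = []
    for k in range(periods):
        angles.append(base + 2 * math.pi * k)  # +base shifted by 2*pi*k
        angles.append(-base + 2 * math.pi * k) # -base shifted by 2*pi*k
    return sorted(angles)

print(cosine(math.pi / 3))   # 0.5000... -- one output per input
print(inverse_cosine(0.5))   # [-1.047..., 1.047..., 5.235..., 7.330...]
```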
<urn:uuid:64d86352-26c0-4ae9-9280-a7f6a8174824>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/relation.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00029-ip-10-171-10-70.ec2.internal.warc.gz
en
0.875608
221
2.625
3
Raspberry Pi Unveils a Smaller, Cheaper Minicomputer

The organization behind the Raspberry Pi is releasing the latest iteration of the minicomputer that is smaller and even more affordable than the original.

The Raspberry Pi Foundation on Nov. 10 unveiled the Model A+, a Linux-based computing board that includes many of the features of the Model A—which was introduced in 2012—including the BCM2835 ARM-based application processor from Broadcom and 256MB of memory. However, the Model A+ is smaller—65mm long, as compared with 86mm for the Model A—more power-efficient and costs $20, according to Eben Upton, founder of Raspberry Pi and CEO of the foundation's engineering team. The original Model A is priced at $25.

In addition, the Model A+ includes many of the enhancements the foundation made to the Model B+, which was announced in July. That includes having 40 pins in its general-purpose input/output (GPIO) header rather than 26, and support for the foundation's HAT (Hardware Attached on Top) standard, a spec introduced in July with the Model B+ for third-party add-on boards.

"A significant feature of HATs is the inclusion of a system that allows the B+ to identify a connected HAT and automatically configure the GPIOs and drivers for the board, making life for the end user much easier," James Adams, director of hardware at Raspberry Pi, wrote in a post on the foundation's blog at the time of HAT's introduction.

In addition, the Model A+ includes better audio capabilities through the use of a dedicated low-noise power supply, and the Micro SD has been improved by removing the older friction-fit socket and replacing it with a better push-push micro SD version, Upton wrote in a post on the foundation's blog.

The Model A+ version is a significant step forward for the foundation's efforts, according to Upton. "When we announced Raspberry Pi back in 2011, the idea of producing an 'ARM GNU/Linux box for $25' seemed ambitious, so it's pretty mind-bending to be able to knock another $5 off the cost while continuing to build it here in the UK, at the same Sony factory in South Wales we use to manufacture the Model B+," he wrote.

The group released the first versions of the credit-card-size computer in 2012. Foundation members hoped students and enthusiasts would embrace the device and use it to learn how to program. The popularity of the Raspberry Pi grew beyond expectations, and some companies have found ways to use the system. The Raspberry Pi Foundation reportedly has sold more than 2 million units of its Model A (which has one USB port and sells for $25) and Model B (with its two USB ports and a $35 price tag).
<urn:uuid:f29c8de3-a52d-4462-96c1-0b438f4151e9>
CC-MAIN-2017-04
http://www.eweek.com/blogs/first-read/raspberry-pi-unveils-a-smaller-cheaper-mini-computer.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00239-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955189
586
2.53125
3
Mainframe TCP/IP Commands

This course provides the learner with a basic understanding of IBM mainframe networks with z/OS. It introduces traditional SNA subarea networks, SNA APPN networks, and mainframe TCP/IP networks, including some of the network equipment associated with each. Current topics such as the future of SNA and the concept of using TCP/IP to transport SNA traffic are also covered. Finally, the learner is introduced to the basic VTAM and TCP/IP commands needed to use, control, and investigate mainframe networks.

Audience: This course is oriented toward systems programmers and operators; however, anyone needing to understand IBM mainframe networks will benefit from this course.

Prerequisites: Completion of the "IBM Mainframe Communications Concepts" course and basic knowledge of TCP/IP and the IBM operating system.

After completing this course, the student will be able to:
- Issue standard TCP/IP commands such as ping and netstat from the mainframe.
- Start and stop TCP/IP and its related processes.
- Use TCP/IP client applications from the mainframe.
- Manage TCP/IP server applications from the mainframe.

Course outline:
- Introducing Mainframe TCP/IP Commands
- How TCP/IP runs on the mainframe
- The TCP/IP Daemon
- TCP/IP Support Daemons
- TCP/IP Daemon Commands
- TSO/E and USS Commands
- Starting and Stopping TCP/IP
- TCP/IP Console Commands
- TCP/IP Client Application Commands
- Sending and Receiving Files
- Accessing Remote Computers
- Sending and Receiving Emails
- TCP/IP Server Application Commands
- FTP and Remote Execution Servers
- Mainframe TCP/IP Commands Mastery Test
<urn:uuid:22c7b6a0-411b-4548-ad8f-ebeb2e5e905b>
CC-MAIN-2017-04
https://www.interskill.com/course-catalog/IBM-Mainframe-TCP-IP-Commands.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00057-ip-10-171-10-70.ec2.internal.warc.gz
en
0.832318
394
2.5625
3
RATs – Remote Access Trojans – are often used by cyber attackers to maintain a foothold in infected computers and make them do things unbeknownst to their owners. But in order to do that and not be spotted, RATs must employ a series of obfuscation techniques.

Take for example the FAKEM RAT variants recently analyzed by Trend Micro researchers: in order to blend in, some try to make their network traffic look like Windows Messenger and Yahoo! Messenger traffic, and others like HTML.

Usually delivered via spear-phishing emails, once executed the malware copies itself into the %System% folder. When contacting and sending information to remote servers, the malicious traffic begins with headers similar to actual Windows Messenger and Yahoo! Messenger traffic, but checking the traffic that follows clearly shows its malicious nature. The communication between the compromised computer and the RAT’s controller is also encrypted.

The RAT starts by sending out information about the compromised system, and can receive simple codes and commands that make it do things like execute code, go to sleep, or run shell commands, and that allow the attacker to browse directories, access saved passwords, and more.

“Now that popular RATs like Gh0st and PoisonIvy have become well-known and can easily be detected, attackers are looking for methods to blend in with legitimate traffic,” the researchers noted. “While it is possible to distinguish the network traffic FAKEM RAT variants produce from the legitimate protocols they aim to spoof, doing so in the context of a large network may not be easy. The RAT’s ability to mask the traffic it produces may be enough to provide attackers enough cover to survive longer in a compromised environment.”
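As a rough illustration of the detection idea (distinguishing traffic that merely starts with a legitimate-looking header), here is a Python sketch. The 20-byte header and "YMSG" magic are standard Yahoo! Messenger framing, but the printable-ratio heuristic is an assumption for illustration, not Trend Micro's published signature:

```python
YMSG_MAGIC = b"YMSG"
HEADER_LEN = 20   # "YMSG" magic plus version/length/service/status/session fields

def looks_spoofed(payload: bytes) -> bool:
    """Flag traffic that claims to be Yahoo! Messenger but whose body does
    not look like the protocol's ASCII key/value fields."""
    if not payload.startswith(YMSG_MAGIC):
        return False                      # not pretending to be YMSG at all
    body = payload[HEADER_LEN:]
    if not body:
        return False
    # Genuine YMSG bodies are mostly printable ASCII fields separated by
    # 0xC0 0x80; mostly non-printable bytes suggest encrypted RAT traffic.
    printable = sum(32 <= b < 127 for b in body)
    return printable / len(body) < 0.5

fake = YMSG_MAGIC + bytes(16) + bytes([0x8F, 0x13, 0xD9, 0x02, 0x7A, 0xE0])
print(looks_spoofed(fake))                # True: header says YMSG, body doesn't
```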
<urn:uuid:6c1ea7e0-2982-4f20-bb53-ecd64ce06fcd>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2013/01/18/new-rat-family-makes-its-traffic-look-legitimate/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00019-ip-10-171-10-70.ec2.internal.warc.gz
en
0.913025
361
2.78125
3
Google Glass is somewhat of a controversial product these days, with some people wary of its threats to privacy. However, one potentially positive use of the technology is being piloted by researchers at Newcastle University, who've been given five pairs of Glass to study how it can be used to support people with long-term physical conditions. As the video below discusses, the researchers have already seen Glass help people with Parkinson's by reminding them to take medication, making it easier to interact with mobile phones, and generally giving them a greater sense of security and confidence when they're out and about.

Here are some other recent news items and information about the development of accessible technology:

- Researchers at the Worcester Polytechnic Institute are developing a more intelligent wheelchair that can be controlled with body movements such as facial expressions, using off-the-shelf electronics to ensure a low cost and the ability to retrofit standard wheelchairs.
- Unclear about how and when to best use text alternatives for images, such as ALT tags, to improve website accessibility? Dey Alexander of 4 Syllables has recently updated a decision tree, originally created several years ago, to help guide you.
- If you use Agile methods for your web development and want to make sure that your website is accessible, Kathy Wahlbin of Interactive Accessibility has written a guide on how to write user stories for web accessibility.

Was there other news or interesting information from the world of accessible technology that I missed? Let me know in the comments.
<urn:uuid:a81b5d12-4d3f-4045-b20f-9ad5a511f5ec>
CC-MAIN-2017-04
http://www.itworld.com/article/2697830/mobile/google-glass-shows-promise-as-an-aid-to-people-with-parkinson-s.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00140-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933535
357
2.671875
3
NIST draft standard details approximate matching

The National Institute of Standards and Technology's draft publication SP 800-168, Approximate Matching: Definition and Terminology, provides a description of approximate matching and includes requirements and considerations for testing.

Approximate matching is a technique designed to identify similarities between two digital artifacts or arbitrary byte sequences, such as files. The similarity between two artifacts is determined by a particular approximate matching algorithm.

One way the technology finds these similarities is resemblance. In this method, two similarly sized objects are compared and searched for common traits. For example, successive versions of a piece of code are likely to share many similarities.

A second way approximate matching measures similarity is containment. This method examines two objects of different sizes and determines whether the smaller one is inside the larger one, such as a file within a whole-disk image.

This technology is very useful for security monitoring and forensic analysis by filtering data. It produces a result in the range [0, 1], which is interpreted as a level of similarity. The reliability of a result is assessed by the robustness of the algorithm, its precision, and whether the algorithm includes security properties designed to prevent attacks, such as manipulation of the matching technique.

A public comment period on Special Publication 800-168 begins on Jan. 27, 2014, and runs through March 21, 2014. Comments can be sent to email@example.com with "Comments on SP 800-168" in the subject line.

Posted by Mike Cipriano on Jan 31, 2014 at 7:38 AM
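As a rough illustration of these two measures, here is a Python sketch using a generic n-gram Jaccard index. It is a toy that returns results in [0, 1], not one of the specific algorithms the publication covers:

```python
def ngrams(data: bytes, n: int = 4) -> set:
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def resemblance(a: bytes, b: bytes) -> float:
    """Similarity of two comparably sized objects, in [0, 1]."""
    na, nb = ngrams(a), ngrams(b)
    return len(na & nb) / len(na | nb) if (na | nb) else 1.0

def containment(small: bytes, large: bytes) -> float:
    """How much of `small` appears inside `large`, in [0, 1]."""
    ns = ngrams(small)
    return len(ns & ngrams(large)) / len(ns) if ns else 1.0

v1 = b"def main(): print('hello world')"
v2 = b"def main(): print('hello brave new world')"
print(round(resemblance(v1, v2), 2))   # high: successive code versions
print(containment(b"hello world", v1)) # 1.0: fully contained in v1
```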
<urn:uuid:7c61c302-db14-4e55-bc31-3fe6f8a3fe74>
CC-MAIN-2017-04
https://gcn.com/blogs/pulse/2014/01/approximate-matching.aspx?admgarea=TC_SecCybersSec
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00378-ip-10-171-10-70.ec2.internal.warc.gz
en
0.918535
325
2.703125
3
Internet of Things: 4 Security Tips From The Military

The military has been connecting mobile command posts, unmanned vehicles, and wearable computers for decades. It's time to take a page from their battle plan.

The Internet of today, what some are calling the Internet of Things (IoT), is a network enabled by embedded computers, unobtrusive sensors, worldwide systems, and big-data analytic environments. These systems, sensors, and devices are communicating amongst themselves and feeding a ubiquitous network seamlessly integrated with our lives. While the efficiencies and insights gained through the deployment of this massive interconnected system will bring new benefits, it could also bring new risks. Experience shows us that when everything is connected, everything is vulnerable.

In fact, this approach to creating systems of systems is not new. The military has been connecting mobile command posts, unmanned vehicles, and wearable computers in the battle space for decades. These devices and systems are connected to a network that feeds into a common operating picture for the warfighter. The expertise gained by companies creating these systems of systems for the military has provided a unique perspective on information security risks. As cyberthreats become more sophisticated and aggressive in this expanding IoT environment, four areas of concern will rise in importance. All organizations should:

1) Make sure information is reliable and systems are resilient. With the large amount of data generated by the IoT, a key question will be: “How do I know the data generated by this system is reliable?” Chief information security officers (CISOs) can find answers within information assurance strategies. Data can be encrypted with simple tools like Secure/Multipurpose Internet Mail Extensions (S/MIME) or more complex systems like Information Rights Management solutions. Additionally, data separation and risk containment can be provided through virtual machine technology, database containers, and cross-domain solutions brought over from the military domain. Systems must be hardened, not just patched; unnecessary services and applications must be removed and the remaining software configured appropriately. So many systems built for the IoT, either on the device side or the cloud side, are based on multipurpose operating systems and are left with many features running that unnecessarily expose risk.

2) Keep pace with technology. With each new device that enters the IoT domain, new vulnerabilities and threats are introduced. A cyber adversary will not only have this new target with its vulnerabilities to exploit, but he will also have a new path from which to attack the other entities on your network. Companies will succeed in the IoT environment when they understand both the new opportunities gained from new devices in their business ecosystem and the new risks they take on, and preplan how best to manage them. Security organizations should have a lab and do their research on new devices to understand not just how to use a device, but also what is embedded in the device, what data is generated and transmitted, where the device transmits its data, and what connections it will accept from other devices in an environment, among a host of other concerns. Most importantly, if adversaries have access to the sensors and data generated by this device, including the personal devices users are bringing into the building, organizations must know and prepare for the advantages it would give them.
3) Focus on the insider threat. The IoT is about connections among devices, the masses of data generated by sensors, cloud processing and storage, and automated actuators. Threats to this environment may be slowed by perimeter defenses, but security experts know the most dangerous threat is the one inside -- where the most serious damage can be done. The Target, Wikileaks, and Snowden breaches are evidence of this damage, particularly regarding financial costs and loss of trust. The Target example is all about the IoT, whereby adversaries were able to penetrate the point-of-sale (POS) devices by first entering through a heating, ventilation, and air conditioning controller. As a result, banks and credit unions lost more than $200 million, according to the Consumer Bankers Association. In this new environment, it's critical for companies to have insider-focused security and continuous monitoring solutions that can detect anomalies and unauthorized privileged-user activity, and determine when information has been accessed inappropriately. These must be behavioral analytics, not just simple rules and policies.

4) Embrace (big and community) data analytics to minimize cyberthreats. The IoT will generate more data as new devices and systems are added to the ecosystem. Innovations in analytics will drive more than efficient processes; they will also drive new ways to detect threats. For example, successful data analytics programs apply algorithms that automatically identify areas of cyber security interest in large volumes of data. In this new ecosystem, analytics will hold the key to predicting threats before they happen.

The IoT has moved from the military to everyday life, allowing us to create and process more data than ever before on everything from the products we buy, to critical power and water, to how we drive on the highway. Making sure this system of systems is secure will help us ensure the IoT delivers its promise of convenience and efficiency.

Michael K. Daly is CTO, Cybersecurity and Special Missions of Raytheon Intelligence, Information and Services.
<urn:uuid:99dfa2ef-9032-4059-a820-b0de48a1cd84>
CC-MAIN-2017-04
http://www.darkreading.com/mobile/internet-of-things-4-security-tips-from-the-military/a/d-id/1297546?_mc=RSS_DR_EDT
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00286-ip-10-171-10-70.ec2.internal.warc.gz
en
0.940099
1,075
2.796875
3
Google has developed Leak Finder

Google's Chrome team created the tool, as memory leaks can be common in browsers, particularly some versions of Microsoft's Internet Explorer, developers say.

A memory leak occurs when a computer program consumes memory but is unable to release it back to the operating system. In object-oriented programming, a memory leak happens when an object is stored in memory but cannot be accessed by the running code. A memory leak has symptoms similar to a number of other problems and often can only be diagnosed by a programmer with access to the program source code.

A memory leak can diminish the performance of the computer by reducing the amount of available memory. Eventually, in the worst case, too much of the available memory may become allocated and all or part of the system or device stops working correctly, the application fails, or the system slows down unacceptably.

Douglas Crockford has written on memory leaks: "When a system does not correctly manage its memory allocations, it is said to leak memory. A memory leak is a bug. Symptoms can include reduced performance and failure." Crockford also noted, "In a memory space full of used cells, the browser starves to death."

In an Aug. 8 post, the Chrome team delivered goog.Disposable to help out with the leakage issue. goog.Disposable is an interface for disposable objects. "Before dropping the last reference to an object, which is an instance of goog.Disposable (or its subclass), the user code is supposed to invoke the method dispose() on the object," the Chrome team wrote in the post. "This method can release resources, e.g., by disposing event listeners. However, a Web application might forget to call dispose() before dropping all the references to an object."

Leak Finder can detect goog.Disposable objects that were not disposed. It produces machine-readable output and can be used as a part of test automation. To find leaks, Leak Finder relies on goog.Disposable's monitoring mode. The mode gathers all created but not yet disposed instances of goog.Disposable. The default configuration of the tool detects goog.Disposable objects that were not disposed, Google said.
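goog.Disposable itself is JavaScript (part of Google's Closure Library). As a language-neutral illustration of the same dispose-before-drop discipline, here is a Python sketch with a simple monitoring mode; the class and method names are illustrative, not Google's API:

```python
class Disposable:
    """Objects holding external resources expose dispose(); a simple
    monitoring mode records instances that are dropped without it."""
    undisposed = set()                    # ids of live, undisposed instances

    def __init__(self):
        Disposable.undisposed.add(id(self))
        self._disposed = False

    def dispose(self):
        if not self._disposed:            # release listeners, handles, etc.
            self._disposed = True
            Disposable.undisposed.discard(id(self))

    def __del__(self):
        if not self._disposed:            # dropped without dispose(): a leak
            print(f"LEAK: {type(self).__name__} was never disposed")

class EventListener(Disposable):
    pass

ok = EventListener()
ok.dispose()                              # the disciplined path

leaky = EventListener()
del leaky                                 # reported as a leak in CPython
```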
<urn:uuid:6206f0f9-0321-4e17-a7d7-8a4622684f35>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Application-Development/Google-Leak-Finder-A-New-Tool-for-Finding-JavaScript-Memory-Leaks-125767
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00286-ip-10-171-10-70.ec2.internal.warc.gz
en
0.93432
457
3.375
3
The Mont Blanc project, an effort by a number of European supercomputing centers and vendors that seeks to create an energy-efficient supercomputer based on ARM processors and GPU coprocessors, has put together its third prototype. That is one more step on the path to an exascale system.

The third-generation machine, which is being shown off at the SC13 conference in Denver this week, is by far the most elegant one that the Mont Blanc project has created thus far. This prototype supercomputer actually bears the name of the project this time around, and was preceded by the Tibidabo and Petraforca clusters, which were based on a different collection of ARM processors and GPU accelerators.

Just because this design is elegant, don’t get the wrong idea, though. The Mont Blanc machine is still a prototype, cautions Alex Ramirez, leader of the Heterogeneous Architectures Research Group at BSC who heads up the Mont Blanc project. “In order to make this a production product, we would have to go through at least one more generation,” he says.

It stands to reason that the Mont Blanc project is waiting for the day when 64-bit ARM chips with integrated interconnects and faster GPUs are available before going into production. But for now, software can be ported to these prototypes and things can be learned about where the performance bottlenecks are and what reliability issues there might be.

The exact size of the Mont Blanc prototype cluster has not been determined yet, but Ramirez says it will have two or three racks of ARM-powered nodes. “It will be big enough to make scalability and reliability claims, but we are trying to keep the cost down on a machine that is not a production system,” he says.

The server node in the Mont Blanc system is based on the Exynos 5 system-on-chip made by Samsung, which is a dual-core ARM Cortex-A15 with an ARM Mali-T604 GPU on the die. The ARM CPU portion of the system-on-chip has about twice the performance of the quad-core Cortex-A9 processor used on the Petraforca prototype that was put together earlier this year. (There were actually two versions, but the second one is more important.) That machine used Nvidia Tesla K20 GPU coprocessors to test out how a wimpy CPU and a brawny GPU might be married. Specifically, the ARM processors, which were Tegra 3 chips running at 1.3 GHz, were put into a Mini-ITX system board with one I/O slot that was linked to a PCI-Express switch, which in turn had one GPU and one ConnectX-3 40 Gb/sec InfiniBand adapter card.

The dual-core Exynos 5 chip from Samsung is used in smartphones, runs at 1.7 GHz, and has a quad-core Mali-T604 GPU that supports OpenCL 1.1. It has a dual-channel DDR3 memory controller and a USB 3.0 to 1 Gb/sec Ethernet bridge. Each Mont Blanc node is a daughter card made by Samsung that has the CPU and GPU, 4 GB of memory (1.6 GHz DDR3), a microSD slot for flash storage, and a 1 Gb/sec Ethernet network interface. All of this is crammed onto a daughter card that is 3.3 by 3.2 inches and has 6.8 gigaflops of compute on the CPU and 25.5 gigaflops of compute on the GPU, for something around 10 watts of power. That works out to around 3.2 gigaflops per watt at peak theoretical performance.

The Mont Blanc system is using the Bull B505 blade server carrier and the related blade server chassis and racks to house multiple ARM server nodes. In this case, the blade carrier is fitted with a custom backplane that has a Broadcom Ethernet crossbar switch on it that links fifteen of these ARM compute nodes together.
Every blade in the carrier has an Ethernet bridge chip, made by ASIX Electronics, that converts the USB port into Ethernet and then lets it hook into that Broadcom switch in the carrier.

In this particular setup, says Ramirez, the location had some power density and heat density restrictions, so it was limited to four Bull blade server chassis. But the system is designed to support up to six chassis if the datacenter has enough power and cooling.

Each blade has fifteen nodes, and is a cluster in its own right. The blade delivers on the order of 485 gigaflops of compute and will burn about 200 watts. (Ramirez is estimating, because he has not yet been able to do the wall-power test; the machines came out of the factory only a few days prior to SC13.) That works out to 2.4 gigaflops per watt or so after the overhead of the network is added in.

The 7U blade chassis can hold nine carrier blades, for a total of 135 compute nodes. That works out to 4.3 teraflops in the aggregate per chassis at around 2 kilowatts of power, or 2.2 gigaflops per watt. With two 36-port 10 Gb/sec Ethernet switches to link the chassis together and 40 Gb/sec uplinks to hook into other racks, a four-chassis rack would deliver 17.2 teraflops of computing in an 8.2 kilowatt power envelope, or about 2.1 gigaflops per watt. With six blade chassis, you can get 25.8 teraflops into a rack. That is 810 chips in total per rack, by the way, with a total of 1,620 ARM cores and 3,240 Mali GPU cores.

This Mont Blanc effort will get very interesting next year, when many different ARMv8 processors, sporting 64-bit memory addressing and integrated interconnects, become available from a variety of vendors, including AppliedMicro, Calxeda, AMD, and maybe others like Samsung. Many of the components that had to be woven together in this third prototype will be unnecessary, and the thermal efficiency of the cluster will presumably rise dramatically once these features are integrated on the chips. These future ARM chips will also come with server features, such as ECC memory protection and standard I/O interfaces like PCI-Express. “There will be enough providers that at least one of them will have exactly the kind of part you want at any given time,” says Ramirez, a bit like a kid in a candy store.

The Mont Blanc project was established in October 2011 and is a five-year effort that is coordinated by the Barcelona Supercomputing Center (BSC) in Spain. British chip designer ARM Holdings, French server maker Bull, French chip maker STMicroelectronics, and British compiler tool maker Allinea are vendor participants in the Mont Blanc consortium. The University of Bristol in England, the University of Stuttgart in Germany, and the CINECA consortium of universities in Italy are academic members of the group, and the CEA, BADW-LRZ, Juelich, and BSC supercomputer centers are also members. So are a number of other institutions that promote HPC in Europe, including Inria, GENCI, and CNRS.

Mont Blanc was originally a three-year project with a relatively modest budget of €14.5 million, and it has secured an additional €8.1 million in funding from the European Commission to extend it two more years. The funds are not just being used to create an exascale design, but also to create a parallel programming environment that will run on hybrid ARM-GPU machines, as well as checkpointing software to run on the clusters.
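For readers who want to check the arithmetic, this short Python snippet reproduces the per-blade, per-chassis, and per-rack figures from the article's own inputs (small differences reflect the article's rounding):

```python
NODE_GFLOPS = 6.8 + 25.5      # Cortex-A15 pair + Mali-T604, per the article
NODES_PER_BLADE = 15
BLADES_PER_CHASSIS = 9

blade_gflops = NODE_GFLOPS * NODES_PER_BLADE
chassis_gflops = blade_gflops * BLADES_PER_CHASSIS

print(blade_gflops, blade_gflops / 200)              # ~485 GF, ~2.4 GF/watt
print(chassis_gflops / 1000, chassis_gflops / 2000)  # ~4.4 TF, ~2.2 GF/watt

rack_gflops = 4 * chassis_gflops                     # four-chassis rack
print(rack_gflops / 1000, rack_gflops / 8200)        # ~17.4 TF, ~2.1 GF/watt
```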
<urn:uuid:24d94851-184e-4eb8-bd14-0eb0f719261f>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/11/22/mont-blanc-forges-cluster-smartphone-chips/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00194-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947296
1,604
2.578125
3
Optical fiber measurement with an OTDR can be divided into three steps: parameter setting, data acquisition, and analysis. The measurement parameters must be set manually:

(1) Wavelength (λ): Because different wavelengths behave differently (in attenuation, sensitivity to bends, and so on), the test wavelength should generally match the transmission wavelength of the communication system. If the system operates at 1550 nm, test at 1550 nm.

(2) Pulse width: The longer the pulse width, the larger the dynamic measurement range and the longer the distance that can be measured, but the larger the dead zone in the OTDR trace. A short pulse injects less optical power but reduces the dead zone. Pulse width is usually expressed in nanoseconds (ns).

(3) Measurement range: This is the maximum distance over which the OTDR samples data, and the choice of this parameter determines the sampling resolution. The best measurement range is 1.5 to 2 times the length of the fiber link under test.

(4) Averaging time: The backscattered light signal is very weak, so statistical averaging is generally used to improve the signal-to-noise ratio. The longer the averaging time, the higher the signal-to-noise ratio.

(5) Fiber parameters: These include the refractive index n and the backscatter coefficient η. The refractive index affects distance measurement, while the backscatter coefficient affects the measured reflection and return loss.

After the parameters are set, the OTDR sends light pulses into the fiber link and receives the scattered and reflected light; the photodetector output is sampled to produce the OTDR trace, which is then analyzed to assess the quality of the fiber.

Experience and tips

(1) Simple assessment of fiber quality: Under normal circumstances, the slope of the OTDR trace for the fiber under test (a single cable or several cable sections) should be consistent. If the slope of a certain section is steeper, that section has higher attenuation. If the trace is irregular, with a fluctuating slope or a bent or arced shape, the quality of that fiber section has degraded seriously and does not meet communication requirements.

(2) Choice of wavelength and uni-/bi-directional testing: The 1550 nm wavelength tests over longer distances, and fiber is more sensitive to bending at 1550 nm than at 1310 nm. In actual cable maintenance, compare results at both wavelengths to get good test results.

(3) Clean the connectors: Before connecting, carefully clean every connector, including the OTDR's output connector and the connector under test; otherwise the insertion loss will be too large, the measurement will be unreliable, and excessive noise may prevent measurement altogether or even damage the OTDR. Avoid cleaning with alcohol or other refractive-index-matching liquids, because they can dissolve the adhesive inside the connector.

(4) Use of a launch fiber: A launch (additional) fiber, typically 300 to 2000 m long, connects the OTDR to the fiber under test. Its main roles are to deal with the front-end dead zone and to allow measurement of the end connectors. In general, the dead zone between the OTDR and the fiber under test is the largest. With a launch fiber in place, the front-end dead zone falls within the launch fiber, so the near end of the fiber under test falls in the linear region of the OTDR trace.
The insertion loss of a connector within the link is likewise measured through the launch fiber. To measure the insertion loss of the connectors at both ends of the link, add a launch fiber at each end.
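To see why the refractive-index setting matters so much, here is a Python sketch of the standard OTDR distance conversion; the group index and timing values below are illustrative assumptions, not values from the text:

```python
C = 299_792_458                    # speed of light in vacuum, m/s

def event_distance_m(round_trip_s: float, group_index: float = 1.4680) -> float:
    # The pulse travels out and back (divide by 2) at c/n inside the glass.
    return (C * round_trip_s) / (2 * group_index)

t = 98e-6                                       # a 98 microsecond round trip
print(event_distance_m(t))                      # ~10,007 m at n = 1.4680
print(event_distance_m(t, group_index=1.4670))  # a small n error shifts ~7 m
```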
<urn:uuid:d5c5e945-c8a9-437f-b373-d49972a1da06>
CC-MAIN-2017-04
http://www.fs.com/blog/how-to-measure-the-fiber-optic-network-by-using-otdr-testers.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00012-ip-10-171-10-70.ec2.internal.warc.gz
en
0.868343
760
2.703125
3
Information overload isn’t the problem we once thought it was. In fact, as the Internet Age got started, it renounced its entire heritage, and even changed its basic character. It’s as if Jenna Bush changed her last name to Gore and then transformed from a person into, say, a climate. In the case of info overload, the change tells us a lot about our current age.

The term “information overload” was coined as a follow-on to “sensory overload,” a term with a related and revelatory history. The first use of the actual phrase that I can find was in a paper by Donald B. Lindsley at a conference at Harvard Medical School in June 1958. But it entered popular use in the mid-1960s. For example, an article in The Nation in 1966 introduces the phrase as if it were unfamiliar to readers: “Recent experimentation, however, has confirmed the significance of the problem of sensory overload; that is, of an inability to absorb more than a certain amount of experience in a given time.” In 1968, in testimony to a Senate panel on drug experience, a witness used the term and again had to explain what it means. So, we can put the phrase’s rise into ordinary usage right at the beginning of the popular career of psychedelic drugs.

Sensory overload concept

The concept of sensory overload (not the phrase) is usually traced back to an article by Georg Simmel, “The Metropolis and Mental Life,” written in 1903, when we didn’t yet know about the joys of LSD and the Grateful Dead. Simmel points to the effect the onrush of sensations has on the mental life of city dwellers. “Man is a creature whose existence is dependent on differences, i.e., his mind is stimulated by the difference between present impressions and those which have preceded,” he wrote. Put us in a city, and we’ll cope with the onrush of “violent stimuli” by becoming more head than heart, by becoming indifferent, by becoming numb. This psychological observation was in sync with what people had seen happen during World War II. Soldiers were known to sleep through artillery attacks, so numbed were they by the overwhelming landscape of sensation.

Our “channel capacity”

Simmel’s article was translated into English in 1950, and it began to have an effect, in part because it rode on the back of the burgeoning new science of information. The brain started to look like one end of a communication system, connected by “channels” that could get overloaded the way a telephone wire could have so many inputs that all you got at the other end was noise. That’s exactly how Alvin Toffler explained the notion of information overload in his 1970 bestseller, Future Shock.

Suppose we were being overwhelmed not by mere sensations—the constant sounds of cars, the mingled smells of multiple sidewalk carts—but by information? Toffler is thinking of information here not as an information scientist would—sequences of bits with varying degrees of predictability—and not as mere sensation, but as small, intelligible facts about our world. In explaining information overload, however, Toffler uses the concepts of information science: The amount of information we’re given in the modern world can exceed our “channel capacity” and our brain’s processing power.

When information overload started off, it created the same sorts of difficulties as sensory overload: Info overload was a psychological syndrome in which we lose our ability to act rationally. Overload us with information and we won’t be able to make good decisions, we were cautioned.
“Sanity itself thus hinges” on avoiding information overload, Toffler warns.

But that’s not how we think about information overload now, even though the amount of information far outstrips what Toffler feared would unhinge us. (It’s actually quite amusing to read the research from the mid-1970s that thought consumers faced with 16 different fields of information on the labels of competing products would suffer from information overload. Sixteen? Hahahaha.)

We now think of information overload as a social issue, not a psychological one. We do not worry about losing our minds so much as not being able to find the information we need.

This is a remarkable story of adaptation. What we thought of as a predicament that would destroy our ability to make rational decisions and might even drive us mad has now become simply our environment. It’s where we live. Rather than fleeing from the overload of information, our concern is that we’re not getting enough of it. We have adapted well. Or, perhaps, gone mad.
<urn:uuid:429b0d93-4fa6-4a6a-9e87-ae1c6e06e09d>
CC-MAIN-2017-04
http://www.kmworld.com/Articles/Column/David-Weinberger/Bringing-on-the-info-overload-60750.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00012-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964807
992
2.671875
3
Learn about what "records" are and how record management can help you.

A record is a general term that refers to the distinct items created within Service Desk when technicians add incidents, problems, changes, releases, or knowledge articles. Though each type of record has a different function and different information, they all use similar features and an interface that contains the same components. All records include a similar format and features (such as tasks). The following items are all considered Service Desk records:

Incidents

An incident is a single event, disturbance, or query that affects the quality of a service to a customer. When a customer is affected by an incident which is then resolved, the service for that customer is restored to normal levels. Incidents can be logged by technicians, and administrators can allow customers to submit incidents as well using the Customer Portal. Because incident management focuses on getting the customer back on track as quickly as possible, fixes for incidents are often "band-aid" fixes and do not always allow the underlying root cause to be further explored and resolved. See the following articles to learn more about incidents:

Problems

A problem is a recurring issue that affects the quality of a service to a customer, such as a technical glitch, something unclear about the user interface, or the absence of a feature or certain information repeatedly requested by customers. Problems are typically incidents that continue to arise, sometimes despite a short-term fix. When a customer is affected by an incident which is resolved with a "quick fix" or workaround, the service for that customer will likely be impaired again because the problem is still active and waiting to produce the recurring incident again and again. See the following articles to learn more about problems:

Changes

Changes – short for ITIL's term "Request for Change (RFC)" – refers to any changes being made to the service or any of its components (including configuration items). Change records allow technicians to not only justify why time and money is being spent on the change effort, but also to track and streamline the effort and any other components that might be related. Changes can stem from many sources, such as fixes for incidents or problems, new or improved functionality within the service, additions or modifications to configuration items, or updates to the service (e.g., new compliance obligations). Service Desk change forms are designed to enable the following development process: planning, approval, building, and testing. See the following articles to learn more about changes:

Releases

Releases are the implementation of incidents and changes into services. When incidents are resolved and changes are completed, they are ready to be deployed to customers. Releases provide a way of tracking when incidents and changes are implemented and who is responsible for aspects of the release process. Service Desk release forms are designed to enable the following development process: planning, approval, testing, and deployment. See the following articles to learn more about releases:

Knowledge articles

Knowledge articles are articles that are written and maintained by Service Desk technicians and are a way of providing your staff and/or customers with a clear and common understanding of your services. They can be internal only or made available to your customers via the Customer Portal, allowing you to keep your staff and customers in the know by sharing insights and experiences of your services.
See the following articles to learn more about knowledge articles:
<urn:uuid:e7c85042-ce3c-4af9-9efb-688f5a086a19>
CC-MAIN-2017-04
http://support.citrixonline.com/en_US/GoToAssistServiceDesk/help_files/G2ASD150001?title=Use+Records+and+Record+Management
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00526-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953765
665
2.671875
3
Cyber security professionals and black-hat hackers are constantly engaged in a battle of wits where the stakes can be high, and the demand for cutting-edge intelligence is ever present. As engaging as this situation may be for savvy security professionals, the reality is that even the smartest IT department can't curb malicious hacking on its own. A comprehensive cyber security effort requires the cooperation of the entire organization. The decisions and behaviors of each person on a company's network can have a major impact on the success of security efforts. So it's crucial that all employees, across all departments, are on board with an organization's cyber security best practices.

Like it or not, though, cyber security is a topic that many employees just don't understand. Achieving company-wide compliance with security standards and ensuring employees' online behaviors are secure are among an organization's greatest IT challenges. This is mainly because non-technical staff members often don't perceive security measures in the same way IT does. Rather than a necessary and important step in thwarting malware or fraud, these measures are often seen as an inconvenience and an impediment to performing regular responsibilities.

Art Gilliland, general manager of enterprise security products at HP, observes that most average users don't consider online security the same way they consider physical security. While there are certain common-sense security behaviors employees will likely always apply offline (e.g., not leaving a wallet unmonitored in a public space), they don't apply the same reasoning to security situations in a digital environment.

This creates a learning opportunity for many users on a typical company's network. And by approaching education in the right way, an organization can make strides in protecting its sensitive corporate data and the personal data of its staff. Making connections between physical and digital security, using real-life examples of data breaches and their consequences, and explaining the reasoning behind security practices – in plain language – are a few techniques that can make a big difference in how successful corporate cyber security training can be.

While informal discussions and well-publicized cyber security best practices can help educate employees, formalized training can go a long way in improving an organization's security posture. Lunarline's School of Cyber Security offers a wide variety of courses that can give employees a better understanding of cyber security measures and their importance. Our certified educators are seasoned experts with years of practice in cyber security. They understand not only how to keep a company's data and IT resources secure, but also how to motivate everyday users to take responsible actions online. Lunarline provides personalized training specifically for your security policies, on site or at our school in Arlington, Virginia. For more information about our course offerings and how we can tailor them for your company, please visit SchoolofCyberSecurity.com.
<urn:uuid:c6a250aa-1f54-4ad0-a337-4d0dbb2cba4a>
CC-MAIN-2017-04
https://lunarline.com/blog/2014/09/the-talk-why-you-need-to-discuss-cyber-security-with-your-staff/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00526-ip-10-171-10-70.ec2.internal.warc.gz
en
0.934695
591
2.5625
3
Malware 101: An IT primer on malicious software

This feature first appeared in the Winter 2015 issue of Certification Magazine.

Malware is perhaps the most dangerous threat to the security of the average computer system. Research by Microsoft recently estimated that 17.8 percent of computers worldwide were infected by malware during a three-month period. That is an astonishing number, and it underscores the clear and present danger posed by malicious software on the modern internet. Information technology professionals must educate themselves about the risks posed by malware and use that knowledge to defend their organizations against the malware threat. In this article, we provide background information on malware and describe ways that you can create a defense-in-depth approach to protecting your computing assets.

What is malware?

Malware is a shorthand term for "malicious software." While software developers normally create programs for a useful purpose, such as editing documents, transferring files, or browsing the web, some have more malicious intent in mind. These developers create malware that they design to disrupt the confidentiality, integrity, or availability of information and computing systems. The intent of malware varies widely — some malware seeks to steal sensitive information, while other malware seeks to join the infected system to a botnet, where the infected system is used to attack other systems.

The three major categories of malware are viruses, worms and Trojan horses. They mainly differ in the way that they spread from system to system.

Viruses are malware applications that attach themselves to other programs, documents, or media. When a user executes the program, works with the document, or loads the media, the virus infects the system. Viruses depend upon this user action to spread from system to system.

Worms are stand-alone programs that spread on their own power. Rather than waiting for a user to inadvertently transfer them between systems, worms seek out insecure systems and attack them over the network. When a worm detects a system vulnerability, it automatically leverages that vulnerability to infect the new system and install itself. Once it establishes a beachhead on the new system, it uses system resources to begin scanning for other infection targets.

Trojan horses, as the name implies, are malware applications that masquerade as useful software. An end user might download a game or utility from a website and use it normally. Behind the scenes, the Trojan horse carries a malicious payload that infects the user's system while they are using the program and then remains present even after the host program exits.

Hackers create new malware applications every day. Many of these are simple variants on known viruses, worms and Trojan horses that hackers alter slightly to avoid detection by antivirus software. Some viruses, known as polymorphic viruses, actually modify themselves for this purpose. You can think of polymorphism as a disguise mechanism. Once the description of a virus appears on the "most wanted" lists used by signature-detection antivirus software, the virus modifies itself so that it no longer matches the description.

Malware and the Advanced Persistent Threat

Recently, a new type of attacker emerged on the information security horizon. These attackers are groups known as Advanced Persistent Threats (APTs).
APTs differentiate themselves from typical hackers because they are well-funded and highly talented. The typical APT receives sponsorship from a government, a national military, or an organized crime ring.

While normal attackers may develop a malware application and then set it free, seeking to infect any system vulnerable to the malware, APTs use a much more precise approach. They carefully select a target that meets their objectives, such as a military contractor with sensitive defense information or a bank with sensitive customer records. Once they've identified a target, they study it carefully, looking for potential vulnerabilities. They then select a malware weapon specially crafted to attack that particular target in a stealthy manner.

The advanced nature of APTs means that they have access to malware applications that are custom-developed and unknown to the rest of the world. These attacks, known as zero-day attacks, are especially dangerous because signature-based detection systems do not know they exist and are unable to defend against them.

One of the most well-known examples of an APT in action was an attack in 2010 using malware termed "Stuxnet." In this attack, believed to have been engineered by the U.S. and Israeli governments, malware infected and heavily damaged an Iranian uranium enrichment plant. Analysis of the malware by security researchers later revealed that it was very carefully developed by talented programmers with access to inside information about the enrichment plant.

Defending against malware

Organizations seeking to defend themselves against malware attacks should begin by ensuring they have active and updated antivirus software installed on all of their computing systems. This is a basic control, but one where organizations often fall short. There are many quality signature-detection products on the market that will help protect systems against known threats. This base level of protection will easily defend against the majority of attacks.

Of course, these signature-detection systems are not effective against zero-day attacks. That's where more advanced systems come into play. Businesses seeking defense against APT-style attacks should consider implementing advanced malware defense techniques, such as application detonation and browser isolation. Application detonation systems "explode" new software in a safe environment and observe it for signs of malicious activity. New applications are only allowed on endpoint systems after passing this test.

The most common source of malware infection is unsafe web browsing. Users visit a website containing malware and inadvertently download a file containing malicious code that installs on their system. Education and awareness programs can help reduce this threat, but browser isolation systems go a step further. In this approach, users browse the web through an isolation appliance located outside of the network firewall. The isolation appliance handles all of the web processing and presents the user with a safely rendered version of the website. Any code execution takes place on the appliance and never reaches the end user's system, isolating it from the malware.

If you are unlucky enough to experience a malware infection, you have a few options at your disposal. If it is a straightforward infection, your antivirus software may be able to completely resolve it. If you experience more complex symptoms, you may need to either rebuild the system from scratch or call in a malware removal specialist.
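To make the signature-detection idea (the "most wanted list" described above) concrete, here is a toy Python scanner. It is illustrative only: real engines use far richer signature formats plus heuristics, and the only "signature" included is a prefix of the harmless EICAR antivirus test string.

```python
KNOWN_SIGNATURES = {
    # A prefix of the harmless EICAR test string, so this example
    # contains nothing malicious.
    "EICAR-Test-File": b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR",
}

def scan(path: str) -> list:
    """Return the names of known signatures found in the file."""
    with open(path, "rb") as handle:
        data = handle.read()
    return [name for name, sig in KNOWN_SIGNATURES.items() if sig in data]

# A polymorphic virus re-encodes its own body so the byte signature above
# no longer matches, which is why defenders pair signature detection with
# behavioral techniques such as application detonation.
```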
Malware and you

Would you like to become a malware specialist? If you plan to design or implement a malware defense program for your organization, many of the basic security certifications may come in handy. The Security+, SANS GIAC Security Essentials and CISSP certifications all offer basic training in malware prevention, removal and analysis.

If you're looking to dive more deeply into malware studies, consider pursuing the SANS GIAC Reverse Engineering Malware (GREM) certification. This certification program prepares individuals to perform advanced analysis of malware for forensic investigations, incident response and system administration. Candidates for the credential must successfully pass a two-hour, 75-question examination with a score of 70.7 percent or higher.

Malware remains the most common threat to cybersecurity today. Thousands of viruses, worms and Trojan horses exist on the Internet, seeking to quickly pounce on vulnerable systems. You can defend your organization by educating yourself on the threat, installing antivirus software and considering the deployment of advanced malware defense mechanisms.
<urn:uuid:e64d7844-ac61-4e4d-b96e-3872916caf38>
CC-MAIN-2017-04
http://certmag.com/malware-101-primer-malicious-software/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00250-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933648
1,511
3.59375
4
Cloud storage, especially object storage, is often marketed by touting its "durability," with many providers boasting eleven or thirteen "nines," in other words 99.999999999% reliability. It sounds great—as close to 100% reliable as you can get. But what is durability in relation to storage, and do you really need those eleven nines?

All storage resides on an underlying medium, in most cases hard disk drives and in some cases flash storage arrays. Regardless of where the media is located within the data center, different technologies can access it, split it up, and share it among different hosting products. You can read more about types of cloud storage to see how the primary platforms of file, block, and object differ.

Durability is a measurement of the tiny errors that occur in files due to this underlying media. When you write, read, and rewrite gigabytes, terabytes, and petabytes of information to the same drive, one or more individual bytes can get corrupted or lost. Not every service provider even offers a durability rating, as it can be difficult to measure and guarantee.

A more important question to ask your cloud hosting provider is how they are protecting against data loss generally. What technologies are in play? What are your odds of recovering data? How can you tie in backup?

For object storage, which is designed around storing massive quantities of files, especially media-rich files like documents, images, and video, durability becomes especially important. Once you reach the petabytes, dropping even a single nine of durability, say from 99.999999999% to 99.99999999%, might mean losing 90 or 200 extra files in the case of data loss.

One method of fighting byte loss is erasure coding. When a file is copied to cloud storage, erasure coding splits it into fragments and adds redundant parity fragments. This means that when part of a file is lost, it can be reconstructed from the fragments spread across the entire storage area. So instead of worrying about the number of nines, which is hard to prove anyway, ask if erasure coding or another protection method is available to ensure the availability of your data at all times. Erasure coding may not be available for all forms of cloud storage, however.

Deduplication is another way that copies can be kept without storing complete duplicate versions of every file for every backup. The system only copies newly changed files to your backup, keeping the storage footprint down. Unlike erasure-coded data, a corrupted deduplicated backup cannot be reconstructed, but a deduplicated backup of block or file storage is a good way to hedge your bets against data loss. A full copy is also faster to restore than one rebuilt from erasure-coded storage.

When planning your cloud storage, the vital questions become "What type of storage is best suited for my environment?" and "Can this data stand reduced durability?" Critical business data that you need for daily operations should absolutely have a full backup, and preferably multiple backups in geographically separated data centers. If you have lower file volume, in the gigabytes or a few terabytes, durability is much less important, as a few lost bytes and corrupted files will be proportionally much smaller than in a petabyte or exabyte environment.
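Back-of-the-envelope math makes the nines concrete. This Python sketch illustrates the scaling argument only; it is not any provider's published durability model:

```python
def expected_losses_per_year(object_count: int, nines: int) -> float:
    loss_probability = 10.0 ** -nines     # eleven nines -> 1e-11 per object
    return object_count * loss_probability

for nines in (9, 10, 11):
    print(nines, expected_losses_per_year(1_000_000_000, nines))
# Across a billion stored objects: 9 nines -> ~1.0 expected losses per year,
# 10 nines -> ~0.1, 11 nines -> ~0.01. Each extra nine cuts loss tenfold.
```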
<urn:uuid:d69986b2-5f47-4eea-8805-8436d9400c72>
CC-MAIN-2017-04
https://www.greenhousedata.com/blog/what-is-cloud-storage-durability-do-you-really-need-11-nines
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00250-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947766
687
2.796875
3
VXLAN – Overlay protocol In my last blogpost, I elaborated on EVPN as the next-generation datacenter interconnect. Within datacenter interconnect, VXLAN works as an overlay protocol. What is VXLAN, and what application areas are there? In this blog, I will give you the answers to those questions. VXLAN stands for Virtual Extensible LAN. VXLAN is an encapsulation technique in which layer 2 Ethernet frames are encapsulated in UDP packets. VXLAN is a network virtualization technology. When devices communicate within a Software Defined Datacenter, a VXLAN tunnel is set up between those devices. Those tunnels can be set up on both physical and virtual switches. The tunnel endpoints, known as VXLAN Tunnel EndPoints (VTEPs), are responsible for the encapsulation and de-encapsulation of VXLAN packets. Devices without VXLAN support are connected to a switch with VTEP functionality. The switch will provide the conversion from and to VXLAN. Each VXLAN network is identified by a VXLAN Network Identifier. This VNI number is similar to a VLAN ID: it identifies the layer 2 Ethernet segment. Virtual machines that share a VNI are able to communicate with each other on layer 2. In a Software Defined Data Center, we use an underlay and an overlay network. VXLAN is a technique used in the overlay network. The underlay network, which may be an L2 or L3 network, connects the various components to each other, and the VXLAN tunnels are set up on top of it. Modern data centers are also often equipped with a reliable and scalable L3 Clos architecture on which the VXLAN tunnels can be transported. Since the original L2 Ethernet frames are encapsulated in UDP packets, roughly 50 bytes of overhead are added, so it is of great importance that the MTU of the underlay is raised, commonly to 1600 bytes, instead of the standard 1500. Without this adjustment, full-size inner frames no longer fit once encapsulated and will be dropped. Which problem does VXLAN solve? VXLAN is used in two application areas: within the Software Defined Data Center and within data center interconnects. VXLAN has several important advantages: - VXLAN uses 24-bit VNIs. Thus, it is possible to define 16 million different VNIs, which makes the technique far more scalable than 802.1Q with its maximum of 4096 VLANs; - The solution is ideal for a multi-tenant environment. Through the use of VXLAN, the same IP ranges can be reused in different VNIs; - By using network virtualization and an underlay network, it is possible to carry out all network adaptations quickly from software without having to adjust all physical network components; - With VXLAN, layer 2 networks can be stretched across L3 underlay networks. By using network virtualization in combination with VXLAN, scalable, multi-tenant networks can be built, which makes VXLAN a vital link in the Software Defined Data Center.
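To make the encapsulation concrete, here is a minimal sketch that packs the 8-byte VXLAN header defined in RFC 7348 in front of an inner Ethernet frame. The VNI value and dummy payload are placeholders; a real VTEP would then wrap the result in outer UDP (destination port 4789), IP, and Ethernet headers:

```python
import struct

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header (RFC 7348) to an inner L2 frame."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    flags = 0x08  # 'I' bit set: the VNI field is valid
    # 1 flags byte, 3 reserved bytes, then 24-bit VNI + 1 reserved byte.
    header = struct.pack("!B3xI", flags, vni << 8)
    return header + inner_frame

# Hypothetical example: encapsulate a dummy 60-byte inner frame into VNI 5000.
packet = vxlan_encap(b"\x00" * 60, vni=5000)
print(packet[:8].hex())  # the VXLAN header bytes
```

The 24-bit VNI field is also where the 16 million segment limit mentioned above comes from: 2^24 = 16,777,216.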
<urn:uuid:6bd68745-6066-450e-9387-075d95bd7745>
CC-MAIN-2017-04
https://www.securelink.be/vxlan-overlay-protocol/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00396-ip-10-171-10-70.ec2.internal.warc.gz
en
0.903434
703
3.359375
3
Infrared Networking Basics One of the coolest but least understood native Windows 2000 features is infrared networking. Infrared networking allows you to perform such tasks as transferring files between machines and printing to infrared-enabled printers without the need for wires. In spite of this capability, very few of the Windows 2000 books I've read even mention infrared support. In this article series, I'll explain how you can take advantage of Windows 2000's infrared networking capability. I'll begin by discussing how infrared networking works. I'll then explain how to install, configure, and work with Windows 2000's infrared support. How It Works Before you can really appreciate Windows 2000's infrared networking support, it's necessary to understand how infrared networking works. Although some aspects of infrared networking are consistent with traditional networking, many aren't. To see why this is the case, let's compare infrared networking with traditional networking. A traditional network requires a minimum of two PCs that are equipped with network cards and are attached to a communications medium. Each of these PCs must also have a unique computer name for identification purposes and share a common protocol with the other PCs on the network. In infrared networking, this definition is revised a bit. Whereas traditional networking requires a minimum of two computers, infrared networking is usually limited to only two computers. Actually, they don't both have to be computers--one of the devices could be a pocket PC or a printer. In spite of the fact that there are usually only two devices involved in infrared communications, a computer name and common protocol are still required. The computer name is required in case multiple infrared devices are present in a given area. The computer name allows the devices to determine which devices should be communicating. In the case of infrared networking, the infrared port takes the place of the network card. (I'll discuss the infrared port in more detail in a future article.) As far as a communications medium goes, whereas traditional networks use copper wire or fiber, infrared networks don't require a physical connection between the two devices. The only requirement from a connection standpoint is that a direct line of sight exists between the two devices. The Need for a Protocol A shared protocol is required even in infrared networking because of the nature of infrared communications. To see why this is the case, it's necessary to understand how infrared communications work on a more basic level. At its simplest, infrared communication involves using an infrared emitter to send pulses of infrared light to an infrared receiver. Infrared light is used instead of other types of light that fall into the spectrum of visible light, because it's less susceptible to interference than visible light. An example of very simple infrared communication is the remote control for your television or stereo. Such a remote contains an infrared emitter. When you press a button on the remote, it emits pulses of infrared light, which the infrared receiver on the television or stereo receives. In the case of a remote control, a chip inside the remote causes the infrared emitter to flash a different pattern of invisible light for each button pressed. If you hold down a button, the flash pattern repeats. It's possible to watch an infrared remote function; certain types of digital cameras can record infrared light. In Figure 1, you can see a stereo remote.
The image on the left shows a remote with the infrared emitter turned off, as would occur during idle times or between pulses of light. However, the image on the right shows what it looks like when the infrared emitter emits a pulse of light. A comparable chip inside the device you're controlling is set up to look for the various patterns of flashing light. If the device detects a flash pattern that it recognizes, the action associated with that pattern is performed. If the flash pattern is unrecognized, it's ignored. This prevents the device from functioning erratically in the presence of other devices' remotes. I've given this long explanation of how your television remote works because PC-based infrared communications work on the same basic principle. Figure 2 shows a portable PC's infrared port. The left image shows the infrared emitter in the off position, and the image on the right shows the infrared emitter emitting a pulse of infrared light. As you can see, computing devices tend to emit much brighter bursts of infrared light than remotes. As you can see from the two examples I've provided, infrared communications work by sending and receiving pulses of infrared light. These pulses consist of periods of light and darkness. In the case of a TV or stereo, these pulses are nothing more than recognized patterns. However, in the case of computing devices, the pulses are binary code. When the infrared emitter is on, it's essentially sending a binary one. Likewise, when the infrared emitter is dark, it's considered to be sending a binary zero. This is where the need for infrared-based protocols comes in. The protocol regulates the timing of the infrared signal. It makes sure that the receiving device is checking the on/off status of the emitter at the same frequency the emitter intends. For example, if the emitter sent pulses at 4-millisecond intervals, but the receiver was expecting 2-millisecond pulses, then a single pulse of light could be mistaken for two pulses. The protocols must also negotiate things such as packet length (where one segment of binary code ends and the next one begins). For example, suppose the sender sent the following two packets: 10101100 00110010. Without proper timing, the receiver might pick up part of both packets and think it was a single packet. For example, if the receiver picked up the last four bits of the first packet and the first four bits of the second packet, it would receive the code as 11000011. As you can see, this is much different from the intended message. In Part 2 (Installing and Configuring Infrared Support), I'll continue the discussion by explaining more of the logistics that must be present for infrared communications between two computing devices. I'll then go on to discuss the way that Windows 2000 supports infrared communications. Brien M. Posey is an MCSE who works as a freelance writer. His past experience includes working as the director of information systems for a national chain of health care facilities and as a network engineer for the Department of Defense. Because of the extremely high volume of e-mail that Brien receives, it's impossible for him to respond to every message, although he does read them all.
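The framing pitfall described above is easy to demonstrate in a few lines of code. This sketch is illustrative only (real infrared protocols such as IrDA negotiate timing and framing far more carefully); it shows how a receiver that starts sampling four bits out of phase turns two valid packets into a byte that was never sent:

```python
# Two 8-bit packets the sender transmits back to back.
packets = ["10101100", "00110010"]
stream = "".join(packets)

# A correctly framed receiver recovers the original packets.
correct = [stream[i:i + 8] for i in range(0, len(stream), 8)]

# A receiver that starts reading 4 bits late misframes the stream,
# combining the tail of packet 1 with the head of packet 2.
misframed = stream[4:12]

print(correct)    # ['10101100', '00110010']
print(misframed)  # '11000011' -- a "packet" the sender never transmitted
```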
<urn:uuid:f80e1cad-b0cb-4cc4-ab56-453bbf066afb>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/print/netsysm/article.php/624891/Infrared-Networking-Basics.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00030-ip-10-171-10-70.ec2.internal.warc.gz
en
0.93942
1,362
2.578125
3
An artificially intelligent system developed by scientists at Lawrence Technological University converts audio into visual representations called spectrograms and analyzes them. Through identification of about 3,000 different traits in the images, the system accurately placed when a particular song by The Beatles was recorded, all based on the band's long-term style progression. The album Let It Be was the band's last, but the AI system recognized that the songs contained on that album were recorded earlier than those found on Abbey Road. Similarly, the system can see that the band's fifth studio album, Help!, was recorded before their sixth album, Rubber Soul. Most people can't do that without prior knowledge, Assistant Professor Lior Shamir told Phys.org. While the project is perhaps just a passing novelty to fans of music and computers, the work also represents an impending future where machines can replace not just humans who perform menial tasks, but those who rely on intuition, too.
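The core preprocessing step, turning audio into a spectrogram whose visual features can then be measured, can be sketched in a few lines. This is a generic illustration, not the researchers' actual pipeline, and the WAV file name is a placeholder:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

# Load an audio track (placeholder path).
rate, samples = wavfile.read("song.wav")
if samples.ndim > 1:
    samples = samples.mean(axis=1)  # mix stereo down to mono

# Short-time Fourier analysis: frequency content over time.
freqs, times, sxx = spectrogram(samples, fs=rate, nperseg=1024)

# Log-scale the power, as is typical before extracting image-style features.
sxx_db = 10 * np.log10(sxx + 1e-12)
print(sxx_db.shape)  # (frequency bins, time frames)
```

Feature extraction would then treat `sxx_db` like any other image, measuring texture, edge, and distribution statistics across albums.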
<urn:uuid:4ba729ac-11a7-427c-abb6-f4f98134118c>
CC-MAIN-2017-04
http://www.govtech.com/question-of-the-day/Question-of-the-Day-for-072514.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00516-ip-10-171-10-70.ec2.internal.warc.gz
en
0.966749
202
3.109375
3
'Nanoprobes' For Better Nanobiotechnology, a specialized field applying nanotechnology to biotechnology, is an emerging area within nanotech. Nanobiomaterials specifically could significantly impact the medical sector. The US Department of Energy's Oak Ridge National Laboratory is amongst a host of organizations exploring nanotech for various applications, including medical uses. A research group led by Tuan Vo-Dinh has developed a novel nanoprobe that has the potential to be employed in various applications. Commenting about the research work, Vo-Dinh says in a press statement, "The significance of this work is that we are now able to perform direct analysis of samples--even dry samples--with no preparation of the surface." Based on a light scattering technique, these nanoprobes could be utilized to detect and analyze drugs, chemicals and even explosives--at a single-molecule level. Vo-Dinh continues, "Also, the small scale of the nanoprobe demonstrates the potential for detection in nanoscale environments, such as at the intracellular level." The nanoprobe has been developed by tapering an optical fiber to a tip measuring a minuscule 100 nm. Additionally, a very thin coating of silver nanoparticles helps to enhance the Raman scattering of the light. (Raman scattering is the inelastic scattering of laser light by a sample: a small fraction of the scattered photons shift in energy by amounts that match the sample's molecular vibrations.) The scattered light therefore carries vibrational signatures unique to each sample, which can be characterized and identified. The silver nanoparticles support rapid oscillations of electrons that amplify the local electromagnetic field, thus enhancing the Raman signal--a phenomenon commonly known as surface-enhanced Raman scattering (SERS). These SERS nanoprobes produce higher electromagnetic fields, enabling higher signal output--eventually resulting in accurate detection and analysis of samples. This project has been funded by the Department of Energy's Office of Biological and Environmental Research and the Laboratory Directed Research and Development program. Environmental monitoring, intracellular sensing and medical diagnostics are some of the immediate application capabilities. Apart from these, ultrasensitive detection tools could be developed based on this research work. Tuan Vo-Dinh, Principal Investigator, Advanced Biomedical Science and Technologies Group, Life Sciences Division, Oak Ridge National Laboratory, Box 2008, Oak Ridge, TN 37831
<urn:uuid:6285c0f6-f1b1-414d-9e68-3a1778930b23>
CC-MAIN-2017-04
http://www.growthconsulting.frost.com/web/images.nsf/0/35ECC41521A8C41165256F4D001639BC/$File/TI%20Alert%20-%20Medical%20Devices%20NA.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00424-ip-10-171-10-70.ec2.internal.warc.gz
en
0.861909
557
2.875
3
It's hard to process yesterday's deaths of 19 firefighters in Arizona. The tragedy is so stark an outlier that most states haven't seen that many firefighter deaths due to wildfire in their entire histories. But there is one worrisome trend: fires are getting bigger and often deadlier. The National Interagency Fire Center tracks wildfire incidents, scale, and damage for the United States. Included in that data is a list of firefighter fatalities through 2011, broken down by cause and county. No state has seen more firefighter deaths than California, which had 324 through that year. It is one of only 11 states whose cumulative totals exceed the 19 deaths Arizona saw yesterday — Arizona itself among them. That largely tracks with this interesting map of fires by location. Most fires occur in the Southwest; many in California. Over time, the number of firefighter deaths has been consistently low. Only two incidents on the NIFC's list were more deadly than yesterday's: a 1910 fire in Idaho and a 1933 blaze in Los Angeles. (This is only wildfires, of course. September 11th was the deadliest day for firefighters overall, and total firefighter deaths are declining.) But, as the graph below shows, there's been a slight uptick in deaths per year over the past few decades:
<urn:uuid:baeda0aa-c48f-4452-aee1-4afe56c8f177>
CC-MAIN-2017-04
http://www.nextgov.com/defense/2013/07/troubling-data-behind-americas-growing-wildfires/65882/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00424-ip-10-171-10-70.ec2.internal.warc.gz
en
0.982167
255
3.015625
3
The Windows fan club likes to point out that Windows is far more popular than Linux. The reason for that has nothing to do with quality and everything to do with monopoly. Nothing shows that better than the semi-annual TOP500 list of the world's most powerful supercomputers. In the latest ranking, where performance is everything and nothing else matters, Windows is stalled out at the starting line, and Linux is lapping the field. Specifically, Linux has increased its already substantial supercomputer market share to 88.6%. Linux is followed by hybrid Unix/Linux systems with 5.8%; Unix, mostly IBM's AIX, with 4.4%; and running close to last, Windows HPC (high-performance computing) with 1%. Only BSD, with a single representative on the list, trails Windows. In the lead at the number 1 spot with 1.105 petaflop/s (quadrillions of floating point operations per second) is the Los Alamos National Laboratory Roadrunner system by IBM. Roadrunner was the first system to break the petaflop/s Linpack barrier, in June 2008. How fast is that? According to the Department of Energy, which paid for the Roadrunner, "One petaflop is 1,000 trillion operations per second. To put this into perspective, if each of the 6 billion people on earth had a hand calculator and worked together on a calculation 24 hours per day, 365 days a year, it would take 46 years to do what Roadrunner would do in one day." And, of course, the Roadrunner is fueled by Linux. In fact, all the top ten run Linux. The hardware the supercomputers run on is quickly shifting over to multi-core processors. In this latest ranking, only four supercomputers still use single-core CPUs. Quad-core processor-based systems are found in 383 systems, while 102 systems are using dual-core processors. In addition, four supercomputers are now using the nine-core IBM Cell processor that also powers Sony's PlayStation 3. Yes, that's right: top-of-the-line supercomputers use the same processor found in PlayStations. Neat, isn't it? Most supercomputer processors, though, come from Intel. To be exact, 399 systems (79.8%) are Intel-based. IBM Power processors come in second with 55 systems (11%), and the AMD Opteron family is third with 43 systems. Regardless of the processor, one thing isn't just staying the same, it's actually growing, and that's Linux in supercomputers. When being the fastest of the fast is all that matters, Linux isn't just winning, it's extending its lead.
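The Department of Energy's hand-calculator comparison checks out as rough arithmetic. The sketch below simply works backward from the quoted figures; the roughly ten operations per second per person is the implied rate, not a number stated in the article:

```python
PFLOP = 1e15                       # operations per second
people = 6e9
seconds_per_year = 365 * 24 * 3600

ops_in_one_roadrunner_day = PFLOP * 24 * 3600

# Rate each person would need to sustain for 46 years to match one day:
rate = ops_in_one_roadrunner_day / (people * 46 * seconds_per_year)
print(f"{rate:.1f} operations per second per person")  # roughly 10
```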
<urn:uuid:74f3c189-86aa-40a8-bd8c-0be63d6d5c60>
CC-MAIN-2017-04
http://www.computerworld.com/article/2482078/high-performance-computing/linux--it-doesn-t-get-any-faster.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00058-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946927
560
2.734375
3
A group of researchers from Technion and Tel Aviv University have demonstrated new and unexpected ways to retrieve decryption keys from computers. Their research is "based on the observation that the 'ground' electric potential in many computers fluctuates in a computation-dependent way." "An attacker can measure this signal by touching exposed metal on the computer's chassis with a plain wire, or even with a bare hand. The signal can also be measured at the remote end of Ethernet, VGA or USB cables," they explained. "Through suitable cryptanalysis and signal processing, we have extracted 4096-bit RSA keys and 3072-bit ElGamal keys from laptops, via each of these channels, as well as via power analysis and electromagnetic probing." Their attacks have been leveraged against GnuPG, and they used several side channels to do it. They measured fluctuations of the electric potential on the chassis of laptop computers by setting up a wire connected to an amplifier and digitizer. They also found a way to measure the chassis potential via a cable with a conductive shield that is attached to an I/O port on the laptop. Most surprisingly, the signal can also be measured after it passes through a human body. "An attacker merely needs to touch the target computer with his bare hand, while his body potential is measured," they explained, adding that the measuring equipment is then carried by the attacker. Finally, they also succeeded in extracting the keys by measuring the electromagnetic emanations through an antenna and the current draw on the laptop's power supply. The bad news is that each of these attacks can be performed easily and quickly without the user being any the wiser (the researchers included realistic, everyday scenarios in the paper). More information about the attacks can also be found here.
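At the heart of such side-channel attacks is simple statistics: a hypothesis about a secret value either does or does not correlate with the measured signal. The toy sketch below is not the researchers' cryptanalysis; it only illustrates correlation-based recovery of a single key bit from simulated noisy traces, with every signal parameter invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)
secret_bit = 1                                 # what the attacker wants

# Simulate 500 noisy measurements: the leakage adds a small bias to the
# trace whenever the secret bit interacts with the known per-trace input.
operations = rng.integers(0, 2, size=500)      # known inputs per trace
leakage = 0.5 * (operations & secret_bit)      # data-dependent component
traces = leakage + rng.normal(0, 1.0, size=500)

# Correlate the traces against each key-bit hypothesis; the true
# hypothesis predicts the leakage and therefore correlates best.
scores = []
for guess in (0, 1):
    predicted = operations & guess
    if predicted.std() == 0:
        scores.append(0.0)                     # constant prediction: no signal
    else:
        scores.append(abs(np.corrcoef(predicted, traces)[0, 1]))
print("recovered bit:", int(np.argmax(scores)))  # should print 1
```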
<urn:uuid:99599a99-bfdc-4e00-bf46-7e9d7799f266>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2014/08/22/extracting-encryption-keys-by-measuring-computers-electric-potential/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00452-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964176
388
3.125
3
I know it sounds mean, but when I hear about someone getting hit by a car or falling down an open manhole while texting, I think: Darwin Award! Seriously, how dumb do you have to be to walk across the street with your head down and your eyes and brain occupied by a smartphone? But those kinds of incidents are happening fairly often as smartphones take over the world. Do a Google search for "accidents while walking and texting" and you'll get more than 100,000 hits, including this breathless news clip that shows a guy walking into a wall while texting, a woman falling into a fountain, and a man practically running into a bear that somehow appeared on a city street because he couldn't see what was happening right in front of him. I live in San Francisco, ground zero for gadget addiction, and I'm a pretty big guy. But people often don't see me when they're texting on the street or in the grocery store. It's kind of funny until one of these bozos bumps into me or makes me move like a wide receiver dodging Seahawks cornerback Richard Sherman. A new study by researchers in Australia confirms the obvious proposition that texting while walking is dangerous. Why they needed to study this escapes me. Perhaps they were fresh out of research ideas or hard up for grant money? In any case, the people who took part in the study had their movement tracked while they walked a length of around 30 feet – once while texting, once while reading a text and once without distraction. When people walk and use their phones they slow down and swerve, even if they think they are walking in a straight line, the researchers noted. Secondly, people walk "like a robot" while texting, one of the researchers, Dr Siobhan Schabrun, told Guardian Australia. "They hold their body posture really rigid," she said. "Their arms, trunk and head are all fixed together and they walk a little bit more like a robot." Schabrun said this upsets a person's balance, making them more susceptible to tripping, and also makes it harder to recover their balance when they do trip. I'm not so insensitive as to laugh off the 2011 death of a Melbourne teenager who fell from a parking garage while texting a friend. That was, of course, a tragedy. But an easily preventable one. One more serious note: If texting while walking is dangerous, consider texting while driving. C'mon folks. Cut it out.
<urn:uuid:5c815c6a-b9f6-4a0b-be4f-c025b54362a8>
CC-MAIN-2017-04
http://www.cio.com/article/2370143/smartphones/researchers-say-texting-while-walking-turns-you-into-a-robot--sort-of-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00204-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955793
538
2.625
3
The fight to keep Moore's Law alive has run up against hard physical limitations. Fittingly, Technology Review published an article on Friday the 13th discussing the behind-schedule technology required to manufacture the next generation of chips. Known as extreme ultraviolet lithography (EUV), the process was expected to be fabricating the 22-nanometer chips on the market today. Unfortunately, that didn't pan out, leaving those chips to be manufactured by tweaking the aging standard lithography process. The problem has prompted Intel to invest a sum of $4 billion in ASML, a Dutch company that makes the equipment used to fabricate chips. Both Intel and ASML are trying to get large chipmakers to join an effort to keep silicon scaling alive. While the tweaked version of lithography has been used to manufacture 22-nanometer chips, this process is only viable for the next two generations. The stopgap solution should cover the creation of 14 and 11 nanometer chips, which is expected to suffice until 2013. After that, the only process expected to keep Moore's Law going is EUV. While current lithography employs 193-nanometer ultraviolet light to make chips, EUV uses higher-energy UV light with wavelengths around 13 nanometers. The process involves writing a pattern into a chemical layer on top of a silicon wafer. The pattern is then etched into the silicon using a chemical process. Unfortunately, the technology is not yet feasible for a production environment. A main roadblock to EUV technology is the need for powerful light sources. Because the wavelength is so short, it gets absorbed by virtually all types of matter. The EUV machines attempt to alleviate this issue by passing the beam through a vacuum, but the light still becomes too weak by the time it hits the silicon wafer, since the optics used to direct and focus the beam absorb part of it at every reflection. At this point, ASML's most advanced EUV prototype can generate beams only half as strong as chipmakers require to make the process viable. The company and Intel are investing in second-generation EUV, but the money is also being used to get the first-generation technology working. ASML spokesman Ryan Young expressed the urgency about getting EUV off the ground, saying, "Clearly, there is no next generation if we don't get this generation working."
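The jump from 193 nm light to EUV is easy to put in energy terms with E = hc/λ. The quick check below uses standard physical constants and 13.5 nm, the wavelength usually cited for EUV tools, consistent with the article's "around 13 nanometers":

```python
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

def photon_energy_ev(wavelength_nm: float) -> float:
    return H * C / (wavelength_nm * 1e-9) / EV

for nm in (193.0, 13.5):
    print(f"{nm} nm -> {photon_energy_ev(nm):.1f} eV")
# 193 nm is ~6.4 eV; 13.5 nm is ~92 eV -- energetic enough that
# nearly every material absorbs it, hence the vacuum and mirror optics.
```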
<urn:uuid:af89f779-ccf8-4608-bed4-c370848c439c>
CC-MAIN-2017-04
https://www.hpcwire.com/2012/07/16/developing_diminutive_transitors_is_a_fight_against_physics/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285001.96/warc/CC-MAIN-20170116095125-00506-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944005
479
3.40625
3
There is considerable disagreement among experts regarding the effects of technology on child growth and development. Some regard technology as advancing intellectual development. Others worry that technology may overstimulate and actually impair brain functioning. One of the problems is that most researchers have taken too narrow a focus on the issue. They have looked at the impact of a particular technology rather than at the technological environment as a whole. One might argue that taken as an aggregate, technologies such as computers, television and cell phones create a digital culture that has to be looked upon in its entirety rather than piecemeal. The question becomes: What is it like growing up in a high-tech world, and how does that differ from growing up at an earlier time? Part of the answer lies in the fact that the digital youth has a greater facility with technology than their parents and other adults. As a result, there is a greater disconnect between parents and children today, and some adolescents have even less respect for the knowledge, skills and values of their elders than they did a generation ago (hard as that may be to believe). Digital children evidence other worrisome traits, but first, let's explore the culture itself. First, it is certainly a speed-dominated culture—fast and getting faster. Online, we get impatient if it takes more than a second or two to get a response from a site hundreds, maybe thousands, of miles away. Second, it is a screen culture. The movie screen has been followed by the television screen, which became a computer screen, and is now downsized to a cell phone screen. Today, young people spend a large portion of their waking hours in front of one or another screen. Third, it is an information culture. In their homes, children and youth now have as immediate access to information as do the most erudite scholars in the world's best libraries. Science, literature, history, drama and the arts are all at their fingertips. Finally, it is a communication culture. The Internet and cell phone have made communication with peers an instant—and at all times possible—connection. Growing up in this technological culture affects the language and concepts that children learn, and shapes their perceptions of reality. Terms like cyberspace, Internet, DVD, VCR and so on all refer to digital realities unknown to children of even the previous generation. The language, music and dress of teenagers all speak to their lack of respect for the older generation and their need to have clearly delineated generational boundaries. Independence from parents and adults means greater dependence on peers for advice, guidance and support. The availability of cell phones and immediate access to friends through instant messaging has only exaggerated this trend and quite possibly worsened the divide between children and their parents. Digital children also have a different comprehension of space than did children of even 30 years ago. Virtual realities are such that children and youth can check out new books, games and toys; explore college campuses; and make bets on sports teams, all while sitting in front of their computers. The virtual spaces of many computer games are extraordinarily intricate. The ability to explore and create spaces digitally, without going anywhere, keeps many young people at the computer and almost certainly contributes to obesity among the younger generation. Children's sense of time has changed as well.
The speed of digital communication allows us to be more productive than ever. Perhaps that is one of the reasons we seem to believe we can accomplish more in the time we have than we did in the past. Young children may not only attend day care or after-school programs, they may also be on two teams in one season such as soccer and T-ball, or gymnastics and another sport. School-age children are burdened with even more commitments and homework starting with the early grades. Indeed, the focus on speed has contributed to education being seen as a race. Many parents erroneously believe that the earlier they start a child on academics, the earlier and the better he'll finish. At the same time, because we now have so many ways to communicate—e-mail, cell phones and IM—we feel busier, and more harried. Young people incorporate this sense of urgency and too often feel guilty about taking time off to play. The high-tech culture has also changed children's social relationships. Before the digital culture predominated, there was a language and lore of childhood that was orally passed down from generation to generation. They consisted of games, riddles, rhymes, jibes and so on that were adapted to the child's immediate environment. Some were universal, like the superstitions ("Step on a crack, break your mother's back") or incantations like "Rain, rain go away, come again another day." Others were imported like "London Bridge Is Falling Down." The culture of childhood made it easy for a child to become part of a group. All she had to do was learn the language and lore. Such play rituals were passed down in the city streets and in country glens. They were intergenerational and made it easier for parents and children to connect. This traditional culture of childhood is fast disappearing. In the past two decades alone, according to several studies, children have lost 12 hours of free time a week, and eight of those lost hours were once spent in unstructured play and outdoor pastimes. In part, that is a function of the digital culture, which provides so many adult-created toys, games and amusements. Game Boys and other electronic games are so addictive they dissuade children from enjoying the traditional games. Yet spontaneous play allows children to use their imaginations, make and break rules, and socialize with each other to a greater extent than when they play digital games. While research shows that video games may improve visual-motor coordination and dexterity, there is no evidence that they improve higher-level intellectual functioning. Digital children have fewer opportunities to nurture their autonomy and originality than those engaged in free play. In many ways, then, digital children have a far different sense of reality than previous generations. This digital reality is extraordinarily rich and complex. Yet children are still children in many respects. Though they may be sophisticated about technology, they still love a good story told at their level. The success of the Harry Potter books attests to this truism. And while many contemporary teenagers are sophisticated users of all forms of technology, they remain as naive as preceding generations about the human condition. Young people today, like those of earlier generations, harbor mythical ideas about sexual behavior; many still believe you will not get pregnant if you do it standing up. For all of those reasons, it is more incumbent than ever that parents continue to reach out and connect with their children.
At a deeper level, our young still very much want and need the love, support and guidance of their parents. Even digital children and adolescents need a hug.
<urn:uuid:1afca6a5-0919-40cd-91ae-173c354dd59d>
CC-MAIN-2017-04
http://www.cio.com/article/2441936/it-organization/david-elkind--technology-s-impact-on-child-growth-and-development.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00077-ip-10-171-10-70.ec2.internal.warc.gz
en
0.972647
1,400
3.40625
3
Rare earth metals or rare earth elements (REEs) are a relatively abundant group of seventeen elements found in the periodic table. Out of the seventeen, fifteen elements comprise the lanthanide series, found between atomic numbers 57 and 71. The rare earths can further be divided into two categories—heavy rare earth elements (HREEs) and light rare earth elements (LREEs). The demand for rare earth metals in Europe was estimated at 5,739.5 tons in 2012, and is projected to reach 8,470.1 tons by 2018, growing at a CAGR of 6.7% from 2013. Cerium oxide is a major type of rare earth product, and is in high demand in Europe. Rare earth elements are widely used in applications such as permanent magnets and catalysts. These elements are relatively abundant in nature even though their name suggests otherwise. Each REE is more common in the earth's crust than silver, gold or platinum, while cerium, yttrium, neodymium, and lanthanum are more common than lead. Thulium and lutetium are the least abundant REEs, with a crustal abundance of approximately 0.5 parts per million. The radioactive element promethium does not occur freely in nature. Rare earth metals are widely used in various applications such as permanent magnets, metal alloys, glass polishing, glass additives, catalysts, phosphors, ceramics, and others. The European chemical industry is a significant part of the region's economy. The industry is divided into four segments: base chemicals, specialty chemicals, pharmaceuticals, and consumer chemicals. Germany is the largest chemical producer in Europe, followed by France, Italy, and The Netherlands. These four countries together account for 64.0% of European chemical sales. In the past, most of Europe's chemical industry growth was driven by domestic sales, but these days the region's growth depends on both the domestic and the export market. The types of rare earth metals studied include lanthanum, cerium, praseodymium, neodymium, samarium, europium, gadolinium, terbium, dysprosium, yttrium, and others. Further, as part of the qualitative analysis, the Europe rare earth elements market research report provides a comprehensive review of the important drivers, restraints, opportunities, and burning issues in the rare earth elements market. The report also provides an extensive competitive landscape of the companies operating in this market. It also includes the company profiles of, and competitive strategies adopted by, various market players, including Alkane Resources Ltd (Australia), Arafura Resources Ltd (Australia), Avalon Rare Metals Inc. (Canada), and Baotou Hefa Rare Earth Co. Ltd. (China). With market data, you can also customize MMM assessments to meet your company's specific needs.
Customize to get comprehensive industry standards and deep-dive analysis of the following parameters:
- Market size and forecast (deep analysis and scope)
- Consumption pattern (in-depth trend analysis), by application (country-wise)
- Consumption pattern (in-depth trend analysis), by type of rare earth element (country-wise)
- Country-wise market trends in terms of both value and volume
- Competitive landscape with a detailed comparison of each company's portfolio, mapped at the regional and country level
- Production data, with detailed information on rare earth raw material suppliers as well as producers at the country level
- Comprehensive data showing rare earth plant capacities, production, consumption, trade statistics, and price analysis
- Analysis of forward and backward chain integration to understand the prevailing business approaches in the Europe rare earth elements market
- Detailed analysis of competitive strategies (new product launches, expansion, mergers & acquisitions, etc.) adopted by various companies and their impact on the Europe rare earth elements market
- Detailed analysis of various drivers and restraints and their impact on the Europe rare earth elements market
- Upcoming opportunities in the REE market
- SWOT analysis for the top companies in the REE market
- Porter's five forces analysis for the rare earth elements market
- PESTLE analysis for major countries in the rare earth elements market
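As a sanity check, the 6.7% CAGR quoted above is consistent with the 2012 and 2018 tonnage estimates. The quick calculation below uses only the report's own numbers:

```python
start, end = 5739.5, 8470.1   # tons, 2012 and 2018
years = 6

cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")           # ~6.7%

projected = start * (1 + 0.067) ** years
print(f"2018 projection at 6.7%: {projected:,.1f} tons")
```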
<urn:uuid:b2f314ac-fd26-4889-8400-686ab5beeb03>
CC-MAIN-2017-04
http://www.micromarketmonitor.com/market/europe-rare-earth-metals-3477668352.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00379-ip-10-171-10-70.ec2.internal.warc.gz
en
0.904933
940
3.375
3
In 2011, Google added public key pinning to Chrome. They whitelisted the certification authority public keys that could be used to secure Google domains. The intent was to mitigate man-in-the-middle attacks performed using a fraudulent SSL certificate. The approach was not scalable for all websites, but Google did offer to pin other large, high-security websites. To address the scalability issues, a new approach to public key pinning is being proposed and is documented in a yet-to-be-published RFC. The goal is to have website operators define their public key pins through an HTTP header. The browsers would respect these headers and produce an error when a pin has been violated. Each public key pin is identified by a SHA-1 or SHA-256 hash of the key. The pinned key can belong to the website certificate, an intermediate CA or the root CA. The methodology pins public keys, rather than entire certificates, to enable operators to generate new certificates containing old public keys. The public key pin will also have directives to define the maximum age of the pin (in seconds), whether it covers subdomains, the URI where violations should be reported, and the strictness of the pinning. An illustrative example (the hash value and report URI are placeholders) would look as follows: Public-Key-Pins: pin-sha256="<base64 hash of the key>"; max-age=2592000; includeSubDomains; report-uri="https://example.net/pkp-report" Based on the effectiveness of the current public key pins, I believe the RFC will be finalized and will be supported by all the mainstream browsers. I will provide more updates as the RFC gets finalized. Update May 1 2015: Public Key Pinning Extension for HTTP is RFC 7469.
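For readers who want to compute a pin themselves: as finalized in RFC 7469, the value is the base64-encoded SHA-256 hash of the certificate's DER-encoded SubjectPublicKeyInfo. A sketch using the third-party cryptography package (a recent version is assumed, and server.pem is a placeholder file name):

```python
import base64
import hashlib

from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# Load a certificate and extract its DER-encoded SubjectPublicKeyInfo.
with open("server.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
spki = cert.public_key().public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)

# The pin is the base64 of the SPKI's SHA-256 digest.
pin = base64.b64encode(hashlib.sha256(spki).digest()).decode()
print(f'pin-sha256="{pin}"')
```

Hashing the SubjectPublicKeyInfo rather than the whole certificate is what lets an operator reissue a certificate under the same key without breaking existing pins.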
<urn:uuid:2bc9f892-c835-4e30-9fe4-0ff5535e5c5f>
CC-MAIN-2017-04
https://www.entrust.com/public-key-pinning-extension-for-http/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00013-ip-10-171-10-70.ec2.internal.warc.gz
en
0.9181
313
2.53125
3
Nuclear power expansion exploded in the United States during the 1970s, heralded as a cheap, unlimited source of clean energy. Dramatic electricity usage predictions during that decade prompted aggressive plans for further expansion. By the mid-1980s, those predictions proved overblown, and mass cancellations followed. A few accidents during the 1970s helped sour America on nuclear power development for more than 20 years. The unthinkable nearly happened in 1979 at Three Mile Island Nuclear Generating Station in Dauphin County, Pa. The plant suffered a partial core meltdown, but the meltdown caused no deaths or injuries. Before that, a fire at the Unit 1 reactor at Browns Ferry Nuclear Power Plant, near Huntsville, Ala., prompted a scare in 1975. Now, however, nuclear power may be poised for a dramatic comeback. Supply and Demand Volatile natural gas prices, updated nuclear plant designs, demand for a carbon-free energy source and massive government incentives may enable the nuclear power industry to rise again in the South. After more than 20 years of inactivity, the Unit 1 Browns Ferry reactor resumed splitting atoms in May. So far, utilities have announced plans to request federal licenses in the next two years to build up to 30 reactors, mostly in the South. Just receiving a license does not commit a utility to actually building a plant, though. Energy giant Southern Co. announced plans to pursue several new licenses, but won't commit to building any plants. "We are in negotiations right now with Westinghouse to determine if nuclear energy is the best option for our customers," said Beth Thomas, spokesperson for Southern Co. "Nuclear is going to have to be cost-competitive with the other base-load sources." A base-load power plant is one that, in theory, is available 24 hours a day, seven days a week, and primarily operates at full capacity. Coal, natural gas and nuclear plants are common forms of base-load electricity sources. "[Getting a license] is a very low-cost way of going through step one," said Jerry Taylor, senior fellow at the Cato Institute. However, "low-cost," in this case, means a $50 million investment, according to Marilyn Kray, president of NuStart Energy Development, a consortium of 10 power companies seeking nuclear plant licenses from the U.S. Nuclear Regulatory Commission (NRC). She said preparing an application for a license takes nearly two years of expensive research, and if printed, the "application" would be roughly 25 volumes of 4-inch binders. Interest in nuclear power is growing because the Energy Information Administration (EIA) projects electricity generation in the United States will increase by 40 percent in the next 25 years. Natural gas - which powered nearly every U.S. electricity plant constructed during the 1990s - has long been the preferred option for electric power generation. But as the third millennium arrived, the United States' natural gas supply plummeted far below the "proven reserve" estimates made during the '90s. Utilities responded with plans to revert to coal-fired plants, infuriating environmentalists. Other opponents contend that power-generation needs can be satisfied by better energy efficiency, more conservation and greater use of renewable energy sources - wind, biomass, geothermal and solar. But those technologies, some say, are too immature to satisfy future energy demands. Meanwhile, utilities insist they will need more "base-load capacity," meaning construction of massive new high-performance power plants is unavoidable.
Since the American public currently appears to favor carbon-free electricity, some utilities are starting to view nuclear power as a reasonable balance. But according to critics, nuclear technology remains unsafe, and some contend the projected need for base-load capacity is overblown, like it was during the 1970s. Though the serious accidents of the 1970s gave nuclear power a black eye, manufacturers say they have since simplified plant designs to make them safer. And, although
<urn:uuid:2c75a6c2-45a6-4799-b916-b84a2f0eb126>
CC-MAIN-2017-04
http://www.govtech.com/e-government/Nuclear-Revival.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00223-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953246
803
2.890625
3
IBM harvests supermarket data to spot, predict foodborne illnesses 'Tis the season for socializing and picnicking in backyards and parks across the nation. But that also means it's high season for foodborne illnesses, when a contamination can spread across the nation's food chain. A number of tools have been introduced in recent years to help public health departments track the path of food poisoning and other foodborne illnesses. The City of Chicago Department of Public Health, for example, was the first to test the potential of social media in identifying foodborne outbreaks, according to the Journal of the American Medical Association. Together with the Smart Chicago Collaborative, it is developing apps to monitor Twitter for possible food poisoning references. A similar project is underway in New York, where the New York City Department of Health and Mental Hygiene is working with Columbia University and reviews website Yelp to filter restaurant-goers' comments for clues to an outbreak. Elsewhere, the Food and Drug Administration in 2012 introduced iRISK, a Web-based system to analyze data on microbial and chemical hazards in food and estimate their impact on a population. The tool is designed to enable users to conduct "fully quantitative, fully probabilistic risk assessments of food safety hazards relatively rapidly and efficiently," according to the FDA. The latest technology for checking the health of the food supply debuted this month when IBM introduced an analytic system it said could help public health departments not only track but predict contaminations in the food supply and accelerate the health care response. IBM, which recently published its research on the project in the journal PLOS Computational Biology, described the tool as a "breakthrough" technology, capable of identifying contaminated products "within as few as 10 outbreak case reports." The IBM system uses algorithms, visualization and statistical techniques to sort through date and location data on "billions" of products in the food supply to help identify "guilty" or contaminated products, according to the firm. To help accelerate the investigation, IBM is using petabytes of food-based sales data in inventory systems used by food retailers and distributors, some of which manage up to 30,000 food items at any point in time. IBM's system "automatically identifies, contextualizes and displays data from multiple sources to help reduce the time to identify the most likely contaminated sources by a factor of days or weeks," according to the company. The system also integrates retail data with geocoded public health data to allow investigators to map the distribution of suspect foods. Researchers can look at geographic information on a map and access case and lab reports from clinical encounters in specific locations. The algorithm also learns from each new report and recalculates the probability of each food that might be causing the illness. "Predictive analytics based on location, content and context are driving our ability to quickly discover hidden patterns and relationships from diverse public health and retail data," said James Kaufman, manager of public health research for IBM Research. In announcing the research, IBM pointed out that the speed of an investigation often depends upon food industry firms to supply relevant data to be analyzed.
"This can be achieved by combining innovative software technology with already existing data and the willingness to share this information in crisis situations between private and public sector organizations," said Dr. Bernd Appel, head of the Department of Biological Safety at the German Federal Institute, which is working with IBM on the research. IBM is working with public health organizations and retailers in the United States to scale the research prototype and begin processing information from the 1.7 billion supermarket items that are sold each week in the country, according to Kaufman.
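The paper's approach is proprietary, but the "recalculate the probability of each food as reports arrive" step can be illustrated with a toy Bayesian update. Every food, probability, and report below is invented for the demonstration; the actual system also weighs geography, retail volumes, and much more:

```python
# Toy Bayesian update over candidate foods as outbreak reports arrive.
candidates = {"spinach": 1/3, "melon": 1/3, "chicken": 1/3}  # uniform prior

# P(a case reports buying this food | that source is to blame): the true
# source is bought by most cases, the others only at background rates.
likelihood = {
    "spinach": {"spinach": 0.9, "melon": 0.3, "chicken": 0.4},
    "melon":   {"spinach": 0.3, "melon": 0.9, "chicken": 0.4},
    "chicken": {"spinach": 0.3, "melon": 0.3, "chicken": 0.9},
}

reports = ["spinach", "spinach", "melon", "spinach"]  # foods each case bought

for bought in reports:
    # Bayes' rule: posterior(source) is proportional to
    # prior(source) * P(bought | source); then renormalize.
    unnorm = {s: p * likelihood[s][bought] for s, p in candidates.items()}
    total = sum(unnorm.values())
    candidates = {s: p / total for s, p in unnorm.items()}

for source, prob in sorted(candidates.items(), key=lambda kv: -kv[1]):
    print(f"{source}: {prob:.2f}")  # spinach emerges as the likely source
```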
<urn:uuid:69f19f7e-9ee0-4ff1-9a62-8ea0c7f43831>
CC-MAIN-2017-04
https://gcn.com/articles/2014/07/08/food-safety-analytics.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00003-ip-10-171-10-70.ec2.internal.warc.gz
en
0.938173
778
2.859375
3
Definition: The complexity class of languages that can be accepted by a deterministic Turing machine in polynomial time. See also NP, BPP. Note: From Algorithms and Theory of Computation Handbook, page 24-19, Copyright © 1999 by CRC Press LLC. Appearing in the Dictionary of Computer Science, Engineering and Technology, Copyright © 2000 CRC Press LLC. History, definitions, examples, etc. given in Comp.Theory FAQ, scroll down to P vs. NP. Wikipedia entry on Complexity classes P and NP. Scott Aaronson's Complexity Zoo If you have suggestions, corrections, or comments, please get in touch with Paul Black. Entry modified 9 September 2013. Cite this as: Algorithms and Theory of Computation Handbook, CRC Press LLC, 1999, "P", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 9 September 2013. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/p.html
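As a toy illustration of membership in P (not part of the dictionary entry itself): the language of palindromes is decidable in linear time, comfortably polynomial in the input length:

```python
def is_palindrome(w: str) -> bool:
    """Decide the language {w : w reads the same forwards and backwards}.

    Runs in O(n) time, so this language is in P.
    """
    return w == w[::-1]

assert is_palindrome("racecar")
assert not is_palindrome("turing")
```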
<urn:uuid:02d25a9e-3c8b-4ad9-a59b-f6b4c80f3a9e>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/p.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00031-ip-10-171-10-70.ec2.internal.warc.gz
en
0.783467
245
2.84375
3
This information may be relevant to this discussion: Authority Checking Rules When a user attempts to perform an operation on an object, the system verifies that the user has adequate authority for the operation. The system first checks authority to the library or directory path that contains the object. If the authority to the library or directory path is adequate, the system checks authority to the object itself. In the case of database files, authority checking is done at the time the file is opened, not when each individual operation to the file is performed. During the authority-checking process, when any authority is found (even if it is not adequate for the requested operation), authority checking stops and access is granted or denied. The adopted authority function is the exception to this rule. Adopted authority can override any specific (and inadequate) authority. The system verifies a user's authority to an object in the following order:
1. Object's authority - fast path
2. User's *ALLOBJ special authority
3. User's specific authority to the object
4. User's authority on the authorization list securing the object
5. Groups' *ALLOBJ special authority
6. Groups' authority to the object
7. Groups' authority on the authorization list securing the object
8. Public authority specified for the object or for the authorization list securing the object
9. Program owner's authority, if adopted authority is used
Note: Authority from one or more of the user's groups may be accumulated to find sufficient authority for the object being accessed.
Reply or Forwarded mail from: Kenneth E Graap
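The search order lends itself to a simple first-match loop. The sketch below is a schematic rendering of the sequence quoted above, not IBM i's actual implementation; the *ALLOBJ steps, group accumulation, and the adopted-authority exception are omitted for brevity, and all data structures are hypothetical:

```python
# Each source yields a set of allowed operations, or None if that source
# holds no authority for the object at all.
def check_authority(user, groups, obj, operation):
    sources = [
        obj["specific"].get(user),   # user's specific authority (step 3)
        obj["autl"].get(user),       # user on the authorization list (step 4)
    ]
    for g in groups:                 # groups' authority, then their autl (6-7)
        sources.append(obj["specific"].get(g))
        sources.append(obj["autl"].get(g))
    sources.append(obj["public"])    # public authority (step 8)

    for found in sources:
        if found is not None:          # first authority found decides...
            return operation in found  # ...even if it is inadequate
    return False

obj = {"specific": {"alice": {"read"}}, "autl": {}, "public": {"read", "update"}}
print(check_authority("alice", ["sales"], obj, "update"))  # False: stops at alice's *USE-like authority
print(check_authority("bob", ["sales"], obj, "update"))    # True: falls through to public
```

The first call shows the behavior the rules describe: once any authority is found, the search stops, even though a later source (public) would have allowed the operation.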
<urn:uuid:700ac3d3-83b1-4fae-91eb-df3fce2364eb>
CC-MAIN-2017-04
http://archive.midrange.com/midrange-l/201301/msg00044.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00545-ip-10-171-10-70.ec2.internal.warc.gz
en
0.854081
338
2.5625
3
Duchamp C.,Applied Research Unit on Predator and Depredator species | Boyer J.,Applied Research Unit on Predator and Depredator species | Briaudet P.-E.,Applied Research Unit on Predator and Depredator species | Leonard Y.,Applied Research Unit on Predator and Depredator species | And 8 more authors. Hystrix | Year: 2012 The wolf recovery in France dates back to 1992, following the natural range expansion of the remaining Italian population since the late 1960s. Facing a high level of interactions between wolves and sheep breeding, decision makers had to quickly balance the need for managing livestock depredations with the conservation of wolves as a protected species. The French authorities therefore required a reliable assessment of changes in the species range and population numbers, as well as a reliable monitoring of depredations on livestock, all being key variables to be further included within the governmental decision making process. Because of their elusive behaviour, high mobility, and territoriality, applying a standard random sampling design to the monitoring of a wolf population would lead to almost no chance of collecting any signs of presence. In order to increase detectability, we use a dual frame survey based on two spatial scales ('population range' and 'reproductive unit') investigated sequentially thanks to a network of specifically-trained wolf experts distributed over 80000 km2 to collect the data. First, an extensive sign survey at a large scale provides so-called cross-sectional data (pool of signs from unknown individuals for a given year), thereby allowing the detection of new wolf occurrences, new pack formations, and the documentation of geographical trends. Secondly, an intensive sign survey within each detected wolf territory, based on standard snow tracking and wolf howling playback sessions, provides some yearly updatable proxies of the demographic pattern. The combination with non-invasive molecular tracking provides longitudinal data to develop mark-recapture models and estimate vital rates, population size and growth rate, while accounting for detection probabilities. The latter are used in turn to control for proxies' reliability and to implement demographic models with local population parameters. Finally, wolf activity patterns in connection with predator-prey dynamics are investigated through a pilot study carried out with radio-collared wolves and four radio-collared ungulate prey species. Particular attention is paid to checking the reliability of presence sign data, as well as improving the cost-efficiency ratio of the monitoring. Finally, these results are also used by the government as one of the components in the decision making process related to the management of coexistence with wolves. ©2012 Associazione Teriologica Italiana.
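The abstract does not name a specific estimator, but the classic Lincoln-Petersen mark-recapture formula conveys how genetic "captures" translate into a population size estimate. The counts below are invented for illustration:

```python
def lincoln_petersen(marked_first: int, caught_second: int, recaptured: int) -> float:
    """Classic closed-population estimate: N ~ (M * C) / R.

    M: individuals identified (genetically 'marked') in session one,
    C: individuals sampled in session two,
    R: of those, how many were already known from session one.
    """
    if recaptured == 0:
        raise ValueError("no recaptures: the estimate is undefined")
    return marked_first * caught_second / recaptured

# Hypothetical sessions: 25 genotypes found first, 30 found later, 10 in both.
print(f"estimated population: {lincoln_petersen(25, 30, 10):.0f} wolves")  # 75
```

Modern capture-recapture models, as referenced in the abstract, extend this idea by modeling detection probabilities explicitly rather than assuming equal catchability.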
<urn:uuid:9dbf1023-a5f1-41e0-88c3-06ffc12e78a3>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/applied-research-unit-on-predator-and-depredator-species-1939093/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00481-ip-10-171-10-70.ec2.internal.warc.gz
en
0.899983
537
2.59375
3
These fibers are designed to deliver maximum performance in both common and advanced applications. Clothing and textile materials are carriers of microorganisms such as bacteria and fungi. The inherent properties of a textile product can provide the ideal opportunity for these organisms to grow, which explains why odors develop on damp textiles and why body odor is retained by clothing until it is washed and cleaned. Antimicrobial and antistatic fibers are used to inhibit the activity of microorganisms and to eliminate the buildup of static electricity. These fibers are used in public places where contract fabric is found, including upholstery in offices, hotels, casinos, cruise ships and hospitals, as well as broadloom carpet and carpet tile, to prevent disruption of computers or electronic devices. The demand for antimicrobial and antistatic fibers has increased sharply since the mid-1990s, and the main driver of the market is growing awareness among consumers of the importance of personal hygiene and the health risks posed by certain microorganisms. The main constraint for this market, however, is growing concern among environmental groups who claim that some antimicrobial agents -- notably silver nanoparticles -- are toxic and should be banned from use in consumer products. Extensive R&D is carried out to develop effective and durable antimicrobial products that are less likely to be harmful to human health and the environment. An in-depth market share analysis, in terms of revenue, of the top companies is also included in the report. These numbers are arrived at based on key facts, annual financial information from SEC filings and annual reports, and interviews with industry experts and key opinion leaders such as CEOs, directors, and marketing executives. A detailed market share analysis of the major players in the global antimicrobial and antistatic fibers market has been covered in this report. The major companies in this market include Foss Manufacturing Co. Inc. (U.S.), Jinda Nano Tech. (Xiamen) Co., Ltd. (Hong Kong), Qingdao Hengtong X-Silver Speciality Textile Co., Ltd. (China), Smart Silver (U.S.), Woven Fabric Company (India), Antibacteria International Co. Ltd (Taiwan), PurThread Technologies, Inc. (U.S.), Zhejiang Donghua Fibre Manufacturing Co. Ltd. (China), and Yiwu Huading Nylon Co. Ltd. (China), among others.
<urn:uuid:3b72021c-e979-48ec-bce3-81e2bdaa7ab2>
CC-MAIN-2017-04
http://www.micromarketmonitor.com/market-report/antimicrobial-anti-static-fibers-reports-8350921713.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00389-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94268
535
2.640625
3
NASA contest yields Space Apps for Earth, too - By Kevin McCaney - May 09, 2012 NASA has opened voting for its International Space Apps Challenge, an effort to provide a platform that lets developers find innovative solutions to problems both in space and on Earth. The contest drew more than two dozen apps from teams on six continents, some of them working from multiple locations. On the contest site, each team submitted a video explaining its application. The submissions range from health-care-related apps and an app that uses an iPhone to locate stars on a cloudy night, to several farming apps and a telerobotic submarine—built mostly with off-the-shelf parts—that would let anyone explore underwater. There also are entries such as Vicar2png, which lets anyone view and work with images from NASA's Planetary Image Atlas, whose VICAR format is otherwise unreadable by open-source tools. And an Australian team submitted an app that uses "people as sensors" in an early-warning system by monitoring social media for word of disasters and quickly putting emergency warnings on a map. NASA, along with Innovation Endeavors and Talenthouse, is asking people to check out the apps and vote for their favorites. Those votes and a jury from Innovation Endeavors will determine the winners. Voting closes May 15. Apps contests are becoming a common way for agencies to engage the public while finding useful public-service applications that could otherwise have cost a significant amount to develop via a contract. Cities such as New York and Washington have staged app contests in which developers made use of those cities' data. NASA, along with the Harvard Business School, has held a number of developer competitions through its NASA Tournament Lab since October 2010. Kevin McCaney is a former editor of Defense Systems and GCN.
<urn:uuid:44f971e3-0e6a-414f-b3aa-b6e43c1b4711>
CC-MAIN-2017-04
https://gcn.com/articles/2012/05/09/nasa-space-apps-challenge.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00389-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960946
381
2.625
3
Yes, the title of this article is a bit facetious: carbon (the element) isn't going anywhere. But attitudes about it (in particular, about emission compounds like carbon dioxide) may be changing. What could that mean for the data center industry? Popular ideologies and concepts are a bit like market bubbles: everybody (meaning a large segment of the population) jumps on the bandwagon, inflating the bubble, until some series of events causes the bubble to burst. "Carbon" is one of those concepts: carbon dioxide gained a sour reputation in the wake of growing concern about climate change (once labeled "global warming," a term that seems to have fallen out of popular parlance). Sentiment against carbon emissions became so strong (at least in media and political circles) that severe regulations curbing such emissions seemed all but guaranteed. Perhaps the most popular idea in this regard was a carbon trading scheme, whereby companies would each be assigned a certain number of "credits" permitting a given amount of emissions (a "cap"—hence the name "cap and trade" for this type of scheme). Those companies that exceeded their allotment would be required to purchase additional credits from companies that were more frugal and had credits to spare. A carbon credit trading market—similar to the stock market—would then handle the trading. And, of course, somewhere in the system would be a tax. The goal of such a system would be to reduce emissions and increase energy efficiency. Such a scheme is currently in place in Europe under the European Union Emission Trading Scheme. But the ongoing economic difficulties in Europe and much of the rest of the western world have overshadowed enthusiasm for such schemes. In particular, proposals for a cap-and-trade system in the U.S. seem all but dead on arrival, as opponents claim they would place an even greater burden on an already struggling economy. In addition, questions about climate change, and particularly its causes, are being raised with growing frequency. Numerous scandals revolving around fudged climate data are just one indication that problems are brewing. Furthermore, "climate science" is not as scientific as some proponents might claim: the climate is a tremendously complex system, and computer models (and those who create them) do not capture all the factors that influence the weather. All this is not to question the need for environmental stewardship: clearly, there is a need (both aesthetic and practical) to protect the world in which we live. But the recent obsession with carbon—which may well be based on fallacious assumptions about how destructive carbon dioxide really is—may simply be giving environmental stewardship a bad name. The best approach is a balanced one that considers both the environment and energy consumers. The U.K.'s CRC: Bust? The U.K.'s Carbon Reduction Commitment (CRC) was originally intended to be a carbon trading scheme similar to the EU's Emission Trading Scheme. But according to DatacenterDynamics ("The taxing issues in London today"), "The government had announced its Carbon Reduction Commitment (CRC)—first meant to be a complicated ranking, measuring and carbon-charging solution—would now be a simple tax." According to the official CRC website, "Organisations required to participate must monitor their energy use and purchase allowances, for each tonne of CO2 they emit that falls within the scheme.
The more CO2 an organisation emits that falls within the scheme, the more allowances it must purchase. This will provide a direct incentive for organisations to reduce their energy use emissions." Since data centers are energy hogs, companies that run them will be directly affected by these regulations. The precise reasons the CRC failed to implement the planned cap-and-trade scheme are not clear, but regardless of the rationale, the U.K. government is still getting its share: more tax money. In light of the growing fiscal crisis in Europe, this change could be a way to avoid the hassles and problems of a trading scheme (such as that implemented by the EU) while still bringing more money to a government struggling against growing debt (the U.K.'s current public debt is around 76% of the nation's GDP, according to the CIA's World Factbook). Ostensibly, the CRC is still about increasing efficiency and reducing carbon emissions, but the momentum seems to have shifted away from growing support for a complex regulatory system. This is not to say that tomorrow all interest in climate change will be abandoned. But this change, along with the virtual abandonment of the idea in the U.S., could signal the beginning of a shift away from government regulation of carbon emissions or, perhaps more broadly, declining credence in the concept of anthropogenic (manmade) climate change. The Waiting Is the Hardest Part DatacenterDynamics notes in the above-mentioned article that "there are very few occasions when an industry breathes a sigh of relief because of a new tax. But after a year of debate, confusion and sitting on the fence, the UK data center industry did exactly that." A similar problem—recently resolved in the U.K.—troubles the U.S. data center industry as well: what will the government do? Because the future of regulations on carbon emissions is up in the air, data centers in the U.S. are unsure what to expect. If a costly tax is to be imposed on these emissions, as has been done in the U.K., then data center operators will likely wish to begin energy-efficiency improvements sooner rather than later. But such improvements could turn out to be expensive, and the potential return on the investment will depend in large part on the requirements of any eventual regulations (the return on investment may be significantly lower if no regulations are enacted, even though overall efficiency is increased). Given that a major election year in the U.S. (2012) has almost arrived, companies will likely get no better sense of new regulations for some time. Given the current precarious position of the economy, which is teetering on the edge of a second recession, the federal government is unlikely to add new taxes and regulations in this area—at least until after the election. Even then, however, the failure of "green jobs" and "clean energy" to catch on as economic drivers likely means little popular support for more environmental regulations. Furthermore, controversies surrounding climate-change data have brought the entire matter of the danger of carbon emissions into question. (Granted, climate change proponents like to say that "the science is settled"; but why do they keep having to repeat themselves if that's the case?) The change in the U.K.'s CRC from a complicated cap-and-trade scheme to a simpler tax seems rather benign, but it may signal a change in attitudes regarding climate change and the concomitant government regulations.
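To see why such a scheme matters so much to data center operators, consider a rough, illustrative calculation of what a per-tonne allowance or tax price means for a facility's budget. Every figure in the sketch below (consumption, grid carbon intensity, allowance price) is a hypothetical assumption chosen for illustration, not an actual CRC rate or any real facility's data.

```python
# Back-of-the-envelope, CRC-style allowance cost for a data center.
# All values are hypothetical illustrations only.

annual_energy_mwh = 20_000        # assumed yearly consumption of a mid-size facility
grid_intensity_t_per_mwh = 0.5    # assumed tonnes of CO2 per MWh of grid power
allowance_price_per_tonne = 12.0  # assumed allowance price (GBP per tonne of CO2)

emissions_tonnes = annual_energy_mwh * grid_intensity_t_per_mwh
allowance_cost = emissions_tonnes * allowance_price_per_tonne

print(f"Emissions: {emissions_tonnes:,.0f} t CO2/yr")   # Emissions: 10,000 t CO2/yr
print(f"Allowance cost: GBP {allowance_cost:,.0f}/yr")  # Allowance cost: GBP 120,000/yr
```

Under these assumptions the allowance bill is a meaningful fraction of a facility's power spend, which is why the shape of the regulation (trading scheme versus flat tax) changes the payback calculation for efficiency investments.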
Currently, the data center industry in the U.S. is in a state of uncertainty, as the future of such regulations is in doubt. Because data centers consume so much energy—and because energy is such a large portion of data center budgets—carbon-emission regulation could be a tremendous blow to the industry. Despite the uncertainty, however, some signs seem to indicate that the climate-change bubble may be bursting. How long that process would take is unclear, and the federal government may institute regulations and/or a tax anyway.
<urn:uuid:ccbd062d-4078-4ccf-b29e-72aab208ef0b>
CC-MAIN-2017-04
http://www.datacenterjournal.com/is-carbon-on-the-way-out/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00418-ip-10-171-10-70.ec2.internal.warc.gz
en
0.9548
1,574
2.65625
3
Virtualization and Cloud Overview (e) - H5v - Course Length: 1 hour of eLearning NOTE: While you can purchase this course on any device, currently you can only run the course on your desktop or laptop. Mobile Communication Service Providers (CSPs) are on the cusp of a multitude of network and business transformation choices. A good conceptual understanding of the new networking and CSP business paradigms is essential for professionals in the communication industry. This course provides a high-level view of the impact and benefits of cloud infrastructure, the benefits of virtualization, the vision and opportunities created by future CSP networks, and an overview of the impact of the OpenStack cloud infrastructure on the service provider's network. The course is intended for anyone interested in understanding what OpenStack is and how it will transform the CSP network over the next few years. After completing this course, the student will be able to: • Identify the main elements of virtualization • List the key components of cloud Infrastructure as a Service (IaaS) • Describe the role of orchestration 1. Key Attributes of Cloud Computing 2. Virtualization 2.1. Why Virtualization? 2.2. A real-world example – Virtualization 3. Virtual Machine and Hypervisor 3.1. Virtual machine 3.2. The Hypervisor 3.3. Hypervisor defined 4. Functions of the Hypervisor 4.1. Functions of the Hypervisor 4.2. Networking in the virtual world 5. The Cloud 5.1. Why Cloud? 5.2. Multi-tenancy (users) in action 6. The Role of the Orchestrator 6.1. Cloud orchestration 6.2. Cloud orchestration defined 7. OpenStack IaaS 7.1. OpenStack IaaS 7.2. OpenStack release timeline 8. OpenStack Architecture 8.1. Conceptual architecture 8.2. OpenStack IaaS at a Service Provider 9. End of Course Assessment Create a flexible eLearning plan to purchase eLearning courses for one or more individuals, where course prices are discounted depending on the number of courses purchased.
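Since the outline stops at high-level bullet points, here is a small, self-contained sketch of the kind of placement decision a cloud orchestrator makes when a tenant requests a VM. This is not course material and not how OpenStack Nova's scheduler is actually implemented; the Host fields, first-fit policy, and capacity numbers are all simplifying assumptions chosen for illustration.

```python
# Toy orchestrator: first-fit placement of requested VMs onto
# hypervisor hosts. Real schedulers apply far richer filtering and
# weighing; this only conveys the basic orchestration idea.

from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_vcpus: int
    free_ram_gb: int

def place_vm(hosts: list[Host], vcpus: int, ram_gb: int) -> str | None:
    """Return the name of the first host that can fit the VM, or None."""
    for host in hosts:
        if host.free_vcpus >= vcpus and host.free_ram_gb >= ram_gb:
            host.free_vcpus -= vcpus   # reserve capacity on the chosen host
            host.free_ram_gb -= ram_gb
            return host.name
    return None  # no capacity anywhere: the request fails or queues

hosts = [Host("hv-01", 8, 32), Host("hv-02", 16, 64)]
print(place_vm(hosts, 4, 16))    # hv-01
print(place_vm(hosts, 8, 32))    # hv-02 (hv-01 no longer has room)
print(place_vm(hosts, 32, 128))  # None
```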
<urn:uuid:6bb54bc8-58aa-426e-ad3f-eb5037949295>
CC-MAIN-2017-04
https://www.awardsolutions.com/portal/elearning/virtualization-and-cloud-overview-e-h5v?destination=elearning-courses
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00170-ip-10-171-10-70.ec2.internal.warc.gz
en
0.827499
476
2.53125
3
The Federal Aviation Administration (FAA) this week put out a call for fuel producers to offer options that would safely let general aviation aircraft stop using leaded fuel by 2018. The FAA says there are approximately 167,000 aircraft in the United States, and a total of 230,000 worldwide, that rely on the current 100-octane, low-lead fuel for safe operation. It is the only remaining transportation fuel in the United States that contains tetraethyl lead (TEL), a toxic additive used to create the very high octane levels needed for high-performance aircraft engines. Operations with inadequate octane can result in engine failures, the FAA noted. The FAA said it wants the oil-producing industry to submit fuel-option proposals by July 14, 2014 that would transition those aircraft to unleaded fuel. The FAA said it will assess the viability of candidate fuels in terms of their impact on the existing fleet, their production and distribution infrastructure, their environmental and toxicological impact, and economic considerations. By Sept. 1, 2014, the FAA will select up to 10 suppliers to participate in phase-one laboratory testing at the FAA's William J. Hughes Technical Center. The FAA will select as many as two fuels from phase one for phase-two engine and aircraft testing. That testing will generate standardized qualification and certification data for candidate fuels, along with property and performance data. Over the next five years, the FAA will ask fuel producers to submit 100 gallons of fuel for phase-one testing and 10,000 gallons for phase-two testing. The FAA says it has already tested over 279 fuels in an attempt to find what it calls a "drop-in" alternative to leaded fuel, which would require no aircraft or engine modifications. "This week's request, however, responds to an Unleaded Avgas Transition Aviation Rulemaking Committee report to the FAA, which noted that such a drop-in unleaded replacement fuel is unavailable and may not be technically feasible," the FAA stated. The FAA says a new plan, the Piston Aviation Fuels Initiative (PAFI), will facilitate the development and deployment of a new unleaded avgas with the least impact on the existing piston-engine aircraft fleet. The FAA and industry-group leaders also recently formed the PAFI Steering Group (PSG) to facilitate, coordinate, expedite, promote, and oversee the PAFI.
<urn:uuid:c91b06d7-d94b-42c8-a586-d3e317f5150d>
CC-MAIN-2017-04
http://www.networkworld.com/article/2224782/data-center/faa-wants-all-aircraft-flying-on-unleaded-fuel-by-2018.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00078-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951712
486
2.6875
3