WiGig, or IEEE 802.11ad, is fully a part of Wi-Fi. It is Wi-Fi in 60 GHz. This year will be pivotal for WiGig, with the first access points and devices incorporating the technology. Next year will be a breakout year: several flagship smartphones with WiGig will ship in higher volumes, along with more PCs, access points, VR headsets, and other products with WiGig. Here’s why:
- Millimeter wave spectrum is hot. Advances in radio and antenna technology now allow this spectrum to be used for access, not just point-to-point connections. The complex methods of strengthening the signal in this spectrum are precisely what give it other advantages as well – narrow beams are formed between WiGig radios, which allow more WiGig connections to be used on the same frequency at the same time in the same vicinity. This is why 5G mobile technology will be designed around centimeter and millimeter wave spectrum.
- The 60 GHz band has a lot of spectrum available. The 2.4 GHz band is becoming increasingly crowded; it is used by proprietary technologies, Wi-Fi, Bluetooth, and ZigBee. This is why the Wi-Fi market has shifted to dual-band products – especially dual-band 802.11n/ac. The 5 GHz band has much more spectrum available – multiples of what is usable at 2.4 GHz – and the 60 GHz band has even more. Respectively, the amounts of spectrum available in the 2.4 GHz, 5 GHz, and 60 GHz bands are 60 MHz (three non-overlapping 20 MHz channels), about 600 MHz (this varies by country), and about 8 GHz (supporting 3 or 4 ultra-wideband channels in most countries; the sketch below works through this arithmetic).
- Ultra-wideband spectrum allows for much higher data rates. 802.11b, 802.11g, and 802.11n use 20 MHz channels in the 2.4 GHz band. In the 5 GHz band, 802.11n uses up to 40 MHz channels, 802.11ac uses up to 80 MHz channels, and 802.11ac Wave 2 can use up to 160 MHz channels. Wider channels allow for much higher data rates. In the 60 GHz band, 802.11ad uses channels that are about 2 GHz wide. Wi-Fi has gone ultra-wideband. The same is true for 5G.
- WiGig products are ramping up. More brands and models of PCs can be ordered with WiGig as an option. Access points with WiGig are hitting the market. Smartphones with WiGig are imminent. Corded VR headsets need WiGig to ditch the cord, and several WiGig chipset vendors are in discussions with all of the VR gear vendors.
- Many chipset vendors will support this market. Not all of them have chipsets ready yet, as Intel, Nitero, Peraso, SiBEAM (a part of Lattice Semiconductor), and Qualcomm do, but more will be ready soon, including Broadcom, Marvell, MediaTek, and others.

To sum it up, WiGig will use the 60 GHz band, where multiple ultra-wideband channels are available, and provide Wi-Fi with faster data rates and, more importantly, a massive increase in network capacity. The ecosystem for WiGig spans the mobile, PC, and consumer electronics industries across the consumer, enterprise, and service provider markets. It is supported by a number of large and small WiGig chipset vendors and is part of the bigger Wi-Fi ecosystem. Its increasing availability in portable PCs has provided a base for the technology, and 2016 becomes a pivot year as Wi-Fi access points and smartphones enter the picture. The foundation is being laid in 2016 for 2017 to be a breakout year for WiGig.
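As a quick sanity check on the channel math above, here is a small Python sketch tallying the usable spectrum and the widest common channel width in each band. The figures are the approximate ones quoted in this post and vary by regulatory domain, so treat the channel counts as rough.

```python
# Rough spectrum comparison using the figures quoted above.
# Usable spectrum and channel widths are approximate and vary by country.
BANDS = {
    # band: (usable spectrum in MHz, widest common channel in MHz)
    "2.4 GHz": (60, 20),      # three non-overlapping 20 MHz channels
    "5 GHz":   (600, 160),    # ~600 MHz; 802.11ac Wave 2 allows 160 MHz
    "60 GHz":  (8000, 2160),  # ~8 GHz; 802.11ad channels are ~2.16 GHz wide
}

for band, (spectrum_mhz, channel_mhz) in BANDS.items():
    channels = spectrum_mhz // channel_mhz
    print(f"{band}: {spectrum_mhz} MHz usable -> "
          f"{channels} x {channel_mhz} MHz channels")
```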
ABI Research details the whole WiGig ecosystem, driving factors, and some of our market forecasts in our white paper on this topic here: https://www.abiresearch.com/pages/mu-mimo-and-802-11ad/
GPS-tagged toucans solve how seeds spread through rainforests
By Kevin McCaney - Aug 01, 2011

Researchers are using Global Positioning System backpacks on toucans to study how seeds are spread through tropical forests. And it turns out that they can wind up pretty far from the tree. The dispersal of seeds is important to understanding how forests work, “but is hard to study because tracking seeds is difficult,” the researchers, from the Smithsonian Tropical Research Institute in Balboa, Panama, write at ScienceDirect. So they attached a combined high-resolution GPS receiver and 3D-acceleration bird-tracking system to toucans that feed on the nutmeg-like Virola nobilis in a rainforest in Panama. The toucans process the outer shell of the trees’ seeds and spit out the hard seed inside, the researchers write. The birds are on the move while they're feeding, spitting out seeds after an average of 25.5 minutes. By tracking the birds’ movements via GPS and modeling their dispersal of seeds, the biologists found that toucans will disperse seeds about 145 meters from the tree, on average. They said that's about twice the distance previously thought. The researchers also found that the toucans were more active in the morning, suggesting that seeds from fruits that ripen in the morning would be spread farther. The backpacks attached to the toucans were lightweight and designed to fall off after 10 days, according to Wired.
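To make the modeling step concrete, here is a toy Monte Carlo sketch in Python — not the study's actual model. It pairs a random seed-retention time (the 25.5-minute mean comes from the article) with a simple random-walk movement model; the hop length and movement probability are purely hypothetical placeholders.

```python
import math
import random

MEAN_RETENTION_MIN = 25.5  # mean gut-retention time, from the article
STEP_MIN = 1.0             # one movement decision per minute (assumption)
MEAN_HOP_M = 35.0          # mean flight hop length in metres (assumption)
P_MOVE = 0.25              # chance the bird relocates each minute (assumption)

def dispersal_distance():
    """Distance from the parent tree when the seed is finally dropped."""
    retention = random.expovariate(1.0 / MEAN_RETENTION_MIN)
    x = y = t = 0.0
    while t < retention:
        if random.random() < P_MOVE:
            hop = random.expovariate(1.0 / MEAN_HOP_M)
            angle = random.uniform(0, 2 * math.pi)
            x += hop * math.cos(angle)
            y += hop * math.sin(angle)
        t += STEP_MIN
    return math.hypot(x, y)

samples = [dispersal_distance() for _ in range(10_000)]
print(f"mean dispersal: {sum(samples) / len(samples):.0f} m")
```

Kevin McCaney is a former editor of Defense Systems and GCN.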
Neither the Defense nor the Veterans Affairs department -- which operate the world’s largest electronic health records systems -- tracks treatments used for post-traumatic stress disorder, according to a report the Institute of Medicine issued last week. What’s more, Defense does not even know how many PTSD treatment programs it or the services provide.

Sandro Galea, professor and chairman of the department of epidemiology at Columbia University’s Mailman School of Public Health and chairman of the IOM committee responsible for the study, said, “DoD and VA offer many programs for PTSD, but treatment isn't reaching everyone who needs it, and the departments aren't tracking which treatments are being used, or evaluating how well they work in the long term.” He added, "In addition, DoD has no information on the effectiveness of its programs to prevent PTSD."

According to the report, Treatment for Post-Traumatic Stress Disorder in Military and Veteran Populations, “no single source within the DoD or any of the service branches maintains a complete list of such [treatment] programs, tracks the development of new or emerging programs, or has appropriate resources in place to direct service members to programs that may best meet their individual needs.” Defense and VA lack the capability to track the efficacy of PTSD treatment beyond prescriptions of psychotropic drugs, IOM reported. “The committee learned from the VA that it plans to add a template to its medical records to track psychotherapy progress notes. The committee does not know if the DoD has similar plans. Lack of a system to identify which treatments, other than pharmaceuticals, were provided to which patients, makes it difficult to determine the extent to which [cognitive processing therapy] or [prolonged exposure] therapy is being used at the local level and the outcomes of the treatments.”

IOM released its report as both Defense and VA grapple with the mental health costs of 11 years of war in Afghanistan and Iraq borne by an all-volunteer force of 1.4 million troops, many of whom have made multiple deployments since 2001. The Armed Forces Health Surveillance Center reported last week that 102,549 active-duty service members, or 8 percent of the force, have been diagnosed with PTSD since 2001. The institute said in its report that in fiscal 2010 VA treated 82,239 Afghanistan and Iraq veterans for PTSD, or 24.6 percent of all veterans from those two wars treated by VA. These figures do not include 191,501 veterans of the two wars who used community-based vet centers, many of whom sought help for PTSD, the report said.

The institute evaluated numerous forms of treatment that Defense and VA use. It said studies have shown both exposure therapy -- revisiting a traumatic event with a psychiatrist or counselor -- and cognitive therapy -- confronting negative thoughts about a traumatic event, such as survivor guilt -- are effective PTSD treatments, but need more research backup. IOM cautioned against prescribing drugs, particularly selective serotonin reuptake inhibitors such as Prozac, to treat PTSD. “The evidence was inadequate to determine the efficacy of SSRIs and all other drug categories in PTSD [treatment],” the report concluded. The report also said there was a lack of empirical evidence to support therapies such as massage, biofeedback, yoga and acupuncture for PTSD treatment, which the Army surgeon general suggested in April.
VA should expand its telehealth technologies to treat rural veterans who live far from department hospitals or clinics, IOM recommended.
Nowadays DVI and HDMI connections are more and more widely used in digital signage, and we are often asked which is better: DVI, HDMI, VGA, or even SDI? There is no clear answer to this question, but first, there is some background about these interfaces we need to know.

DVI and HDMI are the same as one another regarding image quality and resolution. The main differences between HDMI and DVI are that HDMI carries both audio and video, and that HDMI uses different types of connectors. Both HDMI and DVI use the same encoding technology, and for that reason a DVI source can be connected to an HDMI connector or display with a DVI/HDMI cable, with no signal converter.

SDI is a professional video transmission interface, commonly used in broadcast video equipment. SDI and HDMI both support uncompressed digital video transmission, but they have different applications, so they do not conflict. HD-SDI can reach up to 1080i, and the latest version, 3G-SDI, can reach up to 1080p.

VGA is an analog signal type; the video output interface is a 15-pin female socket, and the input interface is a 15-pin male plug.

What are the differences between DVI, HDMI, SDI and VGA?
DVI, HDMI, SDI and VGA are all video signals which support a variety of resolutions, but each one delivers the signal from source to display in a different way. The main difference is that DVI/HDMI deliver the signal in a digital format, while VGA is an analog format, delivering the signal not as a digital stream but as a set of varying voltages representing the red, green and blue components of the signal.

DVI, HDMI and VGA deliver signals as red, green, and blue color components, together with sync information. The DVI/HDMI standard delivers these along three data channels in a format called TMDS, which stands for “Transition-Minimized Differential Signaling”; the sync information is carried within the TMDS channels. VGA is delivered almost similarly, with the color information split up three ways as separate analog red, green and blue signals, while the horizontal and vertical sync pulses are carried on their own dedicated lines.

SDI is divided into SD-SDI and HD-SDI, and it has become the main connection in high-definition digital television production equipment. SDI cannot transmit compressed digital signals directly: digital video recorders, hard disks and other devices that record compressed signals must decompress the material before it enters the system over SDI. Repeated compression and decompression degrades image quality.

HDMI, DVI, SDI and VGA signal types are fundamentally similar, because they break up the image in similar ways and deliver the same type of information to the display. How they differ, as we’ll see, depends to a great extent upon the particular characteristics of the source and display devices, and upon the cabling as well.

What is better, digital or analog?
Digital signal transfer is assumed to be error-free, while analog VGA signals are always subject to some amount of degradation and information loss. There is an element of truth to this argument, but it tends to backfire in real-world testing.
First, there is no reason to see degradation of an analog VGA signal in a digital signage installation where the distance between the player and the screen is short. A digital signage installation in a large retail or education facility, for example, can present a challenge for analog cabling. But it is a flawed assumption to suppose that digital signal handling is always error-free. DVI and HDMI signals aren’t subject to error correction the way a file download is; once information is lost, it’s lost for good. That is not a consideration with well-made cable over short distances, but it can easily become a factor at long distances.

SDI and HDMI both support uncompressed digital video transmission, but the differences are that HDMI needs a 19-conductor cable while SDI needs just a single 75Ω coaxial cable. HDMI currently transmits only 40 meters at most, while SDI can transmit 120-400 meters.

The Answer: It Depends
So, which is better: DVI, HDMI, SDI or VGA? The answer is unsatisfying, but the truth is that it depends. It depends upon your source player, type of cable, type of display and the distance, and there’s no good way, in principle, to say in advance whether the digital or the analog connection will draw a better picture. You may find that one digital signage player looks better through its DVI or HDMI output, while a different player outputs a better image through its VGA (RGB) cable, and SDI outputs better video than either, all on the same display. Some installers report ghosting or blurriness with text over analog, where switching to a digital interface made things much crisper. On the other hand, other installers who used long DVI/HDMI cables found that the text was blurry, colors were off and the image didn’t scale to fit correctly, yet had no issues at all with VGA.
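For a rough feel for the bandwidths involved on a DVI/HDMI link, here is a back-of-the-envelope Python sketch of the TMDS line rate. It counts active pixels only — real links also carry blanking intervals (and, for HDMI, audio) — so treat the figures as lower bounds.

```python
# Back-of-the-envelope bit rate for an uncompressed digital video link.
# TMDS encodes each 8 data bits as 10 line bits, so the wire rate is
# 10/8 of the pixel payload. Blanking intervals and audio are ignored.
def tmds_line_rate_bps(width, height, refresh_hz, bits_per_pixel=24):
    payload = width * height * refresh_hz * bits_per_pixel  # active pixels only
    return payload * 10 / 8  # TMDS coding overhead

for name, (w, h, hz) in {"720p60": (1280, 720, 60),
                         "1080p60": (1920, 1080, 60)}.items():
    print(f"{name}: ~{tmds_line_rate_bps(w, h, hz) / 1e9:.2f} Gbit/s "
          f"on the TMDS lanes")
```

Rates of this order are why cable quality matters far more for long digital runs than for short ones.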
Lack of regulation has contributed much to the success of the Internet, and made it a hotbed for new ideas. But there are some things that should be regulated and enforced in order for it to remain just that, and net neutrality is one of them, says the European Commissioner for Digital Agenda, Neelie Kroes.

“The 2011 study by European regulators showed that, for many Europeans, online services are blocked or degraded – often without their knowledge. For around one in five fixed lines, and over one in three mobile users,” she said on Tuesday while addressing the European Parliament in Brussels. “It is obvious that this impacts consumers, but start-ups also suffer. Because they lack certainty about whether their new bright ideas will get a fair chance to compete in the market.”

She pointed out that ISPs have legitimate reasons for managing traffic, such as data congestion and anti-spam efforts. But the fact is that different users have different network needs, she says, and they should be guaranteed to get the speeds and the quality of Internet services they paid for.

Kroes also shared that she will be putting forward proposals to the College of Commissioners, which will aim to assure that citizens get “the fairest deals, the most choice, the best new services over the fastest networks,” keep the Internet open, and provide ISPs with incentives to improve the infrastructure. These proposals will address the need for:
- Transparency – no more complex contracts with hidden fees and unclearly defined obligations for the ISPs
- Users having a real possibility of choice when it comes to selecting an ISP – no more countless barriers to ISP switching, or the automatic extension of the contract
- Fair competition as a way to spur innovation – “Services like VoIP or messaging services – like Skype or WhatsApp – offer real innovation for consumers,” says Kroes. “But some ISPs deliberately degrade those services, or block them outright, simply to avoid the competition.”

But she is aware that the telecoms single market is far from complete, and that a failure to take coordinated action on net neutrality would shatter the fragile construction. “If we don’t address net neutrality, wider problems will arise and tomorrow’s innovative services might have to stop at the border,” she concluded.
Standard electrical equipment often creates internal sparks that can ignite flammable substances in hazardous environments, where flammable materials are handled and any leak or spill has the potential to form an explosive atmosphere. In these areas, it’s essential that mobile devices are rated either non-incendive or Intrinsically Safe (I-Safe). But what do these terms mean, and how do you know whether you need either of these rated mobile devices?

Intrinsic Safety is a protection technique for safe operation of electronic equipment in explosive atmospheres. A device termed intrinsically safe is designed so that it contains no components that generate sparks or hot surfaces under any type of electrical fault that could cause an ignition. The National Electrical Code classifies hazardous locations by class and division, and mobile devices used in these environments must carry a corresponding rating.

In tomorrow’s post, we’ll cover the Intrinsically Safe definitions and standards. Contact DecisionPoint Systems, Inc. to learn more about non-incendive and Intrinsically Safe (I-Safe) mobile devices and what you need to protect your employees.
GE’s IoT technologies are helping gas and coal power plants to become more efficient.

Ideologically, many people working within the tech sector are looking for the energy industry to pivot for good, move away from fossil fuels, and instead focus on greener, renewable alternatives. The realists, though, will probably admit the need in the short term to simply make the process as clean and efficient as possible, especially in developing countries where such projects are only just getting underway.

GE and IoT
In the last year, digital industry solutions specialist GE has launched digital power plant systems for gas and coal plants. In new plants, GE’s technologies have increased the average conversion efficiency from 33 percent to 49 percent. For the longer-standing coal plants, efficiency improvements are substantially smaller, although emissions of greenhouse gases can be reduced by 3 percent. These efficiency gains come about through a clever blend of Internet of Things (IoT) technology and active monitoring. Optimising fuel combustion, tuning the plant to adjust to the properties of the coal being burned, adjusting the oxygen levels in the boiler, and reducing downtime due to equipment failures all have an impact.

Lana Ginns, marketing manager at flare gas specialists Fluenta, highlighted the importance of smarter processes across the industry. “Access to precise, real-time information on the amount of gas flared at different sites can be compared to more effectively manage the flaring process – site to site, country to country, or process to process – enabling continuous improvement based on best practice from top performing (low emission) sites,” she told Internet of Business. “Information can be presented on a dashboard for real-time analysis, enabling a business to reduce workforce costs, increase employee safety, reduce carbon tax obligations and provide significant environmental benefits by reducing emissions and fossil fuel waste.”

Connected power plants help integrate renewables in the longer term
Increasingly connected power plants, although on the one hand extending the life of fossil fuels, can actually help to integrate renewables in the long term. As outlined in the MIT Technology Review, this is made possible because smarter plants are more flexible and better able to respond to fluctuations in the power supplied by intermittent sources like wind and solar. In short, more connectivity means more efficiency, which allows fossil fuel power plants to go from zero to one hundred percent output in less time than ever before.

Speaking with MIT Technology Review, Scott Bolick, head of software strategy and product management at GE, suggested that it’s now a case of making power plants as efficient as possible. He said “In places like China and India, they’ve already locked in plans to build brand new coal power plants, and those plants are going to be on the grid for 30 to 40 years. We look at it as our responsibility to make sure those plants are as sustainable as they can possibly be.”
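For a sense of scale, the efficiency figures above translate directly into fuel and emissions savings, since fuel burned per unit of electricity scales inversely with conversion efficiency. A minimal sketch, using only the 33 and 49 percent figures from the article:

```python
# Fuel (and CO2) per kWh scales as 1 / efficiency, so the improvement
# from 33% to 49% conversion efficiency cuts fuel use per kWh by ~33%.
old_eff, new_eff = 0.33, 0.49
fuel_reduction = 1 - old_eff / new_eff
print(f"fuel and CO2 per kWh cut by ~{fuel_reduction:.0%}")
```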
Researchers at MIT’s Lincoln Lab have developed a radar system that can “see” through concrete walls up to eight inches thick. Developed with an eye toward military applications, the device, it’s claimed, can operate from 60 feet away. Using wavelengths similar to those employed by Wi-Fi, the device yields real-time video of moving objects behind walls. Currently, the system displays moving things – such as people – as blobs, which researcher Gregory Charvat said in an MIT news release “requires a lot of extra training” to understand. But Charvat and his colleagues are working on enhancements to improve the images.
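For context on “wavelengths similar to those employed by Wi-Fi”, a quick Python check of what that means physically, using wavelength = c / frequency:

```python
# Wavelength at common Wi-Fi frequencies: lambda = c / f.
C = 299_792_458  # speed of light, m/s

for ghz in (2.4, 5.0):
    wavelength_cm = C / (ghz * 1e9) * 100
    print(f"{ghz} GHz -> ~{wavelength_cm:.1f} cm wavelength")
```

Centimeter-scale waves in this range penetrate concrete far better than the millimeter waves used by, say, airport scanners, which is what makes through-wall imaging plausible at all.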
In the World Economic Forum’s 2015 edition of Global Risks, cyber-attacks were specifically cited as a clear and present danger to business and government. The report stated, “2015 differs markedly from the past, with rising technological risks, notably cyber attacks…”. We’re going to shed some light on this dark art and show you some of the more important aspects of managing cyber risk.

What is cyber risk?
According to National Institute of Standards and Technology (NIST) Special Publication 800-30, “Risk is a function of the likelihood of a given threat-source’s exercising a particular potential vulnerability, and the resulting impact of that adverse event on the organization.” The Information Systems Audit and Control Association (ISACA), in its Risk IT Framework, defines it as, “The business risk associated with the use, ownership, operation, involvement, influence and adoption of IT within an enterprise.” To boil it down into its simplest terms, “Risk is the likelihood that something bad will happen.”

What is the risk landscape?
Risks can be categorized into three broad areas:
- Unauthorized access of IT systems for the purposes of theft, industrial espionage, extortion or embarrassment.
- An unintentional breach by staff, contractors or vendors.
- Operational risk through improper systems integration, e.g., poor legacy integration or inadequate testing during mergers and acquisitions.

Who sets the risk appetite for a company?
The company’s Risk Committee can be organized at the executive or board level, or use a hybrid approach. It is responsible for the risk management policies of the company and oversight of the risk management program, which includes determining the risk appetite, risk management, compliance framework and the supporting governance structure. The committee should also have the resources and authority appropriate to carry out its defined duties. (See Why written policies are vital to your cyber strategy.)

How is risk determined?
Determining risk starts with two methods, qualitative and quantitative. According to NIST SP 800-30, qualitative risk analysis assesses risks based on non-numerical categories (e.g., very low, low, moderate, high, very high). The advantage of a qualitative approach is that it is easier to communicate the risk to a broader audience. This method may also find risks and inter-dependencies not identified by other methods. The disadvantage is that a number of subject matter experts can be shown the same data and not reach a consensus. Also, everyone, including subject matter experts, is prone to cognitive bias. Simply stated, cognitive bias is the tendency of people to color their perception by filtering it through their own experiences, prejudices, likes, and dislikes.

Quantitative risk management may be defined as “a numerical scoring or rating which is assigned through verified mathematical modeling using high-quality data.” This type of mathematical modeling enables the company to make cost-effective investments in security technology and reduce cyber risk. The downside to this methodology is that it can require a significant investment of time and resources. The general practice is to use a qualitative risk analysis to feed the quantitative risk management process. Used properly, these two methods of risk analysis are codependent. One of the most important steps in the post-production process of both methodologies is to test the results and feed them back into the next round.
Risk management is a continuous process of development and refinement as the company changes, grows, and moves in new directions. Simply stated, it is a journey without an end.

What does the company do with risk?
Risk acceptance does not reduce the effects of risk; however, it is still considered a risk strategy. This is a common option when the cost of other risk management strategies, such as avoidance or mitigation, may outweigh the cost of the risk itself. Why deploy an expensive countermeasure where there is a low likelihood of loss? Though caution should be taken when using this strategy, there is legal precedent. In the United States v. Carroll Towing Co. 159 F.2d 169 (2d. Cir. 1947) decision from the 2nd Circuit Court of Appeals, Judge Learned Hand proposed a test to determine the standard of care for the tort of negligence. Simply stated, the ruling asserts:
- If (Burden < Cost of injury × Probability of occurrence), then the accused will not have met the standard of care required.
- If (Burden ≥ Cost of injury × Probability of occurrence), then the accused may have met the standard of care.

Risk avoidance is a risk management strategy that seeks to eliminate the possibility of risk by avoiding engaging in activities that create exposure to risk. The downside to risk avoidance is that it can limit a company’s opportunities.

Risk mitigation is the most common risk management strategy used by businesses. This strategy limits a company’s exposure by utilizing countermeasures, processes, and policies. An example of risk mitigation would be a company determining that a network may fail or become over-utilized and deploying a cloud-based solution that provides redundancy and scalability.

Risk transference is the strategy of assigning risk to a third party. This usually takes the form of assigning the risk to a vendor and/or utilizing cyber-risk insurance. In the case of a vendor, this can be beneficial to a company by transferring a risk function that is not a core competency. Cyber-risk insurance can assist a company in limiting the financial impact of a cybersecurity breach. However, insurance companies will closely inspect the company’s information security management and cyber-risk programs for sufficiency. Cyber insurance will not mitigate the impact of reputational damage, nor does it transfer regulatory compliance liability.

A corporate cyber-risk strategy is critical to good governance by the board and senior management. The board and executive level risk committees, so prevalent in the finance and insurance industry, will proliferate outward into every corner of the private sector. This is especially true as we see a tougher legislative and regulatory compliance environment on the horizon.
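A minimal sketch of how the Hand test above might be applied as a quantitative risk-acceptance check: compare the cost (burden) of a countermeasure against the annualized expected loss it would prevent. All the dollar figures and the probability below are hypothetical.

```python
# Learned Hand test applied to risk acceptance vs. mitigation.
# Expected loss = cost of injury x annual probability of occurrence.
def hand_test(burden, loss_if_breached, annual_probability):
    expected_loss = loss_if_breached * annual_probability
    if burden < expected_loss:
        return "mitigate: the countermeasure costs less than the expected loss"
    return "acceptance may be defensible: burden >= expected loss"

# Hypothetical example: a $50k control vs. a $2M breach expected
# roughly once every 20 years (annual probability 0.05).
print(hand_test(burden=50_000,
                loss_if_breached=2_000_000,
                annual_probability=0.05))
```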
Fast NASA Action Begets World's Largest Linux Supercomputer
Mar 31, 2005 5:00 AM PT

The space shuttle Columbia disaster on Feb. 1, 2003 sparked development of the world's second largest supercomputer, a system with 10,240 Intel Itanium 2 processors capable of performing 51.87 trillion calculations per second. Not only did the supercomputer, appropriately dubbed Columbia, stretch Linux's performance boundaries, but it also strained NASA's installation capabilities and the manufacturing ability of the hardware supplier, Silicon Graphics Inc. That's because the supercomputer went from design to deployment in 120 days rather than the years typically associated with building such a complicated system. The rapid installation schedule stemmed from the U.S government's Return to Flight initiative, which was launched with the goal of taking the steps necessary to ensure that future shuttle flights would be safe. "Because another shuttle launch was set for the end of the year, we quickly needed to put a computer system in place that could provide us with information about the impact of any problems with the take off or any of the equipment," stated Dr. Walt Brooks, division chief at the NASA Advanced Supercomputing center.

Need for Speed
As a result, governmental bureaucratic red tape, which typically adds years to the purchasing process, was cast aside and a team was assembled to select a new supercomputer. At the time, NASA was thinking about adding a supercomputer with 10 teraflops to 15 teraflops of computing power to its arsenal. "We were running into performance problems with some of our more sophisticated applications," explained NASA's Brooks. The agency examined linking thousands of dual-processor commodity servers into a sprawling cluster, but that approach did not mesh with the scientific applications that the agency ran. Instead, the agency opted for a large multiprocessor system. NASA had worked with SGI to build Kalpana, a 512-processor system named in honor of Kalpana Chawla, a NASA astronaut lost in the Columbia accident. Satisfied with the results from that project, the agency turned to SGI to build the new system. Once the bastion of proprietary supercomputers, the high performance computing market had been shifting to more commodity components and open-source software, like Linux -- changes that help to lower the cost of these multi-million dollar systems.

Once the system design was established, SGI and NASA had to build the new supercomputer. Since the installation timeframe was unprecedented, Dick Harkness, vice president of SGI's manufacturing facilities, who oversees about 200 employees, threw out all of the company's traditional project management techniques. "We literally put new business processes in place as the project unfolded," he told LinuxInsider. "It was obvious that the established ones would not be able to scale up and support the enormity and complexity of the tasks involved with delivering the new system." SGI had perfected the process of building two supercomputer processor nodes simultaneously and had to increase that number so that six were being constructed at one time. "We had to continue filling orders for our other customers as we put the NASA system together," noted Harkness. To meet its shipment goals, the company expanded its manufacturing facility and brought in scores of outside contractors. "Many employees volunteered to work nights and weekends in order to help us meet our production deadlines," Harkness said.
Is the Pipeline Full?
Keeping things moving meant ensuring that there were adequate components in the pipeline. "We knew what we needed to deliver the system, but what we learned was we needed more insight into our vendors' supply chains, so we could see how likely, or unlikely, it was that they could deliver needed components to us on time," SGI's Harkness explained. As a result, the firm took an interest in items like cabling and LED assembly, which before were of little to no concern.

There were also technical challenges. Squeezing more than 10,000 processors into NASA's supercomputing room in Mountain View, Calif., meant Columbia had to incorporate eight 512-processor nodes configured in a new high-density, high-bandwidth version of the SGI Altix 3000 system. Also, a 440-terabyte SGI InfiniteStorage system was used to store and manage the terabytes of new data that would be generated every day. Because the supercomputer was so vast and the circuitry so dense, the hardware supplier had to improve its cooling system. To ensure that the new system would not overheat, SGI designed new water-cooled doors -- the first time such features were offered on systems other than Cray supercomputers. NASA also had to retool its operation. The agency had to redo the plumbing for its supercomputer water cooling system so it could handle the heat thrown off by the new system. The agency's supercomputing space had to be reconfigured without disturbing existing users, and the agency had to prepare to support a larger number of end users.

Live Beta Testing
Surprisingly, these problems proved more difficult than the design and deployment of the supercomputer itself. SGI completed basic quality assurance tasks, but the devices could not be fully tested until they were installed at NASA. In effect, NASA employees used beta versions of new hardware that were less than a week old, and in some cases, the processors were running only 48 hours after they had left the assembly floor. This beta testing model was quite different from the typical testing process, which takes months, sometimes even years. "We were concerned about software installation and application compatibility but fortunately encountered no major problems," said Jeff Greenwald, senior director of marketing at SGI.

The SGI Altix 3700 supercomputer presented NASA with a significant performance boost. The supercomputer relies on industry-standard 64-bit microprocessors running Linux, and each node scales up to support 256 processors with 3 TB of memory. Round-trip data transmission can take as little as 50 nanoseconds, and the supercomputer sustains 42.7 trillion calculations per second on 16 of its 20 systems, an 88 percent efficiency rating based on the LINPACK benchmark.

The added horsepower has made it easier for NASA to complete certain tasks. Hydrogen gas flow chamber simulations in the space shuttle propulsion systems can now be done in days instead of weeks. New applications include earth modeling, space science and aerospace vehicle design, so scientists are now able to more easily map global ocean circulation, predict large scale structures in the universe, and examine the physics of supernovae. Use of the system is expected to expand. "As word about the processing power we now have has spread, scientists have found new applications that can take advantage of it," concluded NASA's Brooks. "We are quite busy, not as busy as we were during the installation, but still there is plenty for us to do."
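The 88 percent efficiency figure can be roughly reproduced from the numbers in the article. In the Python sketch below, the 42.7 Tflop/s sustained (Rmax) figure and the processor counts come from the article; the 1.5 GHz clock and 4 flops per cycle for Itanium 2 are assumptions used to estimate peak (Rpeak).

```python
# LINPACK efficiency = sustained rate / theoretical peak.
procs = 16 * 512          # 16 of the 20 systems benchmarked, 512 CPUs each
clock_ghz = 1.5           # assumed Itanium 2 clock speed
flops_per_cycle = 4       # Itanium 2: two fused multiply-add units

rpeak_tflops = procs * clock_ghz * flops_per_cycle / 1000
rmax_tflops = 42.7        # sustained LINPACK figure from the article

print(f"Rpeak ~{rpeak_tflops:.1f} Tflop/s, "
      f"efficiency ~{rmax_tflops / rpeak_tflops:.0%}")  # close to the quoted 88%
```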
Subsea is a term that refers to the equipment, methods, and technology employed in undersea geology, underwater mining, offshore oil & gas development, and marine biology. In the oil & gas industry, the term relates to the drilling, exploration, and development of oil & gas fields in underwater locations, usually at shallow-water and deep-water depths. Subsea equipment is usually used at deep-water depths for the exploration and production of oil & gas. This equipment includes subsea umbilicals, risers & flowlines (SURF), subsea manifolds, subsea trees, subsea boosting systems, subsea separation systems, subsea injection systems, subsea compression systems, and so on.

Latin America is one of the major markets for subsea equipment. Rising offshore exploration activity, along with increasing capital spending, drives the Latin American subsea equipment market, which is expected to grow at a CAGR of approximately 8% from 2014 to 2019. Brazil, Venezuela, and Argentina are the major oil producers in the region. Due to the discoveries of new oil reserves and growing energy demand, exploration & production activities are increasing in the region, and with onshore fields maturing, attention is shifting toward offshore exploration.

The market is segmented and forecast based on type and application. On the basis of type, the market is segmented into subsea production systems, subsea processing systems, subsea dredging, trenching & excavation equipment, and others. The applications included in this report are upstream oil & gas and midstream oil & gas. The market share analysis, by revenue, of the top companies has also been included in the report. The market share analysis of these key players is arrived at based on key facts, annual financial information, and interviews with key opinion leaders such as CEOs, directors, and marketing executives. In order to present an in-depth understanding of the competitive landscape, the report on the subsea equipment market in Latin America consists of company profiles of key market players. This report also includes the market share and value chain analyses, along with market metrics such as drivers and restraints.
According to the American Bar Association's 2008 Legal Technology Survey, laptop use among lawyers has risen from 69% to 83% within the last year. As lawyers become increasingly mobile in their work habits, thieves have access to more devices full of confidential client information. IT should protect all of the firm's notebook PCs and removable storage through data encryption.

Levels of Encryption: Disk, Folder, or File
Encryption tools typically provide one or more of the following three encryption types:
- Full-disk. Everything but the master boot record is encrypted. This encryption level is the most secure, because it leaves the thief with no access to any usable data.
- Folder-based. This form of encryption designates a folder as a protected container. All files and subfolders therein are encrypted until authentication is completed. In most cases, authentication for decryption is tied to system login, so no additional passwords are required.
- File-based. These systems encrypt individual files, providing the most granular level of protection. Each file can be protected with its own password, which increases the risk of password confusion or loss.
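As an illustration of the file-based approach, here is a minimal sketch using the Python cryptography package's Fernet recipe (authenticated symmetric encryption). The file name is hypothetical, and this is a sketch of the concept rather than any commercial product's implementation.

```python
# File-based encryption sketch using the "cryptography" package.
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, derive and store this securely
fernet = Fernet(key)

# Encrypt an individual file (hypothetical file name).
with open("client_brief.docx", "rb") as f:
    ciphertext = fernet.encrypt(f.read())
with open("client_brief.docx.enc", "wb") as f:
    f.write(ciphertext)

# Decryption requires the same key; losing it means losing the file,
# which is exactly the per-file "password loss" risk noted above.
plaintext = fernet.decrypt(ciphertext)
```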
NOAA upgrades rescue system
Digital 406 MHz frequency helps satellite network locate boats, planes and individuals
By William Jackson - Feb 09, 2009

A satellite network that has helped rescue more than 6,000 people in the United States since 1982 has moved to a new digital system that gives responders even more reliable information about the locations of planes, boats and individuals in distress. As of Feb. 1, the National Oceanic and Atmospheric Administration no longer processes 121.5 MHz emergency beacon signals on the weather satellites that are part of the Search and Rescue Satellite-Aided Tracking (SARSAT) system. Under an international agreement, SARSAT now receives only 406 MHz signals from Earth-based beacons. The new digital signals provide more information and can determine location to within 5 kilometers, compared to 18 kilometers for the old, analog signals. The digital signals are transmitted at 5 watts, compared to a fraction of a watt for the analog signal, and there is less noise and interference in the 406 MHz band, reducing the number of false alarms.

“It is vital that anyone with an old 121.5 MHz beacon make the switch to 406 MHz immediately so their distress signals can be heard,” said Chris O’Connors, NOAA's SARSAT program manager. “Plans for this changeover started in 2000, and we want everyone who relies on these devices to have the proper equipment.”

All aircraft, large boats and some individuals who travel in remote areas use the beacons. The change probably will not pose a problem for most users on land and water, said Shawn Maddock, NOAA’s SARSAT support officer. Personal locator transmitters have operated in the 406 MHz band since they were authorized in 2003, and manufacturers have phased out the old frequencies on the Emergency Position Indicating Radio Beacons used on boats. Now only models using the newer frequency can be sold, at the direction of the Coast Guard, which enforces beacon requirements for boats. However, planes could be another matter. About 270,000 aircraft in the United States are registered as using emergency locator transmitters, but because no agency enforces aircraft beacon requirements, only about 30,000 use the 406 MHz equipment.

SARSAT is a joint effort with Russia, Europe and India. In the United States, the system piggybacks on two Geostationary Operational Environmental Satellites operated by NOAA: GOES East, which is stationed above the equator at 75 degrees west longitude, and GOES West, which sits at 135 degrees west longitude and the equator. SARSAT also relies on five Polar Operational Environmental Satellites, which circle the Earth in polar orbits. The geostationary satellites give continuous coverage of most of the Western Hemisphere. The polar satellites give global coverage but only intermittently in any given area. When activated, beacons ping the satellites every 50 seconds. The polar satellites use Doppler frequency shift to establish the location of an emergency signal. However, because the geostationary satellites are not in motion, they cannot establish the location of a signal. But they can receive signals immediately from almost anywhere in the United States. If a geostationary satellite is the first to receive a signal, Maddock said, NOAA can retrieve the beacon owner’s registration information from its database and alert local authorities. A polar satellite capable of determining the location of the signal would pick it up within 48 minutes, so rescuers would know where to target their efforts.
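As a rough illustration of the Doppler technique just mentioned: the received beacon frequency shifts as the satellite passes overhead, and the size and timing of that shift constrain the beacon's position. The Python sketch below estimates the maximum shift for a 406 MHz beacon; the orbital speed is a typical low-Earth-orbit value, not a SARSAT specification.

```python
# Maximum Doppler shift seen by a LEO satellite from a 406 MHz beacon:
# shift = f * v_radial / c, largest when the satellite moves directly
# toward or away from the beacon.
C = 299_792_458.0   # speed of light, m/s
F_BEACON = 406e6    # beacon frequency, Hz
V_SAT = 7_500.0     # approximate LEO orbital speed, m/s (assumption)

max_shift_hz = F_BEACON * V_SAT / C
print(f"max Doppler shift: ~{max_shift_hz / 1000:.1f} kHz")  # ~10 kHz
```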
In the United States, six NOAA ground stations — in Maryland, Florida, California, Alaska, Hawaii and Guam — receive distress signals, which are relayed to NOAA’s Mission Control Center in Suitland, Md. If the beacon is registered to a foreign user, the signal is also sent to that country’s mission control center. In the United States, NOAA’s mission control alerts the Air Force for land rescues and the Coast Guard for water rescues. William Jackson is a Maryland-based freelance writer.
Many people are confused by Cisco’s NAT/PAT naming conventions, such as “inside local”, etc. Since a picture is worth a kiloword, I thought that perhaps a few diagrams might help.

Let’s say that our host H1 has IP address 10.1.1.1/24, an RFC 1918 private address. We want to cross the Internet to access host H2, which has the public IP address 18.104.22.168, as shown in Figure 1. Because we can’t advertise RFC 1918 addresses to the public Internet, in order to make this work we’re going to perform PAT (overloading) on R1, our edge router. Let’s assume that our internal LAN, with address space 10.1.1.0/24, is connected to R1’s Ethernet 0 interface. We’ll be overloading on Serial 0, which has the IP address 22.214.171.124, in public space. To accomplish the translation, the commands would be:
- R1(config)#access-list 1 permit 10.1.1.0 0.0.0.255
- R1(config)#ip nat inside source list 1 interface serial 0 overload
- R1(config)#int e0
- R1(config-if)#ip nat inside
- R1(config-if)#int s0
- R1(config-if)#ip nat outside

Figure 2 summarizes the situation at this point.

Inside and Outside: These refer to the physical location of the device whose address is being referenced. The physical locations were unambiguously defined by configuring the appropriate router interfaces with these commands:
- ip nat inside
- ip nat outside

In our example, R1 is doing the translation, with the “inside” interface attached to our LAN, and the “outside” interface on the WAN link to our service provider. Key point: Because of the way we have configured R1’s interfaces, “inside” is defined to mean “located on our side of R1”, and “outside” means “located on the far side of R1”.

Local and Global: These refer to the perspective (viewpoint) from which the address is being observed, not where the device is located. Key point: In our example, “local” is defined to mean “looking at it from our side of R1”, and “global” means “looking at it from the far side of R1”.

Yeah, yeah…but what about “inside local”? The key to the jargon is to realize that it describes what we’re looking at, and from where, in that order. It’s location-perspective. In other words, “inside local” means that we’re looking at the address of an “inside” device (on our side of our router), from our “local” perspective (from our side of our router). Given the two locations (inside and outside), and the two perspectives (local and global), there are four possibilities:
- Inside Local
- Inside Global
- Outside Local
- Outside Global

We’ll discuss each of these in turn. Since humans can’t actually see the packets as they traverse the media, by “looking” we mean what we would “see” if we were to install protocol analyzers “locally” and “globally”, and examine the addresses within the packets’ IP headers. Since I can’t draw a protocol analyzer, in the diagrams I’ll use an “X-ray eyeball” that’s able to see the packet headers.

Inside Local: Viewing an “inside” device from the “local” perspective, as shown in Figure 3. This is how we see the IP address of H1 from our side of R1. In our example, the “inside local” address is 10.1.1.1, which is the actual address of H1.

Inside Global: Viewing an “inside” device from the “global” perspective, as shown in Figure 4. This is how the Internet sees the IP address of H1. In our example, the “inside global” address is 22.214.171.124, the translated address of H1 (the address of R1’s Serial 0 interface, since we’re overloading on that interface).

Outside Local: Viewing an “outside” device from the “local” perspective, as shown in Figure 5. This is how H1 sees the IP address of H2.
In our example, the “outside local” address is 18.104.22.168, which is not being translated.

Outside Global: Viewing an “outside” device from the “global” perspective, as shown in Figure 6. This is how the Internet sees the IP address of H2. In our example, the “outside global” address is 18.104.22.168, which is H2’s actual address. Since in our example we’re not translating the destination address on the way out (nor the source address on the way back), the “outside local” and “outside global” addresses are identical.

Figure 7 shows our current situation, including the four combinations of location and perspective, along with the corresponding addresses.

Key point: It’s possible that NAT/PAT is also being done on H2’s side, but there is no way we can tell that from our side. If so, they would have their own versions of inside/outside and local/global that have nothing whatsoever to do with ours. No matter what terms Cisco chose to use, it would boil down to the same location-perspective issue. So it is what it is, and that’s “what we’re looking at, and from where”. And, yes, I still often put my fingers on the diagram and talk to myself when figuring this stuff out. “Inside global…so we’re talking about the inside stuff” while my left hand is pointing to the LAN…“from the global perspective”…my right hand is pointing to the Internet, and sliding from right to left to show the direction I’m looking. By the way, although we used PAT (overloading) in our example, the “location-perspective” terms work the same way for static and dynamic NAT.
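If you prefer a table to finger-pointing, here is a tiny illustrative Python mapping of the four location-perspective combinations to this example's addresses. It's purely a mnemonic, not anything Cisco produces.

```python
# "What we're looking at, and from where": (location, perspective) -> address.
addresses = {
    ("inside",  "local"):  "10.1.1.1",         # H1 as seen from our LAN
    ("inside",  "global"): "22.214.171.124",   # H1 as the Internet sees it (PAT)
    ("outside", "local"):  "18.104.22.168",    # H2 as seen from our LAN
    ("outside", "global"): "18.104.22.168",    # H2's actual address (untranslated)
}

for (location, perspective), addr in addresses.items():
    print(f"{location:7} {perspective:6} -> {addr}")
```

Author: Al Friebe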
The report, titled “Supply Chain Decarbonization”, examines the role that the logistics and transport sector plays in reducing emissions, both in its own operations and by influencing shippers and buyers to undertake broader supply chain improvements. According to the report, logistics and transportation activities contribute approximately 5 percent of the 50,000 mega-tonnes* of carbon-dioxide emissions generated by all human activity annually. The report reviews 13 commercially viable opportunities for reducing supply chain carbon emissions—within the logistics and transport sector as well as across the extended supply chain—and assesses them according to carbon-dioxide abatement potential and feasibility to implement.

According to the report, the five opportunities with the greatest carbon-dioxide abatement potential and greatest implementation feasibility are:
• clean-vehicle technologies (175 mega-tonnes CO2 abatement potential);
• de-speeding the supply chain (171 mega-tonnes);
• packaging design initiatives (132 mega-tonnes);
• optimized networks (124 mega-tonnes); and
• energy-efficient buildings (93 mega-tonnes).

These five opportunities address emissions that originate within the logistics and transport sector and represent approximately one-half of the 1,440 mega-tonnes CO2 abatement potential presented by all 13 opportunities (the sketch below checks this arithmetic). While the remaining eight opportunities address emissions generated by shippers and buyers within their own operations, the report concludes that organizations in the logistics and transport sectors are in a position to influence shippers and buyers to collaborate across the extended supply chain in an effort to achieve the greatest de-carbonization impact.

“Clearly, the logistics and transport sector can contribute a great deal to the reduction of carbon emissions and obtain strategic business benefit from doing so,” said Narendra Mulani, managing director of Accenture’s Supply Chain Management practice. “However, the greatest strides will be achieved by collaborative end-to-end supply chain optimization that includes shippers and buyers in addition to logistics and transport providers.”

In addition to identifying the opportunities, the report also provides a number of recommendations, for logistics and transport providers as well as for shippers and buyers, to de-carbonize the extended supply chain. The recommendations for logistics and transport providers include:
• Adopting new technologies industry-wide;
• Improving training and communication industry-wide;
• Switching modes where possible;
• Developing recycling offerings;
• Developing home delivery offerings; and
• Promoting carbon offsetting of shipments.

Recommendations for shippers and buyers include:
• Understanding and reducing the carbon impact of manufacturing through alternative sourcing;
• Better planning to allow slower and better optimized transport;
• Reducing packaging materials;
• Improving carbon labeling, standards and auditing tools; and
• Increasing shared loading.

“This report makes clear the need to look strategically at the end-to-end supply chain to include all aspects of the product lifecycle, from raw materials to product disposal, when approaching the supply chain de-carbonization challenge,” said Sean Doherty, head of Logistics & Transport at the World Economic Forum.
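A quick Python check of the quoted figures, using only the numbers from the release above:

```python
# Verify the release's arithmetic: the top five opportunities vs. the
# 1,440 mega-tonne total, and the logistics sector's share of emissions.
top_five_mt = {
    "clean-vehicle technologies": 175,
    "de-speeding the supply chain": 171,
    "packaging design initiatives": 132,
    "optimized networks": 124,
    "energy-efficient buildings": 93,
}
total_mt = 1440

top_sum = sum(top_five_mt.values())
print(f"top five: {top_sum} Mt = {top_sum / total_mt:.0%} "
      f"of all 13 opportunities")                       # ~48%, i.e. about half
print(f"logistics share: {0.05 * 50_000:,.0f} Mt "
      f"of 50,000 Mt annual emissions")                 # 2,500 Mt
```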
About the Report
The World Economic Forum’s “Supply Chain Decarbonization” report is a comprehensive review of the scale of the logistics and transport sector’s carbon footprint, and of the principal opportunities for near-term reduction of the sector’s emissions. Researched and co-written by Accenture, the report assesses the legal and commercial drivers for supply chain decarbonization. It establishes a framework for meaningful cuts in emissions across the end-to-end supply chain and, through a series of scorecards, analyzes the relative importance of the main opportunities for change.

About Accenture
Accenture is a global management consulting, technology services and outsourcing company. Combining unparalleled experience, comprehensive capabilities across all industries and business functions, and extensive research on the world’s most successful companies, Accenture collaborates with clients to help them become high-performance businesses and governments. With more than 186,000 people serving clients in over 120 countries, the company generated net revenues of US$23.39 billion for the fiscal year ended Aug. 31, 2008. Its home page is www.accenture.com.

About the World Economic Forum
The World Economic Forum is an independent international organization committed to improving the state of the world by engaging leaders in partnerships to shape global, regional and industry agendas. Incorporated as a foundation in 1971, and based in Geneva, Switzerland, the World Economic Forum is impartial and not-for-profit; it is tied to no political, partisan or national interests (http://www.weforum.org).

*1 mega-tonne equals 1.1023 megatons

# # #
Introduction by George Kupczak of the AT&T Archives and History Center.

The goal of this film was to help reduce customer dialing irregularities by demonstrating the correct way to use the dial telephone. It documents the shift from operator-based connections (which were on the way out) to dialing the phone and making the connection yourself. The dial telephone was new at this point, although the two-letter, five-number system was still commonplace. This film even has to explain what ringing and busy signals sound like!

The film opens with the demonstrator pointing out the importance of correctly using the dial telephone. Correct dialing techniques are demonstrated, with an emphasis placed on the following:

- Be sure of the right number
- Wait for the dial tone
- Refer to the number while dialing
- Turn the dial until the finger hits the finger stop
- Avoid confusing the letter "O" with the numeral "0"
- The difference between ringing and busy signals

One by one, the conventions described in this film that aren't already gone may disappear imminently. For instance, with voicemail now the norm, when was the last time you got a busy signal on a call?

Susann Shaw, the demonstrator in this film, was a popular fashion model throughout the 1940s and 1950s, making frequent appearances in the pages of Vogue.

Produced by Charles E. Skinner Productions

Footage courtesy of AT&T Archives and History Center, Warren, NJ
The Comprehensive National Cybersecurity Initiative (or CNCI) began in 2008 and forms an important component of cybersecurity efforts within the federal government. Anyone can now view or download an unclassified description of the CNCI and each of the 12 initiatives under the CNCI.

"Transparency is particularly vital in areas, such as the CNCI, where there have been legitimate questions about sensitive topics like the role of the intelligence community in cybersecurity," said White House Cybersecurity Coordinator Howard Schmidt. "Transparency provides the American people with the ability to partner with government and participate meaningfully in the discussion about how we can use the extraordinary resources and expertise of the intelligence community with proper oversight for the protection of privacy and civil liberties."

The administration's related near-term action plan includes the following points:

- Appoint a cybersecurity policy official responsible for coordinating the Nation's cybersecurity policies and activities.
- Prepare for the President's approval an updated national strategy to secure the information and communications infrastructure.
- Designate cybersecurity as one of the President's key management priorities and establish performance metrics.
- Designate a privacy and civil liberties official to the NSC cybersecurity directorate.
- Conduct interagency-cleared legal analyses of priority cybersecurity-related issues.
- Initiate a national awareness and education campaign to promote cybersecurity.
- Develop an international cybersecurity policy framework and strengthen our international partnerships.
- Prepare a cybersecurity incident response plan and initiate a dialog to enhance public-private partnerships.
- Develop a framework for research and development strategies that focus on game-changing technologies that have the potential to enhance the security, reliability, resilience, and trustworthiness of digital infrastructure.
- Build a cybersecurity-based identity management vision and strategy, leveraging privacy-enhancing technologies for the Nation.

"There is a working group that has been divided into four tracks dedicated to the international awareness campaign. There have been meetings, there are plans, and there are milestones. We're making sure that the policy and framework address the international threat, and we're ensuring that the cybersecurity response plan looks not only at how we coordinate, but how we get it right."
On Aug. 14, 2003, the largest blackout in American history affected the northeastern U.S. and eastern Canada as a result of a generator failure at FirstEnergy Corp. in Akron, Ohio. About 10 million people in Ontario, Canada were affected, as were about 40 million in the U.S. Experts estimate that outage-related losses were between $4 billion and $10 billion. Experts also said that several factors contributed to the disaster, including inadequate disaster preparedness and software deficiencies.

Anatomy of a Disaster

A disaster is any unplanned event that disrupts business as usual. This makes a disaster a business issue rather than a technology issue. So, in order to successfully manage a disaster, a preparedness plan should be in place. According to a survey conducted by Gartner, two out of five companies that experience a catastrophic event or prolonged outage never resume operations. Of those that do, one in three goes out of business within two years as a direct result of that outage or event. The conclusion: 60 percent of businesses affected by major disasters are out of business within two years.

Rather than hypothesize about potential disasters individually, an overall plan should address two major factors. The first is the recovery time objective (RTO): How quickly must lost data be recovered after a disaster? Some systems might not need to be recovered immediately, while others must be brought back online as soon as possible. The second is the recovery point objective (RPO): To what point do the systems and information need to be recovered? Can a loss of time or a loss of data be tolerated? If the last transaction was lost, could it be recovered another way? For each aspect of the business, the RTO and the RPO must be identified. Once these two factors are determined, planning will fall easily into place.

The first element that must be identified is what needs to be recovered in the event of a disaster. Accounting applications, customer relationship management, financial systems and production management systems must all be brought back online in the event of a disaster. Although running and restoring key operations is imperative, other services and data, including email, voice mail, access to the intranet or to the internet, and forms, licenses and other business information, must also be accessible.

The next question is when lost data needs to be recovered. The answer will determine recovery priorities. In most businesses, customer-facing functions and communications are imperative. In the event of a disaster, businesses must be able to communicate with their customers and employees. Without this capability, it becomes exponentially more difficult to recover from the disaster or crisis. Personnel must know who to turn to in the event of a crisis, and they also must know what is expected of them.

The next question on the list is who will conduct the recovery. Every individual in the company must be aware of his or her responsibilities for disaster recovery, as well as the time in which those tasks should be completed.

The last element is how recovery will take place. Unfortunately, most businesses cannot justify the cost of a fail-over hot site, in which data instantaneously flips over and becomes available at an alternate location. Therefore, they must identify what kind of solution their budget will allow, as well as the RTO and RPO, and assign each application to a recovery tier accordingly.
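To make the tiering idea concrete, here is a minimal sketch of how applications might be classified by RTO and RPO. The tier names, thresholds and application names are illustrative assumptions, not figures from the article:

```python
# Hypothetical recovery tiers: each tier guarantees a worst-case recovery
# time (RTO) and worst-case data loss (RPO), in hours. Faster tiers cost more.
TIERS = [
    ("hot",    1,  0),   # near-instant failover, no data loss tolerated
    ("warm",  24,  4),   # restore within a day, lose at most 4 hours of data
    ("cold",  72, 24),   # restore within 3 days, lose at most a day of data
]

applications = {
    # application: (required RTO hours, required RPO hours) -- illustrative
    "order entry":       (1, 0),
    "email":             (24, 4),
    "marketing archive": (72, 24),
}

def assign_tier(rto_hours: float, rpo_hours: float) -> str:
    """Return the cheapest tier that still meets both objectives."""
    for name, max_rto, max_rpo in reversed(TIERS):  # try cheapest (cold) first
        # An application tolerates a tier if its requirements are no stricter
        # than what the tier delivers.
        if rto_hours >= max_rto and rpo_hours >= max_rpo:
            return name
    return TIERS[0][0]  # fall back to the most aggressive tier

for app, (rto, rpo) in applications.items():
    print(f"{app}: tier = {assign_tier(rto, rpo)}")
```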
Each tier will reflect the time needed to recover certain data and how the data will be stored. The cost will be higher and the technology more advanced for data that must be recovered in a short period of time. Consequently, it will cost less to recover data over a longer period of time.

To successfully implement a solution, a plan must be drafted. The RTO and RPO of each business application must then be identified. The next step is the installation of the technology, along with procedures and documentation of the plan. Following that, a test run is required for each application and business function, during which staff is cross-trained in recovery operations. Finally, tests must be performed at least once per year to ensure that the backup procedures are functional, that all the technologies are still compatible and that the employees are still familiar with the proper procedures.

Data recovery processes are imperative to have in place in case a disaster strikes, but testing those processes is just as important. Just as the military practices its drills to maximize effectiveness, it is crucial that a data recovery effort run smoothly. The procedures should therefore be tested at least once per year to ensure that everyone knows his or her responsibilities and who to communicate with. Drills are also necessary to ensure that if any systems or procedures have changed, all components of a recovery effort will still be successful and error-free.

Although risk management can certainly be a costly venture, more and more companies are taking the proper precautions to ensure that, in the event of a disaster, they are prepared for seamless data recovery. If recent headlines have taught us anything, it is that companies can never be too safe when it comes to protecting their data. Lastly, ensuring that communication, training and procedures are in place will determine whether a company fails or succeeds at disaster recovery.

Bill Abram is president and founder of Pragmatix, a diversified IT company that builds custom database and web-enabled applications. He can be reached at 914-345-9444 or via e-mail at email@example.com.
But it wasn't always like that. In the 1970s, the company struggled to make TVs with no defects. Finally it gave up and sold the factory to a Japanese firm that quickly changed operations and began cranking out TV sets with one-twentieth the number of defects produced under Motorola's management.

Disgusted with its poor manufacturing quality, Motorola's executives took a long, hard look at the problem. They implemented several initiatives aimed at improving quality and customer satisfaction. At the top of the list was something called Six Sigma.

According to the American Society for Quality, there are differing opinions on Six Sigma -- whether it's a philosophy, a set of tools or a methodology -- but it's best described as a "fact-based, data-driven philosophy of quality improvement that values defect prevention over defect detection. It drives customer satisfaction and bottom-line results by reducing variation and waste, thereby promoting a competitive advantage."

Six Sigma uses statistical and management tools to improve quality. A process -- whether it involves manufacturing or a service -- is said to have reached Six Sigma when it has a failure rate of 3.4 per 1 million opportunities, or 99.99966 percent accuracy. The industry average, according to James Lucas in his 2002 article "The Essential Six Sigma," is 6,200 defects per 1 million, otherwise known as Four Sigma. Lucas said the strength of Six Sigma is its "simple and effective management structure."

Compatible with Six Sigma is another process that also got its start in manufacturing: Lean Flow. Famed automobile manufacturer Henry Ford is sometimes credited with devising the idea as a way to eliminate waste from the manufacturing and assembly line processes when building the Model T in the early 20th century. Ford likened his Lean Flow process -- also called Lean Manufacturing, Continuous Flow and Just-In-Time Manufacturing -- to a river that flowed continuously. Anything that disrupted the flow was a waste that had to be eliminated.

Peter Peterka, president of SixSigma.us, said Lean Flow and Six Sigma have strong commonalities and complement each other. "They share the goal to identify and eliminate sources of waste and activities that do not add value," he wrote in an article posted on his company's Web site. When the two are combined, he said, they make it easier to identify and resolve quality issues.

How It Works

Together, Lean Flow and Six Sigma have become a defined approach to quality improvement embraced by some of America's leading business firms, including Dell, General Electric (GE), Johnson & Johnson, Lockheed Martin, Motorola, Northrop Grumman and Xerox, to name a few. Motorola's use of Six Sigma improved quality and reduced errors to such an extent that the company won the prestigious Malcolm Baldrige National Quality Award. At GE, Lean Six Sigma (LSS) also had a huge impact. When former CEO Jack Welch heard about Motorola's success with LSS, he developed a quality improvement program that became known as the GE Way. His goal was quite far-reaching: "We want to change the competitive landscape by not just being better than our competitors, but by taking quality to a whole new level." In the first two years GE implemented LSS, revenue increased 11 percent and profits grew 13 percent.

All organizations that embrace LSS use a standard, five-step, data-driven approach to re-engineering an existing process for quality improvement.
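Before walking through those five steps, it is worth checking the sigma arithmetic quoted earlier. A minimal sketch, assuming the conventional 1.5-sigma long-term shift used in Six Sigma practice:

```python
from scipy.stats import norm

def sigma_level(dpmo: float) -> float:
    """Convert defects per million opportunities (DPMO) to a sigma level,
    applying the conventional 1.5-sigma long-term shift."""
    return norm.ppf(1 - dpmo / 1_000_000) + 1.5

print(f"{sigma_level(3.4):.2f}")   # ~6.00 -> Six Sigma (99.99966% accuracy)
print(f"{sigma_level(6200):.2f}")  # ~4.00 -> the Four Sigma industry average
```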
Newton Peters, principal at Xerox Global Services, summarizes the five steps, known as DMAIC, in a white paper about Lean Six Sigma in the public sector:

- Define -- identifies the problems to be solved and establishes success criteria;
- Measure -- evaluates the current state of an organization's processes and identifies where a process can be streamlined;
- Analyze -- pinpoints root causes for inefficiencies and determines how to eliminate non-value-added steps;
- Improve -- develops solutions and an action plan; and
- Control -- implements solutions and monitors for long-term success.

As Peters and other Six Sigma experts point out, it takes training to become a leader in LSS projects -- not only to implement the DMAIC methodology, but also to learn to use the tool sets that can qualitatively and quantitatively drive process improvement. Finally, the training immerses participants in the notion that any process can be defined, measured, analyzed, improved and controlled to eliminate waste and add value. As some say, this immersion is what makes LSS a philosophy, not just a methodology.

LSS in Government

In 2001, Fort Wayne, Ind., Mayor Graham Richard took the unusual step of instituting what was probably the first Six Sigma program in local government. Richard, who had extensive experience in the business sector, transferred his knowledge of the quality improvement program into a public-sector setting. Ten city employees took Six Sigma training and began LSS projects in various city departments.

It wasn't long before the results started rolling in. The Fort Wayne Police Department reduced larcenies by 19 percent in a targeted area of the city. Meanwhile, code inspectors increased the rate of code reinspection by 23 percent while reducing the time it took to conduct a reinspection by a whopping 17 days. Most impressive was the $1.7 million the city saved using LSS in the Water Pollution Control plant.

"What I knew, that a lot of people really didn't recognize, is that parts of government are very transaction oriented," Richard said in an interview with iSixSigma Magazine. "They are very comparable, quite frankly, to a manufacturing business or to the transactions that take place in financial businesses. [Government] has a lot more in common with the business sector than people might have thought, but it hadn't been recognized that way."

As Fort Wayne's mayor pointed out, the idea of quality improvement in the public sector is not a new phenomenon. All levels of government have been practicing variations of total quality management for years. Former governor of Washington Gary Locke instituted a series of quality management programs during his administration that became a model for other state governments and received widespread recognition for their success.

But the idea of adopting and using a rigorous, data-driven process such as Six Sigma is another matter. Aside from Fort Wayne, few government agencies have embraced LSS the same way GE, Motorola and other firms have done in the private sector. Still, there are attempts under way. For example, the military, having seen how LSS has worked for its key contractors, began implementing its methodology, tool sets and philosophy. Milwaukee heard about Fort Wayne's success with Six Sigma, and has begun to use LSS in its child support enforcement program, the Department of Parks and Public Infrastructure, and in IT for the city's financial system.
According to Xerox's Peters, LSS is particularly suited for reducing waste and improving quality in document and content management situations. "Control of electronic records is often assigned to the IT department, as management of electronic data files is traditionally based on transaction volume and storage requirements," he said. "Yet IT departments sometimes implement solutions with a lack of understanding of the requirements to identify and retain specific records."

With LSS, Peters argued, IT shops can establish a structured approach to document and information management, gain efficiencies and eliminate many time-, cost- and labor-intensive, paper-based processes. Some of the gains come through judicious use of technology, such as document imaging, but also through rigorous analysis of workflow. Imagine the public sector, which includes some of the world's most document-intensive organizations, reducing error rates to 3.4 per 1 million.

That's the goal of Monroe County, N.Y. Using LSS methodologies, the county Sheriff's Office identified a key problem with its records management system, conducted interviews across all of its functional areas, and gathered extensive data on the entire process, measuring everything from how long it took deputies to fill out accident reports to the cost of managing the records. The office instituted many improvements, using a combination of new technology and re-engineered workflow, and quickly began adding up the results. The cost of processing accident reports fell from $28 to $8, and the time spent completing reports went from 30 minutes to 5 minutes.

Like all new methodologies and techniques, LSS has its detractors. In Quality Digest magazine, John S. Ramberg, a fellow of the American Society for Quality, wrote that one of the common criticisms is that "[LSS] has little to offer that can't be found elsewhere." Another problem, according to critics, is that LSS is more of an appraisal or corrective-action system than a proactive approach to solving problems. Others worry that organizations have elevated LSS from its purpose of modeling the best way to do something into an organizational panacea for problems ranging from leadership to process. According to Michael Tatham, CEO of the Tatham Group, LSS can help fine-tune an existing process, but it can't determine whether the process is necessary in the first place.

As government scrambles to find new ways to improve processes and customer satisfaction, the business of public service will continue to seek better ways to deliver quality. In the middle of this "lean flow" challenge lies the IT department. CIOs who take the time to find out whether LSS is right for their needs, and use it where necessary, might become the next Jack Welch of technology.
IBM Develops Prototype Chips That Mimic Human Brain

By CIOinsight | Posted 08-19-2011

IBM researchers have created prototype computing chips that mirror the human brain, enabling them not only to collect and analyze information, but essentially to learn from their mistakes, understand the data they're seeing and react accordingly. The "cognitive computing" chips are able to recognize patterns and make predictions based on data, learn through experiences, find correlations among the information and remember outcomes, according to IBM officials. The chips represent a significant departure from how computers are traditionally programmed and operated, and open opportunities in a wide range of fields, they said.

"Future applications of computing will increasingly demand functionality that is not efficiently delivered by the traditional architecture," Dharmendra Modha, project leader for IBM Research, said in a statement. "These chips are another significant step in the evolution of computers from calculators to learning systems, signaling the beginning of a new generation of computers and their applications in business, science and government."

IBM has been pushing efforts to drive more intelligence into an increasingly wider range of devices, and to create ways to more quickly and intelligently collect, analyze, process and respond to data. Those efforts were on public display in January when IBM's "Watson" supercomputer beat human contestants on the game show "Jeopardy." Watson, like many projects at IBM Research Labs, is focused on analytics, or the ability to process and analyze data to arrive at the most optimal decision. Watson was a revelation because of its ability to think in a humanlike fashion and answer questions posed in natural language -- with puns, riddles, nuances and so on -- by quickly running through its vast database of information, making the necessary connections and returning not with a list of possible correct answers, but with the correct answer itself.

The cognitive computing chips echo those efforts. IBM officials are calling the prototypes the company's first neurosynaptic computing chips, which they said work in a fashion similar to the brain's neurons and synapses. This is done through advanced algorithms and silicon circuitry, they said. It's through this mimicking of the brain's functionality that the chips are expected to understand, learn, predict and find correlations, according to IBM. Digital silicon circuits create what IBM is calling the chips' neurosynaptic cores, which include integrated memory (replicating synapses), computation (replicating neurons) and communication (replicating axons).

The original eWeek article: IBM Unveils Chip Prototypes That Mimic Human Brain.
Windows 8 has a feature where you can log in to your Windows account using a 4-digit PIN. This PIN can consist of any digits and is useful if your normal login password is too complicated to type on the virtual keyboard of a tablet. A 4-digit PIN allows a total of 10,000 combinations, and it should be noted that using touch can leave fingerprints or smudges that may make it easier for people to determine your PIN. Therefore, this feature should only be used if necessary, or for limited periods of time when your tablet is not connected to a keyboard.

To enable a PIN password, type pin at the Windows 8 Start Screen and then click on the Settings category. When the search results appear, click on the Create or change PIN option to open the User Settings screen. On that screen, click on the Create a PIN button.

You will now be shown a screen asking you to confirm the password for the account that you are configuring to use a PIN. Enter the password for the currently logged-in account and then press the OK button.

You will now be shown a screen where you need to create the PIN that you would like to use to log in to your account. A PIN is a 4-digit code that can consist only of numbers. For example, 1921 is a valid PIN, while 8a2& is invalid because it contains letters and symbols, which are not allowed. You are also allowed to use the same digit more than once. To create your PIN, enter the same 4-digit PIN in each of the fields. Once you are done entering your PIN, click on the Finish button.

Your PIN will now be activated and ready to use when you log in to Windows. When you next log in to your account you will be presented with a prompt to enter your PIN. Simply enter your 4-digit PIN and you will be logged into Windows. If you forget your PIN, you can click on the Sign-in options link and click on the key icon. This will bring you back to a normal textual password prompt where you can use your normal password to log in.

If at any time you wish to remove the PIN, you can go back into the User settings screen and click on the Remove button.

If you have any questions regarding this process, please feel free to ask us in the Windows 8 Forum.
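The PIN format rules above are easy to express in code. A minimal sketch with a hypothetical helper (not part of Windows) that applies the same checks, exactly four characters and digits only:

```python
def is_valid_pin(pin: str) -> bool:
    """Validate a Windows 8-style PIN: exactly 4 characters, digits only.
    Repeated digits are allowed, e.g. '1991'."""
    return len(pin) == 4 and pin.isascii() and pin.isdigit()

assert is_valid_pin("1921")      # valid example from the tutorial
assert is_valid_pin("1991")      # repeated digits are fine
assert not is_valid_pin("8a2&")  # letters and symbols are not allowed
assert not is_valid_pin("123")   # too short
```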
MISSION, KS--(Marketwired - Apr 16, 2014) - (Family Features) Can you imagine a world without wildlife? From the largest trees to the smallest insects, nature is critical to man's survival. Without forests there would be no air to breathe; without insects there would be no fruits and vegetables to eat. The earth is a delicate ecosystem, and it's up to mankind to make choices for sustaining and improving the environment. One choice is to partner with a trusted and reputable leader in conservation.

Leading the fight against extinction

One such organization is San Diego Zoo Global, which is dedicated to saving endangered species worldwide. This non-profit operates three world-class facilities -- the San Diego Zoo, the San Diego Zoo Safari Park and the San Diego Zoo Institute for Conservation Research. Their conservation work takes place locally at these three campuses and reaches beyond to field projects in 35 countries.

The organization's vision is to lead the fight against extinction. Though it is quite an audacious goal, they've proven they can do it, with the help of people who care about this important cause. To date, San Diego Zoo Global has bred over 165 endangered species and reintroduced 33 species back into the wild, including mountain yellow-legged frogs, five species of reptiles, 17 species of birds, and 10 species of mammals. In many cases, these efforts have meant the difference between extinction and survival.

The group has also made major contributions to saving the California condor from extinction, leading a multi-organization effort, helping to increase the population from only 22 birds in the '80s to well over 300 today, and releasing the first birds back to the wild in Baja California, Mexico, part of the condor's original habitat range. The organization has also been instrumental in helping to increase the number of giant pandas at China's Wolong Breeding Center from 25 bears to more than 100 by developing new techniques in husbandry, and specifically breeding, then working hands-on to share that knowledge at Wolong and elsewhere.

The research continues

Scientists working in their labs analyze over 3,000 samples each year and have discovered more than a dozen previously unidentified protozoa, bacteria, viruses, and fungi. The group's Wildlife Diseases Laboratories are taking an active role in saving amphibians by screening for chytrid fungus to keep frog populations healthy. These examples are only a fraction of the stories about how San Diego Zoo Global is committed to generating, sharing, and applying scientific knowledge vital to the conservation of animals, plants, and habitats worldwide.

Ending extinction, together

The San Diego Zoo Global Wildlife Conservancy was established to create a connection between the public and the experts of San Diego Zoo Global. The Wildlife Conservancy provides a variety of content about the state of endangered species, what is being done for them, and ways someone can help. By joining the Wildlife Conservancy and becoming a Wildlife Hero, people will be directly contributing to ending extinction. Through their conservation work and collaboration with others, the organization hopes to create a future in which people and wildlife can thrive together. To learn more about how you can help worldwide conservation efforts, visit www.endextinction.org.

About Family Features Editorial Syndicate

This and other food and lifestyle content can be found at www.editors.familyfeatures.com.
Family Features is a leading provider of free food and lifestyle content for use in print and online publications. Register with no obligation to access a variety of formatted and unformatted features, accompanying photos, and automatically updating Web content solutions.
Every object that can have a security descriptor (SD) is a securable object, i.e. one that may be protected by permissions. All named and several unnamed Windows objects are securable and can have SDs, although this is not widely known. There does not even exist a GUI for manipulating the SDs of many object types! Have you ever tried to kill a system process in Task Manager and got the message "Access denied"? This is due to the fact that this process' SD does not allow even administrators to kill the process. But it is, of course, possible, as an administrator, to obtain the necessary permissions, provided a GUI or some other tool is available.

Among many others, the following object types are securable:

- Files and directories on NTFS volumes
- Registry keys (but not values)
- Network shares
- Active Directory objects

Of these types, some are hierarchical in nature (directories, registry keys, ...), and some are not (printers, services, ...).

What is a Security Descriptor (SD)?

A security descriptor is a binary data structure that contains all information related to access control for a specific object. An SD may contain the following information:

- The owner of the object
- The primary group of the object (rarely used)
- The discretionary access control list (DACL)
- The system access control list (SACL)
- Control information

Let us have a look at the first property, the owner. This could be any user, group or even computer account. As you might have guessed, it is rather tedious to write "user/group/computer" when talking about the account that is holding a certain permission. For this reason the term "trustee" is used instead.

Security IDs (SID)

Any trustee can be identified by its name or by its SID. Humans tend to prefer names, whereas computers very much prefer SIDs, which are binary data structures. When humans cannot avoid dealing with SIDs they use a certain string format. In this format, the SID of the local group "Administrators" looks like this: S-1-5-32-544. It is important to remember that trustees referenced in SDs are always stored as binary SIDs. This is true for the owner, the primary group and any trustee in any access control list (ACL). This implies that there must be a mechanism which converts trustee names into SIDs and vice versa. This mechanism is a central part of the security accounts manager (SAM) and of Active Directory (AD). The former manages the account database on any NT-based system (NT workstations, servers and domain controllers, Windows 2000, XP, ...). The latter is only available on Active Directory domain controllers, where it replaces the SAM.

Dissecting Security Descriptors (SD)

The control information of an SD contains various bit flags, of which the two most important specify whether the DACL and SACL, respectively, are protected. If an ACL is protected, it does not inherit permissions from its parent. Inheritance is discussed in more detail later.

An object can, but need not, have an owner. Most objects do, though. The owner of an object has the privilege of being able to manipulate the object's DACL regardless of any other settings in the SD. The ability to set any object's owner is controlled by the privilege (user right, see below) SeTakeOwnershipPrivilege, which typically is held only by the local group Administrators.

The primary group of an object is rarely used. Most services and applications ignore this property.

The DACL is controlled by the owner of the object and specifies what level of access particular trustees have to the object.
It can be NULL or nonexistent (no restrictions: everyone has full access), empty (no access at all), or a list, as the name implies. The DACL almost always contains one or more access control entries (ACEs). A more detailed description of ACLs and ACEs can be found below.

The SACL specifies which attempts to access the object are audited in the security event log. The ability to get or set (read or write) any object's SACL is controlled by the privilege (user right, see below) SeSecurityPrivilege, which typically is held only by the local group Administrators.

Access Control Lists (ACL) and Access Control Entries (ACE)

As mentioned earlier, an ACL contains a list of access control entries (ACEs). The maximum number of ACEs is not limited, but the size of the ACL is: it must not be larger than 64 KB. This may not seem like much, but it should in practice be more than sufficient. Should you ever come into a situation where those 64 KB are not enough, I suggest you review your security concept from the very beginning.

ACEs come in three flavors:

- Access allowed ACE
- Access denied ACE
- System audit ACE

All three variants are similar and contain the following information:

- SID of the trustee to whom the ACE applies
- Access mask: the permissions to grant/deny/audit
- Inheritance flags: how to propagate the ACE's settings down the tree

Each ACE constitutes a "rule" which defines how the system is supposed to react when an attempt is made to access the object. Each rule (ACE) applies to exactly one trustee. The type of access that is covered by the rule is specified in the access mask. It is important to note that a trustee for whom no rule exists has no access whatsoever to an object.

Depending on the type of the ACE, the bits stored in the access mask have a different meaning. An access allowed ACE might grant the permission to read a file. An access denied ACE would explicitly deny that kind of access. In case of a conflict (both types of ACEs present on an object for a trustee), the access denied ACE always has precedence! Access allowed and denied ACEs are used in DACLs, whereas in SACLs only system audit ACEs may be used. The access mask of a system audit ACE defines the access types to be logged. If the mask were "full control", then all kinds of access (read, write, ...) would be audited.

Inheritance

In Windows 2000 the security model was supplemented with the concept of inheritance. Each ACE has inheritance flags which control how the ACE is to be propagated to child objects. The most common case is full inheritance: child objects inherit all ACEs from their parent and therefore have identical resulting permissions and auditing settings. It is important to note here that an ACE that has been inherited from a parent is marked as being inherited and cannot be modified on the child object! By means of this mark (or flag) the system is able to tell whether an ACE is set directly on the object or whether it has been inherited from a parent. This feature makes it possible to create permission structures on, for example, a home directory partition, like the following:

| Folder | Trustee | Permission |
|--------|---------|------------|
| e:\ | administrators, system | full control |

The resulting set of permissions on a folder such as e:\users\user1 would then be the combination of the entries inherited from its parents and any entries set on the folder itself. Up to here, all this would have been possible with NT4, too. But now we want to add another group to e:\ and want it to have full control over the entire drive. This is not possible on NT4 without resetting the permissions on all child objects and thus losing the users' permissions.
In Windows 2000 the system simply adds a new ACE to every object in the directory tree below e:\ and marks those ACEs as inherited! It is, of course, possible to specify exactly how an ACE is to be inherited by its children. The following inheritance flags can be used individually or in any combination:

- container inherit: child containers (e.g. directories) inherit the ACE
- object inherit: child objects (e.g. files) inherit the ACE
- inherit only: the ACE does not apply to the object itself, but can be inherited by children
- no propagation: the ACE may not be inherited by children

The settings available in Windows ACL Editor (see below) correspond to the following combinations:

- This folder only: no propagation
- This folder, subfolders and files: container inherit + object inherit
- This folder and subfolders: container inherit
- This folder and files: object inherit
- Subfolders and files only: container inherit + object inherit + inherit only
- Subfolders only: container inherit + inherit only
- Files only: object inherit + inherit only

As mentioned earlier, an object can block inheritance from its parents. If this flag is set, the object is said to be "protected". Blocking inheritance should be avoided wherever possible, since a directory tree where all objects are protected essentially uses the NT4-style security model with all its disadvantages (and there are many!).

A member of the local group Administrators can always take ownership of any object on the system. Once that is done, the Administrator has full control over the object, can manipulate the DACL and SACL, and can even set the ownership back to the original trustee. The latter is not possible with the GUI (ACL Editor), but can be accomplished through the security API.

Privileges and Rights

Privileges, or rights, as they are often called, are very different from permissions. A privilege allows the exertion of permissions (the right to log on makes it possible to access those files you have permissions for). Privileges are configured in the local security policy or a domain group policy object. Four privileges are noteworthy in this context:

| Privilege | Effect |
|-----------|--------|
| SeSecurityPrivilege | Read and write access to all SACLs |
| SeBackupPrivilege | Circumvent NTFS permissions and read (back up) every file and every folder |
| SeRestorePrivilege | Circumvent NTFS permissions and write (restore) every file and every folder |
| SeTakeOwnershipPrivilege | Set the owner of any securable object |

Windows ACL Editor

The GUI provided by Windows to manipulate SDs is called ACL Editor. It can be accessed by right-clicking a file and choosing Properties -> Security. I am not going to describe ACL Editor in detail, but rather point out some of its features and peculiarities.

ACL Editor is a remarkable piece of software! It handles nearly all side effects of the transition from the NT4-style to the W2k-style security model. It does this so well that most administrators are not even aware of the inherent problems and difficulties. Consider this scenario: you upgraded your file server from NT4 to W2k. Then you open ACL Editor on an arbitrary directory and it tells you that there are inherited permissions from the parent folder. Which is, technically speaking, not true. During the upgrade process the security descriptors are not changed, which means the flag which marks an ACE as having been inherited is not set -- not for one single directory (or registry key, for that matter). And yet, ACL Editor tells you there are inherited ACEs.
On XP and W2k3 it even shows you which parent object the ACEs were inherited from! This is done by comparing the object's ACEs to the object's parent's ACEs and determining which ACEs would be marked as inherited had the permissions been set using W2k instead of NT4. All this is done online, when you open ACL Editor.

This brings me to an important point: ACL Editor does not necessarily show what's there, but displays an interpretation of an ACL. SetACL, on the other hand, shows you exactly what is stored in an ACL -- thus it is possible that both tools list different ACEs in one and the same ACL. This happens frequently in a situation similar to the following:

| Folder | Trustee | Permission | Inheritance |
|--------|---------|------------|-------------|
| e:\data | Administrators | full control | no propagation |
| e:\data | Administrators | full control | container inherit + object inherit + inherit only |

Of course, both ACEs taken together combine to the standard "full control" for the group Administrators on the folder e:\data and all of its subfolders and files. That is what ACL Editor shows you: one entry instead of two, even in advanced view. SetACL, which does not "interpret" ACEs in any way, shows two ACEs. On my Windows XP installation, which has, of course, not been upgraded from NT, this behavior can be observed on the folder "Program Files".

Now That I Know About All This, What Do I Do With All the Knowledge?

If you are looking for a powerful tool to manipulate all the stuff described here, check out my free tool SetACL, the Swiss Army knife for manipulating permissions, downloaded more than 500,000 times!
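To tie the pieces together, here is a minimal sketch of the evaluation rules described above: deny-before-allow, and no-rule-means-no-access. This is an illustrative model in Python, not the actual Windows implementation or the SetACL API; in particular, a real access token contains many SIDs (the user plus all group memberships), while this sketch checks a single SID for simplicity.

```python
from dataclasses import dataclass

# Access mask bits (illustrative values, not the real Windows constants)
READ, WRITE, DELETE = 0x1, 0x2, 0x4

@dataclass
class Ace:
    sid: str                 # trustee, e.g. "S-1-5-32-544" for Administrators
    allow: bool              # True = access-allowed ACE, False = access-denied ACE
    mask: int                # which permissions this rule covers
    inherited: bool = False  # set on ACEs propagated from a parent

def access_granted(dacl: list[Ace] | None, sid: str, requested: int) -> bool:
    """Evaluate a DACL for one trustee and one requested access mask.
    Windows evaluates ACEs in canonical order (denies first); this sketch
    simply gives any matching deny absolute precedence."""
    if dacl is None:
        return True  # NULL DACL: no restrictions, everyone has full access
    allowed = 0
    for ace in dacl:  # an empty DACL grants nothing at all
        if ace.sid != sid:
            continue
        if not ace.allow and ace.mask & requested:
            return False  # a matching deny ACE always wins
        if ace.allow:
            allowed |= ace.mask
    return requested & allowed == requested  # no matching rule -> no access

dacl = [
    Ace("S-1-5-32-544", allow=True, mask=READ | WRITE | DELETE),
    Ace("S-1-1-0", allow=True, mask=READ),     # Everyone: read only
    Ace("S-1-1-0", allow=False, mask=DELETE),  # Everyone: deny delete
]
print(access_granted(dacl, "S-1-1-0", READ))    # True
print(access_granted(dacl, "S-1-1-0", DELETE))  # False: deny has precedence
print(access_granted(None, "S-1-1-0", DELETE))  # True: NULL DACL
```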
One of my favorite blog entries was the one about the relational data page. In that entry, I talked about how so much of the data allocated to a database is formatted. Some people agreed but pointed out that much storage is also dedicated to index pages. And they are correct. It depends on your index strategy. If you add up all the index sizes on some tables, the total can exceed the row size itself, in which case the index pages would likewise outnumber the data pages in the database. So what about index pages, and what do they look like?

There are two basic formats for any index page, and, like the data page, the index page has size options for the user. I'll repeat my earlier admonition that knowing what goes on at the page level will help you understand better how your decisions affect your performance.

I'll start with the most prominent format, that of the leaf page. The leaf page contains a series of key/RID(s) combinations (called entries). The key is the actual value of the column. RID stands for row/record ID and is comprised of the data page number and the record number within the page. The RID was explained in this post. The RID is how the index is connected to the data pages. A RID that connects to the 3rd record on the 123rd data page would be "123-3". If the record there was a customer record for Brad Smith who lives in Texas and there was an index on state, the key/RID combination would be "Texas-(123-3)". Naturally, you would have multiple customers who live in Texas, so there would be multiple RIDs in the state index associated with Texas. It might look like "Texas-(123-3),(123-4),(125-6),(125-19),(127-10), etc.". Any index key that shows up multiple times in the table would have multiple RIDs. A unique index would only have one RID associated with each key.

Successive entries in an index are not in data-page order, except for the one clustered index on the table. For example, an index on last name could have entries like:

Adams-(112-7)
Baker-(3-2)
Chambers-(67-9)

This is NOT for a clustered index. If it were, the RIDs would be in numerical order across entries. Most indexes are non-clustered, and it is normal for the RIDs to jump all around the table. If you navigate quickly to the Chambers entry, data page 67, record 9 is where you would find "the rest of the record". This is excellent for a query like "Select * from table where lastname = 'Chambers'".

But what about that navigation? That comes about from the other index page format, called, creatively, the non-leaf page. The non-leaf pages contain key ranges of the leaf pages so that an index navigation, which always begins at the root node, can quickly find the correct leaf page. That is the function of the non-leaf pages. In practice, the entries quickly fill up a single index page (of a few KB) and are then split into two non-leaf pages, with a "root" page pointing to the ranges on those pages. Eventually the root page fills up, splits again, and a new level is created. The more levels, the more non-leaf page accesses it takes to get to the leaf page. All index access involves getting to the leaf page. These non-leaf pages are mostly kept in memory.

That's the indexing pages and process in a tiny nutshell. It does set us up for talking about what DB2 is doing in 9.7 with indexes. They're compressing them. Compression in general is another way to circumvent the I/O problem. When compression is spoken about, it is almost always table, not index, compression. In DB2 9.7, however, there are two forms of compression happening in the index leaf pages. One is happening to the RID list.
As you may have noticed above, RIDs are in sequence within a key. Instead of storing a full RID (which in actuality for many indexes is 6 bytes: 4 for the page number and 2 for the record ID), DB2 9.7 can store the first RID and then offsets, for both page and record number, for all successive RIDs for the key. For example, the Chamberlin entry could be stored as "Chamberlin-(234-2),(3),(102,-2)". The (3) represents adding 3 to the record number of the previous RID (while leaving the page number the same). The (102,-2) represents adding 102 to the previous page number and subtracting 2 from its record number. This small example may not look like much savings, but consider that the 234 is really stored as 4 bytes while the (3) is stored as only 1, and the record number, usually stored as 2 bytes, is replaced by -2, stored as one byte. Internal bits help DB2 understand whether the byte it's looking at is a key, a RID, a compressed page number or a compressed record number. This is obviously best for those indexes with many repeated values and long RID lists.

The other innovation in index compression in 9.7 is called prefix compression, which applies to the key itself. DB2 essentially "bit maps" all the common prefixes on a leaf page in the header section of the leaf page. So last name values like:

Chamberlin
Chambers
Chambliss

with a common prefix of "Chamb" could be reduced to:

erlin
ers
liss

within the leaf page, with the prefix stored once. For keys with densely sequential values like this with common prefixes, this is going to pack more, potentially a lot more, index keys onto a leaf page. Many multi-column indexes have lots of repeated/common values in the higher-level columns of the index. These would be compressed out with prefix compression. The upshot is more keys retrieved per dreaded I/O.

Indexes will automatically be compressed on any table which is compressed. Indexing is mightily important. So is compression. These two techniques are now coming together to provide more capabilities for database workload performance.
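Here is a minimal sketch of both ideas, written against the examples above. It illustrates the technique, not DB2's actual on-disk format (the real encoding uses variable-length byte codes and flag bits):

```python
def delta_encode_rids(rids: list[tuple[int, int]]) -> list[tuple[int, int]]:
    """Store the first RID in full, then (page delta, record delta) pairs.
    Deltas are small numbers, so they fit in fewer bytes than full RIDs."""
    if not rids:
        return []
    encoded = [rids[0]]
    for (page, rec), (prev_page, prev_rec) in zip(rids[1:], rids):
        encoded.append((page - prev_page, rec - prev_rec))
    return encoded

# The Chamberlin RID list implied by the post: (234-2), (234-5), (336-3)
print(delta_encode_rids([(234, 2), (234, 5), (336, 3)]))
# -> [(234, 2), (0, 3), (102, -2)]

import os.path

def prefix_compress(keys: list[str]) -> tuple[str, list[str]]:
    """Factor out the common prefix of the (sorted) keys on a leaf page."""
    prefix = os.path.commonprefix(keys)
    return prefix, [k[len(prefix):] for k in keys]

print(prefix_compress(["Chamberlin", "Chambers", "Chambliss"]))
# -> ('Chamb', ['erlin', 'ers', 'liss'])
```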
Cities are getting a lot of attention these days. They have been ever since 2008, when the United Nations estimated that half of the world's people live in cities. Three of the world's biggest technology companies -- IBM, Cisco and Siemens -- now have divisions focused on smarter cities, smart-plus-connected communities and sustainable urban development. Thinkers like Richard Florida write about the Creative Class, who tend to live, you guessed it, in cities. Even the cluster approach to economic development created by Michael Porter in the Nineties presupposes a level of density most likely to be found in urban areas.

At ICF, we are glad that so many companies and thought leaders are focused on the future of urban areas. But here's a question. What about the 50% of people who do not live in cities? Even if the UN is right in estimating that cities will hold two-thirds of the world's people by 2050 -- what about the rest?

From work with rural communities in the past year, I can tell you one thing about them. The people who live there don't want to live in cities. They like it just fine where they are. They are proud of the places they live and work. They would just like more economic opportunity to flow their way, so that they and their children can continue to live there and enjoy the quality of life, the traditions and the sense of belonging typical of smaller places.

And if there is one thing that today's Broadband Economy should make possible, it is to provide greater economic opportunity for rural areas. Plenty of businesses still depend heavily on physical transportation of materials and goods, but a growing percentage do not, because consumers and companies have adopted information and communications technology (ICT) at such a blinding speed. In other words, broadband should level the playing field between urban and rural areas.

As the Gershwin song says, however, it ain't necessarily so. Rural areas have issues of their own. They tend, for example, to have lower education levels. One group of communities I am working with, in the American Midwest, has a much lower percentage of residents with university-level education than the US average. But the percentage of residents with community or technical school training vastly outpaces the rest of America. That was the skill level needed by industries of the past, when manual work paid a living wage, but it will not be the skill level needed in the Broadband Economy. Fortunately, broadband provides a new means to deliver world-class education, if we can figure out how to do it right.

When it comes to innovation, how are rural areas going to create clusters of innovative companies in similar industries when their overall population density is low? The likely answer is to think regionally. A single rural community may be too small to bring innovators in a single industry together, but a region of many communities, networked by broadband and linked by relationships between governments and institutions, could achieve the necessary scale.

It will not be easy. Rural communities have not organized themselves this way before. Cities have, for thousands of years. It is why people came together in cities and still do: to buy and sell, to defend themselves, to amass wealth. For the first time in history, rural areas have the same opportunity. But seizing it will take serious innovation in the way rural communities live and work, educate and govern -- and perhaps most of all in the way they think about their place in the world.
The authors of that UN report did not publish it to celebrate some kind of victory. They were issuing a warning. Making a city work well is hard. Poverty is now growing faster in urban than in rural areas. One billion people lived in overcrowded, polluted and dangerous urban slums in 2008. They came there, and keep coming there, to escape lack of opportunity in the countryside. If we are really aiming to become a smarter planet, which sounds like a good idea to me, that lack of opportunity is a problem waiting to be solved.
Rescuing Your Vital Records

By having a game plan for recovery, knowing who to call for help, and taking some key steps before help arrives, recovery personnel stand a good chance of saving the records vital to the organization after a disaster.

Prior to a disaster, there should be at least a rudimentary game plan for the order of recovery. When multiple departments' records are damaged, conflicts will often arise at the disaster scene over whose records are most time-critical. With the mitigation clock ticking, the time of impact is not the time to determine recovery priorities.

A Timely Emergency Response

Especially in cases where the quantity of records is so great that pack-out may take days, the ambient conditions of the environment in which they're housed need to be brought under control quickly. When humidity and temperature levels are elevated, degradation of paper is rapidly accelerated. To stabilize the environment, humidity levels must be brought to 40 percent or less and temperatures to 70° F or less. If a significant portion of the building is wet and it's 90° F outside, this is no small feat. The restoration contractor will have temporary temperature and humidity control equipment available to achieve these conditions. If operational, the mechanical systems in the building may also be used.

That being said, the following mitigation steps are offered to help recovery personnel through the immediate moments following an event, when they may need to fend for themselves. Since 90 percent of all disasters involve moisture from floods or fire suppression, the following are water damage mitigation steps for various media.

Paper Documents and Books

Since water-damaged materials can be frozen safely for an indefinite period of time, planners can freeze all materials immediately and later consider what to restore versus discard, as well as the best restoration options for the situation. There are various methods of restoring paper documents. Several factors will determine the best course of action (see the sidebar below).

Books are more of a challenge than paper documents because of the binding and the potential for warping. Books must be handled and packed very carefully; they should not be opened or closed; their covers should not be removed; and they should be packed loosely, spine side down, in plastic milk crates or plastic bags placed in boxes. Freeze-drying is the preferred method for restoring books.

Magnetic Tapes, Microfilm, X-Rays, and Photographs

If they cannot be dealt with in less than five days, tapes, microfilm, X-rays, and photographs should be frozen. Long-term wet storage will cause additional damage. Tapes can be cleaned on special equipment that removes contaminants and re-tensions the tape.

Fire- and Mold-Damaged Media

Other types of damage to vital records, such as fire damage, toxic chemicals, biological contaminants, and mold, can be restored. For fire damage, processes such as ionized air-washing and deodorization can be employed. Soot particulate must be removed, and trimming or re-processing may be required if permanent damage has occurred. For bacteria and mold, gamma and electron beam radiation may be used to sterilize the documents if they can be transported to a laboratory. Other treatments may include manual cleaning in containment areas using down-draft tables to capture mold spores.
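The thresholds above translate naturally into a simple triage check. A minimal sketch; the function names are my own, and the rules encoded are only the rules of thumb stated in this article:

```python
def environment_stable(humidity_pct: float, temp_f: float) -> bool:
    """Stabilization targets from the guidance above:
    humidity at 40 percent or less, temperature at 70 F or less."""
    return humidity_pct <= 40 and temp_f <= 70

def should_freeze(media: str, days_until_treatment: float) -> bool:
    """Freeze water-damaged media that cannot be treated promptly.
    Paper and books can be frozen immediately; tapes, microfilm, X-rays
    and photographs should be frozen if treatment will take 5+ days."""
    if media in ("paper", "books"):
        return True
    if media in ("tape", "microfilm", "x-ray", "photograph"):
        return days_until_treatment >= 5
    raise ValueError(f"no rule of thumb recorded for: {media}")

print(environment_stable(humidity_pct=55, temp_f=78))  # False: keep drying
print(should_freeze("x-ray", days_until_treatment=7))  # True
```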
Restoration contractors should have proven inventory control systems that prevent document loss and enable any document to be located and retrieved in a timely manner. This is especially important for working documents (medical or court records) and in specialized recovery projects such as large quantities of X-rays, where the X-ray jacket and accompanying notes are critical to identification. All must be effectively tracked together or important information will be lost.

Sidebar: Drying Methods

Historically, paper was dried by exposure to air and sun. This method can still be used, but it should only be considered when no other options are available, because exposure to ultraviolet light and air pollutants can permanently damage paper. Due to advances in research and technology, there are now several effective drying methods, all of which produce better results. Each method has its own merits. The choice of method is based on the type of paper and bindings, the quantity of documents, prioritization, and available time. It's not unusual to use more than one method on a project, depending on the type of paper, ink, and bindings (if any). To offer any one method based solely on the availability of equipment or facility can limit the success of the project and/or increase restoration costs. Some of the best methods to consider are:
Understanding and Preventing DDoS Attacks

It was in early 2000 that most people became aware of the dangers of distributed denial of service (DDoS) attacks, when a series of them knocked such popular Web sites as Yahoo, CNN, and Amazon off the air. More recently, a pair of DDoS attacks nailed The SCO Group's Web site, which many people thought had to be a hoax, since surely any company today could stop a simple DDoS SYN attack. Wrong.

It's been almost four years since they first appeared, but DDoS attacks are still difficult to block. Indeed, if they're made with enough resources, some DDoS attacks, including SYN (named for TCP synchronization) attacks, can be impossible to stop. No server, no matter how well it's protected, can be expected to stand up to an attack made by thousands of machines. Indeed, Arbor Networks, a leading anti-DDoS company, reports DDoS zombie armies of up to 50,000 systems.

Fortunately, major DDoS attacks are difficult to launch; unfortunately, minor DDoS attacks are easy to create. In part, that's because there are so many types of DDoS attacks that can be launched. For example, last January the Slammer worm targeted SQL Server 2000, but an indirect effect, as infected SQL Server installations tried to spread Slammer, was to cause DDoS attacks on network resources, as every bit of bandwidth was consumed by the worm. Thus, a key to thinking about DDoS is that it's not so much a kind of attack as it is an effect of many different kinds of network attacks. In other words, a DDoS may result from malignant code attacking the TCP/IP protocol or assaulting server resources, or it could be as simple as too many users demanding too much bandwidth at one time.

Typically, though, when we're talking about DDoS attacks, we mean attacks on the TCP/IP protocol. There are three types of such attacks: the ones that target holes in a particular TCP/IP stack; those that target native TCP/IP weaknesses; and the boring, but effective, brute force attacks. For added trouble, brute force also works well with the first two methods.

The Ping of Death is a typical TCP/IP implementation attack. In this assault, the DDoS attacker creates an IP packet that exceeds the IP standard's maximum size of 65,535 bytes. When this fat packet arrives, it crashes systems that are using a vulnerable TCP/IP stack. No modern operating system or stack is vulnerable to the simple Ping of Death, but it was a long-standing problem with Unix systems.

The Teardrop, though, is an old attack still seen today that relies on poor TCP/IP implementation. It works by interfering with how stacks reassemble IP packet fragments. The trick here is that as IP packets are sometimes broken up into smaller chunks, each fragment still has the original IP packet's header, as well as a field that tells the TCP/IP stack which bytes it contains. When it works right, this information is used to put the packet back together again. What happens with Teardrop, though, is that your stack is buried with IP fragments that have overlapping fields. When your stack tries to reassemble them, it can't do it, and if it doesn't know to toss these trash packet fragments out, it can quickly fail. Most systems know how to deal with Teardrop now, and a firewall can block Teardrop packets at the expense of a bit more latency on network connections, since this makes it disregard all broken packets. Of course, if you throw a ton of Teardrop-busted packets at a system, it can still crash. And then there's SYN, for which there really isn't a perfect cure.
In a SYN flood, the attack works by overwhelming the protocol handshake that has to happen between two Internet-aware applications when they start a work session. The first program sends out a TCP SYN (synchronization) packet, which is followed by a TCP SYN-ACK acknowledgment packet from the receiving application. Then the first program replies with an ACK (acknowledgment). Once this has been done, the applications are ready to work with each other.

A SYN attack simply buries its target by swamping it with TCP SYN packets. Each SYN packet demands a SYN-ACK response and causes the server to wait for the proper ACK in reply. Of course, the attacker never gives the ACK or, more commonly, uses a bad IP address so there's no chance of an ACK returning. This quickly hogties a server as it tries to send out SYN-ACKs while waiting for ACKs. When the SYN-ACK queues fill up, the server can no longer take any incoming SYNs, and that's the end of that server until the attack is cleared up. The Land attack makes SYN one step nastier by using SYN packets with spoofed IP addresses from your own network.

There are many ways to reduce your chances of getting SYNed, including setting your firewall to block all incoming packets with bad external source addresses such as 10.0.0.0 to 10.255.255.255, 127.0.0.0 to 127.255.255.255, 172.16.0.0 to 172.31.255.255, and 192.168.0.0 to 192.168.255.255, as well as all internal addresses (a minimal sketch of this address filtering appears below). But, as SCO discovered, if you throw enough SYN packets at a site, any site can still be SYNed off the net.

Brute Force Attacks

Common brute force attacks include the Smurf attack and the User Datagram Protocol (UDP) flood. When you're Smurfed, Internet Control Message Protocol (ICMP) echo request packets, a particular type of ping packet, overwhelm your router. Making matters worse, each packet's destination IP address is spoofed to be your local broadcast address. You're probably already getting the picture. Once your router also gets into the act of broadcasting ICMP packets, it won't be long before your internal network is frozen. A UDP flood works by someone spoofing a call from one of your systems' UDP chargen (character generator) test services to another system's UDP echo service. Once these services start reflecting semi-random characters at each other, your bandwidth quickly vaporizes.

Fortunately, for these two anyway, you can usually block them. With Smurfing, just setting your router to ignore broadcast addressing and setting your firewall to ignore ICMP requests should be all you need. To dam up UDP floods, just block all non-service UDP requests for your network; programs that need UDP will still work. Unless, of course, the sheer volume of the attack mauls your Internet connection.

That's where DDoS attack programs such as Tribe Flood Network (TFN), Trin00, Trinity, and Stacheldraht come in. These programs are used to plant DDoS attack agents in unprotected systems. Once enough of them have been set up in naïve users' PCs, the DDoS controller sets them off by remote control, burying target sites with traffic from hundreds or even thousands of machines. Unfortunately, as more and more users add broadband connections without the least idea of how to handle Internet security, these kinds of attacks will only become more common.

Deflecting DDoS Attacks

So what can you do as an administrator about DDoS threats? For starters, all the usual security basics can help.
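Before the standard checklist, here is the address-filtering sketch promised earlier, using Python's standard ipaddress module. The function name and the hard-coded ranges are illustrative only; a real firewall would express the same idea as ruleset or ACL entries rather than application code.

    import ipaddress

    # Address blocks that should never appear as the source of packets
    # arriving from the public Internet (RFC 1918 private ranges plus
    # loopback), per the filtering advice above.
    BOGON_NETWORKS = [
        ipaddress.ip_network("10.0.0.0/8"),
        ipaddress.ip_network("127.0.0.0/8"),
        ipaddress.ip_network("172.16.0.0/12"),
        ipaddress.ip_network("192.168.0.0/16"),
    ]

    def is_spoofed_source(src_ip: str) -> bool:
        """Return True if a packet's source address falls in a bogon range."""
        addr = ipaddress.ip_address(src_ip)
        return any(addr in net for net in BOGON_NETWORKS)

    # A SYN arriving "from" a private address on an external interface is
    # almost certainly spoofed and can be dropped outright.
    for src in ("192.168.1.50", "203.0.113.7"):
        print(src, "-> drop" if is_spoofed_source(src) else "-> allow")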
You know the drill: make sure you have a firewall set up that aggressively keeps everything out except legal traffic, keep your anti-viral software up to date lest your computers become a home for DDoS agents like TFN, and keep your network software up to date with current security patches. This won't stop all DDoS attacks, but it will stop some of them, like Smurfing. You should also keep yourself current on the latest DDoS developments. The best site for this is the University of Washington-hosted Distributed Denial of Service (DDoS) Attacks/tools site.

There are also commercial anti-DDoS services from vendors such as Arbor Networks and Riverhead. Essentially, these corporate approaches consist of intense real-time monitoring of your network, looking for telltale signs of incoming DDoS attacks. These give you a chance to harden your network or even switch to another ISP in an attempt to dodge a DDoS attack. For example, Riverhead actually diverts DDoS attacks to its own servers and filters out the good traffic, which it then passes along to your site.

You may not think you need these services, since in a worst-case scenario you're still going to get knocked off the net. But not every attack will be a massive one with thousands of attackers. For most attacks, these services can definitely help. And, let's face it, today almost all businesses need to be on the net 24-7. With DDoS attacks on the rise according to CERT, you'd be wise to at least familiarize yourself with DDoS prevention services. After all, it's not only your network in danger, it's your business.
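The monitoring idea behind such services can be sketched at small scale. Assuming the third-party psutil library (not part of the standard library, and requiring elevated privileges on some systems), the following watches for an unusual buildup of half-open TCP connections, the signature of a SYN flood; the alert threshold is an invented illustrative value, not a standard.

    import time
    import psutil  # third-party: pip install psutil

    # A server buried in SYN packets shows many sockets stuck in SYN_RECV
    # (SYN received, final ACK never arrived). Counting them is a crude but
    # useful early-warning signal.
    ALERT_THRESHOLD = 200  # illustrative value only

    def half_open_count() -> int:
        return sum(
            1 for c in psutil.net_connections(kind="tcp")
            if c.status == psutil.CONN_SYN_RECV
        )

    while True:
        n = half_open_count()
        if n > ALERT_THRESHOLD:
            print(f"possible SYN flood: {n} half-open connections")
        time.sleep(5)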
China Education Industry Report: 2016 Edition

Formal education in China predates religious institutions: it grew out of the Chinese classics, and clearly documented schools for formal teaching existed as early as the Xia Dynasty (c. 2100-1600 B.C.). Since the beginning of the 21st century, the Chinese government has given priority to education and put forward the strategic policy of "revitalizing the nation through science and education". Deepening educational system reform, strengthening quality education, making nine-year compulsory education universal, and eliminating illiteracy are the most important tasks of education policy.

The Chinese education system follows a specific pattern in which early childhood (pre-primary) education is followed by primary (elementary) school, middle school, secondary (high) school, and post-secondary (tertiary) education. There are two major examinations for students undergoing school education in China: the Zhongkao and the Gaokao. Students begin their nine-year compulsory education at about six years old and progress through elementary school and junior high school over the following nine years. At the end of Grade Nine, all students are required to take the Zhongkao, which is the cumulative assessment of this nine-year education period and the entrance examination for senior high school.

The overall growth of the industry will be driven by rising awareness of the benefits of early education and continuing education, and by growing demand for online teaching methods. Two notable trends shaping the industry are growing internet access and a declining young population. While the internet opens up opportunities to access overseas education, it simultaneously poses a threat to domestic educational institutions. The declining young population is another challenge, reflected in the fact that enrollments in primary and secondary education have been falling since 2007.

The report, "China Education Industry", analyzes the prevailing condition of the industry along with its major segments, including pre-primary, primary, K-12, post-secondary, and adult/continuing education. It covers the Chinese domestic and international education market, including its specific dependence on US education, as well as the market dynamics of the Chinese education industry. The major players, TAL Education, TARENA International, China Distance Education Holdings, and New Oriental, are profiled and compared, along with their key financials and strategies for growth.
Most Internet users know about the existence of software Trojans, but hardware Trojans are less well known. These consist of integrated circuits that have been modified by malicious individuals so that, when triggered, they try to disable or bypass the system's security, or even destroy the entire chip on which they are located.

As hardware devices are almost exclusively produced in countries where controls over who has access to the manufacturing process are non-existent or, at best, pretty lax, government agencies, military organizations, and businesses that operate systems critical to a country's infrastructure can never be too careful when checking whether the devices they plan to use have been tampered with.

There are a number of techniques for detecting hardware Trojans, but they are time- and effort-consuming. So a team of researchers from the Polytechnic Institute of New York University (NYU-Poly) and the University of Connecticut decided to search for an easier solution, and came up with the idea of "designing for trust."

"The 'design for trust' techniques build on existing design and testing methods," explains Ramesh Karri, NYU-Poly professor of electrical and computer engineering. Among those techniques is the use of ring oscillators on circuits: devices composed of an odd number of inverting logic gates whose output frequency can reveal whether the circuit has been tampered with. Untampered circuits would always produce the same frequency, while altered ones would "sound" different. Of course, sophisticated criminals could find a way to modify the circuits so that the output stays the same, so the researchers suggest creating a number of variants of ring oscillator arrangements so that hardware hackers can't keep track of them all.

While the theory sounds good, the researchers have encountered some difficulty when it comes to testing it in the real world. Companies and governments are disinclined to share what hardware Trojan samples they may have, since that would require sharing actual modified hardware that could tip off the researchers to their proprietary technology or endanger national security. Luckily for them, NYU-Poly organizes an annual Cyber Security Awareness Week (CSAW) white-hat hacking competition called the Embedded Systems Challenge (this year's edition is currently underway), for and during which students from around the country construct and detect hardware Trojans, and these samples are readily available to them and to the public.
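A back-of-the-envelope model shows why the frequency check works. An N-stage ring oscillates at roughly f = 1 / (2 × N × per-stage delay), and extra circuitry spliced in by a Trojan adds capacitive load, raising that delay and shifting the frequency. The stage count and delay values below are invented for illustration.

    # Toy model of the ring-oscillator tamper check described above.
    def ring_frequency_hz(stages: int, stage_delay_s: float) -> float:
        return 1.0 / (2 * stages * stage_delay_s)

    CLEAN_DELAY = 50e-12    # 50 ps per inverter on an untampered die (assumed)
    TROJAN_DELAY = 55e-12   # ~10% extra load from inserted circuitry (assumed)

    clean = ring_frequency_hz(13, CLEAN_DELAY)
    tampered = ring_frequency_hz(13, TROJAN_DELAY)
    print(f"clean:    {clean / 1e9:.3f} GHz")
    print(f"tampered: {tampered / 1e9:.3f} GHz "
          f"({100 * (tampered / clean - 1):+.1f}%)")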
A = 45065838 with no fraction, so the rightmost 3 digits = 838, with no fractional part. B has room for only 3 digits to the left of the decimal point, hence the code is truncating your source value on the left and giving you 838.00000, just as the PIC says. Why are you moving a very large integer value into a field that takes a max value of 999.99999? COBOL does this too: it truncates on the left if the target field is too small. It is the programmer's responsibility to check for this.
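For anyone who wants to see the arithmetic, here is the same truncation reproduced in Python, purely as an illustration of what the MOVE did, not how you'd code it on the mainframe:

    # A PIC 999V99999 target keeps 3 digits left of the implied decimal
    # point and 5 to the right. Left truncation keeps the rightmost digits.
    value = 45065838
    int_digits, frac_digits = 3, 5

    truncated = value % 10**int_digits         # 45065838 % 1000 -> 838
    print(f"{truncated}.{'0' * frac_digits}")  # prints 838.00000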
You’ve heard of identity theft, but what does it mean? Identity theft is officially defined as the deliberate assumption of another person’s identity. Going far beyond credit card fraud, it is a rapidly growing crime that most people will face at some point in their lives. In practice, it is a crime in which a criminal acquires and uses the victim’s personal information — such as a Social Security or driver’s license number — to take out loans, obtain new credit cards, rent an apartment, purchase a car, run up debt, file for bankruptcy, or commit other criminal acts. Identity theft can not only damage someone’s creditworthiness; it can also create criminal records unknown to the victim, which can result in the victim being wrongly arrested or denied employment after a routine background check.
One of the drawbacks of electric vehicles (EVs) is that it can take up to 8 hours to fully charge their lithium-ion batteries. Swiss researchers, however, say that by increasing the charging power, EVs could potentially be fully charged in about 15 minutes.

In a paper published today, researchers from the Ecole Polytechnique Federale de Lausanne (EPFL) (Swiss Federal Institute of Technology in Lausanne) said an EV charging station with 4.5 megawatts (MW) of power could charge a vehicle in 15 minutes. Unfortunately, 4.5 MW is the power equivalent of 4,500 washing machines. "This would bring down the power grid," the researchers stated.

To avoid drawing such a significant charge from the power grid all at once, the researchers created a buffer storage system that disconnects from the grid before releasing its charge to an EV. "We came up with a system of intermediate storage," said Alfred Rufer, a researcher in EPFL's Industrial Electronics Lab. "And this can be done using the low-voltage grid (used for residential electricity needs) or the medium-voltage grid (used for regional power distribution), which significantly reduces the required investment."

The EPFL researchers, along with partner universities, built an intermediate storage battery. In the space of 15 minutes, it provided the 20 kilowatt-hours (kWh) to 30 kWh needed to charge a standard electric car battery. The intermediate storage is achieved using a lithium iron battery the size of a shipping container, which is constantly charging at a low level of power from the grid. When a car needs a quick charge, the buffer battery promptly transfers the stored electricity to the vehicle. "Our aim was to get under the psychological threshold of a half hour," Massimiliano Capezzali, deputy director of the EPFL Energy Center and leader of the research project, said in a statement. "But there is room for improvement."

Tesla's Supercharger stations are able to partially charge a Tesla Model S sedan in 30 minutes, giving it a 170-mile range; a full charge takes 75 minutes. Superchargers consist of multiple Model S chargers working in parallel to deliver up to 120 kW of direct current (DC) power directly to the battery, according to Tesla. Tesla currently has 591 Supercharger stations with 3,425 Superchargers around the world. Last year, the company released an over-the-air software upgrade for its cars that tracks charging station locations and alerts drivers when they're out of range of those stations.

As part of the EPFL Industrial Electronics Lab's quick-charging project, researchers built gas station prototypes to determine how stations would need to be modified as gas-powered cars slowly die out and are replaced by EVs. The research showed that a quick-charging station able to handle 200 cars per day would need intermediate storage capacity of 2.2 MWh, which would require an intermediate battery system the size of four shipping containers.

"Electric cars will change our habits. It's clear that, in the future, several types of charging systems -- such as slow charging at home and ultra-fast charging for long-distance travel -- will co-exist," Capezzali said.

This story, "Researchers move closer to charging an EV as fast as filling a tank of gas" was originally published by Computerworld.
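As a sanity check on the buffer-transfer figures, average power is just energy divided by time. The arithmetic below is illustrative only and is not taken from the paper:

    # Average power needed to deliver a given energy in a given time.
    def required_power_kw(energy_kwh: float, minutes: float) -> float:
        return energy_kwh / (minutes / 60.0)

    for battery_kwh in (20, 30):
        p = required_power_kw(battery_kwh, 15)
        print(f"{battery_kwh} kWh in 15 min -> {p:.0f} kW average")

    # The buffer battery spreads that burst over time: trickle-charging the
    # same 30 kWh from the grid over, say, two hours needs only 15 kW.
    print(required_power_kw(30, 120), "kW trickle rate over 2 h")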
SpaceX, the private rocket launch company founded by Elon Musk of Tesla Motors fame (amongst other cool things), has been working on a reusable launch and landing system called Grasshopper that does what many of us always thought rockets really should do: it takes off and lands vertically.

Previous tests of the Grasshopper had been successful, each achieving a new height. A just-released video of the system's June 14 test tops them all, with the 10-story-tall rocket elegantly and smoothly rising to a height of 1,066 feet (325 meters), then descending with equal grace to land exactly where it took off.

The test, which was conducted at a rocket development facility in McGregor, Texas, was filmed by a video camera carried by a remote-controlled quadrocopter. Although the altitude and scale of the test are impressive, what is really outstanding is the control and accuracy of the flight. All in all, another example of how private industry can do things that NASA only dreams about.

See also: Grasshopper article on Wikipedia
When it comes to chips, Intel believes in the old cliché that you can't be too thin. The world's leading chipmaker is pushing the semiconductor manufacturing envelope by shrinking the size of the circuits that conduct electricity in chips from 90 nanometers to 65.

"With this advanced technology, circuit designers can add more circuit features and increase performance, while staying within power limits," says Mark Bohr, Intel senior fellow and director of process architecture and integration. Intel's 65-nanometer capability has already been demonstrated in production of functional 70-Mbit static RAM devices with more than a half billion transistors, Bohr says. The 65-nanometer chips are on track to ship in 2005, he says.

The new generation of product will include enhancements to Intel's strained-silicon technology, which was introduced two years ago as part of the 90-nanometer-generation process. The technology stretches the silicon lattice, letting electrons flow faster and with less resistance, which in turn lets transistors switch faster.

While there are always issues in qualifying new processes for production, Bohr says that the move to 65 nanometers should be easier than the move from 130 nanometers to 90, because many of the chipmaking technologies introduced at the 90-nanometer level have already been proven in production. By using strained silicon in a 65-nanometer process, transistor drive current can be increased by 10% to 15%. At 65 nanometers, the transistors will have much less leakage than previously, improving power usage, he says. In addition, chip power consumption can be reduced by 20%.

Nathan Brookwood, an analyst with Insight 64, says the announcement demonstrates that Moore's Law, which says transistor density will double every two years, "is alive and well." Also, the new features affecting power dissipation show "Intel has gotten the message that they need to address the power of these chips and the heat they are dissipating, which directly impacts end users," he says.

Overall, by moving to a 65-nanometer process, Intel will be able to cut the chip size of existing designs in half, reducing cost and power usage, Bohr says. By keeping the chip the same size, Intel could double the number of transistors in a given die area, allowing for the introduction of new circuit capabilities and improved performance.

This story courtesy of Internetweek.
Cloud Computing: Exciting Future for IT Virtualization

Cloud computing is a relatively new (circa late 2007) label for the subset of grid computing that includes utility computing and other approaches to the use of shared computing resources. Cloud computing is an alternative to having local servers or personal devices handle users' applications. Essentially, it is the idea that technological capabilities should "hover" over everything and be available whenever a user wants them.

Although the early publicity on cloud computing was for public offerings over the public Internet by companies such as Amazon and Google, private cloud computing is starting to come of age. A private cloud is a smaller, cloudlike IT system within a corporate firewall that offers shared services to a closed internal network. Consumers of such a cloud would include the employees across various divisions and departments, business partners, suppliers, resellers, and other organizations. Shared services on the infrastructure side (such as computing power or data storage services) or on the application side (such as a single customer information application shared across the organization) are suitable candidates for such an approach. Of course, IT virtualization would be the basis of the infrastructure design for the shared services, and this will help drive energy efficiency for the green data centers of the future.

Because a private cloud is exclusive in nature and limited in access to a set of participants, it has inherent strengths with respect to security and control over data. The approach can also provide advantages with respect to adherence to corporate and regulatory compliance guidelines. These considerations are very significant for most large organizations.

Cluster architecture for virtual servers

There are now many IT vendors offering virtual servers and other virtual systems. Cluster architecture for these virtual systems provides another significant step forward in data center flexibility and provides an infrastructure for very efficient private cloud computing. By completely virtualizing servers, storage, and networking, an entire running virtual machine can be moved instantaneously from one server to another.
2.4.3 What is exhaustive key search?

Exhaustive key search, or brute-force search, is the basic technique of trying every possible key in turn until the correct key is identified. To identify the correct key it may be necessary to possess a plaintext and its corresponding ciphertext; if the plaintext has some recognizable characteristic, ciphertext alone might suffice. Exhaustive key search can be mounted against any cipher, and sometimes a weakness in the key schedule (see Question 2.1.4) of the cipher can help improve the efficiency of an exhaustive key search attack.

Advances in technology and computing performance will always make exhaustive key search an increasingly practical attack against keys of a fixed length. When DES (see Section 3.2) was designed, it was generally considered secure against exhaustive key search without a vast financial investment in hardware [DH77]. Over the years, however, this line of attack has become increasingly attractive to a potential adversary [Wie94]. A useful article on exhaustive key search can be found in the Winter 1997 issue of CryptoBytes [CR97].

Exhaustive key search may also be performed in software running on standard desktop workstations and personal computers. While exhaustive search of DES's 56-bit key space would take tens or hundreds of years on the fastest general-purpose computer available today, the growth of the Internet has made it possible to utilize thousands of machines in a distributed search by partitioning the key space and distributing small portions to each of a large number of computers. In this manner, and using a specially designed supercomputer, a DES key was indeed broken in 22 hours in January 1999 (see Question 2.4.4).

The current rate of increase in computing power is such that an 80-bit key should offer an acceptable level of security for another 10 or 15 years (consider the conservative estimates in [LV00]). By the mid-2020s, however, an 80-bit key will be as vulnerable to exhaustive search as a 64-bit key is today, assuming the cost of processing power halves every 18 months. Absent a major breakthrough in quantum computing (see Question 7.17), it is unlikely that 128-bit keys, such as those used in IDEA (see Question 3.6.7) and the forthcoming AES (see Section 3.3), will be broken by exhaustive search in the foreseeable future.
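The structure of the attack is easy to show at toy scale. The sketch below searches a 24-bit key space against a known plaintext/ciphertext pair; the "cipher" is a stand-in built from a hash function, not DES, and exists only to make the loop concrete.

    import hashlib

    # Toy cipher: XOR the plaintext with a keystream derived from a 24-bit
    # key. XOR is its own inverse, so encrypt() also decrypts.
    def keystream(key: int, length: int) -> bytes:
        return hashlib.sha256(key.to_bytes(3, "big")).digest()[:length]

    def encrypt(key: int, plaintext: bytes) -> bytes:
        return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))

    known_plain = b"ATTACK AT DAWN"
    secret_key = 0x5A17C3                 # unknown to the attacker
    known_cipher = encrypt(secret_key, known_plain)

    # Try every key until re-encrypting the known plaintext reproduces the
    # known ciphertext. 2**24 keys take minutes in pure Python; a 56-bit
    # space is 2**32 (about 4 billion) times larger, hence the need for
    # distributed search or special-purpose hardware.
    for candidate in range(2**24):
        if encrypt(candidate, known_plain) == known_cipher:
            print(f"key found: {candidate:#08x}")
            break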
Dubbed Frankenstein (natch!), the malware is made up of pieces of code taken from benign host programs, so it doesn't trigger any red flags as something foreign to the system. Not only that, but by looking like something trusted, it could even become whitelisted, giving it an easy tunnel straight to the heart of an organization's network. Also, like Frankenstein's monster, the malicious creation is expected to learn about its environment as it goes.

"We wanted to build something that learns as it propagates," said head researcher Kevin Hamlen, associate professor of computer science, speaking to the University of Texas at Dallas News Center. "Frankenstein takes from what is already there and reinvents itself."

Hamlen and his co-creator, a doctoral student named Vishwath Mohan, hope to use the creature to improve anti-virus approaches and develop effective defenses against such a threat. "Shelley's story is an example of a horror that can result from science, and similarly, we intend our creation as a warning that we need better detections for these types of intrusions," Hamlen said. "Criminals may already know how to create this kind of software, so we examined the science behind the danger this represents, in hopes of creating countermeasures."

A range of metamorphic malware and viruses has already been launched into the wild; loosely, the term covers malicious threats that change their code as they propagate. Moving from machine to machine, these viruses, much like the flu, mutate in order to avoid detection as a known threat. Frankenstein generally falls under this model, but with the big twist of being made up of otherwise benign parts; other metamorphic malware is still a foreign entity in the system.

"Frankenstein forgoes the concept of a metamorphic engine and instead creates mutants by stitching together instructions from non-malicious programs that have been classified as benign by local defenses," Hamlen said. "This makes it more difficult for feature-based malware detectors to reliably use those byte sequences as a signature to detect the malware."
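For contrast, here is a minimal sketch of the byte-sequence signature matching Hamlen describes; the signature names and patterns below are made up for illustration.

    # A scanner like this flags data containing a known malicious byte
    # pattern. Frankenstein-style mutants are assembled from byte sequences
    # already present in trusted local binaries, so there is no stable
    # pattern for a signature database to match.
    SIGNATURES = {
        "demo.trojan.a": bytes.fromhex("deadbeef90909090"),
        "demo.worm.b": bytes.fromhex("4d5a21eb0eba"),
    }

    def scan(data: bytes) -> list[str]:
        """Return the names of all known signatures found in the data."""
        return [name for name, sig in SIGNATURES.items() if sig in data]

    # A contrived "infected" buffer containing one known pattern:
    sample = b"\x90\x90" + SIGNATURES["demo.trojan.a"] + b"\xc3"
    print(scan(sample))  # -> ['demo.trojan.a']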
- Proximity: Objects grouped together in close proximity are perceived as a unit. Based on location, clusters and outliers can be identified.
- Closure: Humans tend to perceive objects that are almost a closed form (such as an interrupted circle) as the full form. If you were to cover this line of text halfway, you would still be able to guess the words. This principle can be used to eliminate bounding boxes around graphs. A lot of charts do not need the bounding box; the human visual system "simulates" it implicitly.
- Similarity: Be it color, shape, orientation, or size, we tend to group similar-looking elements together. We can use this principle to encode the same data dimensions across multiple displays. If you are using the color red to encode malicious IP addresses in all of your graphs, there is a connection that the visual system makes automatically.
- Continuity: Elements that are aligned are perceived as a unit. Nobody would interpret every little line in a dashed line as its own data element. The individual lines make up a dashed line. We should remember this phenomenon when we draw tables of data. The grid lines are not necessary; just arranging the items is enough.
- Enclosure: Enclosing data points with a bounding box, or putting them inside some shape, groups those elements together. We can use this principle to highlight data elements in our graphs.
- Connection: Connecting elements groups them together. This is the basis for link graphs. They are a great way to display relationships in data. They make use of the "connection" principle.

Figure 1-7: Illustration of the six Gestalt principles. Each of the six images illustrates one of the Gestalt principles. They show how each of the principles can be used to highlight data, tie data together, and separate it.

A piece of advice for generating graphical displays is to emphasize exceptions. For example, use the color red to highlight important or exceptional areas in your graphs. By following this advice, you will refrain from overusing visual attributes that overload graphs. Stick to the basics, and make sure your graphs communicate what you want them to communicate.

Figure 1-8: This bar chart illustrates the principle of highlighting exceptions. The risk in the sales department is the highest, and this is the only bar that is colored.

A powerful method of showing and highlighting important data in a graph is to compare graphs. Instead of just showing the graph with the data to be analyzed, also show a graph that shows "normal" behavior, or shows the same data from a different time (see Figure 1-9). The viewer can then compare the two graphs to immediately identify anomalies, exceptions, or simply differences.

Graphs without legends, axis labels, or units are not very useful. The only time this is acceptable is when you want the viewer to understand the data qualitatively and the exact units of measure or the exact data are not important. Even in those cases, however, a little bit of text is needed to convey what data is visualized and what the viewer is looking at. In some cases, the annotations can come in the form of a figure caption or a text bubble in the graph (see Figure 1-10). Annotate as much as needed, but not more. You do not want the graphs to be overloaded with annotations that distract from the real data.

Figure 1-9: Two bar charts. The left chart shows normal behavior. The right side shows a graph of current data. Comparing the two graphs shows immediately that the current data does not look normal.
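A hedged re-creation of these ideas with matplotlib (a third-party library; all department names and values below are invented) might look like this:

    import matplotlib.pyplot as plt

    depts = ["Sales", "Engineering", "Finance", "HR"]
    baseline = [12, 10, 9, 4]          # "normal" behavior
    current = [31, 11, 8, 5]           # Sales is the outlier
    x = range(len(depts))

    fig, (ax_base, ax_now) = plt.subplots(1, 2, sharey=True, figsize=(8, 3))

    ax_base.bar(x, baseline, color="lightgray")
    ax_base.set_title("Normal behavior")
    ax_base.set_ylabel("Risk score")
    ax_base.set_xticks(x)
    ax_base.set_xticklabels(depts)

    # Gray for everything except the exception keeps attention where it belongs.
    colors = ["red" if v == max(current) else "lightgray" for v in current]
    ax_now.bar(x, current, color=colors)
    ax_now.set_title("Current data")
    ax_now.set_xticks(x)
    ax_now.set_xticklabels(depts)
    ax_now.annotate("Sales far above baseline", xy=(0, 31), xytext=(1.0, 27),
                    arrowprops=dict(arrowstyle="->"))

    plt.tight_layout()
    plt.show()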
Figure 1-10: The left-side bar chart does not contain any annotations; it is impossible for a user to know what the data represents. The right side uses axis labels, as well as text to annotate the outlier in the chart.

Whenever possible, make sure that the graphs do not only show that something is wrong or that there seems to be an "exception." Make sure that viewers have a way to identify the root cause through the graph. This is not always possible in a single graph; in those cases, it might make sense to show a second graph that can be used to identify the root cause. This principle helps you utilize graphs to make decisions and act upon findings (see Figure 1-11). A lot of visualizations are great at identifying interesting areas in graphs and help identify outliers, but they do not help the viewer take action. Have you ever asked yourself, "So what?" This is generally the case for graphs where root causes are not shown.

Figure 1-11: This chart illustrates how causality can be shown in a chart. The number of servers failing per month is related to the temperature in the datacenter.

By applying all the previously discussed principles, you will generate not just visually pleasing graphs and data visualizations, but also ones that are simple to read and that communicate information effectively.

Information Seeking Mantra

In a paper from 1996 ("The Eyes Have It: A Task by Data Type Taxonomy for Information Visualization," by Ben Shneiderman, IEEE Symposium on Visual Languages), Shneiderman introduced the information seeking mantra, which defines the best way to gain insight from data. Imagine you have a large amount of data that needs to be displayed. For others to understand the data, they need to understand the overall nature of the data—they need an overview. Based on the overview, the viewer then wants to explore areas of the data (i.e., the graph) that look interesting. The viewer might want to exclude certain data by applying filters. And finally, after some exploration, the viewer arrives at a part of the data that looks interesting. To completely understand this data, viewers need a way to see the original, underlying data—the details that make up the graph. With the original data and the insights gained through the graphical representation, a viewer can then make an informed and contextual statement about the data analyzed.
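As a tiny, hypothetical illustration of that workflow, using the third-party pandas library (the log data and column names below are invented):

    import pandas as pd

    log = pd.DataFrame({
        "src_ip": ["10.0.0.5", "10.0.0.5", "10.0.0.9", "10.0.0.5", "10.0.0.7"],
        "dst_port": [80, 22, 443, 22, 80],
        "bytes": [1200, 48000, 900, 51000, 1100],
    })

    # 1. Overview: an aggregate view of the whole data set.
    print(log.groupby("src_ip")["bytes"].sum())

    # 2. Zoom and filter: restrict to the area that looks interesting.
    suspicious = log[(log["src_ip"] == "10.0.0.5") & (log["dst_port"] == 22)]

    # 3. Details on demand: the raw records behind the anomaly.
    print(suspicious)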
It is a distilled set of principles that are crucial for generating effective security visualizations. This chapter first discussed generic visualization and then explained why visualization is an important aspect of data analysis, exploration, and reporting. The bulk of this chapter addressed graph design principles. The principles discussed are tailored toward an audience that has to apply visualization to practical computer security use-cases. This chapter ended with a discussion of the information seeking mantra, a principle that every visualization tool should follow. © Copyright Pearson Education. All rights reserved.
I wonder if any of you based in the UK remember the British Telecom television advertisements of the late 1980s featuring 'Beattie', played by Maureen Lipman? In the most frequently quoted episode, Beattie's pride leads her to see only the silver lining in her grandson's otherwise poor GCSE results. Finding that he passed only pottery and sociology, she declares, "An ology! He gets an ology and he says he's failed. You get an ology - you're a scientist!"

It's probably fair to say there will be a few scientists attending the MILCOM show in San Jose at the end of the month. Maybe the organisers should organise a competition to see how many 'ologies' you can spot.

Going right back to military origins, several 'ologies' would be needed to discover the earliest known occurrence of warfare. Between humans, that is; not between humans and aliens - for that you just need imagination or a Heinlein book. You could imagine archaeology and geology being used to identify the defensive nature of early settlements. You can surely envisage anthropology and palaeontology being used to study the origins, and the social and cultural development, of early warriors.

It's appealing to think that warfare probably began as the result of a breakdown in communications. After commerce had been established between villages or herding camps, local competition over resources could well have given rise to the earliest conflicts. You can just imagine how it might have transpired; "Now look guys, let's not be hasty here!" Wouldn't unravelling those ancient conversations become the science of 'communicatology'?

Incidentally, the earliest evidence for a man having died a violent death due to the aggression of another comes from c. 18,000 BC, in the remains of a young man found near the Nile River with several spear points embedded in his upper body. At that time, archaeological and geological evidence suggests that food was scarce, so perhaps he died in a fight over the means of subsistence. The first archaeological record of what could have been a prehistoric battle is found at a Mesolithic site, also near the Nile, on the Egypt-Sudan border. That find includes more bodies and arrowheads, clearly indicating the casualties of a battle, which have been determined to be about 13,140 to 14,340 years old.

These days, the sciences involved in communications are largely employed to detect and avoid or prevent conflict arising. If early warning systems fail, they are also deployed to help win the battles and the war. This is evident from the military acronym 'C4I', meaning 'command, control, communications, computers and intelligence'. Command and control (C2) is about decision-making, and it's supported by computers and communications: two pervasive enabling technologies that support C2 through intelligence (that's the 'I').

Aculab's enabling technologies are used extensively in military communications systems, providing many essential functions such as the core media processing capabilities for enhanced voice processor units, and a variety of signalling and media gateways. DSP boards and host media processing (HMP) software can be readily integrated through Aculab's APIs (high- and low-level APIs are available), enabling the development of a wide range of platforms and systems for military-specific applications. That's why we call it enabling technology - yes, it's an 'ology' and, scientifically speaking, you may call it 'communicatology'.
Why don’t you stop by booth 1337 when you’re at MILCOM next week; we’d be pleased to see you.
NOAA's upgraded technology draws a bead on hurricane season

June is upon us, which means hurricanes may not be far behind. And according to the National Oceanic and Atmospheric Administration, this could be a bad year. NOAA's Atlantic Hurricane Season Outlook says there is a 70 percent likelihood of between 13 and 20 named storms, of which seven to 11 could become hurricanes with winds of 74 mph or higher. We are looking at the real possibility of between three and six major hurricanes of Category 3 or stronger, with winds of 111 miles per hour or more.

Of course, NOAA doesn't know whether any of these possible storms will make landfall even if they do form. Many hurricanes spin around out in the ocean and don't cause any trouble other than forcing ships and aircraft to route around them. But it only takes one good hit to affect thousands of lives and do millions of dollars in property damage. One only needs to look as far back as Hurricane Katrina or Superstorm Sandy to see what can happen. And the die is cast for a bad year.

"This year, oceanic and atmospheric conditions in the Atlantic basin are expected to produce more and stronger hurricanes," wrote Gerry Bell, lead seasonal hurricane forecaster with NOAA's Climate Prediction Center, in a briefing about the pending storms. "These conditions include weaker wind shear, warmer Atlantic waters and conducive wind patterns coming from Africa."

NOAA does a good job these days using technology to predict dangerous storms, but this year it may have its work cut out for it with so much bad weather predicted, which is bad news for those of us living along the East Coast, and especially for those in the South, the typical target zone. Thankfully, NOAA will have some new prediction weapons at its disposal, if it can get them online and working before the storms hit.

Right now, the predictions are pretty good, as was evident from the extremely accurate modeling of Superstorm Sandy. NOAA predicted the storm's intensity and impact, and projected the correct time of its landfall within six hours. To improve predictions even more, NOAA is working to bring a new supercomputer online in July, before hurricane season reaches its peak. The new system will run an upgraded hurricane weather research and forecasting model that provides a significantly enhanced depiction of storm structure and improved storm intensity forecast guidance. The National Hurricane Center uses several dozen modeling programs combining factors such as historical data and atmospheric information to predict a storm's path. And adding Doppler radar to its fleet of sensor-laden, storm-hunting aircraft could mean a 10 to 15 percent increase in prediction accuracy and speed, according to NOAA.

At the moment, the uncertain element in NOAA's big picture is satellites. The GOES-13 satellite, which NOAA uses to watch the East Coast, is having some problems, and this is not the first time. There are plans to reposition GOES-14 to watch over the hurricane target zone if GOES-13 can't be made fully operational before the hurricane season begins in earnest.

With a violent season predicted, NOAA needs to make sure that all of its resources are working and in place, if not by June 1, then certainly by August, when the season traditionally heats up. We can't fight hurricanes, but with enough warning people can prepare for them or, if nothing else, get the heck out of the way.
To learn what you can do to prepare for a hurricane, visit the government's storm preparation website at http://www.ready.gov/hurricanes and take its sage advice.

Posted by John Breeden II on May 31, 2013 at 9:39 AM
When most people think of solar power, they imagine a simple solar panel on the roof of a building. But to really amass a significant amount of energy from our Sun, we need to use reflecting dishes, known as solar concentrators, to align solar rays onto a single energy-collecting point. The problem with solar concentrators is that all that concentrated energy also produces a lot of heat, which ends up frying the electronics with what is effectively a photonic laser.

To fix the issue, IBM, in collaboration with Airlight Energy and Swiss university partners, used the same microchannel water-cooling technology IBM developed to keep the Aquasar supercomputer from burning itself out to cool off photovoltaic generators. According to the researchers, the High Concentration PhotoVoltaic Thermal (HCPVT) system is equipped with hundreds of "triple-junction photovoltaic chips," all connected to a microchannel liquid-cooling system. Each of these square-centimeter chips can convert sunlight into 200-250 watts, on average, over a typical eight-hour day. According to IBM, the system can handle solar radiation concentrated to 2,000 times the intensity of ordinary sunlight.

"We plan to use triple-junction photovoltaic cells on a microchannel-cooled module which can directly convert more than 30 percent of collected solar radiation into electrical energy, and allow for the efficient recovery of waste heat above 50 percent," said Dr. Bruno Michel, manager, advanced thermal packaging at IBM Research, in a release.

While the main function of the water-cooled system is to keep the electronics running, the scientists say they can also utilize the scalding-hot water to power a desalination system. Supposedly, a large solar energy collection system could convert salt water from the sea into 30 to 40 liters of safe, drinkable water.

For now, the team is testing a single HCPVT prototype located in Switzerland. The researchers, however, envision that their system will be able to provide sustainable energy and fresh water to locations around the world, including southern Europe, Africa, the Arabian peninsula, the southwestern United States, South America, and Australia.

This story, "IBM 'overclocks' solar energy collector tech with supercomputer cooling" was originally published by TechHive.
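Reading those two percentages together gives a rough energy balance per chip. The incident power below is an assumed figure for illustration, not an IBM number:

    # Rough per-chip energy balance implied by the quoted figures: >30% of
    # collected radiation becomes electricity, and the coolant recovers
    # over half of the remainder as usable heat.
    collected_w = 700.0                  # concentrated flux on one chip (assumed)
    electric_w = 0.30 * collected_w      # direct photovoltaic conversion
    waste_w = collected_w - electric_w
    recovered_heat_w = 0.50 * waste_w    # hot water for desalination, etc.

    print(f"electricity:    {electric_w:.0f} W")
    print(f"recovered heat: {recovered_heat_w:.0f} W")
    print(f"total useful:   {electric_w + recovered_heat_w:.0f} W "
          f"({100 * (electric_w + recovered_heat_w) / collected_w:.0f}%)")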
IBM is hoping a combination of IoT, AI and cloud can help stop the Zika epidemic in its tracks.

IBM has thrown its vast resources behind the quest to track, monitor and hopefully eradicate the Zika virus. It will supply technology to, and work with, the Oswaldo Cruz Foundation (Fiocruz), a Brazilian science and technology organisation, to help track the spread of the disease. The virus is highly contagious, with no known cure or vaccine, and is spreading quickly through Brazil, where the Olympic Games start next week.

Big Blue is offering its cloud resources and IoT network to monitor the disease. It plans to donate a one-year subscription feed of highly local, daily rainfall, average temperature and relative humidity data from The Weather Company to the US Fund for UNICEF. Its intention is for UNICEF to use the information from The Weather Company, an IBM business, to better understand patterns of the spread of Zika, with a special focus on Brazil.

IBM data points

Data from the platform will allow UNICEF and other agencies to quickly understand an increasingly complex picture. Rainfall, temperature and humidity play key roles in the development of Aedes aegypti larvae, the primary mosquito that carries Zika. More than 20,000 of these weather-related data points spread across Brazil can provide daily information used to help estimate the larvae's proliferation.

Other Zika-related efforts supported by IBM include the OpenZika project running on IBM's World Community Grid, a virtual, crowdsourced supercomputer that IBM created. A free app is available for download that automatically provides researchers with the unused computing power of volunteers' computers or Android devices. Scientists in Brazil and the US now have the ability to screen millions of chemical compounds to identify candidates for treatments to combat the Zika virus. In the first two months of the study, more than 50,000 volunteers from all over the world have enrolled, donated the equivalent of over 4,000 years of computing time, and performed more than 20,000 virtual experiments, saving researchers $1.5 million in equivalent computing resources.

Hackathon and Bluemix

The firm is also supporting a hackathon at Fiocruz in Brazil this autumn, at which approximately 70 software developers will be challenged to develop health apps for smartphones. These might include apps that enable people to more easily identify or report mosquito larvae, update public health officials on a local virus outbreak, or address other health-related issues. IBM will help to identify appropriate software programmers and will provide its Bluemix cloud technology for developing the applications.

Professor Mark Skilton of Warwick Business School told Internet of Business that the model being applied here is to leverage the Internet of Things' large-scale connectivity and computing power to track and analyse the virus.

"The IoT is a combination of being able to capture data from specific locations and to be able to analyse and respond in real time with better insight and intelligence," he said. "In the case of the IBM initiative to help fight Zika, this is on a number of fronts. Initially, IBM has been running the 'OpenZika' project on the back of its open-source World Community Grid. This is enabling many people to connect and donate their mobile devices to help run software programs that analyse simulation models, in work led by several research consortia.
"The key aim is to speed up the race to find vaccines to fight Zika, which currently has no known cure. This requires huge computing resources, which this crowd-sourced cloud computing resource is helping to provide," he added.
Cables are frequently used for the distribution of electrical energy. Although cable technology is highly developed, malfunctions still occur in cable systems. Electric utilities often face the problem of finding the exact location of a failure in a distribution cable. These failures often occur at the worst possible time and cause the maximum amount of inconvenience to utility customers. The utility must quickly find and isolate the failure to restore electric service. Available fault locating methods use a cable fault locator, a "thumper," radar, an acoustic detector, or combinations of these.

Fault locators are used to pinpoint faults occurring in communication and control cables to facilitate quick repair. In order to reduce downtime and facilitate easy maintenance, cable fault locators are indispensable. They use the principle of pulse reflection (time-domain reflectometry, TDR) to quickly locate the point of occurrence of a fault, and are thus very flexible and time-saving in nature. The demand for the product is bound to increase considering the growth of the telecom sector and of cable operators in the entertainment sector in cities, metros and urban areas.

Cable faults are damage to a cable that changes its resistance; if allowed to persist, this can lead to a voltage breakdown. The equipment user needs the quickest, most efficient method of pinpointing an underground cable fault, with minimal training required to use the cable fault locator equipment. The manufacturer must design the equipment with easy operating features for quick, precise pinpointing of a cable fault that minimizes any additional damage to the cable under test.

There are different types of cable faults, which must first be classified before they can be located. To locate a fault, the cable must first be tested, so cable testing is usually performed first in cable fault location. During the cable test, flash-overs are generated at the weak points in the cable, which can then be localised. The measures necessary for determining fault locations can be subdivided into individual steps. Insulation and resistance measurement provides information on the fault characteristics: an insulation test measures the insulation resistance between conductor and screen, and from the periodic measurement of resistance you can derive the absorption properties of the insulating material. In cable identification, the faulty cables are distinguished from the fault-free cables at the already determined site. After the cable fault is identified and located, it is then possible to "burn it in" using burner devices, in other words converting it from a high-resistance to a low-resistance fault.

To answer the need for gentler fault locating methods, the industry has developed more sophisticated techniques that reduce the stress on aged insulation systems. The general approach is to reduce the amount of thumping necessary to locate a fault while simultaneously reducing the voltages required to perform the task.

Fiberstore offers a full range of fiber testers to choose from, with the most efficient, up-to-date TDR method coupled with our high-voltage capacitive discharge units. Our line of cable fault locators and fiber visual fault locators is designed and manufactured based upon our hands-on fault locating field expertise.
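The pulse-reflection principle reduces to one formula: the pulse travels to the fault and back, so distance = (velocity of propagation × round-trip time) / 2. A small worked example follows; the VOP and timing values are illustrative, and VOP in practice is a cable-specific figure from the manufacturer's datasheet.

    # Distance-to-fault from a TDR round-trip measurement.
    C = 299_792_458.0  # speed of light, m/s

    def fault_distance_m(round_trip_s: float, vop: float = 0.66) -> float:
        """vop: velocity of propagation as a fraction of c (cable-dependent)."""
        return (vop * C) * round_trip_s / 2.0

    # A reflection returning 2.5 microseconds after the pulse was sent, on a
    # cable with a typical VOP of 0.66, puts the fault about 247 m out.
    print(f"{fault_distance_m(2.5e-6):.1f} m")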
While TV episodes and Hollywood movies are fiction, what if there really was technology that could clean up bad images? The primary objective is identification, whether of people or license plates, to track down suspects. No matter how poor the image, on-screen image enhancement magically produces HD images ready for identification.

However, security cameras are not always set up for identification purposes. Most people-counting applications mount cameras overhead, making it impossible to capture a face. In traffic monitoring, roadside cameras watch for flow; cars are only identified by high-resolution LPR cameras when stalled at toll stations or exit ramps. Camera placement and objectives will affect whether an image will be usable or not. Even if a face is recorded, it may not lead to an arrest, as in the case of the 2010 Dubai assassination.

It may not be possible to make a thumbnail clear as day, but there are real ways to improve images. "We are able to zoom, and we're able to enhance," said Joelle Katz, Marketing Manager at Brivo Systems, in a prepared statement. "But don't count on CSI's pseudo-scientific enhancement to be available any time soon."

Traditional government users in federal, military and intelligence agencies benefit most from enhancing security video, but the applications are limitless. "We have folks in academia that use our software for projects they're working on," said Benjamin Solhjem, PM of Motion DSP. "We have retail customers, such as Target and Wal-Mart, who use image enhancement for loss prevention. There's a lot of demand for video enhancement in any application that has use for a camera." While image sharpening technology exists, awareness and demand are limited. "We have never had a request for this, though we have had requests for some other things that people see on TV shows," said Bob Mesnik, President of Kintronics, a US distributor.

How Things Work

Image enhancement for still images is all about amplifying the image signal. "Enhancement of a still picture can be accomplished using compressed sensing," Katz said. "It's a mathematical tool capable of creating high-resolution photos from low-resolution shots. At the very basic level, it works by repeatedly layering colored shapes into the areas where there are missing pixels to achieve what's called sparsity, a measure of image simplicity."

While compressed sensing is still being researched for radar and medical imaging, noisy and grainy video can be cleaned up with commercially available tools. Adobe Photoshop and Topaz Enhance tools reduce noise in a number of ways: spatial noise reduction in each frame, temporal noise reduction between frames, and a combination of both methods in spatial-temporal noise reduction, Katz said (a toy sketch of this combination follows the article). Motion DSP employs spatial-temporal noise reduction algorithms, but cautioned that image enhancement is just a tool rather than a magic bullet. "'CSI' will show a totally crappy image the size of my finger; then blow it up to be better than 1080p. That's a misnomer," Solhjem said. "But you can, utilizing certain algorithms, try to eliminate the bad data there and increase the level of information. It does not increase the resolution per se, but makes it easier to see what the image looked like when it was imaged by the camera." Image enhancement also has to deal with compression, which reduces the number of usable pixels for analysis.
“Bear in mind that these programs work best with the highest resolution pictures they can get,” said Dave Gorshkov, CEO of Digital Grape and Chair of the CCTV and VCA Technical Standards Working Group for the American Public Transportation Association. “What you find with the current generation of network cameras is that the analytics are done on the native image in the camera, using a dedicated DSP. It is not done at the control center, because the image needs to be compressed and then sent to the control center over a low-speed backhaul network. This compromises the type and complexity of VCA able to be done in realistic time frames at the camera, as more complex analysis done with powerful computers that are server based can't be put in network camera because of program size, processor power requirements and associated ‘on-cost' of such a camera.”
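To make the spatial versus temporal distinction concrete, here is a minimal NumPy sketch of the two noise-reduction steps named above, combined in the spatial-temporal order. It illustrates the principle only and is not the algorithm used by any of the products mentioned; the frame sizes and noise levels are made up.

```python
# Minimal sketch of spatial-temporal noise reduction, assuming a static scene.
import numpy as np

def spatial_denoise(frame: np.ndarray) -> np.ndarray:
    """Spatial noise reduction: 3x3 box blur within a single frame."""
    h, w = frame.shape
    padded = np.pad(frame, 1, mode="edge")
    out = np.zeros((h, w), dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
    return out / 9.0

def temporal_denoise(frames: np.ndarray) -> np.ndarray:
    """Temporal noise reduction: average the same pixel across frames."""
    return frames.mean(axis=0)

# Synthetic example: ten noisy captures of the same static scene.
rng = np.random.default_rng(0)
scene = np.tile(np.linspace(0, 255, 64), (64, 1))
frames = scene + rng.normal(0, 25, size=(10, 64, 64))

cleaned = spatial_denoise(temporal_denoise(frames))  # spatial-temporal combo
print(f"noise std before: {(frames[0] - scene).std():.1f}, "
      f"after: {(cleaned - scene).std():.1f}")
```

Temporal averaging only works where the scene is static; real tools add motion compensation so moving objects are not smeared, which is where most of the complexity lives.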
What is Heartbleed?

This vulnerability takes advantage of a memory-handling flaw within the ever-popular OpenSSL software library. The TLS heartbeat extension (RFC 6520) on an exploited version of OpenSSL allows an attacker to view up to 64 KB of what is in memory with each "heartbeat," so a multitude of information can be obtained unnoticed. It is important to note that this exploit is found in OpenSSL's implementation of SSL/TLS, not within the TLS protocol itself.

Why is this important?

SSL/TLS is the cornerstone of the Internet's means of encrypted transmission of data. We rely on websites to implement proper security measures when working with private information, e.g. bank accounts, medical records, social security numbers, and so on. OpenSSL is a widely used set of libraries that provides cryptographic services to many of these web servers. What makes this particular exploit interesting and very dangerous is that it can be repeated indefinitely, it leaves no trace in server logs, and the leaked memory can include private keys, session cookies and passwords.

Whom does this affect?

So far, any company that provides services using unpatched OpenSSL to encrypt data can be vulnerable if proper update procedures are not followed. Examples might include web servers, e-mail servers, VPN endpoints and other services built on OpenSSL.

What is at stake?

Anything held in the affected server's memory: user credentials, session tokens, personal data and, in the worst case, the server's private keys, which would let an attacker impersonate the service or decrypt captured traffic.

How does this affect Green House Data's services?

We are actively pursuing efforts to mitigate any presence of vulnerable systems within Green House Data's cloud infrastructure. From what we have seen so far, these efforts are primarily focused on systems using OpenSSL to encrypt TLS connections. Green House Data provides service and customer portals that use SSL and has taken the necessary actions to secure our systems. Those who take advantage of our managed services will be automatically patched during the regular patching cycle. We also provide proactive scanning of clients' systems for vulnerabilities and will notify them if and when issues are found. We consider data security and integrity a high priority with every service we provide.

What steps can be taken to fix this?

Upgrade to a patched OpenSSL release (1.0.1g or later) and restart affected services, reissue and revoke certificates whose private keys may have been exposed, and have users change passwords afterwards.

References and Further Reading

To test your server against the bug:

Posted by: Systems Administrator Alex Kirby
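The flaw itself is a missing bounds check: the responder copied as many bytes as the request's length field claimed, not as many as the request actually contained. The Python sketch below is a simplified, self-contained illustration of that logic, not OpenSSL's actual C code; the buffer contents and function names are invented for the example.

```python
# Simplified illustration of the Heartbleed flaw (CVE-2014-0160).
PROCESS_MEMORY = bytearray(b"HEARTBEAT-BUFFER" + b"secret-key-material-and-cookies")

def heartbeat_response(payload: bytes, claimed_len: int, patched: bool) -> bytes:
    buf = bytearray(PROCESS_MEMORY)
    buf[:len(payload)] = payload           # request payload lands in memory
    if patched and claimed_len > len(payload):
        return b""                         # the fix: discard the request silently
    return bytes(buf[:claimed_len])        # vulnerable: echoes claimed_len bytes

# Attacker sends a 4-byte payload but claims it was 40 bytes long.
print(heartbeat_response(b"ping", claimed_len=40, patched=False))  # leaks memory
print(heartbeat_response(b"ping", claimed_len=40, patched=True))   # b''
```

The patch is exactly the guard shown: if the claimed payload length does not fit the message that actually arrived, the heartbeat must be discarded silently.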
The global wind turbine market has been estimated at USD XX billion in 2015 and is projected to reach USD XX billion by 2020, at a CAGR of XX% during the forecast period from 2015 to 2020. Cumulative wind energy capacity is expected to rise to 1,750 GW by 2030.

A wind turbine is a device that converts the kinetic energy of the wind into electrical energy. Wind energy is a renewable resource that is available extensively and in ample quantity, and it is an alternative to the world's depleting fossil fuels. It is among the cleanest resources, with neither toxic nor greenhouse gas emissions, and wind turbines are connected to the electricity transmission network. The onshore and offshore wind that is captured is an inexpensive, competitive and significant source of energy; wind energy contributed 4% of total global electricity usage in 2013.

The major applications of wind turbines fall into three segments: industrial, commercial and residential. The industrial segment can be further subdivided into power generation, agriculture, industrial automation, engineering and telecommunication. Industrial use is the most important application and accounts for more than XX% of the global wind turbine market in 2015. The market is driven by a number of factors, such as the large number of ongoing projects and innovations in the power generation sector and the growing demand for industrial products, especially the growth of the emerging economies in Asia-Pacific, with India and China being the key drivers. However, the market faces certain drawbacks, such as high initial investments, the use of rare earth metals and ecological concerns including land use and wildlife habitat. These factors may act as a roadblock to the growth of the market.

The market can be broadly segmented into small (1-120 kW), medium (121-1,000 kW), large (1-3 MW) and other turbines, based on installed capacity. In this report, the market is also segmented by technology type: horizontal-axis and vertical-axis wind turbines. Geographically, the market is segmented into North America, Europe, Asia, Middle East & Africa and Pacific, with Europe holding the largest consumer base in the world and Asia and North America taking the next places. The emerging economies of Asia-Pacific make that region an area of immense potential and opportunity, although price sensitivity there has considerably hindered growth. The market has more relevance today since trends show that the world's energy consumption is increasing, indicating a market opportunity for wind turbines.

The major companies dominating this market through their products, services and continuous product development are GE Energy, Siemens Wind Power A/S, Suzlon Energy Limited, Vestas Wind Systems A/S, Sinovel Wind Co., Ltd., Xinjiang Goldwind Science & Technology Co., Ltd., Gamesa Corporación Tecnológica, ENERCON GmbH, Guodian United Power Technology Co., Ltd., China Mingyang Wind Power Group Ltd. and others. Current trends in mergers and acquisitions indicate that smaller companies are being acquired by bigger ones seeking more innovation through their core competencies, for example General Electric's acquisition of Alstom Power.

Key Deliverables in the Study

1. Market analysis for the global wind turbine market, with region-specific assessments and competition analysis on global and regional scales
2. Market definition along with the identification of key drivers and restraints
3. Identification of factors instrumental in changing the market scenario, rising prospective opportunities, and identification of key companies that can influence this market on a global and regional scale
4. Extensively researched competitive landscape section with profiles of major companies along with their market shares
5. Identification and analysis of the macro and micro factors that affect the market on both global and regional scales
6. A comprehensive list of key market players along with the analysis of their current strategic interests and key financial information
7. Wide-ranging knowledge and insights about the major players in this industry and the key strategies adopted by them to sustain and grow in the studied market
8. Insights on the major countries/regions in which this industry is blooming, and identification of the regions that are still untapped
Scientists say a technological revolution is on the horizon

BY ROBERT S. BOYD
Knight Ridder News Service

WASHINGTON -- Tiny machines no bigger than a fingernail, a grain of rice or a red blood cell have been twirling, buzzing and slithering across the pages of science fiction and research laboratory benches for years. Now these Lilliputian gadgets are beginning to enter the real world. Following on the success of crash sensors in automobile airbags, new micromachines are being developed to sniff anthrax or nerve gas, to protect nuclear weapons and to resuscitate laboratory mice. Enthusiasts say they are the advance wave of a technological revolution comparable to the introduction of computer microchips.

"Imagine a machine so small that it is imperceptible to the human eye," said Al Romig, director of the Microsystems Science, Technology and Components Center at Sandia National Laboratory in Albuquerque, New Mexico. "Welcome to the microdomain -- a place where gravity and inertia are no longer important, but the effects of atomic forces dominate," Romig wrote in a description of his lab's work published on the Internet.

Sandia, along with other government and private research centers, designs "microelectromechanical systems" (MEMS) -- or "micromachines" for short. As their name implies, these minuscule contraptions combine both electronic and mechanical functions in a single device. They are etched out of silicon, the raw material of computer chips, which can be engineered at scales of millionths or billionths of an inch. But unlike a computer chip, MEMS don't just "sit there and think," said Karen Marcus, director of the MCNC Technology Applications Center in Research Triangle Park, North Carolina. Instead, she said, a typical micromachine senses its environment, figures out what it means, and then does something useful, such as inflating an airbag, steering a ballistic missile or reporting the presence of poison gas.

Although they are difficult to design, diminutive machines made of silicon can be mass-produced cheaply, said Mark Bird, chairman of a new electronics industry committee established this month to develop standards for this dawning technology. They also, he said, work faster, more precisely and more reliably than larger mechanisms constructed of metal. "Metal gets fatigued and has to be replaced," said Bird, an engineer at Amkor Electronics, a semiconductor company in Chandler, Ariz. "A silicon device will continue to work for decades without wearing out." A microsensor used to detect hydrogen leaks aboard NASA spacecraft costs 10 times less, works 10 times faster and is 10 times more sensitive than conventional devices, according to Paul McWhorter, a Sandia engineer.

The first widespread commercial application of micromachines is the crash sensors used since the early 1990s in automobile airbags. About a tenth of an inch across, these accelerometers sense a sudden change in a car's velocity, analyze it and flash a signal to inflate the bag in a fraction of a second. Coming next are more advanced mini-detectors that can respond to skids and roll-overs, said Pontus Soderstrom, manager of advanced systems technology for Autoliv, a sensor company in Auburn Hills, Mich. Similar motion detectors are being developed for use in computer mice and in game controllers. "No more joystick wrist for Nintendo players," said Marcus. The Air Force wants to apply similar technology to its missiles, she added.
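The airbag logic described above (sense a sudden velocity change, analyze it, fire within a fraction of a second) can be caricatured in a few lines. The sketch below is purely illustrative: the threshold, sample window and function names are invented, not values from any real restraint controller.

```python
# Hypothetical sketch of MEMS-accelerometer crash sensing. Values are made up.
def should_deploy(samples_g, threshold_g=-40.0, min_consecutive=3):
    """Fire the airbag if deceleration stays past the threshold for several
    consecutive samples (guards against one-off sensor spikes)."""
    run = 0
    for a in samples_g:
        run = run + 1 if a <= threshold_g else 0
        if run >= min_consecutive:
            return True
    return False

normal_braking = [-0.5, -0.9, -1.1, -0.8]
crash_pulse = [-2.0, -55.0, -62.0, -48.0, -12.0]
print(should_deploy(normal_braking))  # False
print(should_deploy(crash_pulse))     # True
```

Requiring several consecutive over-threshold samples is the simplest way to tell a genuine crash pulse from a pothole or a single noisy reading; real controllers use far richer signal processing.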
Some other examples of micromachines now on the drawing boards or in testing:

-- Perhaps the most complex MEMS so far is "Stronglink," a miniature padlock for nuclear weapons designed by Steven Rodgers, a Sandia engineer. To unlock it, an operator enters a 24-digit code that steers a pin through a maze, turns a set of silicon gears with teeth the size of blood cells, and pops up a mirror that relays an optical signal to an electronic switch that -- finally -- arms the bomb. One false move by a terrorist would jam the works of this midget Rube Goldberg device forever.

-- The Defense Advanced Research Projects Agency is developing a chemical and biological weapons detection system called "Dognose," an array of delicate sensors on a silicon chip. "They're trying to replicate a dog's nose, one of the most sensitive sensors in the world," Marcus explained. Hundreds of them could be thrown out the back of an airplane, she said. If they sniff anthrax or nerve gas, they would radio a warning to a computer. Sandia is also experimenting with very small seismographs to detect weapons explosions. "We could sprinkle them all over the world," said Harry Weaver, a technology manager in the microelectronics lab.

-- Engineers at the Massachusetts Institute of Technology in Cambridge are building gas turbines the size of a shirt button, like miniature jet engines, that weigh less than half an ounce and generate 10 to 20 watts of electricity. They could replace the unreliable batteries now used in laptop computers and other products. MIT is also designing a "mouse respirator," a tiny version of an iron lung for laboratory mice suffering breathing difficulties. These strains of mice take years to develop and cost as much as $200 each, according to Dr. Chi-Sang Poon of the Harvard-MIT Division of Health Sciences and Technology. "Until now there was nothing available to resuscitate these mice, which represent a major commitment of research time and money," he said.

-- Henry Guckel, a professor of electrical engineering at the University of Wisconsin, Madison, developed a microscopic pressure gauge that can be mounted on the tip of a catheter and inserted into the heart to measure its pressure. The device is already in use in Sweden, Guckel said. Another Guckel invention -- a set of gears with teeth measuring eight thousandths of an inch -- is being used in miniature pumps manufactured by a company in Biel, Switzerland.

-- Eun Sok Kim, an electrical engineer at the University of Hawaii, is developing midget microphones and speakers, less than a thousandth of an inch wide, that generate extremely fine sound waves. Kim said there is "good potential for commercialization" of his devices in ink-jet printers, hearing aids and other products.

-- On a slightly larger scale are shrunken versions of Sojourner, the 23-pound, two-foot-long rover that cruised the surface of Mars during last year's Pathfinder mission. NASA is building a one-pound, five-inch mini-rover to ride a Japanese spacecraft to the asteroid Nereus in 2001. Its mission is to land on the asteroid, collect samples and return them to Earth by 2006. "They're running around the lab right now," said systems engineer Stacy Weinstein. Further ahead, NASA is considering a fleet of one-ounce, one-inch "gnat rovers" proposed by Anita Flynn, a robotics expert from MIT. A host of such little rovers could scatter over the Martian surface, seeking evidence of water.
Other applications include switches for high-speed fiber optic lines, radio frequency tuners for wireless communications devices and tiny pumps for ink-jet printers or blood monitors. Despite the fascination with micromachines, Sandia's Weaver cautioned that this is "not a mature technology. Very few of these things are on the market." Several small companies trying to make a living off MEMS have gone bankrupt. "We're just turning the corner," said Marcus. "The financial community is starting to warm up to MEMS. We're limited only by the imagination of the engineers."
Agency: Cordis | Branch: FP7 | Program: CP-FP | Phase: AAT-2007-1.1-03;AAT-2007-3.1-02 | Award Amount: 4.69M | Year: 2008

Helicopters can generate a large amount of external noise, as their traditional missions (rescue, medical, law enforcement) take place very close to populated areas. As emphasised in ACARE SRA2, increasing rotorcraft missions in the public vicinity should not lead to increasing public disturbance. The turboshaft engine is known as a major contributor to exterior noise in take-off conditions. The ACARE SRA2 objectives imply that noise reduction must be maximised for the most dominant engine noise source in flight, so an increased knowledge of the exhaust sound source balance is required. Broadband noise at a turboshaft exhaust is assumed to be a mix of combustion and turbine noise. TEENI (Turboshaft Engine Exhaust Noise Identification) will find the relationship between engine modules (combustion chamber, HP turbine, power turbine) and their broadband noise signatures, and will give a recommendation about which noise source should be reduced first. Noise source breakdown is an ambitious goal, owing to the complexity of the physics involved, the harsh environmental conditions and the small space available. TEENI pursues four objectives in parallel:

1. To develop sensors for fluctuating quantities, adapted to such an environment
2. To develop noise source breakdown methods
3. To understand broadband noise generation and propagation through blade rows
4. To discriminate engine exhaust noise sources

TEENI's workplan includes: innovative sensor development; new noise source breakdown techniques; basic studies, including rig experiments, to understand the propagation effects of broadband noise through blade rows (these tests will also help to verify the breakdown techniques); application of the new instrumentation and source breakdown techniques to a full-scale engine test; development in HELENA (from Friendcopter) of the source breakdown capability; and estimation with HELENA of the engine noise source to be reduced first in flight.

Agency: Cordis | Branch: FP7 | Program: CP | Phase: ICT-2009.3.5 | Award Amount: 2.89M | Year: 2010

The CLAM project aims at developing a collaborative embedded monitoring and control platform for submarine surveillance by combining cutting-edge acoustic vector sensor technology, underwater wireless sensor network protocols, collaborative situation-aware reasoning and distributed signal processing techniques for horizontal and vertical linear sensor arrays. The result will be a cooperative, flexible and robust underwater sensing, reasoning and communication platform for online surveillance of submarine environments, accommodating pervasively deployed heterogeneous sensor nodes at different water depths and enabling sensing and actuating devices to exchange data, autonomously network together, and collaboratively and locally assess their observation environment and act upon it.
Horizontal and vertical collaboration between sensor arrays, in the form of collaborative routing and beamforming, sensor fusion and distributed processing and reasoning, enables fine-grained monitoring of the submarine environment and collaborative event detection, as well as transmission of the network information to the monitoring stations. CLAM's consortium has the experience and knowledge needed to deliver, exploit and commercialize a complete solution, from the sensor node platform design, collaborative communication and networking protocols, and adaptive, robust and scalable collaborative data processing and reasoning, up to the application requirements and market analysis. Participation of the international external advisory board in this project indicates that the demand and potential market for such monitoring platforms goes beyond Europe. This can offer Europe a great opportunity to become an international leader in this emerging area, which is still very much in its infancy.

Fernandez Comesana D., Microflown Technologies BV | Wind J., Microflown Technologies BV | SAE Technical Papers | Year: 2011

There are several methods to capture and visualize the acoustic properties in the vicinity of an object. This article considers scanning PU-probe-based sound intensity and particle velocity measurements, which capture both sound pressure and acoustic particle velocity. The properties of the sound field are determined and visualized using the following routine: while the probe is moved slowly over the surface, the pressure and velocity are recorded and a video image is captured at the same time. Next, the data is processed: at each time interval, the video image is used to determine the location of the sensor, and then a color plot is generated. This method is called the Scan and Paint method (a toy sketch of this routine appears after these project summaries). Since only one probe is used to measure the sound field, the spatial phase information is lost. It is also impossible to find out whether sources are correlated or not; this information is necessary to determine the sound pressure some distance from the source, at the driver's ear for example. In this paper, the Scan and Paint method is enhanced in such a way that it is possible to handle partially correlated sources. The key to the novel method is a pressure microphone at the listener position, used as a reference sensor. With all this data, it is possible to derive the spatial phase of the measured sources relative to the listening position. Copyright © 2011 SAE International.

Agency: Cordis | Branch: FP7 | Program: CP-FP | Phase: AAT-2007-1.1-03;AAT-2007-3.1-02 | Award Amount: 5.28M | Year: 2008

Air traffic is predicted to grow by 5% per year in the short and medium term. Technology advances are required to achieve this growth without unacceptable levels of noise. FLOCON addresses this issue by reducing fan noise at source through the development of innovative concepts based on flow control technologies. FLOCON is aimed primarily at reducing fan broadband noise, one of the most significant noise sources on modern aircraft, and it provides one essential element of a wider effort by the industry to achieve established targets for noise reduction. Previous attempts at reducing broadband noise have been inhibited by a limited understanding of the dominant mechanisms and by a lack of high-fidelity numerical models. These issues are addressed in the ongoing PROBAND FP6 project.
In FP7, FLOCON moves beyond the scope of PROBAND to the development of specific concepts for reducing noise in aero-engine fan stages. A wide range of concepts will be considered and brought up to Technology Readiness Level 4 (laboratory-scale validation):

- Rotor trailing edge blowing
- Rotor tip vortex suction
- Rotor overtip treatments
- Rotor and stator leading and trailing edge treatments
- Partly lined stator vanes

Experiments will be performed on two rotating rigs, supported where possible by more detailed measurements on a single airfoil and a cascade. Numerical methods will be used to optimize the concepts for experimental validation and to extrapolate the results from laboratory scale to real engine application. The potential benefit of each concept will be assessed, including any associated penalties (weight, complexity, aerodynamic performance). Recommendations will be given as to which concepts could be integrated into new engine designs and which will require further validation at industrial rig or full engine scale. Required developments in enabling technologies will also be identified.

Agency: Cordis | Branch: FP7 | Program: CP | Phase: SEC-2010.2.3-3 | Award Amount: 2.99M | Year: 2011

Civil installations such as power plants are often located in wide and remote areas. In the coming years, the number of small distributed facilities will increase as a direct result of new European environmental policies aimed at increasing society's resilience to local manifestations of climate change. Yet the protection of fragmented assets will be difficult to achieve and will require portable security systems that are affordable to those in charge of their management. The BASYLIS project aims to address these issues by developing a low-cost smart sensing platform that can automatically and effectively detect a range of security threats in complex environments. The principal obstacles to early threat detection in wide areas are of two types: functional (e.g. false-alarm rate) and ethical (e.g. privacy). Both problems are amplified when installations are dynamic or located in changing environments, and potential solutions are unaffordable to most of the potential users. The BASYLIS system will consist of a transportable security platform capable of detecting a wide range of pre-determined security threats. The prototype design will include four highly sensitive sensors exploiting different parts of the spectrum: radio, magnetic, seismic, acoustic and optical waves, as well as images via intelligent video. The information gathered by these sensors is then brought together into an information layer composed of three levels: multi-sensor integration (MSI), image processing and risk assessment. The BASYLIS system will be characterized by a high performance and high usability index. The engagement of end users in the specification and validation of the design has been considered from the start of the project, ensuring that the design of the final system meets the needs of the users. The BASYLIS consortium has decided to focus on refugee camps, a hot-spot environment where European and UN aid workers are injured, killed or kidnapped every year.
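As promised above, here is a toy sketch of the Scan and Paint bookkeeping: each measurement interval contributes its level to the grid cell where the video localised the probe, and the averaged grid becomes the colour plot. This is a hypothetical reduction of the method for illustration only; the real technique tracks the probe in the video frames and works with complex spectra rather than single levels.

```python
# Toy sketch of Scan and Paint accumulation. Positions stand in for the
# probe locations recovered from the video; levels are measured values.
import numpy as np

def scan_and_paint(positions, levels_db, grid_shape=(32, 32)):
    """Average the measured level at each visited grid cell; NaN elsewhere."""
    acc = np.zeros(grid_shape)
    hits = np.zeros(grid_shape)
    for (x, y), level in zip(positions, levels_db):
        acc[y, x] += level
        hits[y, x] += 1
    grid = np.full(grid_shape, np.nan)
    visited = hits > 0
    grid[visited] = acc[visited] / hits[visited]
    return grid

# Toy scan: three cells visited, one of them twice.
grid = scan_and_paint([(3, 4), (3, 4), (5, 5), (10, 2)],
                      [62.0, 64.0, 55.0, 71.5])
print(grid[4, 3], grid[5, 5], grid[2, 10])  # 63.0 55.0 71.5
```

The NaN cells are simply the places the probe never visited, which is why the method depends on a slow, thorough sweep of the surface.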
IBM Blue Gene/P Supercomputer Comes to U.S.

By Scott Ferguson | Posted 11-02-2007

IBM is looking to bring the first of its Blue Gene/P supercomputers to North America with an agreement to build a new machine at the U.S. Department of Energy's Argonne National Laboratory in Illinois, the company announced Nov. 1. When it is complete, the supercomputer at Argonne will offer a peak performance of 445 teraflops, or 445 trillion calculations per second. When the new supercomputer is added to Argonne's current system, the laboratory will have 556 teraflops of computing performance at its disposal, according to Big Blue.

IBM first announced that it would begin building Blue Gene/P supercomputers at the Supercomputing Conference in Dresden, Germany, on June 26. These will eventually replace the older Blue Gene/L supercomputers. The Armonk, N.Y., company is currently building its first Blue Gene/P system at the Julich Research Center in Germany. When it first announced Blue Gene/P, IBM said the system would be able to offer a sustained performance of a petaflop, or 1 quadrillion calculations per second. However, the ability to reach a petaflop depends on how many server racks a facility such as Argonne is willing to install, and an IBM spokesperson said the company is not sure which of its Blue Gene/P systems will hit the petaflop mark first.

Currently, the world's fastest supercomputer is a Blue Gene/L system at the DOE's Lawrence Livermore National Laboratory in Livermore, Calif., which runs at 280.6 teraflops. When it is complete, by mid-2008, the supercomputer at Argonne will be used for a number of research projects that need large-scale computing environments, according to the DOE.

The field of supercomputing is getting more intense as more and more companies look to expand beyond the current crop of high-performance systems. In addition to IBM, Sun Microsystems is working on a supercomputer dubbed Constellation that also looks to break the petaflop barrier, and on Oct. 25, NEC announced its new SX-9 supercomputer, which will offer a peak performance of 839 teraflops.
Intel Unveils Future Energy-Saving Technology

The company will announce this week a future chip manufacturing process that will reduce the energy needed to run notebook computers.

Smaller is better, certainly for notebook computers and their processors. However, the bane of low-power chips has been "current leakage," a problem that drains away much of the energy savings of smaller processors. Intel Corp. engineers on Thursday will announce a "breakthrough" aimed at minimizing this current leakage at the International Workshop on Gate Insulator 2003 in Tokyo. The new improvements will be introduced into Intel's lines when it begins manufacturing microprocessors with a 45-nanometer process, sometime in 2007, according to company officials. At that time, the use of a high-dielectric-constant (or high-k) material will help prevent battery packs from draining their charge when not being used.

Intel's announcement of a solution to the leakage problem follows rival Transmeta Corp.'s recent demonstrations of low-power processors. Transmeta designed its Crusoe chips to minimize power. The company last month disclosed LongRun2, a technology to minimize leakage current that combines hardware, software and an improved manufacturing process. LongRun2 is due out in mid-2004 and will be used in the Efficeon family of processors, according to Kenn Durrance, a spokesman for Transmeta.
How Does SSL Work? What are SSL Certificates and why do I need them?

The Internet is your gateway to millions of potential new customers. Moving your business online provides the convenience and accessibility your customers and partners demand, helping you to stand out from the competition. As organizations provide more services and transactions online, security becomes a necessity. Customers need to be confident that sensitive information such as a credit card number is going to a legitimate online business, and organizations need to keep customer information private and secure.

SSL certificates are an essential component of the data encryption process that makes internet transactions secure. They are digital passports that provide authentication to protect the confidentiality and integrity of website communication with browsers. The SSL certificate's job is to initiate secure sessions with the user's browser via the secure sockets layer (SSL) protocol. This secure connection cannot be established without the SSL certificate, which digitally binds company information to a cryptographic key. Any organization that engages in ecommerce must have an SSL certificate on its Web server to ensure the safety of customer and company information, as well as the security of financial transactions.

How SSL Certificates Work

- A browser or server attempts to connect to a Website, i.e. a Web server, secured with SSL, and requests that the Web server identify itself.
- The Web server sends the browser/server a copy of its SSL certificate.
- The browser/server checks whether it trusts the SSL certificate. If so, it sends a message to the Web server.
- The Web server sends back a digitally signed acknowledgement to start an SSL-encrypted session.
- Encrypted data is shared between the browser/server and the Web server.

There are many benefits to using SSL certificates. Namely, SSL customers:

- Get HTTPS, which earns a stronger Google ranking
- Create safer experiences for their customers
- Build customer trust and improve conversions
- Protect both customer and internal data
- Encrypt browser-to-server and server-to-server communication
- Increase the security of their mobile and cloud apps

To learn more, check out our Understanding Digital Certificates & Secure Sockets Layer paper, which explains what secure sockets layer (SSL) digital certificates are and why they are essential to providing secure online transactions.
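The handshake steps listed above map almost one-for-one onto what a TLS client library does. The sketch below uses Python's standard-library ssl module to perform them against an example host; the host name is only a placeholder, and error handling is omitted for brevity.

```python
# Minimal client-side view of the handshake steps described above.
import socket
import ssl

HOST = "www.entrust.com"  # example host; substitute any SSL-secured site

context = ssl.create_default_context()  # trusted CA bundle, verification on

with socket.create_connection((HOST, 443), timeout=10) as sock:
    # wrap_socket performs the handshake: the server presents its
    # certificate, the client checks it against trusted roots, and an
    # encrypted session is negotiated.
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()
        print("negotiated:", tls.version())
        print("issued to:", dict(x[0] for x in cert["subject"]))
```

If the certificate fails verification (wrong host name, untrusted issuer, expired), wrap_socket raises an exception and no encrypted session is established, which is the library enforcing step 3 of the list above.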
Energy Department officials confirmed that trace leaks of plutonium have been detected in the air outside the country's only underground nuclear waste repository, located 26 miles southeast of Carlsbad, N.M.

An independent monitoring organization first detected the leaks over the weekend after an air monitor within the storage facility, the Energy Department's Waste Isolation Pilot Plant, detected radiation. The plant stores 3.2 million cubic feet of plutonium-contaminated waste in salt caverns 2,150 feet underground. The Carlsbad Environmental Monitoring and Research Center, operated by the College of Engineering at New Mexico State University, said earlier this week it had detected trace amounts of plutonium and americium, a radioactive element, in an air filter from a sampling station located northwest of the storage site.

Joe Franco, Carlsbad field office manager for Energy, confirmed at a press conference this afternoon that the leak emanated from the underground facility where waste is packed into drums stored in the salt caverns. Franco said it will be three weeks before officials are able to return underground to assess exactly what caused the leaks. During that time, Energy will develop safety plans to deal with potential radiation, health and mining hazards.

Ryan Flynn, secretary of the New Mexico Environment Department, told reporters attending the conference that "events like this should never happen . . . one event is too many." He criticized Energy for waiting two days to inform his office that plutonium had been detected outside the facility. The levels of plutonium radiation detected are "well below" levels that would be harmful to people or the environment, Flynn said.

Personnel from the center installed the air filter on Feb. 11 and removed it for analysis on Feb. 16. The analysis showed that 0.64 becquerels of americium (a becquerel is a measure of radioactivity in which one nucleus decays per second) and 0.046 becquerels of plutonium had been deposited on the filter. The center has three monitors located around the plant. They have detected plutonium four times in the past, eventually determined to be fallout from nuclear weapons testing and detonations that occurred during the 1940s through the 1960s. According to Russell Hardy, director of the research center, this is the first time plutonium has escaped from the facility in 15 years of operation.

Don Hancock, director of the Nuclear Waste Safety Program at the Southwest Research and Information Center, a nuclear watchdog group in Albuquerque, agreed that the leak should never have occurred and pointed out that the plutonium traveled at least a mile from underground to the CEMRC air monitor site.

The underground storage facility stores waste transferred by truck from Los Alamos National Laboratory, N.M., as well as Energy facilities in Idaho and Georgia. Last year, that amounted to nearly 1,000 separate shipments. The facility closed for routine maintenance on Feb. 14 and was supposed to re-open March 10, when it would again start taking in waste shipments. Due to the leak, it will not take any new shipments as previously planned on March 10, Franco said. He could not say when it was likely to reopen.

This story has been updated.
Studies Find X-Rays Just as Accurate as Digital Mammography

By M.L. Baker | Posted 06-15-2005

Digital mammography systems, which were first approved for marketing in the United States in 2000, cost at least three times as much as film systems, according to ECRI, the nonprofit health services research agency that conducted the report. But the technology does offer some advantages over X-ray film: the images are stored directly in a computer system, which allows them to be digitally enhanced, magnified or distributed; the hassles of storing, processing and handling film are eliminated; and in some cases digital mammography requires slightly lower doses of radiation.

"Cost-effectiveness will ultimately determine whether full-field digital mammography technology is adopted, since hospitals must justify their purchase based on exam volume and patient population," the report concluded.

The assessment is not the last word. The National Cancer Institute sponsored another new clinical study, enrolling 49,500 women in the United States and Canada. Each patient received both traditional and digital mammograms, with follow-up exams after one year. An analysis of that study is expected soon, but Robert Maliff, associate director of the Health Systems Group at ECRI, said he expected digital mammography to show only an incremental improvement, if anything. However, if doctors become convinced that digital images can be read just as accurately as X-rays, operational efficiencies might persuade institutions that do many mammograms to move toward the high-tech option. Maliff also predicted that some hospitals might adopt the technology for marketing reasons.

A separate study, reported in HealthDay, did find a factor that predicts more accurate reading of mammograms: doctors with 25 years of experience who read more than 2,500 mammograms each year are 30 percent better at finding cancerous tumors than doctors who read fewer than 800 a year. Detection rates for individual doctors ranged from 29 percent to 97 percent when cancers were present, and individual doctors falsely identified cancer in 1 percent to 29 percent of the mammograms they evaluated for the study.

The ECRI report was summarized by the Center for the Advancement of Health.
FAA releases road map for domestic drones

Last week, the U.S. Department of Transportation's Federal Aviation Administration released its first road map outlining safety measures for unmanned aircraft systems (UAS), also known as "drones." The 66-page document addresses the policies, regulations, technologies and procedures that will be required for the use of commercial drones in national airspace.

Drones are typically used by federal, state and local government agencies, as well as by universities conducting research. The Department of Homeland Security uses drones for border monitoring; NASA and the National Oceanic and Atmospheric Administration use them for atmospheric research; and Virginia Tech uses drones for mapping agricultural diseases.

The road map explains that developing minimum standards for sense-and-avoid technology, monitoring control and communications links, and finding ways to ensure that unmanned aircraft can comply with air traffic control visual clearances and instructions are among the challenges yet to be overcome.

Posted by GCN Staff on Nov 14, 2013 at 8:11 AM
There are a lot of different pieces involved in building a CCENT (Cisco Certified Entry Networking Technician) 100-101 or CCNA (Cisco Certified Network Associate) 200-120 home lab. You will have routers, switches, serial cards, Ethernet straight-through patch cables, Ethernet crossover cables, serial crossover cables, console cables, USB-to-serial adapters, rack stands, rack mounts and all kinds of CCENT and CCNA study material to complement your lab. So you could build out quite a bit of equipment, and there are still tons of little ancillary pieces we have not touched on. But today we are going to focus on a device that goes by a couple of different names: the mysterious access server, or terminal server.

Cisco Access Server

An access server or terminal server in a Cisco CCNA lab is primarily there for one reason: to make your lab experience easier and more pleasurable. So you may be asking, as many students do, what is the access server for and what does it give me access to? Let's take a step back and maybe I can illustrate how an access server is used and how it will make your lab experience better. We will also cover a few different access server solutions from a Cisco perspective.

So you just got your stack of routers and switches delivered to your house and you are setting up your CCNA lab. Whether you have them in a rack or simply stacked on your desk really does not matter for this illustration, so let's say they are in a rack next to your workstation desk. You crack open your CCNA lab workbook and you start off with the static routing lab that has three routers in it. Generally, the first thing you will do in the lab is set the name of each router. So you will have your console session open on what will be Router1 and you will use the hostname command to change the name of the router. Then step two will be to go to what will be Router2 and change the hostname of that device. So you will have to stop what you are doing, reach over and move the console cable from Router1 to Router2. Then when you go to change the hostname on Router3 you will need to reach over and change it again... then when you go to configure the interface, now back on Router1, you will need to reach over and move it again, and then on Router2 again. I think you can get the picture now: you have to continually stop typing to move this stinkin' cable. It becomes pretty annoying 50 pages into your 425-page CCNA lab workbook.

So that is where our access server or terminal server solution comes into play (we will call it an access server from here on out). Basically, an access server is a glorified KVM ServSwitch for Cisco routers and Cisco switches. Just like a KVM ServSwitch, you can use hot keys to flip between devices, in this case your Cisco routers and switches. There are many different choices for access servers, but I will cover the three most common Cisco-based solutions that you will see in a home CCENT or CCNA lab.

The first solution is the Cisco 2509 and 2511 routers. The 2509 is an 8-port access server and the 2511 is a 16-port access server. Accordingly, you can connect up to 8 or 16 devices to the corresponding access server.

*Note: The 2509, 2510, 2511 and 2512 models do the same thing, but only the 2509 and 2511 are Ethernet based.
The 2510 and 2512 are Token Ring based but still work just fine if you are not going to telnet into them, which you probably won't be doing in your lab, so those models may save you some money as they are cheaper. Now, there are actually two different versions of the 2509 and 2511 routers. There is the plain ol' 2509, which has one 68-pin async port on the back of the router, and the 2511, which has two of these 68-pin async ports on the back of it. Into each async port you can insert an octal cable, which gives you eight pigtail RJ-45 ends to plug into the console ports of the Cisco routers and switches you want to reach.

Cisco 2509 Access Server

Here is what an octal cable looks like:

Cisco Octal Cable

The next solution is the 2509-RJ or 2511-RJ router. Pretty similar to what we had above, right? The only difference here is that the 2509-RJ has eight RJ-45 style ports on the back, into which you can install one end of a rollover cable; the other end then goes into the console port of the Cisco router or Cisco switch you want to console into. As you can probably imagine, the 2511-RJ is the same as the 2509-RJ with RJ-45 style ports on the back, except that the 2511-RJ has 16 of these ports. The reason some people prefer the RJ models is that they can customize the length of the rollover cables to suit their individual setup.

Cisco 2509-RJ Access Server

This is what a rollover cable looks like.

Cisco Rollover Cable

Finally, we will cover the NM-16a and NM-32a modules. These are NM modules that can be inserted into a modular Cisco router with an NM slot, such as the 2600, 2600XM, 2811 and 2821 routers. These modules carry the 68-pin async ports, so you can connect an octal cable to them. As I am sure you can imagine, the NM-16a has two async ports so it can support up to 16 devices, and the NM-32a has four async ports so it can support up to 32 devices, which is useful if you are going to build a CCIE lab (currently requiring 25 devices).

Cisco NM-16a Module

So hopefully that helps to take a little of the mystery out of what an access server is. If you are looking to build your own CCENT or CCNA lab, please check out our helpful lab suggestions here.
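Once the octal or rollover cables are in place, you reach each console through the access server by "reverse telnet": by long-standing IOS convention, the device on async line N answers on TCP port 2000 + N of the access server. The helper below is a hypothetical sketch that captures that mapping; the access server address is made up, and on a real box you would also configure ip host shortcuts so you can type a name instead of a port number.

```python
# Hypothetical reverse-telnet helper for a 2511-style access server.
import socket

ACCESS_SERVER = "192.168.1.10"  # example address of the 2509/2511; not real

def reverse_telnet_port(line_number: int) -> int:
    """Map an async line (1-16 on a 2511) to its reverse-telnet TCP port."""
    if not 1 <= line_number <= 16:
        raise ValueError("a 2511 exposes async lines 1-16")
    return 2000 + line_number

def console_connect(line_number: int, timeout: float = 5.0) -> socket.socket:
    """Open a raw TCP session to the router/switch console on that line."""
    return socket.create_connection(
        (ACCESS_SERVER, reverse_telnet_port(line_number)), timeout=timeout
    )

# Router1 on line 1, Router2 on line 2: no more swapping console cables.
print(reverse_telnet_port(1), reverse_telnet_port(2))  # 2001 2002
```

With eight or sixteen such sessions open in tabs, switching from Router1 to Router3 is a keystroke instead of a trip to the rack, which is the whole point of the access server.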
Not many people took me seriously in January when I warned that the effort of Japanese and Russian scientists to fill a wilderness park in Siberia with cloned wooly mammoths would lead directly to a prehistoric zombie apocalypse and the end of civilization. Some refused to worry because an onslaught of cloned zombie mammoths is the kind of danger that would destroy the world in a bad horror movie, not the actual world. Obviously they forget that both bad horror movies and inadvisable science are created by the twisted imagination and overly developed skills of actual humans.

Case in point is the narrow-leafed campion, a flower that became extinct 32,000 years ago but was brought back into horrible, undead existence by a team of Russian scientists who recovered a fruit of the plant from the burrow of an arctic ground squirrel in Siberia. (Just as a quick aside: who thinks it's a good idea to give free rein to create monsters to scientists whose judgment is not only bad enough to hang out in Siberia, but to spend a lot of their time in the burrows of arctic ground squirrels?)

The campion became a Franken-flower because one of its fruits and the seed it contained remained frozen for tens of thousands of years (like Mitt Romney's hair). U.S.-based paleontologist Grant Zazula theorized that, since Frankensteining worked with the flower, it might work on the 40,000-year-old frozen mammoth carcasses scientists dig out of the permafrost every couple of years.

In December, scientists in Yakutsk, Russia, revealed they had discovered the leg bone of a wooly mammoth that still had enough well-preserved marrow in its center to allow several attempts at cloning the animal, which died out 10,000 years ago. This is a completely separate set of scientists and a different project from the one I mentioned the previous January. Lead scientists on the project such as Semyon Grigoriev predict they may be able to clone the massive mammoth within five years, exactly the same time frame cited by Japanese mammoth-reviver Akira Iritani at the time I wrote my warning. Teams of Japanese scientists agreed with the Russians in December, in fact, almost as if they knew something about the cloning of mammoths the rest of us don't.

They undoubtedly knew about the successful cloning of the extinct Pyrenean ibex in 2001, for example. You will note that 2001 is more than twice as many years ago as the five years two unrelated groups of scientists cited as the delay required to clone and produce a wooly mammoth. They also knew about the genes of the extinct Tasmanian tiger resurrected as part of the genetic material of a mouse, though the animal itself could not be reproduced.

There are still doubters, of course. Or, perhaps, scientists who have no doubt but do have a strong incentive to conceal the truth: "The public should not leap to the conclusion that we are on the edge of cloning woolly mammoths or dinosaurs," according to David Wildt, a senior scientist at the Smithsonian's National Zoological Park in Washington, D.C., as quoted in National Geographic in 2009. The reason? Not because it's impossible to do, or because it's impossible to find enough cells with intact nuclei to allow for cloning. Nope. Wildt's obstacle was that there are no suitable surrogate mothers for long-dead species. That hardly seems like an immovable object. Surrogate mothers from species that are almost suitable, with transfer to artificial environments in mid-gestation if necessary, could overcome that problem.
Especially if the payoff is a giant, money-printing prehistoric zombie animal park that could also spawn a series of dinosaur-eats-human movies whose biggest weakness is that Jeff Goldblum appears frequently but is never eaten.

All of this, of course, is rank, irresponsible speculation. Except that even CBS News admitted in December that "cloning of wooly mammoths is no longer sci-fi." I'm afraid the headline may be more right than CBS knew. Naysayers have slammed the video as a hoax simply because it's obviously not real. But if you view the video below closely, and wish hard enough to believe, it's clear that this video, which went viral online a couple of weeks ago, clearly and unquestionably documents the existence of live(ish) wooly mammoths in the wild in Siberia. Or possibly a Sasquatch. Or an additional shooter on the grassy knoll, or a pack animal toting supplies to the site where new faked shots of the moon walk are being filmed. You just have to look closely enough and in the right state of mind. Hysterical ignorant panic works fine for most people, but if you have a favorite form of irrationality, try that on for size. I'm sure the mammoths won't mind.
In this interview, Rohyt Belani, CEO at PhishMe, illustrates the magnitude of the phishing threat. He discusses techniques, consequences and protection tips.

What happens once your identity gets stolen? How exactly does a phisher benefit from gaining access to your sensitive information? What can he do?

Of course, what can be done with stolen information depends on the information itself and on the goal of the hacker. The first thing to clarify is that there are two types of phishing attacks: consumer-oriented phishing attacks and enterprise-oriented phishing attacks. In the case of an attack on an individual, the end goal is to steal the person's identity for financial gain: to obtain credit cards and launder money through those channels; to gain access to online identities, for example bank, PayPal and eBay accounts; to make purchases that are easily converted into money, such as train tickets and mobile phone top-ups; and so on. This type of cyber-crime is usually quickly detected, so the payload is limited, but it's exceptionally difficult to identify the criminal behind the fraud, leaving them free to practice their craft with little fear of capture.

More recently, criminals have realised there's more to gain from a corporate attack, but with the same anonymity. Think of a bank: every time a consumer falls victim to a phishing scam, the losses are ring-fenced to that one account. However, if just one employee's workstation is compromised and the attackers gain a foothold inside the corporate network, then that's a whole new ball game, as theoretically that could expose every customer account and more. A phisher needs just one person to click on a malicious link, or open an attachment laden with malware, to gain access to the organisation's network. From this vantage point there's a multitude of possibilities: stealing R&D information, stealing customer details, even holding the organisation to ransom. One of the most important things is to be aware that emails are an attack vector and to treat all messages with caution, whether they're from a stranger, a friend or a colleague.

What are some of the most clever phishing schemes you've encountered?

Phishing attacks can be categorised into three main forms: those with an authoritative tone, those that prey on greed, and those with an opportunistic message. I find that emails that adopt an authoritative approach are 25% more likely to draw the (un)desired response from the victim. Typically they will demand the user complete an action within a certain timeframe, instilling a sense of urgency, and outline the consequences of non-participation. For example, a message claiming to be from the accounts or HR team stating that the payroll system is being migrated to another company and that the employee must follow a link and validate their bank account details by a certain date, or their salary can't be paid into their account on the next pay day.

Greed is another motivator that attackers will use to draw people in. One such example is a message from the company stating that, due to a particularly successful year, a raffle will be held in which one employee will win a prize: it could be a gift card, a holiday, or even a car. Everyone is encouraged to click a link to enroll in the scheme. Opportunistic messages can also be successful: for example, a free lottery to win tickets to the Olympics; every tax season we see a spike in emails claiming to be from HMRC; and even the U.S. presidential elections were used to try to trick people.
While these are the types of themes phishers will use, timing is another tactic they employ. Many successful phishing attacks are launched on Monday mornings and Friday afternoons, as this is when people tend to be clearing their inboxes and can be distracted more easily.

With the growing sophistication of phishing attacks, what can people do in order to protect themselves?

The best form of defence is vigilance, as all too often security controls alone are not enough. At the very least, ALL electronic communications should be treated with caution. Even a message from a trusted friend or colleague may not be from who it purports to be. In a corporate environment, it's important that every employee recognises the part they play in the organisation's security posture as a whole. If you receive an email attachment that you weren't expecting, don't just open it – instead, check with the sender that they did indeed send it. While it may take an extra 10 seconds, it could save hours or even days if the attachment proves to be malicious. Everyone should learn how to read URLs so they can readily identify which are genuine and which aren't, as attackers often try to entice victims to click on a link to a website they control by making it look legitimate.

As an organisation, instead of solely relying on technical controls, spend quality time educating the employee base. And I don't just mean putting up a few posters – these passive techniques, alone, are not enough. You need to take a pro-active approach and immerse people in on-the-spot training with true-to-life experiences and scenarios. This way user behaviour can be changed and the message is more likely to be remembered.

What is, currently, the magnitude of the phishing threat?

Spear phishing against employees is the number one threat organisations face today. That's not just me saying it – there are numerous headlines that validate this statement. If you look at all the major breaches organisations have suffered recently, the vast majority can be traced back to a phishing attack – RSA, Mitsubishi, the chemical and defence sector; the list goes on. The reason this is the attack vector of choice is that it targets the weakest link in an organisation's security posture – a human.

"A phisher can easily send out millions of emails. Even if it tricks just 1% into clicking the link or divulging personal credentials, that's quite a lot of hits! When phishers do their homework, and craft legitimate-looking emails, the success rates increase. But when it comes to enterprise-grade phishing, it's a different ball game. The attackers know they're up against enterprise-grade security defences. The first thing they change is keeping the volume of emails low to prevent messages being caught in filters or other technical controls. They will spend time doing their homework and studying the individuals within the organisation, on places like LinkedIn and Facebook and other online forums, to craft a very specific message which is sent to just a handful of people. It takes just one person to respond to the message to give them a foothold in the environment."

A recent report by Trend Micro estimates that 91% of all cyber attacks that classify as advanced persistent threats begin with a spear phishing email.
Annotated Bibliography: Early Childhood Development
India N. Forde-Jones
Etowah High School

Karoly, L. A., & Gonzalez, G. C. (2011). Early care and education for children in immigrant families. Future of Children, 21(1), 71-101.

This article was written in 2011; therefore, it is a current source. This source was found on Galileo, a library subscription resource. This article was published in Future of Children magazine. This article explains the struggles of immigrant children growing up in the United States. There is a growing population of immigrants in America, and their children have to deal with various problems that make life hard for them. Their parents earn low minimum wages, and some may not even speak English, which may put the child at risk for developmental delay and poor academic performance. Lynn Karoly and Gabriella Gonzalez examine the current role of and future potential for early care and education (ECE) programs in promoting healthy development for immigrant children. Participation in center-based care and preschool programs has been shown to have substantial short-term benefits and may also lead to long-term gains as children go through school and enter adulthood. I will use this in my research paper to show that there are development problems in all children. I will use this to compare the risks in all children with developmental problems.

"A substantial and growing share of the population, immigrant children are more likely than children with native-born parents to face a variety of circumstances, such as low family income, low parental education, and language barriers that place them at risk of developmental delay and poor academic performance once they enter school." "According to data from the 2005–06 American Community Survey, of the 15.7 million immigrant children in the United States, nearly 5.7 million are age five or younger." "Among immigrant children under age eighteen, for instance, 28 percent are in a linguistically isolated household where no one age fourteen or older speaks English 'very well,' 26 percent have parents without a high school degree, and 22 percent have family income below the poverty line." "The potential for high-quality early-learning settings to advance school readiness and academic achievement in absolute terms and to narrow gaps between less advantaged and more advantaged groups of children has spurred greater interest in promoting access to such programs, especially for disadvantaged children."

Kenney, M. (2012). Child, family, and neighborhood associations with parent and peer interactive play during early childhood. Maternal & Child Health Journal, 88-101.

This source was written in 2012; therefore, it is a current source. This source was found on Galileo, a library subscription resource. It was published by Maternal & Child Health Journal. This journal is read by pediatricians, child psychologists and other doctors to understand growth in children. This is a scientifically grounded survey showing that children develop more through interactions with family and others. In the first 5 years of life, brain development is very critical.
Studies supporting the value of regular, sustained play interactions with peers and caregivers during a period of highly active neural network formation are derived from investigations across many disciplines. Even modern media is "Taking Play Seriously" and highlighting parent and professional concerns about the perceived loss of play in the lives of today's children. Peer play in early childhood is a primary context for the acquisition of social and linguistic skills, preparing the child for school readiness and success. Physically active play is hypothesized to function primarily for strength, endurance, and fitness in children. Language is also a key factor in helping the child express emotions with parents and friends. If the parents don't engage in the child's life and development, then the child won't improve through the critical stages. I will use this information to show how interactions with other people and things help develop the mind of a small child. It will show that introducing children to new things can help them get through the critical 5 years in which they are supposed to learn.

"Recent increases in single headed households and work schedules in which both parents in a 2-parent household are employed leave little time for parent-initiated/led play." "Considerable literature exists on the developmental functions and benefits of children's play activities, including contributions to cognitive, physical, and social/emotional well-being." "The purpose of this study was to examine national patterns with respect to opportunities that children 1–5 years of age have for taking advantage of significant early learning/socialization opportunities through peer and parent interaction." "The current findings indicate children in poor, non-English-speaking households with limited education, poorer maternal health and greater parenting stress were read to/told stories less than children from the highest level income, English-speaking households with more education, better maternal mental health and limited parenting stress."

Ma, X., Shen, J., Lu, X., Brandi, K., Goodman, J., & Watson, G. (2013). Can quality improvement system improve childcare site performance in school readiness? Journal of Educational Research, 106(2), 146-156.

This source was written in 2013; therefore, it is a current source. This source was found on Galileo, a library subscription resource. This source was published in the Journal of Educational Research, which is read by scientists and psychologists. The authors evaluated the effectiveness of the Quality Improvement System (QIS) developed and implemented by the Children's Services Council of Palm Beach County (Florida) as a voluntary initiative to improve the quality of childcare and education. They adopted a growth model to investigate whether child care improved school readiness. Their control group is a group of children in preschool, and the other group is children not in preschool. The goal is to see if preschool helps with school readiness. It has been shown that children who went to preschool grew up more successful than those who didn't. The information in this source is used to explain the processes in child development. I will use this to compare going to preschool versus not going to preschool for a child. "Formative years, referring to the age from newborn to 5 years, represent a critical stage of child development.
Maria Montessori (1909) viewed the mind of children at this stage as so absorbent that the quality of education can have a long-lasting benefit in their entire lives." "Compared with individuals in the preschool control group, those in the preschool intervention group completed significantly more years of education, were significantly more likely to attend a 4-year college, and achieved significantly better in intellectual and academic tests as young adults." "Many characteristics of childcare in the home, childcare facility, and community impact school readiness, according to Thompson (2002), who maintained that home is the primary setting for developing school readiness but emphasizes that experiences from the childcare facility are critically important." "Particularly, Children's Services Council of Palm Beach County in Florida designed and implemented its QIS as a comprehensive, voluntary, systematic initiative to prepare children for school readiness, with goals to (a) produce positive outcomes for children; (b) provide consumer education for parents; and (c) define, advocate for, and obtain the resources necessary to encourage, support, and promote quality early care and education for children who spend considerable time there."

Rossin-Slater, M. (2015). Promoting health in early childhood. Future of Children, 25(1), 35-64.

This source was written in 2015; therefore, it is a current source. This source was found on Galileo, a library subscription resource. It is published by Future of Children, which is a book that is read by pediatricians all over the world. It explains the positive effects of promoting health in children. Children who are healthy early in life grow up to be healthier adults, and they also become better educated. The US falls behind other advanced countries in early childhood health, putting the health of future generations at risk. Because of the strong connection between early ages and adult ages, early childhood provides a critical window to improve children's disadvantages through evidence-based interventions and reduce inequality. Maya Rossin-Slater examines the evidence behind a variety of programs that target three groups: women at risk of getting pregnant, pregnant women, or children through age five. This information will be used to show that promoting health in a child's life is very important in child development. I will use it to show the differences in outcomes between a child whose parents don't promote health care and a child whose parents do.

"Though it's among the wealthiest countries in the world, the United States fares relatively poorly by standard indicators of early childhood health." "For example, according to the U.S. Centers for Disease Control and Prevention (CDC), the U.S. infant mortality rate was ranked 32nd among the 34 countries of the Organization for Economic Cooperation and Development in 2010." "The idea that early-life conditions can have lasting consequences on lifelong human welfare was most famously put forth by David J. Barker, a British physician and epidemiologist, who coined the phrase 'fetal origins hypothesis.' Barker argued that adverse in utero conditions can 'program' a fetus to have metabolic characteristics that are associated with future disease." "People with low birth weight were 25 to 44 percent less likely to pass English and math exams at age 16, and 9 to 16 percent less likely to be employed in their 20s and 30s."
WASHINGTON—As the collection of information about individuals and its use in marketing and research continue to grow in size and complexity, it is clear that there is no quick and easy way to prevent privacy and security breaches. At a January 30 symposium at Georgetown Law School entitled "Big Data and Big Challenges," legal experts discussed steps industry and government might take to protect data privacy and security.

Big data has blurred the line between public and private information, said Paul Ohm, a privacy expert and senior policy advisor to the Federal Trade Commission. "We have more sensitive data, collected more of the time, by more people, and almost all of it is unregulated," said Ohm, who stressed that his comments were his own and not official FTC positions. He predicted that the volume of data being collected and the sophistication of collection methods will eventually make it common for individuals to suffer hardships due to the release of sensitive data.

An increasing concern in the privacy debate is that seemingly benign data about individuals can be used to predict future behavior, some of which may be considered sensitive. A high-profile example of such data use emerged in 2012 in a New York Times report about how retailer Target's use of analytics to identify which customers were pregnant resulted in a teenager being sent coupons for baby items. "Lots of facts about you revealed in public, when aggregated, start to reveal private habits," Ohm said.

It's also likely that what most individuals consider to be their most sensitive information, such as health-related information used for medical research, is also the most secure, Ohm said. "I worry so much less about medical and health research because we've had centuries, if not millennia, to think about things like codes of conduct, common ethics, institutional review boards—all things we do when we engage in human subject review," he said. "Companies do very little if any of this."

Getting Business Analysts to Think More Like Researchers

By contrast, the information that people provide in less controlled or uncontrolled venues carries far greater potential to cause harm, Ohm said. One possible solution is for businesses to think more like researchers, he suggested. "I would love to see a move to force companies to think much more about the ethics of what they're doing, come up with external watchdogs, and look at the number of people within their walls who are allowed to see data and decide whether it raises too much of an unjustifiable risk."

Government regulation also could play a role. Ohm noted that there have been cases of people losing their jobs because they accessed certain company data, but those cases tend to reach the public only when data breaches occur at hospitals or other institutions that have established internal procedures to deal with breaches. Businesses, he said, may pay closer attention to the issue of internal access if they were required to disclose firings that resulted from data breaches.

Federal legislative action on privacy and data security remains a possibility, but even with security breaches drawing media attention, legislation has yet to make it out of Congress. That could be due to a number of factors, including leadership changes, multiple committees having oversight, and private sector measures to secure commercial data, said Francine Friedman, senior policy counsel with the law firm Akin Gump Strauss Hauer & Feld in Washington.
Federal lawmakers have shown interest in regulating the collection, use, sharing and storage of online data as recently as the 112th Congress, which saw more than 20 pieces of legislation dealing with data privacy and security, mobile device privacy, cyber security, data storage and breach notification, Friedman said. The proposals addressed issues such as the sale of data regarding Internet users' online behavior, data breach safeguards and the legality of employers asking employees and potential employees to provide access to their social media accounts. But Congress did not pass any of these proposals.

"I used to say we're one breach away from getting a breach notification law," Friedman said. "There was a series of high-profile breaches. Fortunately, most of the data breaches have been maybe username and email address. Because there are safeguards that companies put into place, which is the right thing for them to do…most of those breaches have not resulted in a lot of harm."

The Potential for a Data Breach Notification Standard

Industry has shown some interest in a federal standard for breach notification, Friedman said, as most states have data breach notification requirements and many businesses operate in multiple states that may have different requirements. "They want one standard, they don't want 50 standards," she said. "That's the one area where I think there may be some action."

The prominent role of big data analytics in political campaigns is another possible reason Congress has been slow to move on actions regulating the sharing and use of data, Friedman said. The influence of big data in driving President Obama's re-election campaign strategy was not lost on either party. "In fact, the [Democratic National Committee] is asking the Obama campaign to please share this treasure trove of information," Friedman said. "Jumping on the bandwagon, the [Republican National Committee] at their recent meeting said, 'We need to be better about our collection and use and sharing of data.' Under Howard Dean, the DNC required all of the states to share their data up."

The Obama Administration also has said it is exploring ways to address issues of online privacy and security that don't require Congressional approval, Friedman said. The administration's "Privacy Bill of Rights" is one example, but the effect of such actions may be limited to raising awareness among consumers, she said.

Christopher Doscher is a writer, editor and executive speechwriter in Laurel, Md. Home page photo of Georgetown Law School via Georgetown University.
There's something unusual sitting in the parking lot of the Allen County Public Library in Fort Wayne, Ind. Pay a visit to the 50-foot trailer and you might be surprised with what you find. Inside are various tools for cutting and shaping wooden objects, an electronics work bench, an injection molding machine and one of the most advanced gadgets for inventors, a 3-D printer.

Allen County is one of just a handful of public libraries that have set up multipurpose workshops for patrons who want to share and collaborate in order to create and build things. The terms used to describe these spaces include "makerspaces," "fab labs" or "hackerspaces." So why does the Allen County Public Library have a high-tech lab for would-be designers, engineers and inventors? "The library is in the learning business, not just the book business," said Director Jeff Krull. "Anytime libraries come across an opportunity for people to learn and grow, they should do it."

There are nearly 10,000 public libraries in the U.S., and patrons increasingly rely on them for access to technology. More than 90 percent of public libraries offer formal or informal technology training, according to a recent survey conducted by the American Library Association. Much of that training relates to instruction for computer skills, general software applications and Internet use. Makerspaces and fab labs cater to a particular type of library patron: inventors, artists, entrepreneurs, crafters and youth groups. The technology used in these workshops can revolutionize the manufacturing process, allowing designs and creations that can be modified to suit individuals in ways not possible with mass production.

Makerspace: A makerspace is a location where people with common interests — often in computers, technology, science, digital or electronic art (but also in many other realms) — can meet, socialize and collaborate. Makerspaces incorporate elements of machine shops, workshops or studios where hackers can come together to share resources and knowledge to build and make things.

Maker Meetups: A maker meetup is where groups of people with similar interests meet to work together, collaborate, create and share resources.

Contact Summit: The Contact Summit bills itself as a "working festival of innovation where the Net's leading minds and entrepreneurs can connect with the people who are building the social technologies of tomorrow." Held in different cities, the event focuses on peer-to-peer solutions in technology, business, arts, education and government.

Fab lab: The term "fab lab" traditionally refers to fabrication labs, which began as an outreach project from the Massachusetts Institute of Technology's Center for Bits and Atoms. These fab labs share core capabilities, so that people and projects can be shared across them. In New York, the Fayetteville Free Library's fab lab doesn't emulate these fab labs exactly, preferring to call its lab a fabulous lab and leaving specific capabilities up to the needs of the community it serves.

Photo: Patrons at the Fayetteville Free Library in Fayetteville, N.Y., use the fab lab's 3-D printer.

Krull believes that the lab at the Allen County library has worked out well, but for libraries looking to bring members of the community into their physical space, the county's approach may not be the best option. "While the trailer is located right across the street from the library building, it is not drawing patrons inside," Krull said.
He thinks it would be advantageous to move the space into one of the library buildings. Currently, with the relatively small size of the trailer, makerspace programs must be limited to 12 users at a time.

Looking ahead, Considine would like to offer the Fayetteville fab lab to youth camps for a nominal materials fee. On previous trips to the library, elementary-school children have shown interest in the possibility of making missing game and Lego pieces. She also plans to showcase the wide variety of patron creations. Krull's plans include extensive programming over the summer — some free, some with a modest charge. He is also looking at bringing the space into a larger, permanent location.

Allen County and Fayetteville have created quite a buzz among librarians interested in offering nontraditional services that use cutting-edge technology. Both Krull and Considine feel that creating access to emerging technologies is completely in line with the needs a public library serves. Many libraries view these projects as test beds for other communities to embrace the future, according to Marcia Warner, past president of the Public Library Association, a division of the American Library Association. "Libraries have always been about books, information and an educated citizenry," she said. "It seems like a natural progression to move into an area of facilitating information and material creation."

The librarians offering access to the first makerspaces and fab labs agree about their impact. "By providing access and opportunity to experiences, libraries provide a pathway for transformation," Considine said. "Technology is not the death of the public library today. It will, however, change libraries as they rethink their space and role." Libraries, historically, retool continually, and the pace of evolution of the library promises only to move more quickly.

Pat Newcombe is the associate dean for library and information services at the Western New England University School of Law Library. Nicole Belbin is head of access services at Western New England University School of Law Library.
Whether it's an emergency announcement or a page in a school, how do you determine the speaker and amplifier required for a specific location? IP paging systems can provide exactly the right sound level to exactly the right location, but what is the right sound level? How loud does the emergency paging system need to be? How much sound is needed to hear an announcement in a classroom, or in a sports field where there is a lot of background noise? This article provides the technical background and practical information needed to help you determine the right power and speakers required.

What does "loud" mean?

Just how loud does sound need to be to be heard, and how loud is too loud? If you ask people in the city, too loud is the noise from traffic or from the garbage collection; in the suburbs, it's the sound of the lawn mower too early in the morning. Sound levels can even be loud enough to damage your hearing. On the other hand, is the sound loud enough to be heard?

We measure the loudness of sound in decibels (dB). It is sometimes referred to as the Sound Pressure Level (SPL). 0 dB is the minimum sound level a person with good hearing can hear. 130 dB is the point where sound becomes painful. Even sound levels above 85 dB can be a problem; most experts recommend that you use earplugs when continuously exposed to 85 dB and above. But what does 85 dB mean? These common sounds, in rough order of increasing loudness, give a sense of scale: normal conversation (about 3 ft. away); loud singing or a washing machine; an alarm clock (two feet away); a blow dryer or subway train; a power mower or chainsaw; a rock concert or thunderclap.

Sound Levels and Distance

The further you are from the speaker, the lower the sound level. Sound level falls off following the inverse square law; as an estimate, the sound level is reduced by 6 dB for every doubling of the distance. The following chart provides an example of how the sound level is reduced.

|Distance||Sound level||Is about like the sound of|
|1 M (3.28 ft.)||100 dB||Blow dryer|
|2 M (6.56 ft.)||94 dB||Loud traffic or diesel truck|
|4 M (13.1 ft.)||88 dB||City traffic inside the car|
|8 M (26.25 ft.)||82 dB||Telephone dial tone or loud singing|
|16 M (52.49 ft.)||76 dB||Vacuum cleaner, shower|
|32 M (105 ft.)||70 dB||Single passenger car|
|64 M (210 ft.)||64 dB||Loud conversation|
|128 M (420 ft.)||58 dB||Normal conversation|

Selecting the Right Amplifier

A speaker is specified by the amount of sound it can provide at a distance of 1 meter with 1 watt of input power. It is also defined by the nominal speaker angle and the maximum output at a certain power level. By adding power to the speaker (measured in watts) we can increase the sound level: the sound level increases by 3 dB for every doubling of power. Each type of speaker has a maximum limit to the power that can be applied. The following chart provides an example of the sound level based on the power applied to the speaker; a worked sketch of both rules appears after the chart. If a speaker is rated at 100 dB at 1 W/1 m, then the following sound levels apply:

|Power in Watts||Sound level at 1 Meter away from speaker||Is about like the sound of|
|1 watt||100 dB||Blow dryer|
|2 watts||103 dB||Power mower at 3 ft. away|
|4 watts||106 dB||Chain saw|
|8 watts||109 dB||Screaming child|
|16 watts||112 dB||Jet engine at 325 ft. away|
|32 watts||115 dB||Sandblasting|
|64 watts||118 dB||Hearing damage possible|
|128 watts||121 dB||Rock concert, thunderclap (close)|
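To make the two rules of thumb easy to apply, here is a minimal Python sketch (not from the article; the function name and example figures are illustrative) that combines the speaker's 1 W/1 m rating, the applied power and the listening distance:

```python
import math

def spl_at_listener(rating_db, watts, distance_m):
    """SPL heard at distance_m from a speaker rated rating_db at 1 W/1 m.

    +3 dB per doubling of power    -> 10 * log10(watts)
    -6 dB per doubling of distance -> 20 * log10(distance_m)
    """
    return rating_db + 10 * math.log10(watts) - 20 * math.log10(distance_m)

# Reproduce the charts above for a speaker rated 100 dB at 1 W/1 m:
print(spl_at_listener(100, 1, 2))  # ~94 dB: 1 watt heard from 2 m (distance chart)
print(spl_at_listener(100, 4, 1))  # ~106 dB: 4 watts heard from 1 m (power chart)
```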
What is the right sound level?

The lowest sound level most of us can hear is about 20 dB. Normal conversation is between 58 dB and 65 dB. The threshold for pain and hearing damage is about 130 dB. We want to select sound levels that are between these extremes and, ideally, above the background noise. The optimal volume is the point where everyone hears the page.

How to do it

To select the right components for the paging system, we start at the point where people will be listening to the sound, and then work backwards to the speaker and amplifier. Here's an example that shows how to select the right sound level in a classroom; a small solver for this calculation appears at the end of this article. To make sure everyone hears the page in a noisy room, the sound level needs to be at least 10 dB above the ambient noise level. If we are in a classroom that has a noise level of 60 dB, then we should provide a minimum sound output of at least 70 dB. If the speakers are located in the ceiling, which is 3 meters (10 ft.) high, and the children are sitting at their desks, they are about 2 meters (6.5 ft.) away from the speakers. To calculate the right sound level from the speaker, we can use the chart of distance vs. sound level: if we start with 78 dB at 1 meter from the speaker, then at 2 meters (6.5 ft.) the sound level is 72 dB.

One other thing to consider is the angle of the speakers. In the classroom we would like to select speakers with as wide an angle as possible. This allows us to use fewer speakers and still have everyone hear the page. As an example, there are some ceiling speakers that fit into a drop ceiling with 100-degree angles. This means that the maximum sound will be heard in a circular area with a diameter of about 15 ft. In most situations we can get away with one speaker in a classroom that's about 25 ft. x 25 ft. in size. The power required is usually only 0.5 watts.

The sound level of the paging system needs to be loud enough that people can hear the announcement over the background noise, but not loud enough to hurt their ears. We can select speakers for the wall or ceiling, or even mount them on a pole. We just have to know the sound level required at a certain distance, and we can work backwards to select the speaker and amplifier power.

If you need help with your emergency paging or classroom notification system, just contact us. We have a lot of experience with these systems so I'm sure we can be helpful.
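Here is a minimal sketch of the backwards calculation described above, using the same two rules of thumb; the 90 dB speaker rating is an illustrative assumption, not a figure from the article:

```python
import math

def watts_required(rating_db, target_spl_db, distance_m):
    """Amplifier power needed so listeners at distance_m hear target_spl_db.

    Work backwards: add 20*log10(distance) to get the SPL needed at 1 m,
    then convert the dB shortfall against the 1 W/1 m rating into watts.
    """
    spl_needed_at_1m = target_spl_db + 20 * math.log10(distance_m)
    return 10 ** ((spl_needed_at_1m - rating_db) / 10)

# Classroom example: 60 dB ambient noise, page 10 dB louder, desks ~2 m away.
target_db = 60 + 10                    # 70 dB at the desks
rating_db = 90                         # assumed ceiling-speaker rating at 1 W/1 m
print(watts_required(rating_db, target_db, 2))  # ~0.04 W -- well under 0.5 W
```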
DNSSEC: Security for Essential Network Services

In July 1997, Eugene Kashpureff, founder of AlterNIC, took advantage of an inherent security vulnerability in DNS (Domain Name Service) and carried out the first DNS spoofing attack. "It's all done with standard MIME code, right out of the box. The only thing the bot does is make a couple of interesting small queries on a public name server," Kashpureff quipped.

Five years later, the security issues have become much more visible -- and problematic. On October 21, 2002, in an attempt to bring down the Internet, a group of hackers from South Korea and the U.S. flooded the thirteen domain name root servers using a common DDoS (Distributed Denial of Service) attack. Seven of the thirteen servers completely failed to respond to legitimate DNS requests, and two failed intermittently. And just last month, another DNS spoofing attack rerouted traffic intended for the Al Jazeera website to an American pro-Iraq-war site instead. Fortunately, in all cases, the top-level server administrators were able to successfully counter the attacks, but all are in agreement that they might not be so lucky next time.

Clearly the DNS infrastructure has major unaddressed vulnerabilities. What is the Internet community doing to improve DNS security? Fortunately, they're not sitting around idly: the IETF (Internet Engineering Task Force) is drafting a new standard, DNSSEC (DNS Security Extensions), to combat the threats by providing end-to-end authenticity and integrity. How can DNSSEC be implemented to prevent potential future catastrophic attacks, and why has it not been widely deployed by the Internet community to date? What are the largest DNS security holes, and how can you protect your network? Let's take a look at the answers to these and other burning DNSSEC questions.

DNS Security Vulnerabilities

Ever since Paul Mockapetris published the original DNS architecture document in 1984, DNS has been the bedrock network service that supports the Internet. DNS has worked flawlessly for many years, but it was designed long before anyone was aware of the Internet security issues that have since developed. As the Internet has become an accepted part of the general community and no longer the province of a small club of highly technical engineers, integrity threats and the need for security awareness have increased.

Because DNS is a UDP-based (User Datagram Protocol) network service, it has a number of major inherent security vulnerabilities. Most are instances of more general problems, but a few are inherent to peculiarities of the DNS protocol itself. Unlike TCP (Transmission Control Protocol), UDP does not have a mechanism for verifying a packet source, which makes it very vulnerable to source packet spoofing and interception attacks. There are four major points of attack: cache spoofing, traffic diversion, denial of service attacks on the top-level domain servers themselves, and buffer overruns. Cricket Liu, Executive VP of InfoBlox, Inc. and author of O'Reilly's "DNS & BIND," notes that there have been recent attacks on the DNS infrastructure using each of the known DNS vulnerabilities.

How does a slave (secondary) know it is talking to the proper master (primary)? Because it is using UDP for communications, the data source is not verifiable, and the DNS data can be spoofed or corrupted on its way between the upstream primary server and the downstream secondary. This is a major hole in the protocol.
As the IETF threat analysis paper puts it, "The DNS protocol does not allow you to check the validity of DNS data. While packet interception attacks are far from unique to DNS, DNS's usual behavior of sending an entire query or response in a single unsigned, unencrypted UDP packet makes these attacks particularly easy for any bad guy with the ability to intercept packets on a shared or transit network."

When Kashpureff spoofed the servers, he inserted some additional code into his machine's standard DNS queries that forced victims' machines to respond to his servers. When they did, an extra "A record" was sent to the victim's name server that included information on how to connect to Kashpureff's domains. Kashpureff did it partially as a publicity stunt for his alternative domain namespace company, but the distributed and hierarchical nature of the data meant that corrupted DNS data might end up in downstream caches, where it could persist. Caching servers have a variable TTL (Time to Live); if the TTL value is set very high -- a week, for example -- the corrupted data can cause harm for quite some time.

Another common attack is one in which "falsified" DNS responses divert traffic away from the intended site. The socially engineered "hijacking" of aljazeera.net -- the Al Jazeera website -- apparently by US-based pro-war extremists is a good example. In this case, information about the ownership of the domain was modified so that it no longer pointed to the correct set of servers. If a user attempted to access the site, they saw the "hacked" web pages even though the Al Jazeera site itself wasn't touched.

BIND, the software that handles DNS requests, has several built-in vulnerabilities to buffer overrun attacks. These are well-known holes where large numbers of service requests cause the software to overrun into memory buffers not allocated to the program. They can be exploited in a number of nefarious ways to cripple the application or bring down the server. A recent and very disturbing example was the Li0n worm's exploitation of a hole in the TSIG (Transaction Signature) code (part of the new DNSSEC BIND implementation) in a series of March-April 2001 attacks.

Distributed Denial of Service Attacks

DDoS (Distributed Denial of Service) attacks are simple to mount and incredibly difficult to prevent. Because ICMP requests are the basic mechanism used to monitor the health of the Internet, it is nearly impossible to secure this channel against attack. The hierarchical nature of DNS, combined with the tiny number of top-level domain servers, makes those servers particularly tempting hacker targets, as was confirmed by the October attack mentioned earlier.

How DNSSEC Works

From our discussion so far, it is clear that DNS has some major security issues that urgently need to be addressed. The Internet and security engineering communities have responded to the threats by developing DNSSEC, a new secure DNS protocol, which addresses the data integrity and source-spoofing issues by means of public key distribution. Interestingly, the extensions do not protect against buffer overruns or DDoS types of attack, nor do they provide confidentiality -- another major issue. To maintain as much backwards compatibility as possible, the DNSSEC protocol requires only minor changes to the DNS protocol. DNSSEC adds four additional record types (SIG, KEY, DS, and NXT) and two new message header bits (CD and AD).
Because DNS messages carried over UDP were originally limited to 512 bytes, DNSSEC requires the use of the EDNS0 extensions, which override that limitation so that larger keys and signatures can be accommodated.

The DNSSEC implementation uses the familiar public/private key system. The site administrator generates a key pair for the secured domain. The private key is generally kept on the domain's primary master name server, and the public key is published in the domain in a KEY record. The administrator signs the domain's data records to verify their authenticity and adds a SIG record, which contains the signature, for each domain record. The administrator submits the public KEY to the administrator of the domain's parent to sign, with proof that he is the administrator of that domain. The parent domain's administrator signs the domain's public KEY and returns it. Unfortunately, a major unresolved issue is that nobody has really determined key authentication and verification methodologies; how keys are configured initially, and how they are updated, has yet to be determined as well.
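To make the signing workflow concrete, here is a minimal command-line sketch using the dnssec-keygen and dnssec-signzone tools that ship with BIND 9. The zone name, algorithm, key size and file names are illustrative, and exact flags vary between BIND releases:

```
# Generate a public/private key pair for the zone
# (algorithm and key size are example choices)
dnssec-keygen -a DSA -b 768 -n ZONE example.com

# After adding the generated public KEY record to the zone file, sign the zone;
# this produces db.example.com.signed with the SIG records added
dnssec-signzone -o example.com db.example.com

# Finally, point the zone statement in named.conf at db.example.com.signed
```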
DNSSEC solves many of the worst DNS security problems. It is based on generally known technology and is backwards compatible with the existing DNS infrastructure. It is completely transparent to the user population, and to downstream administrators if they choose not to implement the extensions. While it does require installation of BIND 9 or later, you should upgrade to BIND 9 for many other beneficial reasons anyway. So why has DNSSEC not been widely adopted by the Internet community to date? After all, increased security of such a vital service should be an important priority for the maintenance of the Internet. Unfortunately, the implementation of DNSSEC is problematic because of the large increase in the computational load it puts on the servers, the hierarchical model of trust, the lack of tools to support the additional administrative overhead, the need for a higher level of time synchronization between the servers and, most critically, the fact that DNSSEC by itself does not begin to solve all the known DNS security vulnerabilities.

Increased Computational Load

DNSSEC significantly increases the size of DNS response packets, which drastically increases the computational load on the DNS servers and also increases the query response time. Just the process of verifying signed resource records is computationally intensive, particularly if you choose larger key sizes; the DNSSEC standard allows up to 1024-bit keys. Adding digital signatures to a domain increases each record size 5-7 times, which puts a burden on upstream name servers.

Hierarchical Trust Model

Like DNS itself, DNSSEC's trust model is almost totally hierarchical. Any compromise in the chain between the root servers and a target machine can damage DNSSEC's ability to protect the integrity of the data owned by that downstream system.

Lack of Management Tools

DNSSEC is an order of magnitude more complex than DNS. Since the system is relatively new, there are few tools to help with the cumbersome task of maintaining a signed domain. Serious problems can occur with configuration errors and expired keys, and monitoring and log analysis tools are virtually non-existent. Debugging the errors by hand can be time consuming and difficult.

Forces Stricter Time Synchronization

DNSSEC requires at least some time synchronization between the primary and secondary name servers. This is problematic because NTP (Network Time Protocol) itself is insecure, which opens up the possibility of DoS attacks based on invalid times.

What Can I Do Now?

You can turn on DNSSEC today if you have BIND 9. You need to distribute the keys to your upstream provider (provided they support it as well). The majority of the implementations so far have been on military networks, so you are unlikely to have access to those servers. There are things you can do in the meantime to minimize your vulnerability to attack while the top-level Internet community finalizes the specifications and works out the deployment bugs. Cricket Liu has created a handy checklist to help you:

- Get educated -- buy one of the many good books on DNS or take a class on DNS
- Review your name servers' configurations and the contents of your zones
- Use publicly available tools such as dnswalk
- Eliminate single points of failure in your DNS infrastructure
- Make sure your name servers are authoritative for the reverse mapping zones that correspond to all of your networks
- Minimize the burden you place on "high-level" name servers (such as the roots)
- If you use RFC 1918 address space, set up the corresponding zones on your name servers
- Make sure your firewalls allow DNS messages from port 53 to high-numbered ports on your name servers, or you won't get responses
- If you use Active Directory or Windows 2000/Windows XP's network registration features, make sure that your dynamic update and query traffic remain local
- Your name servers must be authoritative for a zone with the same name as the name of your Active Directory domain

In today's ultra security-conscious environment, an inherent security vulnerability is an open invitation to intrusions from harmless -- and not so harmless -- hackers. DNS has been a potential security hole since it was first developed and widely deployed, long before anyone took network and computer security seriously, but until recently, not much had been done to patch the vulnerabilities in DNS. IETF's new DNSSEC standard is the first step in the long process toward completely securing DNS and should be able to help improve the overall security of the Internet if its problems with trust, lack of tools, and packet size can be overcome.

http://www.dnssec.net/ - The official DNSSEC website, with all the resources on this subject in a nicely organized fashion
http://www.ietf.org/internet-drafts/draft-ietf-dnsext-dns-threats-02.txt - A recent analysis of the security threats against DNS
http://www.ietf.org/internet-drafts/draft-ietf-dnsext-dnssec-intro-05.txt - The IETF draft of the DNSSEC protocol
http://www.isc.org/products/BIND/bind-security.html - A list of the known vulnerabilities in BIND and their patches
http://www.cert.org/advisories/CA-2002-19.html - Buffer overrun vulnerabilities and patches
http://www.icann.org/committees/security/dns-security-update-1.htm - ICANN committee report on DNS vulnerabilities
http://cyber.law.harvard.edu/icann/mdr2001/archive/pres/lewis.html - Paper on DNS security vulnerabilities
Routed Protocols vs. Routing Protocols

In this article we will cover the difference between routed protocols and routing protocols. This is one of the things you may be asked about in a job interview or on the CCNA exam, so you must know the difference between a "routed" protocol and a "routing" protocol; it is one of the key concepts in the routing and networking world.

A routed protocol is a protocol by which data can be routed. Routed protocols include IP, AppleTalk, and IPX. These protocols require an addressing scheme and subnetting. The addressing scheme is used to determine the network to which a host belongs and to identify that host on that particular network. All hosts on an internetwork use the services of a routed protocol: routers and servers, but workstations too. The only two routed protocols in use today are IP and IPX, but IPX has been dropped from Cisco exams and is not in much use these days. If you are studying routed protocols, the best advice is to focus on the IP routed protocol.

A routing protocol is different and is only used between routers. It makes it possible for routers to build and maintain routing tables. There are three classes of routing protocols:

- distance vector
- link state
- hybrid

OSPF is one of two link state protocols; the other one is IS-IS. EIGRP is the only hybrid protocol, although in most literature you will see EIGRP described as a distance vector routing protocol; Cisco itself speaks of EIGRP as an enhanced distance vector routing protocol. Every other routing protocol falls into the distance vector category: RIP, RIPv2 and BGP. Every protocol has a different administrative distance: RIP is 120, IGRP is 100, EIGRP is 90 and OSPF is 110. Static routes normally have a lower administrative distance than any learned route; with the defaults, a static route is 1 and a directly connected route is 0. A short configuration sketch follows below.
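As a quick illustration of the distinction, here is a minimal Cisco IOS configuration sketch; the addresses and the OSPF process number are made up for the example. IP is the routed protocol carried on the interface, OSPF is the routing protocol that builds the routing table, and the static route shows the default administrative distance of 1:

```
interface FastEthernet0/0
 ip address 192.168.1.1 255.255.255.0    ! routed protocol: IP addressing on the link
!
router ospf 1                            ! routing protocol: OSPF (AD 110)
 network 192.168.1.0 0.0.0.255 area 0
!
ip route 10.0.0.0 255.0.0.0 192.168.1.2  ! static route, administrative distance 1
```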
Automated network threat detection can help meet the goals defined within the Critical Security Controls (CSCs), according to the SANS Institute.

The CSCs are a recommended set of actions for cyber-defense developed by the NSA Red and Blue teams, the US Department of Energy nuclear energy labs, law enforcement organizations and some of the nation's top forensics and incident response organizations. They are coordinated by SANS, maintained by the Center for Internet Security (CIS), and designed to mitigate modern attack profiles; they provide specific and actionable ways to stop attacks, with the goal of prioritizing and focusing on a smaller number of actions with higher pay-off results.

Perhaps unsurprisingly, SANS has found that using data science, machine learning and behavioral analysis can complement or improve traditional security methods when looking to drive efficiency in cyber-response. This type of technology picks up where perimeter security leaves off by providing deep, continuous analysis of both internal and Internet-bound network traffic to automatically detect all phases of a breach as attackers attempt to spy, spread and steal within a network.

Analytical methods can be used to monitor critical performance characteristics, such as network traffic, CPU usage and port activity, and identify unique events or trends that exhibit the behaviors of malicious activities. Analytics can also be used to flag abnormal behavior of end users, applications and other elements inside the organization by identifying activities that depart from a normal baseline established over a period of time; a toy sketch of this baseline approach follows below.

"Automated threat detection is making inroads to identify new patterns, detect events that may not match a specific signature, and determine behavioral abnormalities," wrote Barbara Filkins, senior SANS analyst, in the white paper, "The Expanding Role of Data Analytics in Threat Detection." She added, "Time-honored threat detection methods and perimeter-based security defenses add valuable layers of protection around information system assets, but neither is sufficient to defend completely against modern threats."
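The article doesn't prescribe a specific algorithm, but the baselining idea it describes can be sketched in a few lines of Python: learn the mean and spread of a metric, say outbound traffic per hour, over a training window, then flag samples that deviate too far. The metric, figures and threshold here are illustrative assumptions:

```python
import statistics

def build_baseline(history):
    """Summarize normal behavior from a training window of metric samples."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag a sample more than `threshold` standard deviations from the mean."""
    mean, stdev = baseline
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Example: hourly outbound megabytes for one workstation over two weeks
history = [12, 15, 11, 14, 13, 16, 12, 14, 15, 13, 12, 14, 13, 15]
baseline = build_baseline(history)
print(is_anomalous(14, baseline))   # False: within the normal range
print(is_anomalous(480, baseline))  # True: worth investigating as possible theft
```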
South Korea is one of the most developed countries in Asia and is the eighth largest electricity consumer in the world. It has been making strong efforts to increase the renewable energy portion of its energy mix. The country is backed by a strong solar PV manufacturing industry as well as supportive policies. Besides, South Korea has revised its nuclear power policy to cut its reliance on nuclear power, which would help the cause of solar PV adoption manifold. Global annual solar power production is estimated to reach 500 GW by 2020, from 40.134 GW in 2014, making this one of the fastest growing markets, and the South Korea Solar Power Market is estimated to reach $XX billion in 2020, with a CAGR of 10.6% from 2014 to 2020.

With fossil fuel prices fluctuating continuously, and disasters like Fukushima and Chernobyl raising serious questions about nuclear power, renewable sources of energy are the answer to the world's growing need for power. Hydro power has environmental concerns, so apart from water, the other renewable source of energy available in abundance is solar. The Earth receives about 174 petawatts of incoming solar radiation, making it the largest energy source on the planet. Other resources like oil and gas, water, and coal require a lot of effort and many steps to produce electricity, whereas solar farms can be established easily and the electricity they produce is simply fed to the grid.

Falling costs; government policies and private partnerships; downstream innovation and expansion; and various incentive schemes for the use of renewable energy for power generation are driving the solar power market at an exponential rate. On the flip side, high initial investment, an intermittent energy source, and the requirement of a large installation area to set up solar farms are restraining the market from growth. In recent years, a lot of research has gone into making production easier and cheaper, and into making solar panels smaller and more customer friendly. Much effort is being put into increasing the efficiency of solar panels, which used to have a very meagre efficiency percentage. Techniques such as nano-crystalline solar cells, thin film processing, metamorphic multijunction solar cells, polymer processing and many more will help the future of this industry.

This report comprehensively analyzes the South Korea Solar Power Market by segmenting it based on type (Concentrating type, Non Concentrating type, Fixed Array, Single Axis Tracker, and Dual Axis Tracker) and by materials (Crystalline Silicon, Thin Film, Multijunction Cell, Adaptive Cell, Nano crystalline, and others). Estimates in each segment are provided for the next five years. Key drivers and restraints that are affecting the growth of this market are discussed in detail. The study also elucidates the competitive landscape and key market players.
The energy sector in Zimbabwe presents immense investment opportunities, be it in power development, petroleum supply or the renewable energy sub-sectors. 26 billion tonnes of coal reserves are available for power generation, translating to 8,000 years of use at the current consumption of 3 million tonnes per annum.

Zimbabwe is bordered to the north by the Zambezi River and to the south by the Limpopo River, both of which flow into Mozambique. The country consists of the following major river systems, which form the basis of the seven river catchments the country has been divided into: Save, Runde, Mzingwane, Gwayi, Sanyati, Manyame and Mazowe. With the exception of the Save and Runde, the other main rivers drain into either the Zambezi or Limpopo. The annual potential yield at 10 percent risk (the resource available in a dry year of 1-in-10-year frequency) from all river basins in the country has been estimated to be 11.26 km³/year. This assessment excludes external surface water resources from bordering international rivers such as the Zambezi and Limpopo.
Personalized learning is an approach to learning where the focus is on the learner. The old approach of "one system fits all" is being replaced with an approach of "choosing what fits best for you." It helps to address the distinct needs and interests of individuals, and technology and digital skills have a big role to play in this.

The introduction of technology has changed the way teachers teach and students learn. Not only that, it has simplified the way students and teachers communicate with each other. Technology has successfully entered classrooms in the form of online course content, online classes and lectures, software for students with special needs, online assessments and various other such tools. Due to so many advantages, not only teachers and students but parents too are embracing such technical innovations. With early learning of basic technology elements, it is definitely future oriented. Let us look at the various related technology concepts.

Personalized learning aims at designing a learning system based on the unique needs of any student: student-centric instruction. Here, technology empowers you to make these changes very easily, be it changes in course content or assessment. There is software available that keeps track of student performance and adjusts according to those observations.

The next generation of technical advancements in education is to integrate mobile technologies into classrooms to enhance the educational experience. Mobile technologies empower students to access educational resources on their own devices anytime and anywhere, maintaining privacy and safety at the same time.

One-to-one computing refers to institutions issuing each student an electronic device in order to access the internet, digital course material and digital books. One-to-one offers the benefit of equal access, easy maintenance and simple networking. In order to save initial costs, some institutions also encourage a "bring your own device" policy. Due to this cost-versus-benefits trade-off, one-to-one computing is still the subject of debate.

To engage students with technology, hardware is only one part of the puzzle. You also need the right software to harness the best of what the technology has to offer. From creating lessons to keeping records to communicating with students outside the classroom, there are apps to assist both students and teachers in every aspect of education. Various classroom apps are not only making learning a fun experience but also saving time in activities like student/instructor feedback or getting parental consent.

The idea of gamification of learning is to motivate students to learn in fun ways by using video games or game elements. It maximizes student engagement and interest. It may not necessarily involve playing a real video game in class; using fun game elements like challenges, levels, opportunities, player control and progress points can make the environment count as gamified. Using game-like names, grading or language is the simplest way to incorporate gamification in classrooms.

The flipped classroom, a new method of learning where the classwork and homework elements have been reversed, has been possible only due to technology. Video and audio lectures are made available to students at home via the internet, and class time is devoted to discussions and activities around the subject.

Cloud computing has unlimited potential when it comes to collaboration. This is true for teacher-teacher, teacher-student and teacher-parent applications.
It saves time, space and money for everyone. Virtual classrooms are web-based classrooms that support a large number of participants without their ever needing to come to a real classroom. This has helped education reach everyone interested, of any age or economic background.

The information era has dramatically changed the way children are educated. There are new teaching techniques, and enormous amounts of information are available to everyone at the click of a button. Although technology is growing fast every day, effective integration with education systems is still evolving. The ultimate aim is to make learning effective and the outcomes more positive.
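As a companion to the gamification passage above, here is a minimal sketch of how simple the underlying points-and-levels mechanics can be. It is not any particular classroom product; the activity names and thresholds are invented for the example.

```python
# Toy gamification mechanics: award points for activities, derive a level.
# All activity names and thresholds are hypothetical examples.

POINTS = {"quiz_completed": 10, "homework_submitted": 20, "class_challenge_won": 50}

def award(student_scores: dict, student: str, activity: str) -> None:
    """Add the points for one activity to a student's running total."""
    student_scores[student] = student_scores.get(student, 0) + POINTS[activity]

def level(points: int) -> int:
    """One level per 100 points -- an arbitrary choice for the sketch."""
    return points // 100 + 1

scores = {}
award(scores, "asha", "quiz_completed")
award(scores, "asha", "class_challenge_won")
print(scores["asha"], level(scores["asha"]))  # 60 points, level 1
```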
<urn:uuid:af535697-d181-406e-aeb6-781ef1e6198a>
CC-MAIN-2017-04
https://www.hexnode.com/blogs/personalized-learning/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00470-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949452
766
3.703125
4
DIY remote imaging system, courtesy of USGS
- By Kevin McCaney - Jan 11, 2013

Agencies have been making extensive use of remote monitoring systems, for everything from tracking snowfall and other weather data to bridge monitoring and even spotting airborne drug smugglers. One of the most active users of remote monitoring systems is the U.S. Geological Survey, which deploys them for a variety of programs. So active, in fact, that a team of USGS scientists built their own solar-powered, automated imaging system using inexpensive off-the-shelf parts.

In a paper published on USGS' website, the team -- Rian Bogle, Miguel Velasco and John Vogel -- offers a detailed, illustrated account of putting together the system, which has been used for environmental monitoring in locations such as the South Pacific and the Mojave Desert. The system described is one type the team created, the authors write, and is intended to be “easily replicated, low-cost, highly robust, and is a stand-alone automated camera designed to be placed in remote locations, without wireless connectivity.”

There are four basic components to the system, the authors write: a data logger, an imager (camera), sensors and a power subsystem. The data logger provides the system's main functionality, acting as clock, programmatic controller and environmental interface, with a programmable software stack at its core. The team used a Campbell Scientific CR200-series data logger as the system's controller and a Canon EOS Digital Rebel camera as its imager. They note that the camera supports a persistent power source and, no small consideration for remote work, a power switch that can be left on. It also allows for electronic image triggering. Images are stored on an 8 GB flash card in the camera, which the team said would typically be good for a few months' worth of images.

Sensors for the system depend on how and where it is going to be used. This system has been used, for instance, to measure wind speed and direction (with a Davis Instruments 7911 anemometer), soil moisture (Campbell tipping buckets) and air temperature (Campbell temperature sensors). The system runs on the Campbell Scientific PakBus operating system, with a script written in CRBasic, and it uses Campbell's PC400 development software to communicate with the data logger. For power, the team typically used a 10-watt solar panel and a 7-amp 12-volt battery, which delivered enough juice for the camera to take pictures as often as every 10 minutes during daylight hours.

For any other agencies thinking of building such a system of their own, the team's paper goes into great detail, with pictures and diagrams, on how they assembled, operated and deployed it. Agency research teams deploying remote monitoring systems could use this template to build their own, particularly if they don't have experience constructing such devices. The idea was to take advantage of commercial products to build a system from “readily available, low-cost components, powered entirely by solar energy, highly flexible in scheduling and automation, and requires no remote communication, minimal maintenance and only occasional visitation for data retrieval,” the team writes. “The ultimate goal of the design of this system is that it be easily replicated by others who may have minimal technical or electronic experience.”

Kevin McCaney is a former editor of Defense Systems and GCN.
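The article's power figures invite a quick feasibility check. The sketch below is a back-of-the-envelope budget, not the USGS team's actual analysis; it reads the battery's "7-amp" rating as 7 amp-hours (the usual rating style for small sealed batteries) and invents plausible current draws for the logger and camera.

```python
# Back-of-the-envelope power budget for a solar imaging station.
# Battery interpretation and current draws below are assumptions.

BATTERY_WH = 7 * 12            # 7 Ah * 12 V = 84 Wh of storage
PANEL_W = 10                   # 10 W panel, per the article
SUN_HOURS = 5                  # assumed average full-sun hours per day

LOGGER_W = 0.1                 # assumed idle draw of the data logger
SHOT_W, SHOT_SECONDS = 5, 10   # assumed camera draw and wake time per image
SHOTS_PER_DAY = 12 * 6         # every 10 minutes over ~12 daylight hours

harvest = PANEL_W * SUN_HOURS                                  # Wh gathered per day
load = LOGGER_W * 24 + SHOT_W * SHOT_SECONDS / 3600 * SHOTS_PER_DAY
print(f"daily harvest {harvest:.1f} Wh, daily load {load:.1f} Wh")
print(f"days of autonomy on battery alone: {BATTERY_WH / load:.1f}")
```

Under these assumptions the panel gathers far more energy than the station uses, and the battery alone could ride out several overcast weeks, which is consistent with the team's claim of minimal maintenance.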
<urn:uuid:c24b527d-0ecf-411d-a6dc-5422bcc2eef9>
CC-MAIN-2017-04
https://gcn.com/articles/2013/01/11/usgs-team-builds-remote-imaging-system.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00288-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935788
730
2.71875
3
Drip, Drip, Drop: Is Your Data Leaking?

A data leak occurs almost every day in the U.S., and citizen and corporate information is stolen. Some leaks result from hackers gaining illegal access to restricted systems, but others occur because of human error at organizations such as hospitals, insurance companies and universities. These data leaks range from the small — affecting just a few people — to the colossal — affecting millions, according to the Privacy Rights Clearinghouse's list of data breaches.

Data breaches on the personal level can have devastating consequences, such as identity theft. On the enterprise side, data breaches can not only damage a company's reputation but also result in the loss of future business, according to Mary Clarke, CEO of Cognisco, a knowledge assessment and learning solutions provider. “A serious data leakage issue could haunt a business for years by impacting customer confidence and reducing future sales,” she said. “Once trust is lost among customers, it is not easily regained.”

One recently documented data breach occurred at the University of Florida's College of Dentistry. In early October, the college discovered that more than 330,000 patient records had been compromised due to the presence of rogue software. “This issue [of data security] affects not just an organization like the University of Florida, but many large organizations, both in the higher education community, as well as in the commercial space,” said Mark Bower, director of information protection solutions at Voltage Security, an enterprise security company specializing in information encryption. “These days, things like Social Security numbers or any information that can be used to create an identity of a person certainly has value on the open market. And so any large system is going to be a target.”

One solution to the kind of data leakage that results from hacking is to implement technology that allows companies to share information only with intended business partners. Data-centric encryption protects records in just that way. “So I can encrypt, for instance, a Social Security number. I'll still get some data that's now encrypted and protected, but it still looks and feels like a Social Security number,” Bower said. “So even if [the system is] accidentally compromised, it will not have access to the real data.” Such solutions have become much less complex and easier to implement than in the past, Bower added.

However, the other kind of data leakage — the kind that results from employee error or misunderstanding — is a bit trickier to solve. An August report by research company InsightExpress found that two-thirds of employees have engaged in at least one security-threatening activity, such as failing to log off a work computer at the end of the day or storing a work laptop in an insecure location. Earlier this month, a Baylor Health Care System employee in Dallas left a laptop computer in his vehicle and it was stolen, compromising the Social Security numbers of 7,400 patients. And in February, Milwaukee County officials in Wisconsin accidentally released a number of court records for posting to a third-party Web site. These records contained details including names and payments made for paternity tests and psychological evaluations.

For this reason, Clarke stressed the need for training and policies on data within the enterprise. Companies should start by assessing employees to find where misunderstanding is occurring, she said.
Then, it is important to clarify “the roles and responsibilities of all job levels and [target] employees with responsibility for [data] transfer with additional training,” she said. “Companies and employers may also want to impose stricter consequences for security-risky behaviors.”

Addressing data leakage through a combination of sound company policies, effective employee development and technology solutions will help industries fight the threat to organizations' and the general public's vital information.

– Mpolakowski
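Bower's description of data-centric encryption corresponds to what the industry calls format-preserving encryption: ciphertext keeps the shape of the plaintext, so nine digits go in and nine digits come out. Production systems use vetted constructions such as NIST's FF1; the sketch below is only a toy illustration of the format-preserving idea, built from a keyed digit shift, and must never be used to protect real data.

```python
# Toy illustration of format-preserving encryption: a 9-digit input
# yields a 9-digit output. NOT cryptographically secure -- real systems
# use vetted modes such as NIST FF1. The key handling is hypothetical.
import hashlib

def toy_fpe(ssn: str, key: bytes) -> str:
    digits = [int(c) for c in ssn if c.isdigit()]
    assert len(digits) == 9, "expects a 9-digit SSN"
    # Derive a per-position keystream from the key (a toy construction).
    stream = hashlib.sha256(key).digest()
    out = [(d + stream[i]) % 10 for i, d in enumerate(digits)]
    return "{}{}{}-{}{}-{}{}{}{}".format(*out)

print(toy_fpe("078-05-1120", b"demo-key"))  # output still looks like an SSN
```

The point of the exercise is Bower's: a downstream system that is compromised sees something shaped like a Social Security number but never the real value.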
<urn:uuid:6eedcad1-d263-448c-9c04-ab30ff2c12cd>
CC-MAIN-2017-04
http://certmag.com/drip-drip-drop-is-your-data-leaking/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00196-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942497
818
2.8125
3
Water Robots Hit the Waves / November 22, 2011

Last Thursday, four self-propelled robots called Wave Gliders left San Francisco on a 60,000-kilometer journey. The robots, each about the size of a dolphin, are built by Liquid Robotics and will travel together to Hawaii, then split into pairs. One pair will head to Japan while the other ventures to Australia, IEEE Spectrum magazine reports.

Solar-powered sensors aboard the Wave Gliders will measure water temperature, clarity, salinity and oxygen content; gather information on wave features and currents; and collect weather data. The point of the expedition, so to speak, is to “push the boundaries of science, and prove to the world that this type of technology is ready to increase our understanding of the ocean,” Graham Hine, senior vice president of operations, told IEEE Spectrum.

The collected data streams via the Iridium satellite network and will be made freely available in accessible form on Google Earth's Ocean Showcase. For researchers who register, the data will be available in a more complete form.

Photos courtesy of Liquid Robotics
<urn:uuid:98bb2a19-38bc-44c6-be86-38fd9ef51249>
CC-MAIN-2017-04
http://www.govtech.com/photos/Photo-of-the-Week-Water-Robots-Hit-the-Waves-11222011.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00222-ip-10-171-10-70.ec2.internal.warc.gz
en
0.90216
231
3.109375
3
A novel use of supercomputing has resulted in a unique approach to researching the history of the world. SGI and Kalev Leetaru, Assistant Director for Text and Digital Media Analytics at the University of Illinois, set out to map the full contents of Wikipedia's English-language edition using a history analytics application. To implement the application, Leetaru took advantage of the UV 2000's global memory architecture and high-performance capabilities to perform in-memory data mining. According to the press release, the project can now visually represent historical events using dates, locations and sentiment data gleaned from the text.

Leetaru recently published Culturomics 2.0, which utilized 100 million global news articles over a 25-year span and built a network of 10 billion people, places and things containing 100 trillion relationships. The 2.4-petabyte dataset visualized changes in society, including the lead-up to the Arab Spring and the location of Osama Bin Laden. That led to the idea of building a historical map based on Wikipedia entries.

The project encompassed a wide range of analysis, generating videos, graphs and charts detailing any number of relationships. Examples include connectivity structures, visualizations of persons plotted and cross-referenced in the same article, and graphs depicting the online encyclopedia's sentiment context over a millennium. This does not mark the first attempt to map Wikipedia entries, but previous attempts involved manual metadata entry, which resulted in a narrower scope of location data. In this case, SGI and Leetaru were able to identify and build connections based on every location and date found in Wikipedia's four million pages.

To achieve these results, the entire English-language Wikipedia dataset was loaded into the UV 2000's memory, although no specifics were provided about how big a chunk of RAM that involved or how many processors were utilized. The UV 2000 architecture can scale up to 4,096 threads, using Intel E5-4600 processors, and up to 64 TB of memory. Once in memory, the Wikipedia data was geo- and date-coded using algorithms that tracked locations and dates in the text. An average article included 19 locations and 11 dates. The resulting connections were then placed in a large network structure representing the history of the world. With all tags and connections established, visual analysis of the entire dataset could be generated in “near real-time.”

The in-memory application model gave Leetaru the ability to test theories and research historical data in a way that has never been done before. “It's very similar to using a word processor instead of using a typewriter,” he said. “I can conduct my research in a completely different way, focusing on the outcomes, not the algorithms.”
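The article says each page was tagged by algorithms that track locations and dates but gives no detail. As a rough illustration of the date half of that tagging, here is a minimal sketch built on a regular expression for four-digit years; real pipelines use full temporal and geographic taggers, and this pattern is my own assumption, not Leetaru's code.

```python
# Minimal date tagging: pull four-digit years out of article text and
# count mentions. A toy stand-in for the full geo/date coding pipeline.
import re
from collections import Counter

YEAR = re.compile(r"\b(1[0-9]{3}|20[0-2][0-9])\b")  # matches years 1000-2029

def year_mentions(text: str) -> Counter:
    """Return a histogram of year mentions found in the text."""
    return Counter(int(y) for y in YEAR.findall(text))

sample = "The siege of 1683 ended in 1684; by 1999 historians disagreed."
print(year_mentions(sample))  # Counter({1683: 1, 1684: 1, 1999: 1})
```

Run over four million articles held in shared memory, even a tagger this naive makes the reported averages (19 locations and 11 dates per article) easy to picture as simple per-article histograms aggregated into one network.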
<urn:uuid:1c920211-cd45-4df2-b032-2d59ef9e56e1>
CC-MAIN-2017-04
https://www.hpcwire.com/2012/06/25/supercomputer_sails_through_world_history/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00434-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933886
578
3.25
3
Ethernet (IEEE 802.3) is a 1-persistent CSMA/CD LAN. The basic idea is that a host listens to the cable first when it wants to transmit. If the cable is busy, the host waits until it goes idle; otherwise it transmits immediately. If two or more hosts begin transmitting simultaneously on an idle cable, they collide. All colliding hosts then terminate their transmissions, wait a random time, and repeat the whole process. The protocol is simple, and hosts can be installed on the fly without taking the whole network down. On the other hand, to know for sure that the frame it just sent did not collide with any other frame, a transmitter needs to send a minimum of 64 bytes per frame, which can represent substantial overhead. Other disadvantages of Ethernet are that it is nondeterministic, has no notion of priority, and becomes less efficient as speeds increase.

Ethernet is the most widely installed local area network (LAN) technology. Specified in the IEEE 802.3 standard, it was originally developed at Xerox's Palo Alto Research Center, drawing on the earlier ALOHAnet packet radio design, and was then developed further by Xerox, DEC, and Intel. An Ethernet LAN typically uses coaxial cable or special grades of twisted-pair wiring. Over time, Ethernet has largely replaced competing wired LAN technologies such as Token Ring, FDDI, and ARCNET. The primary alternative for contemporary LANs is not a wired standard but the family of IEEE 802.11 standards, also known as Wi-Fi.

The Ethernet standards comprise several wiring and signaling variants of the OSI physical layer. Fast Ethernet, or 100BASE-T, provides transmission speeds up to 100 megabits per second and is typically used for LAN backbone systems supporting workstations with 10BASE-T cards. Gigabit Ethernet provides an even higher level of backbone support at 1,000 megabits per second (1 gigabit, or 1 billion bits, per second), and 10-Gigabit Ethernet provides up to 10 billion bits per second.

Systems communicating over Ethernet divide a stream of data into shorter pieces called frames. Each frame contains source and destination addresses and error-checking data so that damaged frames can be detected and retransmitted. In terms of the OSI model, Ethernet provides services up to and including the data link layer.
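The first paragraph describes the 1-persistent CSMA/CD procedure in prose. Below is a small sketch of the sender's side, assuming the standard truncated binary exponential backoff (after the nth collision, wait a random number of slot times in the range 0 to 2^min(n,10) - 1, giving up after 16 attempts); the `medium` object is a hypothetical stub I invented for illustration, not a real NIC interface.

```python
# Sketch of 1-persistent CSMA/CD sender logic with truncated binary
# exponential backoff. `medium` is a hypothetical stub, not a real API.
import random

SLOT_TIME = 51.2e-6  # seconds; the classic 10 Mbps Ethernet slot time

def send_frame(medium, frame, max_attempts: int = 16) -> bool:
    for attempt in range(1, max_attempts + 1):
        while medium.busy():          # 1-persistent: wait for idle,
            pass                      # then transmit immediately
        if medium.transmit(frame):    # True means no collision detected
            return True
        # Collision: back off a random number of slot times and retry.
        k = min(attempt, 10)          # cap the growth of the backoff window
        medium.wait(random.randrange(2 ** k) * SLOT_TIME)
    return False                      # give up after too many collisions
```

The randomized backoff is what breaks the symmetry between colliding hosts: two stations that collided are unlikely to pick the same waiting time twice in a row.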
<urn:uuid:d439a799-270c-4493-bc65-e690e16d94eb>
CC-MAIN-2017-04
http://www.fs.com/blog/ethernet.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00462-ip-10-171-10-70.ec2.internal.warc.gz
en
0.904377
489
3.9375
4
If you think you are protected from malware because you use Linux, think again, warn researchers from AV vendor Dr. Web, who identified and examined a record-high number of Trojans for Linux this month – and the month isn't over yet.

According to the researchers, the different variants of the three distinct Trojans they found all seem to have been created by the same person. Most of these Trojans are built to carry out DDoS attacks via a number of protocols and request types – they are capable of launching SYN, UDP, TCP and ping flooding, as well as mounting DNS and NTP amplification attacks. There are variants that target Linux ARM distributions, others that infect servers and desktops running 32-bit versions of Ubuntu and CentOS, and others still that target 64-bit versions of Linux.

Once on a target machine, the Trojans first make sure that they will be started automatically each time the machine is rebooted, then they collect information about the system's hardware and software (CPU model, available memory, OS version, etc.). The information is then sent in encrypted form to the remote C&C server, from which the malware receives updates and commands on what to do next, i.e. which target to attack.

“The command servers facilitating control over the Trojans are located mainly in the territory of China, and the corresponding DDoS attacks are directed mainly against Chinese websites,” they noted. Infected Linux machines, on the other hand, are not located only in China, so be careful.
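The Trojans' first move, per the write-up, is to persist across reboots. The article does not say which autostart mechanism they abuse, so the sketch below is only a generic defensive illustration: it lists a few common Linux autostart locations and flags recently modified entries. The paths and the 7-day window are my assumptions, not indicators from Dr. Web's analysis.

```python
# Generic sweep of common Linux autostart locations for recent changes.
# The locations and the freshness window are illustrative assumptions.
import os
import time

AUTOSTART_PATHS = [
    "/etc/rc.local",
    "/etc/cron.d",
    "/etc/systemd/system",
    os.path.expanduser("~/.config/autostart"),
]

def recently_modified(path: str, days: int = 7):
    """Yield files under `path` whose mtime falls inside the window."""
    cutoff = time.time() - days * 86400
    if os.path.isfile(path):
        if os.path.getmtime(path) > cutoff:
            yield path
    elif os.path.isdir(path):
        for name in os.listdir(path):
            full = os.path.join(path, name)
            if os.path.isfile(full) and os.path.getmtime(full) > cutoff:
                yield full

for p in AUTOSTART_PATHS:
    for hit in recently_modified(p):
        print("recently changed autostart entry:", hit)
```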
<urn:uuid:a8cbd572-e7ca-4af9-aeaa-40c2565edbc2>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2014/05/19/record-month-for-linux-trojans/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00370-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950945
326
2.59375
3
Designing products with the environment in mind

We consider the environment at every stage of the product lifecycle, starting with how a product is designed. Instead of one eco-friendly product, we consider the following when designing all of our products.

Smarter material choices

We are committed to making smarter choices about the materials that go into our products by using environmentally preferable materials and avoiding those that are not. We hold ourselves to the world's toughest environmental standards, such as Europe's RoHS and REACH. And we go beyond these standards with our chemical use policy, reducing or eliminating other substances even if they are not restricted. We also use recycled-content plastics in many of our products, helping close the recycling loop. For example, in FY16 we used more than 14.1 million pounds of post-consumer recycled plastics in our products, for a cumulative total of 36.2 million pounds since the start of FY14. Learn More

Energy efficiency

To lower your power bills and our collective carbon footprint, Dell's engineers are focused on making our products as energy efficient as possible. Dell products use less and less power with each generation, saving you an additional 25 percent or more in desktop- and notebook-related energy costs since 2008, and Dell servers are warrantied to run for extended periods at up to 113°F/45°C, allowing for an extensive geographic range of potential chillerless operation. Learn More

End of life and reuse

We make upgrading your technology easy, which also means products can last longer, benefiting you and our planet. We also design for recyclability, so that when your products finally reach the end of their life, it is easy for our recycling partners to disassemble and process them. To do this, we collaborate with those same partners to determine which design features are best for recyclability. This takes into consideration easy disassembly, minimal glues and adhesives, restrictions on paints and coatings, and labeling recyclable materials so our environmental partners can identify them and put them toward the best possible reuse. For example, the exterior of our XPS 13 Ultrabook™ uses polymer-reinforced carbon fibers, which make the computer lightweight and cool to the touch. But that material also had to conform to EPEAT®'s criteria for recyclability, ensuring our recycling partners could return the material to usefulness. Learn More

Eco-labels

We find ourselves surrounded by products claiming to be green, but how do we know for sure? When searching for technology that meets today's environmental standards, look for the reputable third-party eco-labels ENERGY STAR, EPEAT and the 80 PLUS Program. Dell has a long commitment to these eco-labels. We stood with the U.S. EPA to launch its energy-efficiency mark, ENERGY STAR, and many of our products are registered across multiple countries for EPEAT — the highest bar for sustainably made electronics. Dell was also the first in the industry to offer an 80 PLUS® Gold certification for server power supplies and then the first to offer 80 PLUS Titanium-certified power supplies. Learn More
<urn:uuid:0801cc75-e054-4abc-8cdc-61b12f2b9209>
CC-MAIN-2017-04
http://www.dell.com/learn/ca/en/cacorp1/dell-environment-greener-products
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00186-ip-10-171-10-70.ec2.internal.warc.gz
en
0.93468
658
2.671875
3
What is the life cycle of a key?

Keys have limited lifetimes for a number of reasons. The most important reason is protection against cryptanalysis (see Section 2.4). Each time a key is used, it generates a number of ciphertexts. Using a key repetitively allows an attacker to build up a store of ciphertexts (and possibly plaintexts) which may prove sufficient for a successful cryptanalysis of the key value. Thus keys should have a limited lifetime. If you suspect that an attacker may have obtained your key, the key should be considered compromised and its use discontinued.

Research in cryptanalysis can also lead to possible attacks against either the key or the algorithm. For example, recommended RSA key lengths are increased every few years to ensure that improved factoring algorithms do not compromise the security of messages encrypted with RSA. The recommended key length depends on the expected lifetime of the key. Temporary keys, which are valid for a day or less, may be as short as 512 bits. Keys used to sign long-term contracts, for example, should be longer, say, 1024 bits or more.

Another reason for limiting the lifetime of a key is to minimize the damage from a compromised key. It is unlikely a user will discover that an attacker has compromised his or her key if the attacker remains "passive." Relatively frequent key changes will limit any potential damage from compromised keys.

Ford [For94] describes the life cycle of a key as follows:
- Key generation and possibly registration (for a public key).
- Key distribution.
- Key activation/deactivation.
- Key replacement or key update.
- Key revocation.
- Key termination, involving destruction or possibly archival.
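Ford's stages map naturally onto a small state machine. The sketch below is an illustrative Python model of those lifecycle transitions, assuming the `cryptography` package for the generation step; the allowed-transition table and the 90-day expiry policy are simplifications of mine, not part of Ford's taxonomy.

```python
# Illustrative key lifecycle state machine following Ford's stages.
# Transition rules and the 90-day lifetime are example policy choices.
from datetime import datetime, timedelta, timezone
from cryptography.hazmat.primitives.asymmetric import rsa

TRANSITIONS = {
    "generated": {"distributed"},
    "distributed": {"active"},
    "active": {"deactivated", "revoked"},
    "deactivated": {"active", "terminated"},   # reactivation or retirement
    "revoked": {"terminated"},
}

class ManagedKey:
    def __init__(self, lifetime_days: int = 90):
        self.key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        self.state = "generated"
        self.expires = datetime.now(timezone.utc) + timedelta(days=lifetime_days)

    def advance(self, new_state: str) -> None:
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

k = ManagedKey()
k.advance("distributed"); k.advance("active"); k.advance("revoked")
print(k.state, "expires", k.expires.date())
```

Encoding the legal transitions explicitly makes it impossible, for instance, to reactivate a revoked key, which is exactly the kind of discipline the lifecycle description argues for.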
<urn:uuid:d6be7121-00a1-40f2-a019-45d970ecdb6d>
CC-MAIN-2017-04
https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/life-cycle-of-a-key.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00214-ip-10-171-10-70.ec2.internal.warc.gz
en
0.908923
357
2.8125
3
If you're in tech, you hear these terms all the time and probably wonder what the differences are. They can get a bit confusing, which is why I make these kinds of articles.

Approaches to Artificial Intelligence

- Machine Learning: An approach to AI that focuses on enabling computers to learn without being explicitly programmed. Machine learning systems can be summarized as systems that learn from data rather than relying only on their programming, as normal computers do.
- Neural Networks: A type of Machine Learning. Artificial Neural Networks (ANNs) attempt to model biological systems, such as the brain, in order to learn when exposed to unknown inputs. They are made up of a series of nodes that are connected to each other, much like neurons in the brain.
- Deep Learning: A branch of machine learning that attempts to model high-level abstractions in data by using a deep graph with multiple processing layers, composed of multiple linear and non-linear transformations (Wikipedia). Note that deep learning is considered by many to be a simple rebranding of Neural Networks.
- Supervised Learning: A type of Machine Learning where the data you provide is already labeled. For example, you might have some samples that are clearly smiling faces and some that are clearly frowning, and you are looking to train the algorithm to tell the difference.
- Unsupervised Learning: A type of Machine Learning where the data you are providing is not labeled, so you are looking for the algorithm to tell YOU about patterns it finds, which you might not even be aware of.
- Expert Systems: A computer system that emulates the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning about knowledge, represented primarily as if–then rules rather than through conventional procedural code.

A sketch contrasting the supervised and unsupervised styles follows at the end of this piece.

I hope this has been useful.

[ Created: May 2016, Updated: August 2016 ]
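To make the supervised/unsupervised distinction above concrete, here is a minimal sketch assuming scikit-learn is installed. The tiny dataset is fabricated for illustration: the supervised model is handed labels, while the clustering model must discover structure on its own.

```python
# Supervised vs. unsupervised learning on a toy 2-D dataset.
# Data and labels are fabricated purely for illustration.
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X = [[0.1, 0.2], [0.2, 0.1], [0.9, 1.0], [1.0, 0.8]]
y = [0, 0, 1, 1]                       # labels: 0 = frowning, 1 = smiling

clf = LogisticRegression().fit(X, y)   # supervised: learns from the labels
print(clf.predict([[0.95, 0.9]]))      # -> [1]

km = KMeans(n_clusters=2, n_init=10).fit(X)  # unsupervised: no labels given
print(km.labels_)                      # tells YOU the grouping it found
```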
<urn:uuid:968c7fbc-a4bb-438e-aa75-901f0320ca86>
CC-MAIN-2017-04
https://danielmiessler.com/study/artificial-intelligence/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281746.82/warc/CC-MAIN-20170116095121-00516-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94957
388
3.5625
4
The concept of nanotechnology may have been first introduced by the famous physicist Richard Feynman in 1959 when he delivered a lecture titled, “There's Plenty of Room at the Bottom.” The lecture inspired Eric Drexler years later, who then helped popularize the concept. The Feynman Vision and Its Implications Feynman looked far beyond the laboratory accomplishments of his day (R. Feynman, 1961). He suggested that miniature manufacturing systems could build yet more manufacturing systems: “I want to build a billion tiny factories, models of each other, which are manufacturing simultaneously, drilling holes, stamping parts, and so on.” Working on a small enough scale, these could build with ultimate precision: “If we go down far enough, all of our devices can be mass produced so that they are absolutely perfect [that is, atomically precise] copies of one another.” He asked, “What would the properties of materials be if we could really arrange the atoms the way we want them?” He suggested that nanomachines could achieve this key objective by building things with atom-by-atom control: “It would be, in principle, possible (I think) for a physicist to synthesize any chemical substance that the chemist writes down. . . . Put the atoms down where the chemist says, and so you make the substance.” There's Plenty of Room at the Bottom – [zyvex.com] Now comes the interesting question: How do we make such a tiny mechanism? I leave that to you. However, let me suggest one weird possibility. You know, in the atomic energy plants they have materials and machines that they can't handle directly because they have become radioactive. To unscrew nuts and put on bolts and so on, they have a set of master and slave hands, so that by operating a set of levers here, you control the “hands” there, and can turn them this way and that so you can handle things quite nicely. Most of these devices are actually made rather simply, in that there is a particular cable, like a marionette string, that goes directly from the controls to the “hands.” But, of course, things also have been made using servo motors, so that the connection between the one thing and the other is electrical rather than mechanical. When you turn the levers, they turn a servo motor, and it changes the electrical currents in the wires, which repositions a motor at the other end. Now, I want to build much the same device—a master-slave system which operates electrically. But I want the slaves to be made especially carefully by modern large-scale machinists so that they are one-fourth the scale of the “hands” that you ordinarily maneuver. So you have a scheme by which you can do things at one-quarter scale anyway—the little servo motors with little hands play with little nuts and bolts; they drill little holes; they are four times smaller. Aha! So I manufacture a quarter-size lathe; I manufacture quarter-size tools; and I make, at the one-quarter scale, still another set of hands again relatively one-quarter size! This is one-sixteenth size, from my point of view. And after I finish doing this I wire directly from my large-scale system, through transformers perhaps, to the one-sixteenth-size servo motors. Thus I can now manipulate the one-sixteenth size hands. Well, you get the principle from there on. It is rather a difficult program, but it is a possibility. You might say that one can go much farther in one step than from one to four. Of course, this has all to be designed very carefully and it is not necessary simply to make it like hands.
If you thought of it very carefully, you could probably arrive at a much better system for doing such things. J Storrs Hall: Feynman Path to Molecular Nanotechnology – [nextbigfuture.com] Here are links and summaries of the first ten parts of the Feynman path to molecular nanotechnology as conceived and written by Foresight President J Storrs Hall. Feynman's Path to Nanotech (part 7) – [foresight.org] There are at least two major parts to a project to implement the Feynman Path. The first is essentially to work out a roadmap for the second. In particular, - Design a scalable, remotely operated manufacturing and manipulation workstation capable of replicating itself anywhere from its own scale to one-quarter relative scale. As noted before, the design is allowed to take advantage of any “vitamins” or other inputs available at the scales they are needed. - Implement the architecture at macroscale to test, debug and verify the design. This would be a physical implementation, probably in plastic or similar materials, at desktop scale, and would include operator controls that would not have to be replicated. - Identify phase changes and potential roadblocks in the scaling pathway and determine scaling steps. Verify scalability of the architecture through these points in simulation. Example: electromagnetic to electrostatic motors. It would be perfectly legitimate to use externally supplied coils above a certain scale if they were available, and shift to electrostatic actuation, which would involve only conducting plates, below that scale, and never require the system to be able to wind coils. - Identify the smallest scale, using the best available fabrication and assembly technology, at which the target architecture can currently be built. - Write up a detailed, actionable roadmap to the desired fabrication and manipulation techniques at the nanoscale. The Early History of Nanotechnology – [cnx.org] Nanotechnology is an essentially modern scientific field that is constantly evolving as commercial and academic interest continues to increase and as new research is presented to the scientific community. The field's simplest roots can be traced, albeit arguably, to 1959 but its primary development occurred in both the eighties and the early nineties. In addition to specific scientific achievements such as the invention of the STM, this early history is most importantly reflected in the initial vision of molecular manufacturing as it is outlined in three important works. Overall, an understanding of the development and the criticism of this vision is integral for comprehending the realities and potential of nanotechnology today.
<urn:uuid:0cf05a23-8dbb-4f2a-ade7-3b01690d4a73>
CC-MAIN-2017-04
http://www.hackingtheuniverse.com/singularity/nanotechnology/feynman-path
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00424-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950355
1,337
3.75
4
5 strategies for addressing cybercrime

From Jesse James to Butch Cassidy to Bonnie and Clyde, criminals have robbed individuals, stage coaches, trains and banks. Why? Because, as Willie Sutton famously said, “that's where the money is.” Fast forward to the internet age, and criminal conduct has expanded dramatically to include new types of fraud, theft and espionage conducted through cyberspace. Cybercrime can be far-reaching with long-term effects -- from the impact on organizations from the theft of intellectual property or business secrets to the consequences identity theft can have on an individual, including credit standing and loss of personal resources. Responding to cybercrime is even more challenging because the economics favor the criminals. With just a laptop, a single individual can wreak havoc on individuals and organizations with minimal cost and little risk of being caught.

More advanced technologies and protective measures will eventually deter nefarious conduct, help security officers catch and prosecute perpetrators and level what has become an unbalanced playing field. In the meantime, it is imperative that all digital users practice basic cybersecurity hygiene to increase their own protection and improve cybersecurity overall. It is estimated that roughly 80 percent of exploitable vulnerabilities in cyberspace are the direct result of poor or nonexistent cyber hygiene. While it is also important to address the remaining 20 percent of more-sophisticated intrusions -- advanced persistent threats, distributed denial of service attacks, botnets, destructive malware and the growing challenge of ransomware -- raising the bar for basic cyber hygiene will improve our overall cybersecurity protection profile and reduce the threat from cybercrime.

Cybersecurity is a shared responsibility and requires the attention of a broad range of stakeholders. It requires an effective public/private partnership that incorporates businesses and institutions of all sizes along with national, state, local, tribal and territorial agencies to produce successful outcomes in identifying and addressing threats, vulnerabilities and overall risk in cyberspace. Individual consumers also have a role, and adding cybersecurity to K-12 as well as higher education curriculums will help raise awareness for all users. Teaching users how to better protect themselves is a necessary component of any strategy. A framework addressing cybercrime should include these five strategies:

1. Raising awareness

A comprehensive and sustained national cybersecurity education campaign is essential for raising public awareness of the risk and impact of cyber activity and the need to deploy basic protective measures on desktops, laptops, tablets, phones and other mobile devices. The explosion of connected devices -- from smart refrigerators, lighting systems, heating and air conditioning and security services to autonomous automobiles -- puts an exclamation point behind the importance of cyber protection for individual users and organizations of all sizes and levels of sophistication. Cybersecurity education should cover the basics (a brief illustration of the first point follows the list):

- Use strong passwords.
- Apply system updates in a timely and efficient manner.
- Secure devices by enabling a firewall and deploying solutions to address viruses, malware and spyware.
- Learn not to click on email links or attachments unless the sender is known and trusted. Even then, phishing emails sometimes spoof the sender's identity to trick the user into clicking a link or attachment.
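As promised above, here is a toy password-hygiene check. The length and character-class thresholds are common recommendations rather than an official standard, and a real deployment would also screen candidates against breached-password lists.

```python
# Toy password-hygiene check; thresholds are illustrative, not a standard.
import string

def weak_reasons(pw: str) -> list:
    """Return a list of reasons a password looks weak (empty = OK)."""
    reasons = []
    if len(pw) < 12:
        reasons.append("shorter than 12 characters")
    classes = [string.ascii_lowercase, string.ascii_uppercase,
               string.digits, string.punctuation]
    if sum(any(c in cls for c in pw) for cls in classes) < 3:
        reasons.append("uses fewer than 3 character classes")
    return reasons

print(weak_reasons("password1"))              # both reasons trigger
print(weak_reasons("c0rrect-Horse-battery"))  # [] -- passes this toy check
```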
2. Leveraging trusted resources

Additionally, building, maintaining, scaling and updating an online source of information on how users of all levels of sophistication can establish and improve their protection profiles in cyberspace is imperative. Leveraging capabilities such as those created in the United States by the National Cyber Security Alliance through Stay Safe Online, or in the United Kingdom with Get Safe Online, to implement a comprehensive and sustained national education and awareness campaign is a fundamental component of any successful cybersecurity program. Current cybersecurity efforts, such as the Stop… Think… Connect campaign sponsored by the Department of Homeland Security, are a good start. However, existing programs need to scale more broadly to accelerate positive change. Enterprises can reference valuable tools such as the NIST Cybersecurity Framework, the Center for Internet Security/SANS Top 20 Controls, ISO 27001 and NIST 800-53 for recommendations on improving their overall cybersecurity profile.

3. Building an economic framework

Simply purchasing every new tool or security product is not the answer. From the individual user to the small business to the large enterprise, it is important to make cybersecurity investment decisions within a risk management construct that includes trying to secure the biggest bang for the buck. AFCEA International's Cybersecurity Committee examined this issue and provides useful information on the economics of cybersecurity. More information can be found here and here.

4. Working with invested partners

Improving our national and global capabilities to detect, prevent, mitigate and respond to cyber events through a joint, integrated, 24x7 public/private operational capability that leverages information sharing, analysis and collaboration should be a priority. To build a mature operational capability for cybersecurity, we should learn from how the National Weather Service and the Centers for Disease Control and Prevention leverage technology and data analytics to identify patterns and trends, issue early alerts and warnings, and recommend potential protective measures. Working through the global community to address gaps and coordinate law enforcement, investigation and prosecution of cyber criminals will help tackle both the economics and the anonymity that shelter nefarious cyber activity. Global agreement on cyber deterrence and norms of cyber conduct will benefit national and economic security, public health and safety and everyday life in cyberspace.

5. Implementing a response plan

Implementing a National Cyber Incident Response Plan is essential to national and economic security. It should recognize the unique nature and risk presented by cyber events and provide predictable, sustained clarity around the roles and responsibilities of various stakeholders at each threshold of escalation. A strategic yet agile framework should be accompanied by operational playbooks that focus on critical infrastructure. These steps are necessary to achieve ground truth and situational awareness during a cyber event.

There are initiatives across these topic areas, but many remain ad hoc. Ongoing improvement in cybersecurity requires a coherent, coordinated and collaborative approach across the stakeholder community. It is not just about the federal government; it is also about state, local, tribal and territorial agencies.
It is not just about the public sector; it must include industry in a true partnership founded on mutual respect and engagement that honors, recognizes and leverages roles, responsibilities and capabilities in a joint, integrated and collaborative manner. It is not just about domestic risk; it is about global risk to an interconnected and interdependent community and the threat to national and economic security.

Each of us has a role to play in improving our individual and collective cybersecurity. With the proliferation of mobile devices and the explosion of the Internet of Things presenting new and emerging cyber challenges, we must implement basic protective measures that will help reduce the risk while increasing the cost and difficulty for cyber criminals. Although we are no longer dealing with bandits hiding behind rocks to hijack a stagecoach, we still face website defacers, hackers for hire, criminals seeking financial gain, political hacktivists, nation-states engaged in political and economic espionage and even terrorist organizations. Together, we must move forward aggressively to improve our national cybersecurity posture in a globally connected world. With a multidimensional and coherent approach to cybersecurity and cybercrime, each of us can contribute to making a meaningful difference.
<urn:uuid:3d8bf228-9759-4bbb-992c-f347026f9c21>
CC-MAIN-2017-04
https://gcn.com/articles/2017/01/11/strategies-addressing-cybercrime.aspx?admgarea=TC_SecCybersSec
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00424-ip-10-171-10-70.ec2.internal.warc.gz
en
0.925778
1,473
2.78125
3
Ever wonder what would happen if a nuclear bomb were detonated somewhere near you? If that worry has ever crossed your mind, the answer is now available via NukeMap3D, a Google Earth-based simulator that lets you set ground zero and blast yield, as well as wind direction and strength, and then calculates the fallout map and casualties and animates the resulting mushroom cloud. You can examine the results from preset viewpoints such as ground zero, airplane, or low Earth orbit, or navigate to any vantage point you please.

Below is my test using a 100-kiloton bomb detonated in downtown Los Angeles with a 4 mph wind traveling WNW. The results are a final cloud (at 271 seconds) 11,840 meters tall and 10,380 meters wide, with 101,330 estimated fatalities and 540,760 injured. With the given wind speed and direction, the fallout plume reaches well beyond where I live.

Of course, 100 kilotons is not really that big a blast compared to the biggest nuclear test ever conducted. The Soviet Union's Tsar Bomba was detonated in 1961 and yielded between 55 and 60 megatons. Here's my test repeated with the Tsar Bomba yield; it's rather more dramatic. That yellow ball is the size of the fireball that would be produced. This is a great, albeit depressing, mashup.
<urn:uuid:f0baed9b-0b01-47f2-b808-811d9f6c804c>
CC-MAIN-2017-04
http://www.networkworld.com/article/2225046/security/what-would-a-nuclear-blast-do-to-my-town-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00242-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946912
284
3.078125
3
Laptop Use Can Damage Male Fertility

A New York university study finds that men can impair their fertility by using notebook computers on their laps. Intel says it won't comment since it hasn't seen the study.

Researchers at a New York university believe they've found evidence that men's use of notebook computers on their laps can impair fertility. Researchers at the State University of New York at Stony Brook showed that prolonged laptop use heated men's testicles on the order of 5 degrees Fahrenheit, within the 1.8-degree to 5.2-degree range found to impair the production of sperm. The test sample was relatively small (29 people, according to reports), and the test subjects used an undisclosed Pentium 4-based laptop for an entire hour.

"We haven't yet seen the study, so we can't comment," a spokeswoman for Intel Corp., based in Santa Clara, Calif., said in an instant message to ExtremeTech.
<urn:uuid:8b3caa6e-3da4-4fd0-a6ef-e52215657856>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Desktops-and-Notebooks/Laptop-Use-Can-Damage-Male-Fertility
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00112-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931232
185
2.609375
3
The European Space Agency's massive Planck telescope has been hard at work digging through ancient light signals to find the original spark of the Big Bang. The clue-yielding light has traveled 13.8 billion years to reach research equipment and is so faint that Planck has to scan every point on the sky an average of 1,000 times to spot illuminations. This has resulted in an incredibly massive map of the cosmos, not to mention some interesting new spin-outs of the original research mission.

As one might imagine, this sky-mapping and light-combing process requires some serious HPC resources. "So far, Planck has made about a trillion observations of a billion points on the sky," said Julian Borrill of the Lawrence Berkeley National Laboratory, Berkeley, Calif. "Understanding this sheer volume of data requires a state-of-the-art supercomputer."

But scientists behind the project point to another particularly difficult angle of their research that necessitates a high-performance system. To reach the light sources and make accurate models, there is a lot of noise from the Planck sensors to plow through, and a lot of teasing apart of the critical signals from the static that surrounds them. Project scientists point to this noise as one of the fundamental challenges of the mission and have looked to a top-20 system to solve the problem.

At the heart of this signal search-and-filter process is the Opteron-powered "Hopper" Cray XE6 system at the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Lab. According to NASA, the computations needed for Planck's current data release required "more than 10 million processor-hours on the Hopper computer. Fortunately, the Planck analysis codes run on tens of thousands of processors in the supercomputer at once, so this only took a few weeks."

Hopper is NERSC's first system at the petascale pedestal, which came in at number 19 on the last Top500 list with 217 TB of memory running across 153,216 cores. The center is looking to continue the Cray tradition by tapping Cascade, as announced around ISC last year.
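The quoted figures are easy to sanity-check. Assuming "tens of thousands of processors" means something like 30,000 concurrently busy cores (my assumption; the article does not give the exact count), the arithmetic below shows why 10 million processor-hours translates to "a few weeks" of wall-clock time.

```python
# Sanity check: processor-hours -> wall-clock weeks.
# The concurrent-core count is an assumed round number.
processor_hours = 10_000_000
concurrent_cores = 30_000        # assumption: "tens of thousands"

wall_hours = processor_hours / concurrent_cores
print(f"{wall_hours:.0f} hours = {wall_hours / 24:.1f} days "
      f"= {wall_hours / (24 * 7):.1f} weeks")
# -> 333 hours = 13.9 days = 2.0 weeks
```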
<urn:uuid:dae593e1-436c-488a-9265-d294d6df4d23>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/03/25/hopper_lights_up_the_cosmos/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00112-ip-10-171-10-70.ec2.internal.warc.gz
en
0.926279
470
3.953125
4
In 2001, the cost to sequence an entire human genome was $100 million. Since then, the cost has moved swiftly downward, hitting $1 million around 2007 to sequence the genome of James Watson (a co-discoverer of DNA's double helix). The price has continued its downward curve, falling to about $3,000-$4,000 in 2013. A race is on to reach a price target of $100 per complete genome. That $100 figure is still a few years off, but it is looking more probable every day. Chip-based sequencing, which eliminates the need for expensive reagents and uses relatively cheap equipment, has lowered costs significantly and should allow even lower cost sequencing as throughput speeds increase. Nanopore sequencing, which splits a DNA strand into its single helixes and passes the entire strand through a tiny protein tube that reacts to the individual molecules, is a radical departure from previous technology. It's not yet ready for clinical use, but it, too, should lower costs and speed up throughput. Healthcare technology is opening new doors. That drop in cost will mean that truly personalized medicine is on the near horizon. The ability to sequence a genome quickly and cheaply is the gateway to understanding the underlying molecular pathways of disease.

Making sense of all that data

The next big hurdle is analyzing the genome and understanding what it all means. Currently, the exome, or the portion of the genome that we actually understand, represents only about 1 percent of the total. It's one thing to search out mutations in specific areas of a genome to identify a specific disease risk. It's far more difficult to understand the implications of all 3 billion genetic pairs in a genome. Add in the epigenetic information in the controller regions of DNA and you have a monumentally complex data set. To achieve the ability to redirect disease-causing genetics will take a complex toolkit of analytic technology, computing infrastructure, medical research and a healthcare system that supports the end goal. Analytic technology is moving forward quickly, allowing medical researchers to identify mutations and create therapies based on the results. The pediatric cancer clinical trial led by the Neuroblastoma and Medulloblastoma Translational Research Consortium (NMTRC) and supported by the Translational Genomics Research Institute (TGen) is a model for how to use genetic data, analytics and computing infrastructure to improve treatment. TGen researchers map the genome of tumor cells and use analytic tools on a high-performance computing platform to quickly determine the disease pathways. This helps the oncologists select the most effective form of therapy for each child.

Beyond cancer treatment

Oncology is not the only area where genotyping affects treatment decisions. For example, clinicians now use genetic markers to predict how patients will react to warfarin therapy, a common treatment to prevent clot-related strokes in patients with atrial fibrillation. Older approaches required near-daily blood tests at the beginning of therapy to determine an appropriate dose. These are baby steps up the mountain of genetic data that we need to conquer, but as our sophistication in genetic analysis grows, and as the speed of processing increases, our understanding of our DNA will increase. As with most technology, the rate of increase will speed up, and the time needed to solve the problem will go down.
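The simplest form of the "search out mutations in specific areas" idea is scanning a known gene region for a variant at a given position. The sketch below is a deliberately naive illustration; the reference snippet, position and variant base are fabricated, and real pipelines work from aligned sequencing reads rather than raw strings.

```python
# Naive single-position variant check against a reference snippet.
# Sequence, position, and bases are fabricated for illustration only.
REFERENCE = "ATGGTGCACCTGACTCCTGAG"      # pretend gene region
SITE, REF_BASE, RISK_BASE = 6, "C", "T"  # hypothetical variant of interest

def genotype_at_site(sample_seq: str) -> str:
    """Report which allele a sample carries at the site of interest."""
    base = sample_seq[SITE]
    if base == REF_BASE:
        return "reference allele"
    if base == RISK_BASE:
        return "risk allele detected"
    return f"unexpected base {base!r}"

print(genotype_at_site("ATGGTGTACCTGACTCCTGAG"))  # risk allele detected
```

Checking one known site is trivial; the "mountain of genetic data" problem the post describes comes from doing this, and far subtler analyses, across all 3 billion base pairs at once.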
I am not unrealistic in hoping that my children will live much longer lives than me, because they will know so much more about how to avoid disease. Think of it. In the near future, we could identify before birth all the diseases built into a baby's DNA. A bit farther in the future, we may be able to take steps in utero to prevent those diseases from happening. Any disease caused by an unfavorable genetic variation could be short-circuited, eliminating many of the chronic diseases that not only shorten lives but also make the final decades of life unpleasant and expensive. I like the idea that accidental injuries could become the most common cause of death for 100-year-olds, not cardiac arrest or Alzheimer's.

Three steps to unlocking the future

So how do we accomplish that? First, keep investing in healthcare technology. In a previous post on patient-centered care, I noted that we've made tremendous progress in crossing the quality chasm, and that technology has been critical in those efforts. We need to increase our sophistication in using analytic tools to improve outcomes and cut costs in patient care and operations. That will help free up funds to invest in new genetic-based treatments when they become available. And the better we are at analytics, the better prepared we will be to use genetic information.

Second, patients need to be confident that insurers and employers can never use genetic data to discriminate against them. Only then will they be willing to test for disease and give themselves the option of preventing it.

Third, we need a system in which everyone has an incentive to prevent disease. That will produce an environment that is willing to pay for genetic testing and for treatments to prevent disease. If the financial risk is perpetually kicked down the road to the next insurer, no one has an incentive to prevent long-term consequences.

With continued investment in technology and research, and continued transformation toward a patient-centered, information-driven healthcare system, our grandchildren will be able to live much longer, and more importantly, much healthier lives than we can even imagine. I look forward to participating in the Bipartisan Policy Center's forum on personalized medicine later this week as we culminate the celebration of National Health IT Week. I invite you to join the webcast or follow the #personalizedmedicine conversation on Twitter.
<urn:uuid:856f9246-dbcf-4651-a0c3-1fc2d61437f2>
CC-MAIN-2017-04
http://www.computerworld.com/article/2474858/healthcare-it/breaking-the-code--the-potential-of-the--100-genome.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00232-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94065
1,167
3.015625
3
An innovative approach to security awareness training is to use simulated attacks on workers. A recent Naked Security survey shows that 85% of IT security professionals say it's good to send workers fake phishes with the aim of educating them about their vulnerability and getting them to change their behavior. Is there benefit in this approach to user education? The following news clip is from the Industrial Control Systems Cyber Emergency Response Team (ICS-CERT), which is part of the Department of Homeland Security: A recent spear-phishing campaign started and ended in October 2012, using publicly available information from an electric utility's Web site to customize an attack against members of the Energy Sector. Employee names, company email addresses, company affiliations, and work titles were found on the utility's Web site on a page that listed the attendees at a recent committee meeting. This publicly available information gave the attacker the company knowledge necessary to target specific individuals within the electric sector. Malicious emails were crafted informing the recipients of the sender's new email address and asked them to click on the attached link. This link led to a site that contained malware. Another email with a malicious attachment may also have been associated with this campaign. Fortunately, no intrusions or infections were discovered following the campaign that targeted 11 specific entities. What if this happened to people in your organization? Would your co-workers take the bait and click the link, putting your business at risk for malware? We all want to believe our colleagues are smart enough to detect the foul smell of a phish attempt, but that's not always the case. Some messages - like the one referenced above - are quite believable and they fool even the most astute people. [RELATED: How to blunt spear phishing attacks] A recent experiment showed just how successful those types of campaigns can be. Tyler Klinger of Critical Intelligence and Scott Greaux of PhishMe were engaged to send fake spear phishing messages to employees of two real-world utility companies. Twenty-six percent of the recipients clicked on a link in the phony emails. Had this been a real phishing attack, just one click on a malicious link could have unleashed malware into the organization. While there are technological solutions to combat phishing attempts, they aren't especially effective. It's hard to develop the technology that will weed out a well-crafted email message before it reaches the intended target. Security experts agree that one of the best defenses is to bolster "the human firewall." In other words, to provide training to workers so they learn to recognize (or at least suspect) a phish attempt. If you can get your colleagues to slow down and really evaluate the messages they receive before acting on them, you've won half the battle. An innovative approach to user education is to use simulated attacks on your colleagues. A recent Naked Security survey featured in the Sophos Security Threat Report for 2013 shows that 85% of the 933 information security people who responded to the survey say that businesses should "fool employees into opening inappropriate emails with the aim of education." A new report discusses whether this is an effective approach to security awareness and training. The report is based on a roundtable discussion among members of Wisegate, including practicing CSOs from Fortune 500 companies. 
The roundtable was initiated by Joe Ferrara, CEO of Wombat Security Technologies, a security awareness and training company. The CSOs were asked, "Does simulated attack training work?" The group consensus was yes, it does. As one security leader put it, "it is more of a teachable moment — and the key will be following up with training that works for the employee." In addition, simulated attacks help workers realize just how vulnerable they are to attacks that use social engineering to gain their confidence.

The CSOs cite some specific benefits of simulated attack training:

- It increases specific awareness of the phishing and spear phishing threat. When workers fall for a simulated attack, they become more aware of the real threat and more receptive to the message from IT security.
- It improves general awareness of security. Simulated attack programs help to open the lines of communication between workers and security staff, which in turn helps to improve the efficiency of general security awareness training.
- It provides security training metrics. Simulated attacks allow you to track the effectiveness of your security training over time and to target the areas or people that most need additional training.
- It helps to focus both the company and the security staff on user behavior and how to turn that weak link into a strength.

People can be a weak link in the security chain when it comes to social engineering attacks. Running simulated attacks can help you develop a balance between spending on technology and spending on security training. The CSOs agree that simulated attacks are a valuable part of user awareness training — if they are done right. Ferrara offers the following best practices to ensure you get the most out of your training program:

- Get internal buy-in on the approach from executives across all departments.
- Assess the existing level of user awareness prior to starting a new simulated attack methodology. This gives you a baseline for judging the effectiveness of the methodology and for planning future campaigns.
- Use the upfront assessment data, combined with new data from the simulated attacks, to plan and prioritize future training.
- Combine your training methodology with learning science principles in order to ensure maximum retention by your colleagues.
- Continue the learning assignments throughout the year.
- Maintain heightened user awareness by making your training program a continuous process.

For more information, read the full report, "A Security Officer Debate: Are simulated phishing attacks an effective approach to security awareness and training?"

Linda Musthaler is a Principal Analyst with Essential Solutions Corporation. You can write to her at LMusthaler@essential-iws.com.

About Essential Solutions Corp: Essential Solutions researches the practical value of information technology, and how it can make individual workers and entire organizations more productive. Essential Solutions offers consulting services to computer industry and corporate clients to help define and fulfill the potential of IT.
If you think gadgets are getting too small as it is, you've got another thing coming: Rice University researchers have built an entire lithium-ion energy storage device into a single nanowire thousands of times thinner than a human hair.

The researchers built two versions of their battery and supercapacitor hybrid. The first was built to prove that they could quickly charge and discharge lithium ions through the nickel and tin anode to the polyethylene oxide electrolyte and then to the supercapacitor-like polyaniline cathode. The second device packed similar capabilities into a single nanowire. The device consisted of thousands of nanowire devices, each about 150 nanometers wide, arranged into centimeter-scale arrays.

The Rice lab of Professor Pulickel Ajayan first developed nanobatteries last December. Since then, the team has developed a new process that allows them to install a capacitor inside the nanowires. The polyaniline cathodes are drop-coated onto a polyethylene oxide gel-like electrolyte that stores lithium ions and serves as an electrical insulator between nanowires in an array.

The experimental battery stands about 50 microns tall, is about the diameter of a human hair, and is almost invisible when viewed edge-on. The size of the batteries can easily be scaled up, as they can be as long and wide as templates allow. The researchers believe their creation will be the smallest-ever battery and that it could be used as a rechargeable power source for future nanoelectronics.

So far, the nanowire devices show good capacity, but it drops off after about 20 cycles, so the researchers are fine-tuning the materials to increase their ability to repeatedly charge and discharge. The next step for the device is optimization. The scientists have already begun tinkering with the thickness of the polymer separator and exploring different electrode systems that could lead to improvements.

This story, "Tiny batteries do tight-rope act on nanowire" was originally published by PCWorld.
Iacob O. and Rowan J.S. (University of Dundee, 5 South College Street) and 4 more authors. Hydrology Research, 2014.

Climate change is projected to alter river flows and the magnitude/frequency characteristics of floods and droughts. Ecosystem-based adaptation highlights the interdependence of human and natural systems, and the potential to buffer the impacts of climate change by maintaining functioning ecosystems that continue to provide multiple societal benefits. Natural flood management (NFM), emphasising the restoration of innate hydrological pathways, provides important regulating services in relation to both runoff rates and water quality and is heralded as a potentially important climate change adaptation strategy. This paper draws together 25 NFM schemes, providing a meta-analysis of hydrological performance along with a wider consideration of their net (dis)benefits. Increasing woodland coverage, whilst positively linked to peak flow reduction (more pronounced for low magnitude events), biodiversity and carbon storage, can adversely impact other provisioning services, especially food production. Similarly, reversing historical land drainage operations appears to have mixed impacts on flood alleviation, carbon sequestration and water quality depending on landscape setting and local catchment characteristics. Wetlands and floodplain restoration strategies typically have fewer disbenefits and provide improvements for regulating and supporting services. It is concluded that future NFM proposals should be framed as ecosystem-based assessments, with trade-offs considered on a case-by-case basis. © IWA Publishing 2014.
The BBC, working in collaboration with Lancaster University and Nominet, has managed to turn the micro:bit computer board into a functioning IoT device.

Launched in 2015, the micro:bit is a computer that aims to get young people interested in science, technology, engineering and maths (STEM). It's 4cm by 5cm in size, and users are able to connect it up to Arduino and Raspberry Pi coding PCs. There's also Bluetooth technology on board for connectivity.

But now researchers have found a method for the computers to transmit data packets between each other, which Nominet believes will let children learn how the internet and IoT function.

In order for the tech to be used in schools, it has to be easy to use and safe, Nominet has said. Its method would see data transferred between the boards with a special handle, meaning personal data isn't stored. As well as this, each child will also be able to access a private friend list, where they'll be able to find their classmates' handles. They can then add who they want, safely.

The method works with a Raspberry Pi acting as a gateway for connectivity, and Nominet will provide disk images for each Pi so there isn't a need for lots of complex code. When the user is connected, they select their handle through a gateway. It's transferable between micro:bits, which means they can use more than one.

Nominet's IoT tools work as the backend system for the IoT network, with its registry storing data and providing a layer between devices - helping to keep things simple.
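The handle-and-gateway protocol itself has not been published, but the kind of handle-tagged packet exchange described above can be sketched with the micro:bit's built-in radio in MicroPython. In this illustrative sketch the group number, the handle, and the message format are all assumptions, not details from the project:

from microbit import display
import radio

radio.on()
radio.config(group=23)             # keep one classroom on its own channel (assumed)
MY_HANDLE = "bluefox42"            # hypothetical privacy-preserving handle

radio.send(MY_HANDLE + ":hello")   # broadcast a greeting tagged with our handle

while True:
    msg = radio.receive()          # returns None when nothing is waiting
    if msg:
        # A Raspberry Pi gateway listening on the same group could relay
        # packets like this onward to the wider internet.
        sender, _, text = msg.partition(":")
        display.scroll(sender)     # show which handle we heard from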
Expanding IoT expertise

Adam Leach, Nominet director of research and development, says the organisation is developing this project to expand its existing IoT expertise. "We have built a strong set of tools that enable IoT applications and now we are on a mission to establish other use cases," he said. "This project with the BBC will really show what our technology can do."

"We introduced privacy by design by making sure personal data wasn't part of the system in the first place," said Leach. "We don't want the name, password or email address of anybody using a micro:bit."

Hands-on learning is vital

Simon Shen, CEO of 3D printer brand XYZprinting, believes it's vital that when it comes to teaching youngsters tech, they get a hands-on experience. He told Internet of Business: "Educators have to make sure that youngsters are learning hands-on. With 3D printing, for example, kids get much more out of designing their own 3D model and seeing it being built in front of them than merely being told the mechanics.

"Tech organisations who want to get kids interested in internet technologies should make 'play' the centre of their design - using technology that's fun removes a lot of barriers to learning and adoption. Take robots, for example: designing, assembling and programming a robot engages students across a range of STEAM skills while being fun - a great example of entertainment."

Major skills shortage

Robert Dragan, CEO of Welsh edtech start-up Learnium, says there's a major shortage of technical talent in the UK but says products like the micro:bit are doing their bit to solve this problem. He told Internet of Business: "Let's look at the big picture. The world is moving towards a digital and creative economy. Yet, the UK has a shortage of technical and scientific talent - the very people that will power the new economy. The future depends on promoting STEM subjects to young people.

"Children are most excited when they can apply creative thinking to the world around them. Using the micro:bit with the IoT promises just that. It's a great opportunity that hopefully will upgrade the education system."
A number of reports detailing wide-ranging system security breaches have been at the forefront of the news in the past couple of years. Typically, sensitive personal data such as Social Security Numbers (SSNs), credit card numbers, and bank account numbers are stolen from insecure systems, resulting in identity theft, financial fraud, or other unauthorized use of the information. As a result, system administrators must constantly be monitoring their systems and ensuring appropriate security precautions are taken.

Security can be applied at different levels of a system architecture. For example, a firewall might be installed to prevent unauthorized server access from outside of the network. A secure network protocol technology such as IPSec might be used to secure the communication channel between computers on a network. A strict password policy might be put in effect that requires users to select a strong password and change it on a frequent basis. Database-level security measures including authentication and authorization might also be used to enhance application security.

In this article, twelve security best practices for DB2 for Linux, UNIX, and Windows are described. They focus specifically on elements that can be controlled from a database administration and programming standpoint, and do not include other security technologies or policies that might also be applicable on a wider system scale. The best practices are not listed in any particular order, but rather, all of them are equally important, as they all contribute toward the overall security level of your DB2 data server.

- Revoke implicit authorities and privileges from PUBLIC
- Use explicit values for the SYSxxx_GROUP parameters
- Track implicit privileges
- Do not grant unnecessary privileges
- Use an encrypted AUTHENTICATION mode
- Use orphan IDs to create and own objects
- Use views to control data access
- Use stored procedures to control data access
- Use LBAC to control data access
- Prevent SQL injection in applications
- Apply the latest DB2 fix packs
- Perform random security audits

Revoke implicit authorities and privileges from PUBLIC

DB2 internally uses a pseudo-group called PUBLIC, which privileges can be granted to and revoked from. PUBLIC is not actually a group defined in the external security facility, but is a way to assign privileges to any user who successfully authenticates with DB2. When a new database is created, certain database authorities and privileges are automatically granted to PUBLIC, as shown in Table 1.
Table 1. A list of the authorities and privileges granted to PUBLIC after database creation

- BINDADD: Allows the user to create new packages in the database
- CREATETAB: Allows the user to create new tables in the database
- CONNECT: Allows the user to connect to the database
- IMPLICIT_SCHEMA: Allows the user to create objects in a schema that does not already exist (it creates the schema on-the-fly)
- USE privilege on USERSPACE1: Allows the user to create tables or indexes in the USERSPACE1 table space
- CREATEIN on schema NULLID: Allows the user to create objects in the NULLID schema
- CREATEIN on schema SQLJ: Allows the user to create objects in the SQLJ schema
- EXECUTE WITH GRANT privilege on all functions and procedures in the SYSPROC schema: Allows the user to invoke stored procedures and execute functions in the SYSPROC schema and grant that permission to other users
- EXECUTE WITH GRANT privilege on all procedures in the SQLJ schema: Allows the user to invoke stored procedures in the SQLJ schema
- BIND and EXECUTE privilege on all packages created in the NULLID schema: Allows the user to BIND and EXECUTE packages in the NULLID schema
- SELECT privilege on tables in the SYSIBM schema: Allows the user to view information in the system catalog tables
- SELECT privilege on views in the SYSCAT schema: Allows the user to view information in the system catalog views
- SELECT privilege on administrative views in the SYSIBMADM schema: Allows the user to view information contained in these administrative views
- SELECT privilege on catalog views in the SYSSTAT schema: Allows the user to view statistical information in the system catalog views
- UPDATE privilege on views in the SYSSTAT schema: Allows the user to update statistical information in these system catalog views

As a best practice, immediately revoke the implicit privileges granted to PUBLIC after the creation of a new database. For example, Listing 1 shows a subset of statements you can execute to revoke privileges from the system catalog views and other privileges implicitly granted to PUBLIC. This list is not comprehensive.

Listing 1. Revoking implicit privileges from PUBLIC after database creation

CREATE DATABASE testdb;
CONNECT TO testdb;
REVOKE BINDADD ON DATABASE FROM PUBLIC;
REVOKE CREATETAB ON DATABASE FROM PUBLIC;
REVOKE CONNECT ON DATABASE FROM PUBLIC;
REVOKE IMPLICIT_SCHEMA ON DATABASE FROM PUBLIC;
REVOKE USE OF TABLESPACE USERSPACE1 FROM PUBLIC;
REVOKE SELECT ON TABLE SYSCAT.COLAUTH FROM PUBLIC;
REVOKE SELECT ON TABLE SYSCAT.DBAUTH FROM PUBLIC;
REVOKE SELECT ON TABLE SYSCAT.INDEXAUTH FROM PUBLIC;
REVOKE SELECT ON TABLE SYSCAT.PACKAGEAUTH FROM PUBLIC;
REVOKE SELECT ON TABLE SYSCAT.PASSTHRUAUTH FROM PUBLIC;
REVOKE SELECT ON TABLE SYSCAT.ROUTINEAUTH FROM PUBLIC;
REVOKE SELECT ON TABLE SYSCAT.SCHEMAAUTH FROM PUBLIC;
REVOKE SELECT ON TABLE SYSCAT.SECURITYLABELACCESS FROM PUBLIC;
REVOKE SELECT ON TABLE SYSCAT.SECURITYPOLICYEXEMPTIONS FROM PUBLIC;
REVOKE SELECT ON TABLE SYSCAT.SEQUENCEAUTH FROM PUBLIC;
REVOKE SELECT ON TABLE SYSCAT.SURROGATEAUTHIDS FROM PUBLIC;
REVOKE SELECT ON TABLE SYSCAT.TABAUTH FROM PUBLIC;
REVOKE SELECT ON TABLE SYSCAT.TBSPACEAUTH FROM PUBLIC;
REVOKE SELECT ON TABLE SYSCAT.XSROBJECTAUTH FROM PUBLIC;
REVOKE SELECT ON TABLE SYSCAT.AUTHORIZATIONIDS FROM PUBLIC;
REVOKE SELECT ON TABLE SYSCAT.OBJECTOWNERS FROM PUBLIC;
REVOKE SELECT ON TABLE SYSCAT.PRIVILEGES FROM PUBLIC;
...
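To confirm that the revocations took effect, you can check what PUBLIC still holds in the catalog. The following query is a minimal sketch; it covers only database-level authorities, and the same idea can be repeated against SYSCAT.TABAUTH and the other *AUTH catalog views for object-level privileges:

-- Show any remaining database-level authorities held by PUBLIC
SELECT * FROM SYSCAT.DBAUTH WHERE GRANTEE = 'PUBLIC';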
Beginning in DB2 V9.1, the CREATE DATABASE command syntax has been extended with the RESTRICTIVE option. If the RESTRICTIVE option is included, it causes the RESTRICT_ACCESS database configuration parameter to be set to YES and no privileges are automatically granted to PUBLIC. If the RESTRICTIVE option is omitted, the RESTRICT_ACCESS database configuration parameter is set to NO and all the privileges described above are automatically granted to PUBLIC.

Use explicit values for the SYSxxx_GROUP parameters

DB2 defines a hierarchy of super-user authorities (SYSADM, SYSCTRL, SYSMAINT, and SYSMON), each with the ability to perform a subset of administrative operations such as creating a database, forcing users off a system, and taking a database backup. Their associated instance-level parameters (SYSADM_GROUP, SYSCTRL_GROUP, SYSMAINT_GROUP, and SYSMON_GROUP) control which users inherit each authority. Each parameter can be set to the name of a group of users (defined in the external security facility) who should have that authority. Once set, all users in the specified group inherit the authority. For example, if you have an operating system group called DBAGRP1 that all the DBA users are a member of, all users in this group would inherit the SYSADM authority by setting the value of the SYSADM_GROUP instance parameter to the value DBAGRP1, using the commands shown in Listing 2.

Listing 2. Updating the SYSADM_GROUP instance parameter

UPDATE DBM CFG USING SYSADM_GROUP dbagrp1
db2stop
db2start

During a default DB2 installation on Windows, the value of these parameters defaults to NULL. This means that super-user authority is granted to any valid user account that belongs to the local Administrators group. On Linux and UNIX platforms, a NULL value defaults to the primary group of the instance owner, which by default after an installation only contains the user ID of the instance owner.

As a best practice, change the default value of each instance-level authority parameter to an explicit group name in order to prevent unintended super-user access. In a small business where one DBA fills many roles, these parameters might be set to the same group name. In a large environment where multiple DBAs are responsible for a system, different group names might be used. In addition to ensuring these parameters are given an explicit value, you should also do your best to ensure that all users who are a member of the specified group actually need to be a member of the group. If they don't, remove them! Since user and group account management is handled outside of DB2, DB2 does not scrutinize whether users should or should not be a member of a group.

Track implicit privileges

As previously mentioned, PUBLIC is implicitly granted certain privileges when a new database is created. This isn't the only time implicit privileges are granted. In some circumstances, the database manager implicitly grants certain privileges to a user when a user creates a database object, such as a table or a package, or when the DBADM authority level is granted. It is important to understand which implicit privileges are granted and the security implications of these implicit privileges. Table 2 summarizes the cases where implicit privileges are granted.

Table 2. Summary of the implicit privileges granted for different actions

- Create a new database: DBADM authority on the new database is granted to the creator (which in turn carries the authorities listed in the next row)
- Grant DBADM authority: BINDADD, CONNECT, CREATETAB, CREATE_EXTERNAL_ROUTINE, CREATE_NOT_FENCED_ROUTINE, IMPLICIT_SCHEMA, LOAD, and QUIESCE_CONNECT are granted to the grantee
- Create an object (table, index, package): CONTROL privilege on the created object is granted to the creator
- Create a view: CONTROL privilege on the view is granted to the creator, but only if the creator holds CONTROL on every table and view referenced in the view definition

As a best practice, carefully scrutinize and track the implicit privileges that are granted when you perform an action.
If you later undo the action, ensure you have a process that revokes any implicit privileges as well. For example, suppose you initially granted the DBADM authority to the user JEFF and at a later time you decided to revoke that authority. To revoke the DBADM authority from JEFF, you could use the following statement:

REVOKE DBADM ON DATABASE FROM USER jeff

After executing this statement, JEFF would no longer have DBADM authority; however, he would still have the GRANT, BINDADD, CONNECT, CREATETAB, CREATE_EXTERNAL_ROUTINE, CREATE_NOT_FENCED_ROUTINE, IMPLICIT_SCHEMA, LOAD and QUIESCE_CONNECT authorities on the database that were implicitly granted when JEFF was originally granted the DBADM authority. These would need to be explicitly revoked from JEFF.

Do not grant unnecessary privileges

While developing an application, it is tempting for developers not to worry about security issues right away. For example, developers will typically develop and test their application with a super-user account (DBADM or SYSADM) in order not to be bogged down with security error messages when they try to run their code. It is quite easy to grant a user all database permissions and authorities using the Control Center, as is shown in Figure 1.

Figure 1. Granting permissions from the Control Center

Often, once the application makes it through the development and testing phases, the permissions granted during the development process to suppress security error messages still linger, even though they are not necessarily required. As a best practice, carefully review the privileges that are granted to each user as part of your application installation and configuration process. It is easy for a developer who is not very familiar with the DB2 security model to simply grant all available privileges using the Control Center (see Figure 1) to suppress security error messages. You should verify whether all the permissions and authorities that are granted are actually required, or whether only one or two of them are really needed.

Use an encrypted AUTHENTICATION mode

Authentication is the process of validating a supplied user ID and password using a security mechanism. User and group authentication is managed in a facility external to DB2, such as the operating system, a domain controller, or a Kerberos security system. The actual authentication location is determined by the value of the instance parameter AUTHENTICATION. The various authentication schemes include having users authenticated on the DB2 server itself (using the server's security facility), on the client (allowing "single sign-on" access), a Kerberos security facility, or through a user-defined Generic Security Service (GSS) plug-in. Additional authentication options include the ability to encrypt user names and passwords, as well as data, as they travel across the network between client and server. Table 3 summarizes each of the encrypted authentication options.

Table 3. A summary of encrypted AUTHENTICATION modes

- SERVER_ENCRYPT: Authentication takes place at the server; the user ID and password are encrypted while in transit
- DATA_ENCRYPT: Same as SERVER_ENCRYPT, with user data also encrypted on the wire
- DATA_ENCRYPT_CMP: Same as DATA_ENCRYPT, with compatibility for down-level clients that do not support data encryption (those clients connect using SERVER_ENCRYPT)
- KRB_SERVER_ENCRYPT: Kerberos authentication, with SERVER_ENCRYPT used for clients that do not support Kerberos
- GSS_SERVER_ENCRYPT: Authentication through a GSS-API plug-in, with SERVER_ENCRYPT used for clients that do not support it

As a best practice, use an encrypted authentication mode. The authentication mode you choose for your environment will be determined by the level of sensitivity of your data. If all of your data is sensitive, you might opt to choose the DATA_ENCRYPT authentication mode, which encrypts a lot of the data traveling between client and server.
If only a small subset of your data is sensitive, you might choose to use the SERVER_ENCRYPT mode so that at least your password is encrypted, while the sensitive data can be secured through a different mechanism. To update the AUTHENTICATION instance parameter - in this example to a value of DATA_ENCRYPT - use the commands shown in Listing 3.

Listing 3. Updating the AUTHENTICATION instance parameter

UPDATE DBM CFG USING AUTHENTICATION DATA_ENCRYPT
db2stop
db2start

Note that the AUTHENTICATION parameter is set at the instance level, meaning that databases created in the same instance share the authentication mode. If you have two databases and each requires a different AUTHENTICATION mode, you would need to create the databases in separate instances.

Use orphan IDs to create and own objects

When a database object is created, it is owned by the user ID that executed the DDL statement to create it. If that user ID is later retired (e.g. the user leaves the company) or if the user no longer needs database access or authority on database objects, a DBA must revoke privileges from the user. This can result in other dependent database objects or packages becoming invalid (or inoperative). Once successfully created, a database object or package is considered to be valid (versus inoperative) for as long as the object creator or binder of the package continues to hold the necessary privileges on the database objects that are referenced by it. Hence, objects and packages containing static SQL statements can become invalidated when the privileges of the object creator or binder of the package are revoked.

As a best practice, use an orphan ID to create and own objects. To summarize this process:

- Create a new user ID in your external security facility and mark this user ID as invalid so that it cannot be used to log in.
- Ensure that the user ID has no CONNECT authority by removing this user ID from all operating system groups and by ensuring that the CONNECT privilege is revoked from the user or a group the user belongs to.
- When new database objects need to be created, or other DDL statements must be executed, grant the necessary privileges required to perform the action to the new user ID by executing the GRANT statement. For example, to create a view on table T1, you must grant the SELECT privilege on table T1 to the new authorization ID:

GRANT SELECT ON TABLE T1 TO USER <user_auth_id>

- Set the current session authorization ID to the new user ID temporarily. For example:

SET SESSION_USER = <user_auth_id>

- Under this authorization ID, create database objects and bind packages. For example, to create the view on table T1, you would execute the following statement:

CREATE VIEW V1 AS SELECT * FROM T1

- Once all required database objects and packages are created, use group membership and group privileges to control access to the created database objects and packages:

GRANT SELECT ON TABLE V1 TO GROUP1
GRANT EXECUTE ON PACKAGE PKG TO GROUP1

- When you have finished, reset the current session authorization ID to your regular authorization ID:

SET SESSION_USER = SYSTEM_USER

This approach ensures that a single user ID is associated with the role of creating database objects, binding packages, and granting privileges. Over time as users come and go, this will greatly simplify database schema and privilege management.

Use views to control data access

A common way to control access to table data is using views.
Rather than exposing your entire set of table data to application users, you can create a view based on a subset of columns. For example, assume the table defined in Listing 4 contains insurance policy information:

Listing 4. A sample table definition containing insurance data

CREATE TABLE INSURANCE (
   CUSTID INTEGER NOT NULL PRIMARY KEY,
   SALARY FLOAT,
   RENEWAL_MONTH VARCHAR(3),
   SEX CHAR(1),
   MARITAL_STATUS CHAR(1),
   NUM_DEPENDENTS INTEGER,
   YEAR_1ST_POLICY INTEGER,
   NUM_CLAIMS INTEGER,
   CYCLES INTEGER,
   AGE FLOAT,
   COMMUTE_DIST FLOAT
);

Suppose that one of the insurance company's sister companies wants access to customer data so they can analyze it and make additional customized offers to customers. However, suppose that by law, the insurance company is not allowed to divulge a person's age or how many claims they've made. To satisfy these requirements, a view such as the one in Listing 5 could be defined over the table so that the customer's age and claim history are left out:

Listing 5. A sample view definition over the previous table containing insurance data

CREATE VIEW ins_v1_sis_comp_1 AS
   SELECT custid, salary, sex, marital_status, num_dependents,
          year_1st_policy, cycles, commute_dist
   FROM insurance;

Access to the data could then be governed through the view, instead of the base table. Access to the view can be controlled using GRANT statements, so that not every user can view this data. For example, you can control who can SELECT, INSERT, UPDATE, and DELETE from/to the view.

As a best practice, use views to control access to tables when you want to hide a subset of table columns or rows. Using views also helps insulate the application when the underlying table definition changes. Specific privileges can be granted on views, similar to tables. In addition, predicates can be added to a view definition that will further qualify the set of data, while keeping private information hidden. Using the above example, to expose only male customers who are age 65 or above, the view definition in Listing 6 could be used:

Listing 6. A sample view definition over the previous insurance data further limiting the rows returned

CREATE VIEW ins_v1_sis_comp_1 AS
   SELECT custid, salary, sex, marital_status, num_dependents,
          year_1st_policy, cycles, commute_dist
   FROM insurance
   WHERE age >= 65 AND sex = 'M';

This would still meet the legal requirement of not providing a customer's age; however, it would give the sister company more relevant information to further customize their offers.

Use stored procedures to control data access

Another popular method of controlling access to table data is through the use of stored procedures. A stored procedure is a group of SQL statements that form a logical unit and perform a particular task. They are created and run on the data server and used to encapsulate a set of frequently run operations or queries. For example, operations on an employee database (hire, fire, give raise, lookup) could be coded as stored procedures and invoked by the application instead of being coded directly inside the application. Stored procedures can be compiled and executed with different parameters and results, and they can have any combination of input, output, and input/output parameters. Listing 7 shows an example of a stored procedure that determines an employee's new salary and bonus, depending on their performance rating.
Listing 7. A stored procedure to give an employee a raise and bonus depending on their performance rating

CREATE PROCEDURE UPDATE_SALARY (IN empNum CHAR(6),
                                IN rating SMALLINT)
LANGUAGE SQL
BEGIN
   IF rating = 1 THEN
      UPDATE employee
         SET salary = salary * 1.10, bonus = 1500
         WHERE empno = empNum;
   ELSE
      UPDATE employee
         SET salary = salary * 1.05, bonus = 1000
         WHERE empno = empNum;
   END IF;
END

This stored procedure accepts two input parameters, the employee number and a rating, then updates the employee's salary and bonus depending on the rating given. For employees with a rating of "1", the employee is given a ten percent raise and a $1500 bonus. For all other ratings, the employee is given a five percent raise and a $1000 bonus.

As a best practice, consider using stored procedures as a way to control access to your data. Access to tables would be allowed indirectly through a stored procedure call, thereby limiting the actions a user could perform on a table while also controlling which users could invoke the stored procedure. Many applications nowadays design their database layer as stored procedures; that is, all database access is performed through stored procedure invocations. Applications wanting to begin a transaction, such as updating an order or purchasing a product, simply need to invoke the appropriate stored procedure. A side benefit of this approach is that all logic is centralized in one place, making management and maintenance easier, as well as making the functionality available to other applications. This approach lends itself to a Service Oriented Architecture (SOA) quite nicely.

You can also control access to stored procedures through GRANT and REVOKE statements. Users wanting to invoke the procedure would need to be granted the EXECUTE privilege. Additional privileges may be required on individual objects being referenced by the stored procedure, depending on the bind options and whether the SQL statement is static or dynamic.

Use LBAC to control data access

A new and exciting feature in DB2 9 is Label Based Access Control (LBAC). LBAC lets you decide exactly who has write access and who has read access to individual rows and individual columns. A special new security administrator authority (SECADM) is used to configure LBAC by creating security policies, which essentially define the criteria used to decide who has access to what data. After creating a security policy, the security administrator creates objects called security labels that are also part of that policy. Labels can be based on any criteria, such as a job title, whether the user is a manager or not, or whether the user belongs to a specific department. Once created, a security label can be associated with individual columns and rows in a table to protect the data held there. The security administrator allows users access to protected data by granting them security labels. When a user tries to access protected data, that user's security label is compared to the security label protecting the data. A security administrator can also grant exemptions to users. An exemption allows a user to access protected data that their security labels might otherwise prevent them from accessing. If a user tries to access a protected column that their LBAC credentials do not allow them to access, the access will fail and they will get an error message.

As a best practice, consider using LBAC as a way to control access to sensitive data.
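To make this concrete, here is a minimal sketch of a column-protection setup. The component, policy, label, table, and user names are illustrative assumptions, not objects from this article:

-- Define an ordered label component and a policy built on it
CREATE SECURITY LABEL COMPONENT sec_level ARRAY ['EXEC', 'MANAGER', 'STAFF'];
CREATE SECURITY POLICY acc_policy COMPONENTS sec_level WITH DB2LBACRULES;

-- Create a label within the policy
CREATE SECURITY LABEL acc_policy.exec COMPONENT sec_level 'EXEC';

-- Protect the SALARY column and attach the policy to the table
CREATE TABLE payroll (
   empno  CHAR(6) NOT NULL,
   salary FLOAT SECURED WITH exec
) SECURITY POLICY acc_policy;

-- Only users granted the EXEC label (or an exemption) can read SALARY
GRANT SECURITY LABEL acc_policy.exec TO USER alice FOR ALL ACCESS;

A SELECT against the SALARY column by a user holding no qualifying label would then fail with an error, which is exactly the behavior described above.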
LBAC is very configurable and you can tailor it to match your particular security environment. This exciting new security feature in DB2 9 might alone be worth migrating for! Many applications implement similar row- and column-based security access mechanisms natively. Why not let your developers concentrate on developing the business logic? It is now easy to offload this capability to the data server using this very customizable new security feature.

Prevent SQL injection in applications

With the wealth of new web applications replacing traditional client-side applications, security must be designed into the system from the start. A common security hole in web-based applications is known as SQL injection. SQL injection is a technique that enables an attacker to execute unauthorized SQL commands by taking advantage of non-scrutinized input opportunities in applications that build dynamic SQL queries. This typically occurs when the web application combines the strings of a query with an unchecked input variable, so that someone could add a second query or otherwise change the query to give them information or access that they should not have. For example, consider the following PHP code snippet:

$sql = 'SELECT * FROM staff WHERE empID="' . $_GET['empid'] . '"';
$stmt = db2_prepare($conn, $sql);
$result = db2_execute($stmt);

This query selects all rows from the STAFF table where the employee ID is equal to the one retrieved from a form. This statement is vulnerable to SQL injection - quotes in $_GET['empid'] are not escaped and will be concatenated as part of the statement text, which can result in malicious behavior. Consider what would happen if $_GET['empid'] were the following at runtime:

" OR 1=1 OR empID = "

When concatenated into the original expression, the query would look like this:

SELECT * FROM staff WHERE empID = "" OR 1=1 OR empID = ""

This statement selects all rows from the table, potentially exposing private information. While this particular example might not be considered severe, other more malicious code could be added, especially DELETE or UPDATE statements that modify tables.

As a best practice, code your applications with security in mind. Avoid retrieving redundant data and scrub all input values. SQL injection attacks are mainly based on exploiting code not written with security in mind. In order to prevent this, the PHP manual makes several recommendations, including:
- Never trust any kind of input, even if it comes from a select box, a hidden input field or a cookie.
- Never connect to the database as a user with super authority or as the database owner; always use customized users with very limited privileges.
- Check whether the given input has the expected data type. Languages like PHP have a wide range of input validation functions.
- Quote each non-numeric user-supplied value that is passed to the database with a database-specific string escape function.
- Do not print out any database-specific information, especially about the schema, by fair means or foul.
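The most direct defense in this example is a parameterized query, where user input is bound to a parameter marker instead of being concatenated into the statement text. A minimal sketch using the same ibm_db2 functions (the $conn connection setup is assumed):

// The driver sends the value separately from the SQL text,
// so the input can never alter the structure of the statement.
$sql = 'SELECT * FROM staff WHERE empID = ?';
$stmt = db2_prepare($conn, $sql);
$result = db2_execute($stmt, array($_GET['empid']));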
Apply the latest DB2 FixPaks

DB2 FixPaks contain important bug fixes and performance enhancements. They are usually scheduled for release on a quarterly basis and can be freely downloaded from the DB2 Technical Support Web site. FixPaks are cumulative. This means, for example, that FixPak 10 also contains the fixes in FixPak 9, FixPak 8, FixPak 7, and so on, so you only have to download the latest one to take advantage of all the updates in the previous ones.

If you want to find out what version and FixPak level of DB2 you currently have installed, use the db2level command from a command window. The output displays the bit-size of your instance (either 32 or 64 bit), the FixPak level currently installed, and the installation directory.

As a best practice, develop all new applications using the latest FixPak level. That way, you will be able to take advantage of all the latest performance enhancements and fixes. For existing applications, you should seriously consider moving to the latest FixPak if it contains a fix that addresses an important security concern.

A FixPak download contains the code used to upgrade your installation along with supporting documentation. The APAR list, contained in the aparlist.txt and aparlist.html files, contains a list of APARs, or known product defects, that are fixed in the FixPak. If you were experiencing incorrect or unexpected behavior in your current level of DB2, it might have been due to a product defect; you can view the APAR file to see if the FixPak contains a fix for it. The fix pack README file, called FixPackReadme.txt, contains the instructions for installation. You should always read this file before starting a FixPak installation in order to understand the exact sequence of steps you should follow. Finally, the FixPak release notes, contained in the release.txt and relnotes.pdf files, contain the latest information about the DB2 product and known issues and workarounds.

Perform random security audits

Finally, any proactive security plan should include random security audits. These audits involve logging database events such as authorization checking, database object maintenance, security maintenance, system administration, and user validation, and ensuring that access patterns look normal. Fortunately, DB2 comes with an auditing facility that generates and allows a DBA to maintain an audit trail for a series of predefined database events. Auditing takes place at the instance level, meaning that once it is started, it audits the activity for all databases in that instance. The audit facility can monitor different types of database events, and you can specify whether only successful or failed events, or both, should be logged. The db2audit command is used to configure and operate the audit facility. Once auditing is configured and audit records generated, they can be extracted into a text file, which can then be analyzed. They can also be extracted into delimited ASCII files, which can then be loaded into DB2 relational tables for analysis and querying.

For example, suppose that you get an anonymous tip from one of your application users that a user called SAM is attempting to gain access to database objects and tables that he is not supposed to have access to. You decide to randomly monitor the DB2 instance for failed authorization checking attempts. During one lunch hour, you begin by configuring the audit facility to audit the CHECKING event type, recording only failed attempts and using NORMAL error processing:

db2audit configure scope checking status failure

At 12pm, you start the audit facility:

db2audit start

During the auditing period, SAM walks over to the database server and logs in. He opens a command line window, connects to the SAMPLE database and unsuccessfully tries to update the employee salaries in the EMPLOYEE table.
He issues the following SQL statements:

connect to sample user sam using bad123boy
update tedwas.employee set salary = salary * 1.5

Upon receiving the error message:

DB21034E The command was processed as an SQL statement because it was not a valid Command Line Processor command. During SQL processing it returned: SQL0551N "SAM" does not have the privilege to perform operation "UPDATE" on object "TEDWAS.EMPLOYEE". SQLSTATE=42501

indicating that he does not have permission to update that table, he quickly logs off the server and leaves, thinking that nobody will notice anything.

An hour passes by and you decide to check the contents of the audit log. You extract the records from the db2audit.log file into ASCII delimited files:

db2audit extract delasc delimiter ; category checking database sample status failure

Having previously created the DB2 tables to hold the audit data, you load the extracted data from the checking.del file into the CHECKING table, using the following command:

LOAD FROM checking.del OF del MODIFIED BY CHARDEL; INSERT INTO audit.checking

You attempt to find out more information about the failed authorization attempt by querying the AUDIT.CHECKING table:

SELECT category, event, appid, appname, userid, authid FROM audit.checking

From the query results, shown in Listing 8, you can see that one audit record was generated for the failed update statement.

Listing 8. Result of querying the CHECKING table

SELECT category, event, appid, appname, userid, authid FROM audit.checking

CATEGORY  EVENT            APPID                    APPNAME    USERID  AUTHID
--------  ---------------  -----------------------  ---------  ------  ------
CHECKING  CHECKING_OBJECT  *LOCAL.DB2.060206220334  db2bp.exe  SAM     SAM

  1 record(s) selected.

The output confirms that SAM tried to access a table he was not supposed to. Now that your suspicions have been validated, you will continue to gather additional evidence to present to your management, so they can take corrective action.
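As audit records accumulate, the same table supports broader monitoring. The following query is an illustrative sketch (assuming the same AUDIT.CHECKING table) that ranks authorization IDs by their number of failed checks, a quick way to spot repeat offenders:

-- Count failed authorization checks per authorization ID
SELECT authid, COUNT(*) AS failed_checks
FROM audit.checking
GROUP BY authid
ORDER BY failed_checks DESC;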
As a best practice, perform random security audits on your database. You can also perform an audit after receiving information that would lead you to believe someone is trying to compromise the security of your system. The DB2 auditing facility is very powerful and can provide you with the detailed information you need to audit access attempts. While reactive monitoring of unforeseen events is inevitable, proactive auditing should also be an important component of your security plan, and you should allot time at different points during the month to perform monitoring and analysis.

In this article, twelve DB2 security best practices were reviewed, ranging from using encrypted authentication modes to performing security audits. Monitoring your system's security is becoming an increasingly important task, given the escalating number of occurrences of system security breaches. By following these best practices, you can help minimize the security threats to your DB2 data server. You should use them in conjunction with security best practices and policies that are in place at other levels in your system architecture to ensure a comprehensive, secure solution.

Resources

- DB2 9 Online Information Center: The DB2 online (and searchable) documentation.
- Read more articles about DB2 9 on developerWorks.
- "Understand how user and group accounts interact with DB2 UDB" (developerWorks, August 2005): This article describes the different user and group accounts that are needed to install and work with IBM DB2 Universal Database for Linux, UNIX, and Windows, Version 8.2. It also introduces the DB2 UDB security model, including user authentication, user and group authorization, and super users.
- "Understand the DB2 Universal Database security plug-ins" (developerWorks, December 2005): Learn about the IBM DB2 Universal Database security plug-ins, a new feature introduced in Version 8.2. This article explains what the security plug-ins accomplish and teaches you how to enable and write your own security plug-ins.
- "Understand how authorities and privileges are implemented in DB2 UDB" (developerWorks, January 2006): This article reviews the different administrative authority levels and privileges available in DB2 UDB and how they can be granted to and revoked from user and group accounts.
- "Understand the DB2 audit facility" (developerWorks, March 2006): Learn about the DB2 audit facility, its purpose, how to use and configure it with the db2audit command, and get tips for using it effectively.
- "DB2 Label-Based Access Control, A Practical Guide, Part 1: Understand the basics of LBAC in DB2" (developerWorks, May 2006): This tutorial includes use-case scenarios that demonstrate how users can apply Label-Based Access Control (LBAC) to protect their data from illegal access while retaining the flexibility of allowing users restricted access to data. The tutorial provides a step-by-step guide to creating LBAC solutions based on use-case scenarios.
- "DB2 Label-Based Access Control, A Practical Guide, Part 2: A step-by-step guide to protect sensitive data using LBAC" (developerWorks, May 2006): This tutorial continues the use-case scenarios from Part 1 with a step-by-step guide to creating LBAC solutions.
- developerWorks Information Management zone: Find more resources for DB2 for Linux, UNIX, and Windows developers and administrators.

Get products and technologies

- Download DB2 9 (test drive) to try out the features described in this article.
- Build your next development project with IBM trial software, available for download directly from developerWorks.
The timeline by which supercomputing advances has been pretty consistent, with thousand-fold increases occurring roughly every decade. To illustrate, Sandia Lab's ASCI Red became the first teraflop supercomputer in 1996, and Los Alamos Lab's RoadRunner broke the petaflop barrier in 2008. If this trend continues, we should see an exascale machine by the end of 2020. And most experts agree with this timeframe.

Peter Kogge, however, remains skeptical. Kogge, an IEEE Fellow and professor of computer science and engineering at the University of Notre Dame, recently shared his thoughts on the subject in a Scientific Computing article. Kogge predicts an end to the "spectacular progress" supercomputing has enjoyed in the past. He argues that the "power wall" will make Moore's Law-predicted speed increases unsustainable. Chips will still get faster, but not as quickly.

In 2007, Kogge and a body of experts came together at the behest of DARPA to create a 278-page report [PDF] that examined the feasibility of building an exaflop-class supercomputer by 2015. The agency asked the group to determine the key challenges as well as the engineering technologies that would be necessary to build such a machine. Kogge reports on the sobering conclusions: "The practical exaflops-class supercomputer DARPA was hoping for just wasn't going to be attainable by 2015. In fact, it might not be possible anytime in the foreseeable future. Think of it this way: The party isn't exactly over, but the police have arrived, and the music has been turned way down."

The biggest obstacle to this next level of computing prowess? Power. Kogge uses the Blue Waters supercomputer as an example. Blue Waters will require 15 MW for 10 petaflops of performance. If you were to create an exascale machine by scaling Blue Waters 100-fold, it would take 1.5 gigawatts of power to run it. That's more than 0.1 percent of the total US power grid, states Kogge.
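The scaling arithmetic behind that claim is easy to check. Below is a back-of-the-envelope sketch in Python; the roughly 1,000 GW figure used for total US generating capacity is an assumption made for the comparison, not a number from the report:

# Back-of-the-envelope check of the power-scaling argument
blue_waters_mw = 15        # ~15 MW for ~10 petaflops
scale_factor = 100         # 10 petaflops -> 1 exaflop
exascale_gw = blue_waters_mw * scale_factor / 1000   # 1.5 GW

us_grid_gw = 1000          # assumed total US generating capacity (~1,000 GW)
share = exascale_gw / us_grid_gw
print(f"{exascale_gw:.1f} GW, or {share:.2%} of the US grid")
# prints: 1.5 GW, or 0.15% of the US grid -- consistent with "more than 0.1 percent"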
The DARPA report panel members reached the conclusion that it would not be feasible to build an exaflop-level supercomputer by merely tweaking the current computing technology. Only a complete redesign could achieve the necessary power savings.

The power obstacle is just the first of many "seemingly insurmountable obstacles." There are also concerns about memory, long-term storage, and system resiliency, not to mention the software problem of getting code to run on so many cores. And to make matters worse, Kogge explains that many of the proposed solutions would require additional hardware, further increasing the power demand.

Kogge is not one to point out all the problems without offering solutions. He writes that "success in assembling such a machine will demand a coordinated cross-disciplinary effort carried out over a decade or more, during which time device engineers and computer designers will have to work together to find the right combination of processing circuitry, memory structures, and communications conduits - something that can beat what are normally voracious power requirements down to manageable levels." He himself is working on developing new memory technologies that reduce the energy required by the data-fetching process by bringing the computation to the data instead of having to move copies of the data around repeatedly.

The findings in the report definitely shed light on exascale's pain points, but by doing so, they also illuminate the path to progress. And even more importantly, Kogge believes, "government funding agencies now realize the difficulties involved and are working hard to jump-start this kind of research."
Jun 30, 2014

Ray Kurzweil, an American author, scientist, inventor, futurist, and a director of engineering at Google, was quoted in Time Magazine in March 2012 saying: "A kid in Africa with a smart phone has access to more information than the president of the U.S. 15 years ago."

This is a powerful statement about the power of technology. However, that child in Africa will not be able to benefit from that wealth of information like a child in Silicon Valley can. The reason is that the social infrastructure is not there to support the child in Africa as it is in Silicon Valley. Technology without social innovation and globalization will not advance civilization.

Since its founding in 1910, Hitachi has responded to significant societal challenges based on our mission of "contributing to society through the development of superior, original technology and products." Hitachi's corporate strategy is built around social innovation. Working with IEEE, Hitachi is sponsoring an IEEE award for Innovation in Societal Infrastructure to promote the development of societal infrastructure through information technology. This award was established by Hitachi Ltd., in cooperation with the IEEE Computer Society, as an institute-level award within IEEE to recognize "significant technological achievements and contributions to the establishment, development, and proliferation of innovative societal infrastructure systems through the application of information technology with an emphasis on distributed computing systems." With this award Hitachi hopes to help realize a world that provides safety, comfort and convenience for people in every region, country and community.

Congratulations to Dr. Balaji Prabhakar, Professor, Stanford University, the inaugural recipient of the IEEE institute-level award for Innovation in Societal Infrastructure! The award presentation is scheduled for Tuesday, July 1, 2014, 13:00 at the 2014 IEEE International Symposium on Information Theory in Honolulu, Hawaii. This award is in recognition of "... his demonstration of the innovative use of information technology and distributed computing systems to solve longstanding societal problems, in areas ranging from transportation to healthcare to recycling." Details of Dr. Prabhakar's contribution can be found on the Stanford Center for Societal Networks website.

Stanford University, like Hitachi, recognizes the importance of social innovation in a technology-driven world and has established centers like the Stanford Center for Societal Networks, where Dr. Prabhakar worked on a joint project with the National University of Singapore. This project was conducted in Singapore and Bangalore. Social innovation is a global requirement, and to address this Hitachi Ltd. is undergoing a business transformation to be a global company. Last year Jack Domme, our Hitachi Data Systems CEO, was appointed as a Corporate Officer of Hitachi Ltd. As the first non-Japanese corporate officer, Jack has assumed an expanded leadership role in further advancing the globalization efforts of Hitachi and accelerating the social innovation business through the development of strategies that bridge the Hitachi Group Companies.

Technology without social innovation will not advance civilization, and social innovation without globalization will not create a world of equal opportunity. For more information on Hitachi's Social Innovation business, please see the following link: http://www.hitachi.com/businesses/innovation/
ABC of Password Creation and Management

Nowadays, passwords are the most popular authentication mechanism for online operations, and a password has become a must for secure access on most websites. A password policy is a set of rules that aims to improve computer security by motivating users to create dependable, secure passwords and then store and utilize them properly. A password policy is part of the official regulations of an organization and might be employed as a section of its security awareness training.

A password policy may have two parts. They are described below:

1. Password Creation:

• Agencies should implement a password policy enforcing either:
o a minimum password length of 16 characters with no complexity requirement, or
o a minimum password length of ten characters, consisting of at least three of the following character sets (a sketch of an automated check against these two options appears at the end of this article):
– lowercase characters (a-z)
– uppercase characters (A-Z)
– digits (0-9)
– punctuation and special characters
• Passwords should be reasonably complex and difficult for unauthorized people to guess.
• A password should be unique, with meaning only to the person who chooses it. Dictionary words, common phrases and even names should be avoided. Pick a phrase, take its initials, replace some of those letters with numbers and other characters, and mix up the capitalization. For example, the phrase "This may be one way to remember your password sentence" can become "TmB0WTrYp$!"
• It should not contain any word spelled out completely.
• Users are not allowed to use common words, and passwords should never be based on personal information (e.g. user name, Social Security number, children's names, pets' names, hobbies, anniversary dates, etc.).
• Agencies must not use a numerical password (or personal identification number) as the sole method of authenticating a system user to access a system.
• Employees may not use a password for their company accounts that they are already using for a personal account.
• It is a must to have different passwords for these three levels: regular users, root, and administrators.
• User accounts will be disabled temporarily after 3 failed login attempts.
• Besides a strong password, a two-factor authentication system should be enabled.

2. Password Management:

• All passwords must be changed regularly, with the frequency varying based on the sensitivity of the account in question. This requirement will be enforced using software when possible.
• Agencies should:
o ensure that passwords are changed at least every 90 days
o prevent system users from changing their password more than once a day
o check passwords for compliance with their password selection policy where the system cannot be configured to enforce complexity requirements
o force the system user to change an expired password on initial logon or if the password is reset
• If the security of a password is in doubt - for example, if it appears that an unauthorized person has logged in to the account - the password must be changed immediately.
• Default passwords - such as those created for new employees when they start or those that protect new systems when they're initially set up - must be changed as quickly as possible.
• Users may never share their passwords with anyone else in the company, including co-workers, managers, administrative assistants, IT staff members, etc. Everyone who needs access to a system will be given their own unique password.
• Employees may never share their passwords with any outside parties, including those claiming to be representatives of a business partner with a legitimate need to access a system. • Employees should take steps to avoid phishing scams and other attempts by hackers to steal passwords and other sensitive information. All employees will receive training on how to recognize these attacks. • Users must refrain from writing passwords down and keeping them at their workstations. See above for advice on creating memorable but secure passwords. A password may follow the traditional guidelines yet still turn out to be a weak password. Users who can’t remember their strong passwords and end up writing them down or constantly having to reset their passwords undermine the benefits of a strong password policy. Passwords are one piece of the security puzzle in the enterprise. Keeping user accounts secure takes a combination of a thorough process for strong password creation and an easy to use system for users to follow to keep those passwords safe.
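The two-track creation rule above is mechanical enough to enforce in code. Below is a minimal sketch in Python; the function name and the exact character-set tests are illustrative assumptions, not part of any official policy tooling.

```python
import string

def meets_policy(password: str) -> bool:
    """Check a password against the creation rules above: either at least
    16 characters with no complexity requirement, or at least 10 characters
    drawn from 3 or more of the 4 character sets."""
    if len(password) >= 16:
        return True
    if len(password) < 10:
        return False
    sets_used = [
        any(c.islower() for c in password),             # lowercase (a-z)
        any(c.isupper() for c in password),             # uppercase (A-Z)
        any(c.isdigit() for c in password),             # digits (0-9)
        any(c in string.punctuation for c in password), # punctuation/special
    ]
    return sum(sets_used) >= 3

# The phrase-based example from the policy passes the complexity track:
print(meets_policy("TmB0WTrYp$!"))   # True: 11 chars, all 4 character sets
print(meets_policy("marathon2017"))  # False: only 2 of the 4 character sets
```

A production checker would additionally screen for dictionary words and personal information, which this sketch does not attempt.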
New York: A Global City
Syllabus reference (translated from French) - THEME: The dynamics of globalization; QUESTION: Territories in globalization

Lesson Plan
Introduction
I. An attractive city
A. Transport in NYC
B. A diverse population
C. Tourism and Education
II. A world power
A. The financial capital
B. Politics and diplomacy
C. Culture
III. Challenges for today and for tomorrow
A. Transforming urban areas
B. A divided city

Introduction: What is a global city?
Saskia Sassen literally wrote the book on global cities back in 2001 (though her global-cities work dates back well over a decade before that book). In short form, in the age of globalization the activities of production are scattered on a global basis. These complex, globalized production networks require new forms of financial and producer services to manage them. These services are often complex and require highly specialized skills. In this world, then, a global city is a significant production point of the specialized financial and producer services that make the globalized economy run. Sassen covered New York, London, and Tokyo specifically in her book, but there are many more global cities than this. A number of studies have been undertaken to produce various rankings. When you look at them, however, you see that the definition of global city they use is far broader than Sassen's core version; these rankings look at global cities in four basic ways:
1. Advanced producer of services
2. Economic giant
3. International gateway
4. Political and cultural hub
From www.newgeography.com

New York City facts
1624: first Dutch settlement
1674: New York City returned to the English and remained English. The city's commercial ties to London gave it an advantage over other American cities
1883: opening of the Brooklyn Bridge. Manhattan and Brooklyn became a single city of 3.4 million people over an area of 359 square miles
1895: the metropolis had 298 firms with assets of $1 million
1921: the port was merged with that of New Jersey to create a single Port Authority
1932: New York's governor, Franklin D. Roosevelt, was elected president, and his administration launched the New Deal; New York City alone received $1 billion between 1933 and 1939
1934-1945: mayoralty of Fiorello La Guardia: major bridges, sixty miles of intracity expressway, a traffic tunnel under the East River, additions to subway lines
1939: opening of La Guardia airport; 14 new piers added to the port
1930, 1931, 1939: Chrysler Building, Empire State Building, Rockefeller Center
1945: United Nations established in New York City
1947-1963: massive construction boom; addition of 58 million square feet of office space
1955: 7.8 million people
1960s: race riots
1970s: the city experienced near bankruptcy
1985: 6 of the big 8 accounting firms and 7 of the top 10 management consulting agencies were in New York City
1988: the metropolitan region reached 18 million people; the central city shrank to 7.3 million
1990s: NYSE remained the world's largest capital market
2001: terrorist attack; a large part of downtown is destroyed
Source: Christopher KENNEDY, The Evolution of Great Cities: Urban Wealth and Economic Growth, 2011.
Transport in New York City

Port Authority facilities
John F. Kennedy Airport is the busiest international air passenger gateway in the United States. Over seventy airlines operate out of the airport, with non-stop or direct flights to destinations on all six inhabited continents.
The state-of-the-art World Trade Center Transportation Hub, when completed in 2015, will serve over 200,000 daily commuters and millions of annual visitors from around the world. At approximately 800,000 square feet, the Hub, designed by internationally acclaimed architect Santiago Calatrava, will be the third largest transportation center in New York City, rivaling Grand Central Station in size. The WTC Transportation Hub's concourse will conveniently connect visitors to 11 different subway lines and will represent the most integrated network of underground pedestrian connections in New York City. The Hub features an "Oculus" design, which gives the facility a distinctive, wing-like appearance. When completed, the "Oculus," the upper portion of the Transportation Hub, will serve as the main concourse. Incorporating 225,000 square feet of multi-level retail and restaurant space along all concourses, the Hub promises to be a destination location, becoming the centerpiece of the entire Lower Manhattan district. When complete, this structure will reach five stories underground into a basement with connecting ramps leading to the parking and below-grade facilities of all of the adjacent projects on the 16-acre WTC site.
From http://www.panynj.gov/wtcprogress/transportation-hub.html

Who are the New Yorkers?
New York City has historically attracted job seekers from outside the United States. In 2013, foreign-born immigrants accounted for 42.7% of all New York City workers, 37.0% of all New York City residents, and 46.0% of the New York City labor force. As of 2013, over 1.9 million immigrants to the United States worked in New York City; of those, 1.6 million lived in New York City. In the last decade, the growth of the foreign-born population, at 7 percent, outpaced the growth of the city's overall population, at 3 percent. At the same time, the origins of New York City's immigrant population are changing. Immigration from Europe has fallen dramatically as a proportion of overall immigration to New York, while Latin America has surged to the top spot, followed closely by Asia.

Tourism and Education
City officials estimate the overall economic impact of tourism in 2013 to be $58.7 billion. Direct visitor spending was estimated to be $39.4 billion.
With seven universities in New York featured in the QS World University Rankings 2015/16 and an additional three in close proximity to the city, there's good reason why New York City is one of the most popular study destinations in the world, ranked 15th in the QS Best Student Cities 2015.
1. Cornell University: New York City's highest-ranking institution, Cornell University is currently ranked 19th in the world. A member of the prestigious Ivy League group, Cornell University's main campus is actually in Ithaca, around 200 miles to the north-west of New York City, but it also has a strong presence in NYC.
2. Columbia University: Currently stands in 22nd place in the QS World University Rankings. Another member of the prestigious Ivy League, Columbia University has a central location on the Upper West Side of Manhattan.
It boasts a highly diverse faculty and student body (just under 30,000 students overall), with more than 7,000 international students from over 150 different countries.
3. New York University (NYU): Ranked among the world's best, at 53rd this year. Notably, New York University has a strong focus on internationalization. Its main hub is its Washington Square campus, in Greenwich Village. This area is one of New York City's most creative neighborhoods, and over the years the school has attracted an eclectic mix of writers, artists, musicians and intellectuals.
From www.topuniversities.com

Economic Power
2014 saw the New York Stock Exchange lead the world's markets in global capital raising for the fourth consecutive year. Fueled in part by the largest IPO in history - that of Alibaba, which raised $25 billion - it was a landmark year for the NYSE across a range of industry categories, including the number and value of tech IPOs. For the fourth year in a row, NYSE led in capital raised at more than $70 billion, and for the third year in a row led in tech IPOs with $29 billion in proceeds. Today, NYSE-listed companies account for $27 trillion in market capitalization, representing the most valuable listed franchise in the world.
From www.nyse.com
For the past 61 years, Fortune Magazine has been ranking the top 1000 companies in the United States based on revenues for each company's latest fiscal year. The 2015 list shows there are a few areas of the country where Fortune 1000 companies are clustered. The biggest cluster is a corridor along the East Coast: stretching from Boston, Mass. to Norfolk, Va., 267 Fortune 1000 companies are headquartered in this nine-state area. New York City is home to the most headquarters with 72, followed by Houston (49), Atlanta (22), Chicago (22), and Dallas (15).

Political Power
Shortly after the establishment of the United Nations in 1945, the U.S. government negotiated the Agreement Between the United Nations and the United States Regarding the Headquarters of the United Nations (1947), which established the specific geography of the U.N. "headquarters district" as the property on the East River where the 38-floor U.N. tower is located, along with an easement over Franklin D. Roosevelt Drive. Through four subsequent supplemental agreements (completed in 1966, 1969, 1980, and 2009, the last under the Obama administration), the headquarters district of the U.N. has expanded significantly and now encompasses entire buildings and warehouses in New York and on Long Island beyond the original U.N. headquarters building in Turtle Bay. As part of the U.N. headquarters district, these locations, which in some cases are simply floors and offices in commercial buildings, are "inviolable" to U.S. officers and officials and "under the control and authority of the United Nations" except as specified in the agreement.
From Heritage.org

Cultural Power
Chinese Investors Star on Broadway
Who's the latest behind-the-scenes investor on Broadway? China. Three of the hottest musicals on Broadway have Chinese backers as China starts expanding live theatrical entertainment at home and looks to New York for expertise. "This is the first season that Chinese companies are investing on Broadway," said Simone Genatt, chairman of Broadway Asia, a New York-based production and licensing entertainment company primarily focused on Asia.
"They've been doing Broadway musicals in mainland China for the last decade, but this is the first time China is here in New York." The New York investments are part of a broader push to expand musical theater inside China. Big Broadway shows such as "Cats" and "The Sound of Music" have been touring China for years. In a next step, "Cats" and "Mamma Mia" have been translated into Chinese. Chinese companies say they're hoping to leverage their stakes in Broadway shows to gain expertise in U.S. productions, bring shows to China and eventually develop more original Chinese-produced musicals.
From the Wall Street Journal, June 4, 2015.

New York City, a vibrant cultural scene
INCREASED GROWTH OF NEW YORK CITY'S ENTERTAINMENT INDUSTRY (October 15, 2015)
Mayor Bill de Blasio, Deputy Mayor Alicia Glen, and Media & Entertainment Commissioner Cynthia López today announced that New York City's filmed entertainment industry now contributes $8.7 billion to the local economy, an increase of more than $1.5 billion, or 21 percent, since 2011. According to an independent study conducted by the Boston Consulting Group (BCG), New York City is one of only three cities in the world with a filming community large enough to enable a production to be made without any roles needing to be brought in from other locations, including cast, crew members, and the creative team. Additionally, a rich real-life history, iconic locations, diverse storytellers and top talent are among the reasons productions choose to film in New York City. While television has seen the greatest increase (from 29 series in the 2013-2014 season to a record 46 series in the 2014-2015 season), New York City was home to 242 film productions in 2014, and as of this month, 256 films have been shot so far in 2015. "There's something special about New York City - and the TV and film industry has picked up on it. The filmed entertainment industry channels nearly 9 billion dollars into our local economy each year, supporting the creation of thousands of dependable good-paying jobs and showcasing the history, creativity and vivacity of our people and our city," said Mayor Bill de Blasio.
From www.nyc.gov

Transforming urban areas
Reshaping the Financial District after 9/11
"Because of those buildings being attacked, there was an outpouring of awareness and generosity and people wanting to help rebuild, coupled with an openness to other cultural influences. The architectural scene in the US, which once was probably led by architects in Los Angeles and other places in the west, has returned once again to the east coast. I'm not saying there aren't any good architects on the west coast, but there's a tremendous concentration of architects in New York City now that haven't been here since the turn of the 20th century."
Interview with architect Craig Dykers of Snøhetta, who designed a recently opened pavilion on the memorial site. (www.dezeen.com)

Two examples of gentrification: the Meatpacking District between 1985 and 2015
Brooklyn's Hipster Heaven
All the people waiting on the L train platform are in their 20s and 30s and have full-body tattoos, piercings and funny hairstyles. They're going to Williamsburg, a neighborhood where they've created, next to the Latino and Hasidic communities, a community of their own. Williamsburg, one subway stop into Brooklyn, has turned into a neighborhood of artists, students and people who go out at night.
They demand good food at fair prices and, above all, think they are different from the sophisticated, arrogant, money-driven Manhattanites. It's peaceful, with trees on both sides of the street and not a megastore in sight. Williamsburg looks like a village, with its own style, pace and rules (especially, "be cool"). It's all about modesty and conviviality. The avenue is the center of it all. There are Italian, Mexican and macrobiotic restaurants mingled with bagel, thrift and antiques shops. People stroll calmly down the sidewalks, often followed by a dog or bike. (From journalism.nyu.edu)
Transforming abandoned buildings into trendy bars (2007-2009)

A tale of two cities
A report by the city comptroller's office found an alarming rise in the share of overcrowded housing units from 2005 to 2013. Here's a press release on the implications of this finding: "Studies make it clear that crowding hurts the whole family: it makes it harder for kids to learn and puts the entire family at a greater risk of homelessness. This new report shows that the problem of crowding is stubbornly increasing, with nearly 1.5 million New Yorkers now living in a crowded home (two people in a studio) or a severely crowded home (three or more people in a studio)."
New York City has prospered during the 12-year mayoralty of Michael Bloomberg, which comes to an end this year. But the same cannot be said of all New Yorkers. In January 2013, for the first time in recorded history, the New York homeless shelter system housed an average nightly population of more than 50,000 people. That number is up 19 percent in the past year alone, up 61 percent since Bloomberg took office, and it does not include victims of Hurricane Sandy, who are housed separately. While homelessness is increasing in other cities, the numbers from New York are astoundingly high. This January, on average, over 21,000 children slept in city shelters each night, a 22 percent increase over the same period in 2011. More than one percent of NYC children (21,034 of 1,780,000) slept in a shelter this January.
From www.citylab.com, March 8, 2013.
SGI's Altix UV 1000 "Anakyklosis" at the Technical University of Denmark is being used to enable researchers to find new genes and proteins that could benefit future biotechnology initiatives supporting industrial processes. The system provides large shared memory for researchers to tackle projects in metagenomics, an area that involves far larger data sets than general human genome research would require. According to Thomas Sicheritz-Ponten, director of the metagenomics research project at the university, limitations in memory were showstoppers for this type of work in the past. He notes, however, that the SGI super Anakyklosis "can hold the equivalent of 2500 human genomes in its working memory at once, so it opens up new opportunities for systems biology research." According to one of the lead researchers on the project, Nikolaj Blom, "The need for larger and faster computers has become very urgent due to the development of the metagenomics research area... this deals with mapping the entire genome content of bacterial communities, such as those found in the deep oceans, in wastewater or our own gut. The resulting amount of data is several thousand times larger than the entire human genome." The research being conducted on the SGI machine could also help reduce dependence on fossil fuels: according to the researchers, finding ways to produce chemicals and other industrial components from organic materials will lead to a broader base of sustainable raw materials to work from.
Supply Chain Management (SCM)
What is SAP Supply Chain Management?
Supply Chain Management is the process of managing and streamlining the entire flow of materials and activities for an organisation, from the supplier side to the customer side. The organisation's entire value chain depends on the supply and movement of the materials that drive its output, and on the efficiency of the activities that enable them. Supply chain management keeps this entire flow of goods and services running smoothly.
Q. What was the name of the computer that killed much of the crew in 2001: A Space Odyssey? Author Arthur C. Clarke and director Stanley Kubrick worked together on the future-fiction classic 2001: A Space Odyssey. The film and novel were both released in 1968 and depicted the exploration of a mysterious black monolith during a manned mission to Jupiter. The human members of the mission interact with their ship's artificial intelligence system, HAL 9000 (Heuristically programmed ALgorithmic computer), who, after a series of malfunctions, chooses to kill the crew to protect its own programmed orders. If you've been following our 12 Days of Geek experience, you may recall our previous inquiry into the three laws of robotics. So here's a new question: does HAL act in accordance with the three laws?
Programming Cars to Kill
By Samuel Greengard | Posted 2016-03-28
What happens when a mechanical part fails or there's a landslide, and a self-driving car must choose between saving its passenger or a motorist in another car?
The MIT Technology Review recently presented a story titled "Why Self-Driving Cars Must Be Programmed to Kill." Although the topic seems to careen entirely into the sensationalistic category, it actually represents a very real and disturbing dilemma for companies manufacturing products. There's a growing need to embed ethical decision making into systems that rely on artificial intelligence (AI) and algorithms. Self-driving vehicles, as the article points out, are at the nexus of this technology conundrum. As automakers embed automatic and autonomous functions in cars and trucks - things like automatic braking, automated steering and self-parking functions, for instance - there's a need to think about what happens during an unavoidable accident (rather than the human negligence we typically describe as an "accident"). For example, what happens when a mechanical part fails or a landslide takes place and the car must make a choice between saving its passenger or a motorist in another car? How does the motor vehicle steer, brake and sense the environment around it? Which safety systems spring into action, and how do they work?
It's a given that manufacturers will embed features and capabilities that make autos and driving safer. Heck, simply removing phone- and food-wielding humans from the equation is a huge step forward. And while there's a clear need to understand liability laws and design products that operate in an ethical and legally permissible way in a digital world, there's also a gray area that is completely unavoidable. And that's where the rubber hits the proverbial road. As the article points out: "If fewer people buy self-driving cars because they are programmed to sacrifice their owners, then more people are likely to die because ordinary cars are involved in so many more accidents. The result is a Catch-22 situation."
Unfortunately, there are no clear answers, and right and wrong are highly relative terms in this context. When researchers at the Toulouse School of Economics in France presented the question of how autonomous vehicles should operate to several hundred Amazon Mechanical Turk participants, the results were fairly predictable: cars should be programmed to minimize death tolls. However, respondents also noted that they had strong reservations about these systems. Simply put: people were in favor of cars that sacrifice the occupant to save other lives... but they don't want to ride in such a vehicle.
As we wade deeper into robotics, drones, 3D printing and other digital technologies, similar questions and ethical conundrums will occur. It may not be long until every organization requires a chief ethical officer to sort through the moral and ethical implications of technology.
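The survey's "minimize death tolls" answer is, in effect, a utilitarian cost function, and it is easy to see why writing it down feels uncomfortable. The sketch below is purely illustrative: the maneuver names and probabilities are invented, and no real autonomous-vehicle planner is anywhere near this simple.

```python
def choose_maneuver(options: dict) -> str:
    """Pick the maneuver with the lowest expected casualties.

    `options` maps a maneuver name to a (probability_of_harm, people_at_risk)
    pair. This encodes only the bare utilitarian rule from the survey; real
    planners must also weigh uncertainty, legality and liability.
    """
    return min(options, key=lambda m: options[m][0] * options[m][1])

# Hypothetical unavoidable-accident scenario after a brake failure:
scenario = {
    "stay_course":  (0.9, 3),  # likely harms three pedestrians ahead
    "swerve_left":  (0.5, 1),  # may harm one motorist in the next lane
    "swerve_right": (0.8, 1),  # may harm the car's own occupant
}
print(choose_maneuver(scenario))  # "swerve_left": lowest expected harm
```

Even this toy version exposes the article's dilemma: change the weight given to the occupant in the cost function and you change who the car protects.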
If you're a developer looking to increase your employability, then learning a new language is always a good strategy. But the big question is this: which language should you learn? If you want to stay ahead of the pack, and be able to take your pick of the plum jobs of the future, it may be worth looking beyond Java, Python and the other established languages. What about gaining skills and experience in up-and-coming languages that aren't in demand yet - but may well be soon?
It's a career strategy that worked for programmers who spotted the potential of Java when it was introduced in the 1990s. Those who got in early could walk into any Java programming job they wanted a few years later - and demand the very highest rates as well. The problem is picking the right language to learn, as there are plenty of new ones to choose from. "Almost all new languages are coming from open source projects," says Mark Driver, a research director at Gartner. "That means there are no barriers to entry, so thousands of new languages are coming onto the scene. Most disappear quickly, and only a few ever catch on."
Driver says he believes the reason is that, for most organizations, the "incumbents" such as Java, C++ and C# are just too entrenched to replace, "and there's very few enterprises that want to expand the languages they use too much." But the signs say a few new languages are catching on. Here are six of the most promising ones, in no particular order:

Dart
Why learn Dart? Google's backing ensures that Dart has a good chance of succeeding.

Opa: Simple, Secure Web Apps
Although Opa hasn't yet been adopted by enterprises in any significant way, there are a lot of discussions about the language on the Internet at the moment, Driver says. Why learn Opa? Web applications are going to get more complex and prevalent, and there's unique value in having the server-side/client-side distribution of code happen automatically.

Scala: Scalable Language in More Than Name Only
Scala is short for "scalable language," and it's designed to be exactly that: Scala can be used for tiny programs or very large-scale applications. It's not particularly new, as it was introduced in 2003, but interest is on the rise. One key reason for that is that code can be optimized to work with concurrency. Another is simply that many developers like using it. A key advantage for companies considering Scala is that it interoperates with Java. It runs on JVMs (and Android), while integrated development environments (IDEs) such as Eclipse, IntelliJ or NetBeans, and frameworks such as Spring or Hibernate, all work with it. "The ability to adopt it on top of existing JVMs is really significant," says Jeffrey Hammond, a principal analyst at Forrester. Why learn Scala? It appeals to enterprises that have already invested in Java and don't want to have to support anything new in their production environments.

Erlang: With Concurrency Comes Availability
Erlang is another language gaining momentum because of concurrency. Originally developed in 1986, Erlang was open sourced in 1998. It's designed for building large-scale, highly available applications. Erlang's runtime system supports hot swapping, so code can be modified or updated without having to stop a running system. Language-level features are provided for creating and managing processes to simplify concurrent programming. Meanwhile, processes communicate using message passing, removing the need for explicit locks (a pattern sketched in the example at the end of this article). Why learn Erlang?
Both Gartner's Driver and Forrester's Hammond suggest Erlang is likely to proliferate in the coming months and years.

Ceylon: Modular Java Killer
Based on Java, Ceylon has been designed as a Java killer. Developed by Red Hat as a language for writing large programs in teams, its first stable release became available at the end of 2013. Modularity is a key feature. Code is organized into packages and modules, then compiled to module archives. The tooling supports a system of module repositories, with every module published in a central repository called Ceylon Herd. Since Ceylon is based on Java and comes with an Eclipse-based IDE and command-line tools (with built-in modularity support), Ceylon shouldn't be too difficult to get up and running if you're already skilled in Java programming.

Go: Language for the Cloud
Go, another open source Google language, first appeared in 2009. Also known as Golang, Go is a traditional language like C, but it's written expressly for the cloud, with concurrency and other features such as garbage collection built in. Large Go applications can be compiled in a few seconds on a single computer. Projects written in Go include Docker and Force.com. "We're hearing a lot about Go at the moment," Driver says. "There's a lot of experimentation going on with it - but it does have a steep learning curve." Why learn Go? The combination of suitability for the cloud, Google's backing and the high level of interest in Go at the moment suggests that the language will very likely take off.

This story, "6 emerging programming languages career-minded developers should learn" was originally published by CIO.
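The property the Erlang and Go write-ups above keep returning to is concurrency through message passing rather than shared locks. Here is a rough illustration of that pattern, written in Python rather than either language, so take it as a sketch of the idea, not of Erlang or Go syntax:

```python
from queue import Queue
from threading import Thread

def worker(inbox: Queue, outbox: Queue):
    """Communicate only by messages, never by shared mutable state."""
    while True:
        msg = inbox.get()
        if msg is None:          # sentinel: shut down cleanly
            break
        outbox.put(msg * msg)    # do some work and send a reply

inbox, outbox = Queue(), Queue()
Thread(target=worker, args=(inbox, outbox), daemon=True).start()

for n in range(5):
    inbox.put(n)
inbox.put(None)

results = [outbox.get() for _ in range(5)]
print(results)  # [0, 1, 4, 9, 16]
```

In Erlang the same shape appears as lightweight processes with mailboxes, and in Go as goroutines connected by channels; the point in all three cases is that no lock protects shared state because no state is shared.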
Extremely high frequency (EHF) is the highest band of radio waves, spanning the frequency range 30 GHz-300 GHz. The radio waves in this band have wavelengths in the range of 10 mm to 1 mm; the waves in this band are therefore called millimeter waves (mmW). In wireless communications, frequency is a major factor in ascertaining the feasibility of a technology. Millimeter wave technology operates in an unregulated bandwidth that is available worldwide, with better efficiency than traditional wireless LAN frequencies such as 2.4 GHz or 5 GHz. The mmW technology has many applications in imaging, telecommunications, home networking, satellite communication, and construction & manufacturing, among others, due to its unique features.
The value of the U.S. millimeter wave technology market is estimated to reach $80 million by the end of 2014, and is expected to reach $435 million by 2018, at a CAGR of 55.9%. This growth is attributed to the growing telecom application market, especially in the small cell backhaul field. The mmW scanner market is also expected to grow rapidly in the coming five years. This report also looks into the whole value chain of the market. It also focuses on the parent markets and the sub-markets of this industry, thus identifying the total potential market that can be tapped by millimeter wave technology. The report is based on an extensive research study of the market and the related frequency ranges, frequency band licenses, and product industries. It is aimed at identifying the entire market, specifically mmW products and mmW components, in all applications excluding consumer electronics. The report covers the overall market and sub-segment markets through extensively detailed classifications, in terms of both revenue and shipments. The market segmentation detailed in the report is given below:
By Product:
- MM scanners and imaging systems (active and passive), MM RADAR and satellite communication systems, perimeter and surveillance RADAR, application-specific RADAR, satellite systems, MM telecommunication equipment, mobile backhaul equipment (with sub-segments such as small cell and macro cell, picocell and femtocell), enterprise, and other networking equipment
By Application Areas:
- Mobile and telecommunication, consumer & commercial, healthcare, industrial, automotive & transportation, military, defense & aerospace, and other emerging and next-generation applications
By Frequency Range:
- 8 GHz to 43 GHz millimeter wave - sub-segments: 23 GHz-38 GHz band, 38 GHz-43 GHz band
- 43 GHz to 80 GHz millimeter wave - sub-segments: 57 GHz-64 GHz band, 71 GHz-76 GHz band
- 80 GHz to 300 GHz millimeter wave - sub-segments: 81 GHz-86 GHz band, 92 GHz-95 GHz band
By Licensing Nature:
- Fully-licensed, light-licensed, and unlicensed frequency bands
Along with the market data, customize the MMM assessments to meet your company's specific needs.
Customize to get a comprehensive summary of the industry standards and a deep-dive analysis of the following parameters:
- In-depth trend analysis of products in the competitive scenario
- Product matrix, which gives a detailed comparison of the product portfolio of each company
- Product matrix, which gives a detailed comparison of the product portfolio for the mmW market along with the various applications they are used for
- A comprehensive coverage of regulations followed in the market

Data from mmW Firms
- Fast turn-around analysis of firms' responses to market events and trends
- Various firms' opinions about different products and applications from different companies
- Qualitative inputs on macro-economic indicators and mergers & acquisitions
- In-depth analysis of the market for different frequency ranges and band licenses in the mmW market

Shipment/Volume Data
- Tracking the value of components shipped annually
- Tracking the quantitative inputs of mmW applications

Trend Analysis of Applications
- Application matrix, which gives a detailed comparison of the application portfolio of each company
- Application matrix, which gives a detailed comparison of the application portfolio for the mmW market along with the various products they are used for
- Application matrix, which gives a detailed comparison of the application portfolio for the mmW market along with frequency ranges and band licenses

Frequency Range & Band Licenses Analysis
- Frequency range matrix, which gives a detailed comparison of the frequency range portfolio and frequency band licenses on the basis of the different products they are used in
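As a quick sanity check on the band definitions at the top of this report, free-space wavelength follows directly from frequency via λ = c/f. A small Python helper (the function name is ours, used purely for illustration):

```python
C = 299_792_458  # speed of light in m/s

def wavelength_mm(freq_ghz: float) -> float:
    """Free-space wavelength in millimeters for a frequency given in GHz."""
    return C / (freq_ghz * 1e9) * 1e3

print(round(wavelength_mm(30), 2))   # ~9.99 mm: the top of the stated range
print(round(wavelength_mm(300), 2))  # ~1.0 mm: the bottom of the range
print(round(wavelength_mm(60), 2))   # ~5.0 mm: inside the 57 GHz-64 GHz band
```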
Lecture capture solutions are systems designed to record the audio and visual content of a classroom lecture for later playback, and in most cases they include a suite of software facilities for editing and publishing content via the Web. This allows students to access recorded classroom content from PCs, as well as from mobile devices such as smartphones and portable media players. Given the benefits that lecture capture capabilities offer students and instructors, campus IT leaders should understand this technology and solution space.
Benefits of Lecture Capture in Postsecondary Environments
With today's students well accustomed to podcasts and on-demand video, a growing number of postsecondary institutions view lecture capture as an opportunity to better engage and support the student body. From an institutional perspective, lecture capture solutions offer a number of key benefits and opportunities.
What if all objects were interconnected and started to sense their surroundings and communicate with each other? The Internet of Things (IoT) will have that sort of ubiquitous machine-to-machine (M2M) connectivity. Since there are estimates that between 50 billion and 500 billion devices will have a mobile connection to the cloud by 2020, here's a glimpse of our possible future.
Your alarm clock signals the lights to come on in your bedroom; the lights tell the heated tiles in your bathroom to kick on so your feet are not cold when you go to shower. The shower tells your coffee pot to start brewing. Your smartphone checks the weather and tells you to wear your gray suit, since RFID tags on your clothes confirm that your favorite black suit is not in your closet but at the dry cleaners. After you pour a cup of java, the mug alerts your medication that you have a drink in hand, and your pill bottle begins to glow and beep as a reminder. Your pill bottle confirms that you took your medicine and wirelessly adds this info to your medical file at the doctor's office; it will also text the pharmacy for a refill if you are running low. Your smart TV automatically comes on with your favorite news channel while you eat breakfast and browse your tablet for online news. After you've eaten, while you are brushing your teeth, your dishwasher texts your smartphone to fire up your vehicle via the remote start. Because your "smart" car can talk to other cars and the road, it knows which streets to avoid due to early morning traffic jams. Your phone notifies you that your route to work has been changed to save you time. And you no longer need to look for a place to park, since your smartphone reserved one of the RFID parking spaces marked as "open" and available in the cloud. Don't worry about your smart house, because as you exited it, the doors locked, the lights went off, and the temperature was adjusted to save energy and money.
Does it sound too farfetched for 2020? It shouldn't, since a good part of that is in the works now. If Mark Zuckerberg has his way with the Internet of Things, then "your news feed and a Facebook alert could share with you that your refrigerator or milk carton indicates that you are running out of milk. You could authorize your refrigerator app to signal Whole Foods to deliver a gallon of milk, all via Facebook's omnivorous, pervasive platform." According to IBM Director of Consumer Electronics Scott Burnett, "What we're doing is creating the Facebook of devices. Everything wants to be its friend, and then it's connected to the network of your other device. For instance, your electric car will want to 'friend' your electric meter, which will 'friend' the electric company."

The future is now
If you run for exercise, imagine your smart running shoes uploading your running time, distance, speed and how many calories you burned to a website that keeps track of your progress over time. Your scale has Wi-Fi too, also tracking your weight progress. If your asthma acts up and you use your inhaler, "it uses GPS to determine the time and location when the inhaler is used, and then stores or sends that information to a remote server." According to German telecommunications giant Deutsche Telekom's M2M Competence Center, there are more than 100 million vending machines, smoke alarms, vehicles, and other devices that now automatically share information.
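A device that "automatically shares information," like the inhaler or scale above, usually amounts to something very small: periodically posting a reading to a collection endpoint. Here is a rough sketch in Python; the URL and payload schema are invented for illustration, and real M2M deployments more often use protocols such as MQTT or CoAP than plain HTTP.

```python
import json
import urllib.request

def report_reading(device_id: str, value: float) -> int:
    """Send one sensor reading to a hypothetical collection endpoint."""
    payload = json.dumps({"device": device_id, "value": value}).encode()
    request = urllib.request.Request(
        "https://example.com/m2m/readings",  # placeholder endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return response.status  # e.g. 200 on success

# A connected scale uploading one weight measurement (placeholder values):
# report_reading("scale-42", 81.6)
```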
In Europe, machine-to-machine (M2M) communications have moved even farmers out of their barns and into this networked world of "things." Deutsche Telekom and French remote-monitoring specialist Medria Technologies have developed a "HeatPhone" that automatically sends a text message to farmers when a cow is in heat and ready for insemination, or when calving begins. Old McDonald can have an augmented farm by wearing goggles that allow him to see a report about the current state of everything he looks at, from the health of his cows, to milking machines, to grain bins.
Last week, Technology Review reported that a French startup, SigFox, went live with a cellular data network specifically for inanimate objects; in other words, it was a big boost to the Internet of Things, making your 'dumb' appliances much smarter. "The goal is to make all kinds of appliances and infrastructure, from power grids to microwave ovens, smarter by letting them share data." EVRYTHNG, a global software company originating in the UK, has worked with Diageo, an international premium drink company headquartered in London, "to add an individual digital identity to every product it sells. When a consumer buys a bottle of whisky to give as a gift, for example, he or she is invited to create a personalized online video message which the recipient can activate by pointing their smartphone at the item's barcode."
A "hot" new "mainstream" product taking us a step closer to the Internet of Things in the USA is the Galaxy Camera. ReadWrite Mobile reported, "The Galaxy Camera is a camera first, cell phone... never. This is an intriguing product, and not just because it is a camera that can use social apps like Facebook, Twitter, Google+ and others from wherever you are."
So while this may seem like farfetched science fiction, here's another glimpse into our potential future. Thanks to M2M communications and Internet-enabled things, your refrigerator, kitchen pantry, and recipe app have "friended" each other, decided you've eaten too much red meat, and selected a healthier dinner recipe. Your smart appliances could not only send an order to the grocery store, but might also notify both your doctor's office and your health insurance company if you ignore their chicken recipe selection. Your camera phone captures your "unhealthy" lifestyle by snapping a picture of you stuffing a steak in your mouth. So should we worry about our appliances spying on us and possibly turning into "narcs"?
At the 4th Annual Internet of Things Europe: Shaping Europe's Future Internet Policy - The Road to Horizon 2020, there were privacy and security debates "surrounding the need for separate data protection legislation for the Internet of Things." The "privacy of devices, including sensors, is paramount and must be ensured to prevent unauthorized access. What are the emerging security risks? How can it be ensured that the required safeguards are in place to prevent IoT [Internet of Things] viruses and other security threats?"
Even now, your smart TV connected to the web can be hacked, so there is a plethora of potential privacy and security problems on the IoT horizon. And what if these Internet-enabled devices use the high-speed wireless data standard LTE (Long-Term Evolution) to communicate? The LTE network is vulnerable to jamming, and it's "relatively easy" to "block service across much of a city." All it would take to jam LTE would be a cheap battery, "a laptop and an inexpensive software-defined radio unit" that costs as little as $650.
"Picture a jammer that fits in a small briefcase that takes out miles of LTE signals - whether commercial or public safety," said Jeff Reed, director of the wireless research group at Virginia Tech. "There are multiple weak spots - about eight different attacks are possible," Reed told Technology Review. "The LTE signal is very complex, made up of many subsystems, and in each case, if you take out one subsystem, you take out the entire base station." A research paper [PDF] outlining the LTE vulnerabilities was filed with the National Telecommunications and Information Administration.
While smartphones can make a significant and positive impact on those who embrace the technology, the technology can conversely wreak havoc on its users if they are complacent, careless or merely reactive to the ever-expanding number of security threats. Inattention to potential security threats can result in invasion of privacy, identity theft, inconvenience, the loss of intellectual property and the outright loss of money. The more dependent a smartphone user is on the technology, the more they have at risk. Attevo offers this 13-point checklist of security habits and usage suggestions for all smartphone owners:
- Always maintain physical control over your smartphone to prevent outright theft, unauthorized usage or the installation of malware (apps with malicious code) by seemingly mild-mannered co-workers or by ruthless digital predators; treat a smartphone like a wallet and never leave it unattended in public spaces.
- Enable the smartphone's password/passcode protection setting; a recent study reveals that only 38% of smartphone users enable this basic security feature.
- Install operating system updates whenever they become available to reduce the number of system vulnerabilities; a 2011 report indicated that 90% of Android users were running outdated operating system versions with serious security vulnerabilities.
- Install an anti-malware protection app (if available for the device) to thwart infection from malicious apps and websites; all major platforms have been hacked and are susceptible.
- When using the smartphone's web browser, avoid suspicious or questionable websites, which can be a source of malicious code.
- Be selective when buying or installing apps: wait for app reviews, download only from trusted sources (known app stores) and be cautious of free apps, because they are free for a reason (the reason could be access to your data).
- Understand and control each downloaded app's access to smartphone data and personal information; game apps do not need access to phonebook contacts, photos, e-mails, location, browsing history, texting history and other phone features (avoid allowing automatic app updates).
- Do not save passwords, PINs or other account information as Contacts or in Notes.
- Avoid using open Wi-Fi networks, especially for shopping and banking activities; Wi-Fi sniffing is a common occurrence that can have significant consequences, such as stolen credit card numbers.
- Avoid opening suspicious e-mail or SMS text messages, especially from unknown sources; unwary readers may be tricked by phishing into entering sensitive information at online prompts.
- Turn the Bluetooth access feature off when not needed, and avoid Bluetooth use in busy public areas.
- Use a PIN to access voice-mail, and avoid using the carrier's default PIN setting.
- Ensure that smartphone e-mail account access is through either an SSL or HTTPS connection so that transmitted data is encrypted (see the sketch after this checklist).
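The final point, enforcing encrypted e-mail access, can be checked in code. Below is a minimal sketch using Python's standard library; the host name is a placeholder, and a real client would go on to authenticate and fetch mail.

```python
import imaplib
import ssl

def open_secure_mailbox(host: str) -> imaplib.IMAP4_SSL:
    """Connect to an IMAP server over TLS, refusing any connection whose
    certificate cannot be verified, so transmitted mail stays encrypted."""
    context = ssl.create_default_context()  # verifies certificates and hostnames
    return imaplib.IMAP4_SSL(host, 993, ssl_context=context)

# Placeholder host; replace with your provider's IMAP server:
# mailbox = open_secure_mailbox("imap.example.com")
```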