In a January 30 address explaining his Computer Science For All initiative, President Barack Obama said, “We have to make sure all our kids are equipped for the jobs of the future, which means not just being able to work with computers, but developing the analytical and coding skills to power our innovation economy.” The initiative, if approved by Congress, earmarks $4 billion for states and another $100 million for districts to train teachers and purchase the tools for elementary, middle and high schools to provide opportunities to learn computer science and to promote more Science, Technology, Engineering and Math (STEM) skills. The funding programs, which will appear in the president’s forthcoming budget proposal for 2017, are just the latest effort from the White House to bring more science and technology education to students. With this said, adding more computer science and STEM instruction to K-12 teaching and learning in the next year is most likely inevitable. In order to prepare for adding more coding in the classroom, though, schools need to lay the groundwork by creating and implementing solid technology management and digital citizenship practices.

Technology management and STEM

In order for more coding and other hands-on STEM learning to become an integral part of every student’s education, schools’ management of technology and digital devices will need to be well thought out and executed. Schools are already beginning to beef up technology infrastructure and Internet connectivity through initiatives like e-Rate funding. Additionally, schools are purchasing more and more digital devices, like Chromebooks and tablets, so that students in all grades have access to online resources. But how are these devices managed? The U.S. Department of Education Office of Educational Technology’s 2014 publication, Future Ready Schools: Building Technology Infrastructure for Learning, stresses the importance of planning and implementing procedures that employ system-level controls for device and application management. School district staff should be able to push out updates, security protocols and other critical functions from a central location (versus physically touching each device). As more devices are added and more students use them, this will become increasingly important. Schools will need software that not only allows remote management of devices, but also allows remote monitoring of how and when the devices are being used. This will prevent misuse by students while saving significant amounts of time for IT managers.

Digital citizenship and STEM

As more teaching and learning of coding is added to K-12 education, it will be imperative to allow students and teachers to access the resources required to do so. Both private and corporate organizations, such as Cartoon Network and MIT Media Lab, are taking the initiative to provide curriculum for coding. But in order for students to access these resources, they have to be unblocked on schools’ networks. Network management will need to shift from simply blocking and filtering websites and apps to a more robust pairing of digital citizenship and monitoring of online activity. To address this idea of monitoring and promoting digital citizenship, the Future Ready Schools publication also states, “Less ability to modify or change the device settings can make it easier for IT staff to maintain devices, but gives students less freedom to personalize devices for their needs.
The decision to allow more control over a device may vary depending on the student. A multitiered model of permissions and restrictions gives students who demonstrate responsible behavior more privileges and restricts access for students who fail to show responsible behavior. As you consider these policies, remember that restricting a student’s access in one class will affect that student’s ability to participate in learning in subsequent classes as well.” Having a technology management system in place that allows the remote monitoring of all devices, coupled with a multitiered model of monitoring online activity, will allow students to make choices and be responsible online. This will then allow instructors and schools to give students all the resources necessary to get the most from computer science and STEM education in the future. Need a network, classroom, and e-safety management solution? Impero Education Pro software can help. For more information, call 877.883.4370 or email now.
Source: https://www.imperosoftware.com/adding-more-stem-education-in-k-12-starts-with-digital-citizenship-and-technology-management/
Black Box Explains... USB 2.0 and USB OTG

The Universal Serial Bus (USB) hardware (plug-and-play) standard makes connecting peripherals to your computer easy. USB 1.1, introduced in 1998, is the original widely adopted USB standard. It has two data rates: 12 Mbps for devices such as disk drives that need high-speed throughput and 1.5 Mbps for devices such as joysticks that need much lower bandwidth. In 2002, a newer specification, USB 2.0, or Hi-Speed USB 2.0, gained wide acceptance in the industry. This version is both forward- and backward-compatible with USB 1.1. It increases the speed of the peripheral-to-PC connection from 12 Mbps to 480 Mbps, or 40 times faster than USB 1.1! This increase in bandwidth enhances the use of external peripherals that require high throughput, such as CD/DVD burners, scanners, digital cameras, video equipment, and more. USB 2.0 supports demanding applications, such as Web publishing, in which multiple high-speed devices run simultaneously. Windows® XP supports USB 2.0 through a Windows update. An even newer USB standard, USB On-The-Go (OTG), is also in development. USB OTG enables devices other than a PC to act as a host. It enables portable equipment—such as PDAs, cell phones, digital cameras, and digital music players—to connect to each other without the need for a PC host. USB 2.0 specifies three types of connectors: the A connector, the B connector, and the Mini B connector. A fourth type of connector, the Mini A (used for smaller peripherals such as mobile phones), was developed as part of the USB OTG specification.
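To put the 40-times figure in perspective, here is a quick back-of-the-envelope sketch in Python. The 700 MB file size is just an assumed example, and real-world throughput is lower than the raw signaling rate because of protocol overhead.

# Rough comparison of USB 1.1 vs. USB 2.0 transfer times for a hypothetical
# 700 MB file, using raw signaling rates and ignoring protocol overhead.

FILE_SIZE_BITS = 700 * 10**6 * 8   # 700 MB expressed in bits
USB_1_1_BPS = 12 * 10**6           # 12 Mbps full-speed rate
USB_2_0_BPS = 480 * 10**6          # 480 Mbps hi-speed rate

t_usb11 = FILE_SIZE_BITS / USB_1_1_BPS
t_usb20 = FILE_SIZE_BITS / USB_2_0_BPS

print(f"USB 1.1: {t_usb11:.0f} s (about {t_usb11 / 60:.1f} minutes)")
print(f"USB 2.0: {t_usb20:.0f} s")
print(f"Speedup: {t_usb11 / t_usb20:.0f}x")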
Source: https://www.blackbox.com/en-au/products/black-box-explains/black-box-explains-usb-2-0-and-usb-otg
When you suddenly can’t access your files, but nothing seems wrong with the hard drive, how can you get your data back? Data recovery cases often depend on getting past broken links in the organizational structure the drive relied on to find your pictures, documents and other files. In a previous post, we took a general look at some of the organizational structure – or “meta data” – of a hard drive. In this post, I want to offer a more specific look at how one element of meta data on drives formatted by Windows, called the bitmap, can be used to make data recovery faster and more effective. The bitmap is useful in recoveries where there are logical puzzles and also in cases where the drive has physically failed.

The bitmap exists on NTFS formatted drives. NTFS – or New Technology File System – is a way to organize data. It was developed by Microsoft and it’s found on drives using Windows or formatted by Windows. Macs use a different way to organize data, as do other operating systems. The bitmap, as its name suggests, gives the lay of the land. The bitmap exists as a hidden file called $Bitmap at the root of each NTFS partition. It shows your hard drive where it can find data and where there is available space to write new data.

To understand how it does this, let’s first take a quick look at how data exists on a hard drive. The 1s and 0s you’ve probably heard about are in reality tiny patches of metallic film that are either magnetized or not. They are arranged in concentric circles on all sides of the multiple spinning discs inside a hard drive. A group of eight of these 1s and 0s is called a byte, and a byte has 256 possibilities, since flipping the eight switches (the 1s and 0s) gives you two to the eighth power. These possibilities are assigned values. For example, the byte 01100001 in binary code translates to the letter “a” in ASCII text. Contiguous bytes are organized into a sector – typically 512 (another power of two) bytes per sector. Contiguous sectors in turn are grouped into clusters. Cluster sizes vary, but 8 sectors per cluster – resulting in 4 kilobyte clusters – is common. A file – for example, a photograph of your dog – may occupy several clusters, which may or may not be next to one another.

A bitmap is a file that simply records which clusters have been used. For each cluster, the bitmap file assigns a 1 if that cluster has any data written to it, or a 0 if it is available space. As you alter the data on your drive, the bitmap adjusts. If you delete a file, the bitmap will show the area it occupies as now available space, with 0s for those clusters. (Which is why it’s called “zero filling” when we erase all data from a drive.) If you write data to the drive, the bitmap flips the switches to 1s for the clusters that the current data now occupies.

It’s important to stop here to make a distinction. Data can exist on your hard drive without being recorded by the bitmap or being part of the overall structure of organized data. The bitmap only keeps track of the relevant stuff – the stuff your computer considers saved data. For example, if you delete a file, the 0s and 1s that comprise it are not automatically overwritten with anything, but it’s no longer relevant. The clusters the deleted file occupies will now be considered available space. In the bitmap, those cluster addresses will now be marked with 0s. On the actual surface of the disk, those magnetized/not-magnetized patches (the 1s and 0s) of the deleted file still exist, but they are not protected.
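To make that bookkeeping concrete, here is a minimal Python sketch of reading an allocation bitmap. The bitmap bytes are invented for illustration, low-level NTFS details are glossed over, and the bit order (the least significant bit of each byte describing the lowest-numbered cluster in that byte) is assumed; the last line reproduces the byte-to-ASCII example from the text.

# Each bit of the bitmap describes one cluster: 1 = in use, 0 = free space.
# The bytes below are a made-up example, not a real $Bitmap extract.

bitmap = bytes([0b11010001, 0b00000000, 0b11111111])

def cluster_in_use(bitmap: bytes, cluster: int) -> bool:
    """Return True if the given cluster number is marked as allocated."""
    byte_index, bit_index = divmod(cluster, 8)
    return bool((bitmap[byte_index] >> bit_index) & 1)

used = [c for c in range(len(bitmap) * 8) if cluster_in_use(bitmap, c)]
print("Used clusters:", used)   # -> [0, 4, 6, 7, 16, 17, ..., 23]

# The byte example from the text: binary 01100001 is the ASCII letter "a".
print(chr(0b01100001))          # -> a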
The next time the drive records data, it is free to write over the file that was deleted. This is why it’s important to stop using your computer if you accidentally delete important data. If you continue to use the machine, you risk the hard drive writing data over the file you’ve lost.

Now, let’s look at how all this information about the bitmap applies to data recovery. If you are recovering data from a drive that failed mechanically – say it stopped spinning – and you can read the bitmap, you can use it to image only the used area. This can save a considerable amount of time. The alternative, which is the way most data recovery software works, is to start at Sector 0 and just grind away until every sector is read. Not only does this take unnecessary time, it can put the data at risk if the drive is severely troubled. If the drive had damage to the read/write heads – and perhaps some light rotational scoring – a complete read starting from Sector 0 may cause the replacement heads to fail, perhaps resulting in more surface scratches. The attempt to image the drive in this crude way could render the data permanently unusable. So, if you are using data recovery software, and it just hangs and hangs, or seems to be making no discernible progress, shut it down.

The bitmap is also highly useful in cases where the drive has been reformatted mistakenly or important files have been deleted. In these cases, the bitmap shows where not to look. Deleted files or the files that existed before the drive was reformatted or had its operating system reinstalled are all no longer relevant to the current file system. They are off the grid, living in unallocated space. To find them, look in all the clusters whose addresses the bitmap has labeled as empty. The clusters the bitmap considers used hold the new data that is not of interest – the new format, the new operating system, the files that were not accidentally deleted.

There are many more ways that data recovery can become more elegant with greater understanding of the logical structure of a hard drive’s file system. With this understanding, better software can be built to make imaging a drive faster and more reliable.
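Here is a simplified sketch of the “image only the used area” idea. The file paths and cluster size are hypothetical, and a real recovery tool works at a much lower level (reading the bitmap from the volume itself, retrying bad sectors, being gentle with damaged heads), so treat this purely as an illustration of the logic.

# Copy only allocated clusters from a source device/image to a destination
# image file, skipping everything the bitmap marks as free space.

CLUSTER_SIZE = 4096  # 8 sectors x 512 bytes, a common NTFS cluster size

def image_used_clusters(source_path: str, dest_path: str, bitmap: bytes) -> int:
    """Copy allocated clusters only; returns the number of clusters copied."""
    copied = 0
    with open(source_path, "rb") as src, open(dest_path, "wb") as dst:
        for cluster in range(len(bitmap) * 8):
            if not (bitmap[cluster // 8] >> (cluster % 8)) & 1:
                continue                      # free space: nothing saved here
            offset = cluster * CLUSTER_SIZE
            src.seek(offset)
            dst.seek(offset)                  # unwritten gaps stay sparse/zero
            dst.write(src.read(CLUSTER_SIZE))
            copied += 1
    return copied

# Example (hypothetical paths; reading a raw device requires privileges):
# copied = image_used_clusters("/dev/sdb1", "recovered.img", bitmap)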
Source: https://www.gillware.com/blog/data-recovery/using-the-bitmap-to-make-data-recovery-more-efficient/
Illegal online activities such as phishing and typosquatting are growing at an alarming rate. To understand the issue in detail, High-Tech Bridge analyzed 946 domains that may visually look like a legitimate domain (for example replacement of the “t” character by the “l” character, or mutated domain names such as “kasperski.com” or “mcaffee.com”) or that contain typos (e.g. “symanrec.com” or “dymantec.com”). For ten antivirus companies, they identified 385 domains, which can be classified into the following categories:

- 164 fraudulent domains. Domains registered by third parties to make money on users erroneously visiting websites hosted on these domains (due to a typo in the URL or a phishing campaign) by displaying ads, redirecting users to questionable websites selling illegal or semi-legal products and services, etc. 164 domains were detected (42.5%).
- 107 corporate domains. Domains registered by the antivirus companies to prevent potential typosquatting and illegal usage of these domains. 107 domains were detected (27.7%).
- 73 squatted domains. Domains registered by cyber-squatters in the hope that the antivirus companies or third parties will buy the domains at some point in the future. Websites on these domains are not active. 73 domains were detected (18.9%).
- 41 other domains. Domains registered by third-party businesses or companies that may have a legitimate reason to register the domain (e.g. similar Trade Mark or company name) without intention to spoof the identity or to benefit from user typos. 41 domains were detected (10.6%).

Corporations, governments and law-enforcement agencies go to great lengths to prevent abusive or illegal domain name registration and usage. However, the research revealed that the average age of a fraudulent domain is as high as 1181 days, and the average age of a squatted domain is 431 days. Research results show that some antivirus companies pay attention to such illegal activities and try to prevent them; for example, Kaspersky and McAfee purchased more than 70% of the domains that could potentially be used for illegal purposes if registered by third parties. Others, though, should probably pay more attention to domain squatting, monitor illegal activities and block them. The researchers also looked at which domain registrars are most often used by cyber crooks to register fraudulent and squatted domains, and at the countries most popular for hosting websites with fraudulent content. Marsel Nizamutdinov, Chief Research Officer at High-Tech Bridge, comments on the research: “Our research clearly demonstrates that cyber criminals do not hesitate to use any opportunity to make money on domain squatting and subsequent illegal practices. There are many ways to make money from these domains: they can be resold at a profit to the legitimate owner of the Trade Mark, used to display annoying ads, redirect users to pornographic or underground pharmaceutical websites, or even to infect with malware user machines who accidentally made a typo in the URL or clicked a phishing URL.”
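To illustrate the kind of lookalike domains described above, here is a toy Python sketch that enumerates single-character substitution, omission and transposition variants of a brand name. It is not High-Tech Bridge's methodology, and the brand and TLD used are only examples; in practice each candidate would then be checked against WHOIS or DNS to see whether someone has registered it.

# Generate simple typosquat candidates for a domain label, e.g.
# "symantec" -> "symanrec", "kaspersky" -> "kasperski". Illustration only.

import string

def typo_variants(label: str) -> set:
    variants = set()
    for i in range(len(label)):
        for c in string.ascii_lowercase:
            if c != label[i]:
                variants.add(label[:i] + c + label[i + 1:])   # substitution
        variants.add(label[:i] + label[i + 1:])               # omission
    for i in range(len(label) - 1):
        swapped = label[:i] + label[i + 1] + label[i] + label[i + 2:]
        variants.add(swapped)                                  # transposition
    variants.discard(label)
    return variants

candidates = sorted(v + ".com" for v in typo_variants("symantec"))
print(len(candidates), "candidate domains, for example:", candidates[:5])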
Source: https://www.helpnetsecurity.com/2013/12/12/how-cyber-squatters-and-phishers-target-antivirus-vendors/
Economic Impact of Cybercrime Estimated at $445 billion Worldwide, and between 15% and 20% of the value created by the Internet

SANTA CLARA, Calif. — June 9, 2014 — A new report from the Center for Strategic and International Studies (CSIS) and sponsored by McAfee, part of Intel Security, shows the significant impact that cybercrime has on economies worldwide. The report, “Net Losses – Estimating the Global Cost of Cybercrime,” concludes that cybercrime costs businesses approximately $400 billion worldwide, with an impact on approximately 200,000 jobs in the U.S. and 150,000 jobs in the EU.

The most important cost of cybercrime comes from its damage to company performance and to national economies. Cybercrime damages trade, competitiveness, innovation, and global economic growth. Studies estimate that the Internet economy annually generates between $2 trillion and $3 trillion, a share of the global economy that is expected to grow rapidly. Based on CSIS estimates, cybercrime extracts between 15% and 20% of the value created by the Internet. Cybercrime’s effect on intellectual property (IP) is particularly damaging, and countries where IP creation and IP-intensive industries are important for wealth creation lose more in trade, jobs and income from cybercrime than countries depending more on agriculture or industries of low-level manufacturing, the report found. Accordingly, high-income countries lost more as a percent of GDP than low-income countries – perhaps as much as 0.9 percent on average.

“Cybercrime is a tax on innovation and slows the pace of global innovation by reducing the rate of return to innovators and investors,” said Jim Lewis of CSIS. “For developed countries, cybercrime has serious implications for employment. The effect of cybercrime is to shift employment away from jobs that create the most value. Even small changes in GDP can affect employment.”

Economic Impacts on both Businesses and Consumers

CSIS researchers found that the United States notified 3,000 companies in 2013 that they had been hacked, with retailers leading as a favorite target for hackers. In the U.K., retailers reportedly lost more than $850 million to hackers. Australian officials reported that large scale attacks have occurred against an airline, hotel chains and financial services companies, costing an estimated $100 million. With proper protections in place, these losses could be avoided. The report found that global losses connected to “personal information” breaches could reach $160 billion. Forty million people in the U.S., roughly 15 percent of the population, have had their personal information stolen by hackers. The study tracked high-profile breaches around the world: 54 million in Turkey; 20 million in Korea; 16 million in Germany and more than 20 million in China. Part of the losses from cybercrime are directly connected to what experts call “recovery costs,” or the digital and electronic clean-up that must occur after an attack has taken place. The McAfee-CSIS report discovered that while criminals will not be able to monetize all the information they steal, their victims must spend significant resources as if they could. In Italy, for example, actual hacking losses totaled $875 million, but the recovery, or clean-up, costs reached $8.5 billion. In other words, there can be a tenfold increase between the actual losses directly attributed to hackers and the recovery companies must implement in the aftermath of those attacks.
Turning from Losses to Potential Economic Gains

Governments are beginning serious, systematic efforts to collect and publish data on cybercrime to help countries and companies make better choices about risk and policy. Improved international collaboration, as well as public/private partnerships, are also beginning to show tangible results in terms of reducing cybercrime. Last week, 11 nations announced the takedown of a crime ring associated with the GameOver Zeus botnet.

“It’s clear that there’s a real tangible economic impact associated with stopping cybercrime,” said Scott Montgomery, chief technology officer, public sector at McAfee. “Over the years, cybercrime has become a growth industry, but that can be changed, with greater collaboration between nations, and improved public private partnerships. The technology exists to keep financial information and intellectual property safe, and when we do so, we create opportunities for positive economic growth and job creation worldwide.”

The report can be found at: http://www.mcafee.com/us/resources/reports/rp-economic-impact-cybercrime2.pdf

McAfee, part of Intel Security and a wholly owned subsidiary of Intel Corporation (NASDAQ: INTC), empowers businesses, the public sector, and home users to safely experience the benefits of the Internet. The company delivers proactive and proven security solutions and services for systems, networks, and mobile devices around the world. With its visionary Security Connected strategy, innovative approach to hardware-enhanced security, and unique global threat intelligence network, McAfee is relentlessly focused on keeping its customers safe. http://www.mcafee.com

For 50 years, the Center for Strategic and International Studies (CSIS) has developed practical solutions to the world’s greatest challenges. As we celebrate this milestone, CSIS scholars continue to provide strategic insights and bipartisan policy solutions to help decision makers chart a course toward a better world. CSIS is a bipartisan, nonprofit organization headquartered in Washington, DC. The Center’s 220 full-time staff and large network of affiliated scholars conduct research and analysis and develop policy initiatives. This report builds an economic model to scope the direct losses from cybercrime and cyberespionage around the globe. CSIS enlisted economists, intellectual property experts, and security researchers to develop the report.

Note: McAfee is a trademark or registered trademark of McAfee, Inc. in the United States and other countries. Intel is a trademark of Intel Corporation. Other names and brands may be claimed as the property of others.
Source: https://www.mcafee.com/es/about/news/2014/q2/20140609-01.aspx
Imagine a squad of Army Rangers prepping to capture a high-value subject barricaded inside a three-story building. The Rangers decide to send in a small camera drone to check for IEDs—but there’s a problem: The enemy has begun putting its booby-traps on the ceiling, where the downward-facing drones can’t see them. If only those little gizmos had cameras on top.

A new project by the U.S. Army Research Laboratory and Georgia Tech just might help. It aims to give soldiers the ability to 3-D print swarms of minidrones, built to a mission’s specifications, within 24 hours. Its creators call this approach the “aggregate derivative approach to product design,” or ADAPT. “A soldier with a mission need uses a computer terminal to rapidly design a suitable [drone],” says a poster by project chief engineer Zachary Fisher. “That design is then manufactured using automated processes such as laser cutting and 3-D printing. The solution is sent back to the soldier and is deployed.” Fisher says the drone itself could be fabricated in less than a day, with total turnaround time of less than three days.

In their research paper on the design approach, they lay out a four-step process: requirements analysis, which is figuring out what type of drone you need for the mission; architecture selection, selecting among a variety of standard and custom parts to build it; interface design, making sure it all fits together; and concept refinement. The trick is to limit the number of potential build options around one of the four different tasks a soldier might need a small drone for. Previous research from Georgia Tech has identified those as perimeter surveillance and defense, reconnaissance for inside buildings, reconnaissance for inside caves, and jungle reconnaissance. Depending on the mission type, you know if you need a video camera, target designator, light detection and ranging (lidar) and other pieces.

The authors describe the basic approach as inspired by Lego. “The on-demand approach is succinctly explained via an analogy to Lego,” they write. “Lego bricks contain a number of modular parts that can be constructed into different models depending on what outcome is desired. Instructions are provided to help the user build different systems out of the same set of components.” At the beginning of December, the researchers performed a demonstration of several of the drones at Aberdeen Proving Ground in Maryland. Future capabilities could include combining 3-D printing, drones and artificial intelligence, and research being led by Kyrre Glette at the University of Oslo, who in 2014 demonstrated the first steps in a program to allow robots to 3-D print themselves.
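As a toy illustration of the “requirements analysis” and “architecture selection” steps described above, the Python sketch below maps the four mission types named in the article to candidate payloads. The component catalog, function name and extras are invented for illustration and are not taken from the ADAPT project.

# Pick candidate payload modules from a fixed catalog based on mission type.
# Mission types come from the article; the parts lists are made up.

PAYLOAD_CATALOG = {
    "perimeter surveillance": ["video camera", "thermal camera", "extended battery"],
    "building reconnaissance": ["video camera", "upward-facing camera", "LED lamp"],
    "cave reconnaissance": ["lidar unit", "LED lamp", "extended battery"],
    "jungle reconnaissance": ["video camera", "lidar unit", "GPS beacon"],
}

def select_architecture(mission, extras=None):
    """Return the parts list for a mission, plus any soldier-requested extras."""
    base = PAYLOAD_CATALOG.get(mission)
    if base is None:
        raise ValueError(f"No reference design for mission type: {mission}")
    return base + list(extras or [])

print(select_architecture("building reconnaissance", extras=["target designator"]))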
Source: http://www.nextgov.com/defense/2017/01/army-wants-3d-print-minidrones-24-hours/134501/?oref=channelriver
As Intel prepares to roll out its Many Integrated Core (MIC) technology for commercial production in 2012, it has managed to entice a major US supercomputing center to start porting some of its science codes to the new architecture. The Texas Advanced Computing Center (TACC) announced it has teamed up with the chipmaker and begun porting a handful of research applications to the pre-production “Knights Ferry” MIC processor. Later this year, TACC will build a cluster of such chips for further development, with the intent to deploy a system based on the commercial “Knights Corner” MIC processor when Intel starts production.

MIC represents Intel’s entry into the HPC processor accelerator sweepstakes, as the company attempts to perform an end-run around GPU computing. Mainly thanks to NVIDIA, over the last few years GPU computing, aka GPGPU, has become a mainstream HPC solution across workstations, clusters and supercomputers. Developers rely on specialized programming environments, like CUDA and OpenCL, to build software for those platforms. As suggested by its name, MIC is essentially an x86 processor, with more cores (but simpler ones) than a standard x86 CPU, an extra-wide SIMD unit for heavy duty vector math, and four-way SMT threading. As such, it’s meant to speed up codes that can exploit much higher levels of parallelization than can be had on standard x86 parts.

Knights Ferry is Intel’s development implementation spun out of the chipmaker’s abandoned Larrabee processor effort for visual computing. The chip sports 32 IA cores and runs at 1.2 GHz. Since each core supports four-way SMT (as opposed to the two-way HyperThreading on Xeons), each chip can manage up to 128 threads in parallel. Memory-wise, Knights Ferry has 8 MB of cache and 1 to 2 GB of GPU-flavored GDDR5 DRAM. Like its current GPGPU competition, Knights Ferry is meant to be hooked up to a PCIe bus, acting as a co-processor to a standard x86 CPU.

Knights Corner, Intel’s first commercial version of MIC, will have upwards of 50 cores per chip and will be implemented on the company’s 22nm process technology. Although no official date has been announced for the commercial launch, according to a presentation by Intel research engineer Pradeep Dubey at the recent 2011 Open Fabrics International Workshop in Monterey, Knights Corner is slated for release sometime in the second half of 2012.

At this point, TACC is using the MIC software development kit (SDK), employing a Knights Ferry chip attached to a single machine. According to TACC’s deputy director Dan Stanzione, they are planning to build a “relatively small” cluster of Knights Ferry-equipped nodes to test codes in a distributed computing environment before the end of the year. On Thursday, I spoke with Stanzione, who was very upbeat about the new architecture, noting that the x86 compatibility is a big deal for TeraGrid researchers. In aggregate, they have a massive investment in their science codes, numbering in the hundreds. “This is a way to get a dramatically better power per operation without having to throw out everything we know about software,” he said, adding, “I’m really excited about this as a path forward. I think it has the potential to be a real game-changer.” One nice feature of MIC programming is that it inherently supports OpenMP, a popular parallel computing model for shared memory environments.
And since Intel’s HPC tool chain — Parallel Studio and Cluster Studio — has been extended to the MIC architecture, the programmer can even stay in the same development environment for both its Xeon and MIC work — which, of course, Intel would like very much. The result is that OpenMP code written for four-core or six-core x86 CPUs, like some of the ones TACC has started porting, should move rather easily to a 32-core MIC co-processor. “Getting the codes to run the first time is pretty simple,” Stanzione said, adding that when they move to the MIC cluster, they’ll have to figure out how to layer an MPI distributed memory model on top of that. According to him, they’ve already ported a bunch of benchmark codes and have started with the applications. One is a bio-modeling app, which attempts to detect epistatic interactions (how genes modify each other to express a phenotype) across a corn genome. The code was thousands of lines long, but because it was parallelized via OpenMP, it moved to MIC with minimal restructuring. Although TACC has committed resources to the MIC effort, Stanzione said they are evaluating hardware and software accelerator approaches across the spectrum, most notably using CUDA and OpenCL on GPUs. (TACC’s Longhorn supercomputer is currently the center’s largest GPU platform, sporting 512 NVIDIA Tesla processors.) Although it’s too early to compare performance across specific applications, it’s already apparent that porting is much simpler with Intel’s offering. “Moving a code to MIC might involve sitting down and adding a couple of lines of directives that takes a few minutes,” explained Stanzione. “Moving a code to a GPU is a project.” Although measuring performance is still a work in progress, the early results on scaling appear to be encouraging. According to Stanzione, doubling the number of MIC cores has roughly doubled the performance on some of the initial codes. They expect to be able to say a lot more about performance when they get the Knights Corner commercial parts. From Intel’s point of view, getting TACC to sign on to MIC development is a big boost for its manycore effort. Assuming the porting goes as planned, the chipmaker will be able to point to a nice set of proof points based on real-world HPC applications. According to John Hengeveld, Intel’s director of technical compute marketing for its datacenter group, they’ll be able to incorporate TACC’s experience into the upcoming delivery of Knights Corner parts and software. “Having a partner that is helping us work on issues of scalability and optimization is really quite valuable,” he explained. Although TACC is the first big HPC organization with a committed roadmap for MIC development, they won’t be the last. Intel currently has about 100 MIC developers scattered around, and according to Hengeveld, they’ll be announcing some bigger collaborations in the months ahead. And as we get closer to MIC’s commercial release, the news surrounding the new architecture should start to pick up. “We’ll be talking a lot more about this at ISC,” promised Hengeveld.
Source: https://www.hpcwire.com/2011/04/21/tacc_steps_up_to_the_mic/
The most obvious difference is that hubs operate at Layer 1 of the OSI model while bridges and switches work with MAC addresses at Layer 2 of the OSI model.

Hubs are really just multi-port repeaters. They ignore the content of an Ethernet frame and simply resend every frame they receive out every interface on the hub. The challenge is that the Ethernet frames will show up at every device attached to a hub instead of just the intended destination (a security gap), and inbound frames often collide with outbound frames (a performance issue).

In the physical world a bridge connects roads on separate sides of a river or railroad tracks. In the technical world, bridges connect two physical network segments. Each network bridge kept track of the MAC addresses on the network attached to each of its interfaces. When network traffic arrived at the bridge and its target address was local to that side of the bridge, the bridge filtered that Ethernet frame so it stayed on the local side of the bridge only. If the bridge was unable to find the target address on the side that received the traffic, it forwarded the frame across the bridge, hoping the destination would be on the other network segment. At times there were multiple bridges to cross to get to the destination system. The big challenge is that broadcast and multicast traffic have to be forwarded across each bridge so every device has an opportunity to read those messages. If the network manager builds redundant circuits, it often results in a flood of broadcast or multicast traffic, preventing unicast traffic flow.

Switches use the best of hubs and bridges while adding more abilities. They use the multi-port ability of the hub with the filtering of a bridge, allowing only the destination to see the unicast traffic. Switches allow redundant links and, thanks to Spanning Tree Protocol (STP) developed for bridges, broadcasts and multicasts run without causing storms. Switches keep track of the MAC addresses on each interface so they can rapidly send the traffic only to the frame’s destination; a short sketch of this learn-and-forward behavior follows the list below. The other benefits of using switches are:
- Switches are plug-and-play devices. They begin learning the interface or port to reach the desired address as soon as the first packet arrives.
- Switches improve security by sending traffic only to the addressed device.
- Switches provide an easy way to connect segments that run at different speeds, such as 10 Mbps, 100 Mbps, 1 Gigabit, and 10 Gigabit networks.
- Switches use special chips to make their decisions in hardware, resulting in low processing delays and faster performance.
- Switches are replacing routers inside networks because they are more than 10 times faster at forwarding frames on Ethernet networks.
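Below is a minimal Python sketch of that learn-and-forward behavior. Real switches implement this in dedicated hardware and also age out table entries, run STP and handle VLANs; the class here only illustrates the forwarding decision.

# A switch learns which port each source MAC address arrived on, then forwards
# unicast frames only to the learned port, flooding only when the destination
# is still unknown (which is how a hub behaves all the time).

class LearningSwitch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}   # MAC address -> port number

    def handle_frame(self, src_mac, dst_mac, in_port):
        """Return the list of ports the frame should be sent out of."""
        self.mac_table[src_mac] = in_port            # learn/refresh the source
        if dst_mac == "ff:ff:ff:ff:ff:ff":
            # Broadcast: send out every port except the one it came in on.
            return [p for p in range(self.num_ports) if p != in_port]
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]         # known unicast: one port
        return [p for p in range(self.num_ports) if p != in_port]  # flood

sw = LearningSwitch(num_ports=4)
print(sw.handle_frame("aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02", in_port=0))  # flood
print(sw.handle_frame("aa:aa:aa:aa:aa:02", "aa:aa:aa:aa:aa:01", in_port=2))  # [0]
print(sw.handle_frame("aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02", in_port=0))  # [2]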
Source: http://blog.globalknowledge.com/2012/08/14/what-is-the-difference-between-bridges-hubs-and-switches/
Military Drones Present And Future: Visual Tour

The Pentagon's growing fleet of unmanned aerial vehicles ranges from hand-launched machines to the Air Force's experimental X-37B space plane.

A Jan. 2 drone strike against a Taliban leader in Pakistan near the Afghan border illustrates the expanded role that unmanned aerial vehicles are playing in U.S. military operations. Militant leader Mullah Nazir and several Taliban fighters were killed by the attack, which involved at least two missiles, according to reports. The Department of Defense and U.S. intelligence agencies are increasingly using UAVs for everything from battlefield surveillance to remote-controlled strikes against terrorists. Such strikes have also been blamed for civilian casualties in their pursuit of enemy targets.

And drones aren't limited to overseas operations. Military flights are increasingly taking place over U.S. skies, raising privacy and public safety issues, according to a new report from the Electronic Frontier Foundation. When one of the Air Force's MQ-9 Reapers, described as "hunter, killer" drones, crashed in Nevada last month, a spokesman expressed relief that no one was hurt. The Army and the Air Force are developing "sense-and-avoid" systems that will let military UAVs share U.S. airspace with commercial and private planes by automating how they maneuver. One such system will use cooperative sensors that work with the Traffic Collision Avoidance System used in civil aviation and with the FAA's Automatic Dependent Surveillance-Broadcast (ADS-B) system.

The Navy plans to begin deploying UAVs on aircraft carriers. The U.S. Navy last month lifted a Northrop Grumman drone, the X-47B Unmanned Combat Air System, onto the USS Harry S. Truman. The X-47B is capable of flying preprogrammed missions, then returning to the carrier for landing. Its initial application will be refueling other aircraft while in flight, but the X-47B can also carry and fire weapons. Other countries are developing or buying their own UAVs. Britain's Royal Navy recently tested a drone that could potentially be used from its ships, according to The Guardian. There's always the risk that a U.S. drone will fall into the wrong hands. A few weeks ago, Iran claimed to have captured a U.S. Navy drone that had entered its airspace. Navy officials denied that it was theirs.

Military drones range from lightweight flying machines that can be launched by hand, to the Air Force's 11,000 pound X-37B, which is about one-quarter the size of NASA's space shuttle. The X-37B, pictured above, took off on an Earth-orbiting mission on Dec. 11, a secretive project that will test the feasibility of long-duration military space flights. The "reusable space plane" was launched from Cape Canaveral, Fla., on an Atlas V rocket by the Air Force's Rapid Capabilities Office, which develops combat support and weapons systems. The X-37B is dwarfed by the rocket and fairing used to lift it into space. From top to bottom, the whole system is 196 feet long. The X-37B itself is 29 feet long and 10 feet high.

The Pentagon has used UAVs for more than 50 years. Some, like General Atomics' Predator, have established their utility through years of service, but new designs, such as UAVs equipped with laser weapons, keep pushing the boundaries on what drones can do. Following is our guide to U.S. military drones in their many shapes and sizes.
Image credit: Air Force
Source: http://www.darkreading.com/risk-management/military-drones-present-and-future-visual-tour/d/d-id/1107839?cid=sbx_iwk_related_commentary_information_management_government
Achard F., Stibig H.-J., Eva H.D. (European Commission - Joint Research Center Ispra), Lindquist E.J. (Forest Assessment), and 3 more authors. Carbon Management, 2010.
This article covers the very recent developments undertaken for estimating tropical deforestation from Earth observation data. For the United Nations Framework Convention on Climate Change process it is important to tackle the technical issues surrounding the ability to produce accurate and consistent estimates of GHG emissions from deforestation in developing countries. Remotely-sensed data are crucial to such efforts. Recent developments in regional to global monitoring of tropical forests from Earth observation can contribute to reducing the uncertainties in estimates of carbon emissions from deforestation. Data sources at approximately 30 m × 30 m spatial resolution already exist to determine reference historical rates of change from the early 1990s. Key requirements for implementing future monitoring programs, both at regional and pan-tropical scales, include international commitment of resources to ensure regular (at least yearly) pan-tropical coverage by satellite remote sensing imagery at a sufficient level of detail; access to such data at low cost; and consensus protocols for satellite imagery analysis. © 2010 Future Science Ltd.

Eberenz J., Herold M., Verbesselt J. (Wageningen University), Wijaya A. (Center for International Forestry Research), and 5 more authors. 2015 8th International Workshop on the Analysis of Multitemporal Remote Sensing Images (Multi-Temp 2015), 2015.
This study predicts global forest cover change for the 1980s and 1990s from AVHRR time series metrics in order to show how the series of consistent land cover maps for climate modeling produced by the ESA climate change initiative land cover project can be extended back in time. A Random Forest model was trained on global Landsat derived samples. While the deforestation was underestimated by the model, major global patterns were effectively reproduced. Compared to reference data for the Amazon satisfying accuracies (>0.8) were achieved, but results are less promising for Indonesia. © 2015 IEEE.

Sanou H. (Sotuba BP 258; Copenhagen University), Angulo-Escalante M.A. (Investigacion en Alimentacion y Desarrollo A.C.), Martinez-Herrera J. (Jatro Bio Energy and Oilseeds SPR de RL), and 7 more authors. Crop Science, 2015.
Jatropha curcas L. has been promoted as a “miracle” tree in many parts of the world, but recent studies have indicated very low levels of genetic diversity in various landraces. In this study, the genetic diversity of landrace collections of J. curcas was compared with the genetic diversity of the species from its native range, and the mating system was analyzed on the basis of microsatellite markers. The genetic diversity parameters were estimated, and analysis of molecular variance, principal coordinate analysis, and unrooted neighbor-joining tree were used to describe the relationship among populations. Results confirmed very low genetic diversity in African and Asian landraces. Mexican populations from the regions of Veracruz, Puebla, and Morelos were also found to have low levels of diversity (mostly monomorphic), while populations from Chiapas were polymorphic with an expected heterozygosity between 0.34 and 0.54. Bayesian analysis showed differentiation according to geographic locations, which was confirmed by principal coordinate analysis and neighbor-joining tree. Estimations of outcrossing rate of individual families from Chiapas showed that some mother trees were mainly outcrossing. Mating system could not be estimated in the landraces from Mali and populations from Veracruz, Puebla, and Morelos (Mexico), as these were highly monomorphic. The observed low level of genetic diversity in some of the populations and landraces suggests that breeding programs should test for genetic variation and heritability in relevant quantitative traits and estimate if sufficient gain can be expected from traditional testing and selection. Diversification of the local gene pools may be considered for breeding and selection. © Crop Science Society of America.

Kaeslin E. (Forest Assessment). Unasylva, 2010.
The article presents an overview of conservation issues affecting the successful coexistence of forests, people, and wildlife. Forest wildlife likewise offers both products and ecosystem services. Forests and wildlife together offer a basis for commercial and/or recreational activities like hunting, photography, hiking and birdwatching. There are two main drivers behind these threats. The increasing consumption of wealthier populations, which stimulates agricultural and industrial production, resource extraction, and tourism, leads to degradation of forests. As a result of faunal depletion, the remaining primary tropical and subtropical forests, which still provide good habitat for wild animals, are widely becoming empty of large vertebrates. The Convention on Biological Diversity (CBD) Liaison Group on Bushmeat defines bushmeat hunting as the harvesting of wild animals in tropical and subtropical forests for food and non-food purposes.

Potapov P., Hansen M.C. (South Dakota State University), Gerrand A.M., Lindquist E.J. (Forest Assessment), and 3 more authors. International Journal of Digital Earth, 2011.
To collect and provide periodically updated information on global forest resources, their management and use, the United Nations Food and Agriculture Organization (FAO) has been coordinating global forest resources assessments (FRA) every 5-10 years since 1946. To complement the FRA national-based statistics and to provide an independent assessment of forest cover and change, a global remote sensing survey (RSS) has been organized as part of FAO FRA 2010. In support of the FAO RSS, an image data set appropriate for global analysis of forest extent and change has been produced. Landsat data from the Global Land Survey 1990-2005 were systematically sampled at each longitude and latitude intersection for all points on land. To provide a consistent data source, an operational algorithm for Landsat data pre-processing, normalization, and cloud detection was created and implemented. In this paper, we present an overview of the data processing, characteristics, and validation of the FRA RSS Landsat dataset. The FRA RSS Landsat dataset was evaluated to assess overall quality and quantify potential limitations. © 2011 Taylor & Francis.
Source: https://www.linknovate.com/affiliation/forest-assessment-147033/all/
As the first American woman to enter space, astronaut Sally Ride was nothing if not an inspiration to girls and women around the globe. Her death this week has provoked not just a deep sadness that she's gone, but also considerable reflection on the challenges and opportunities facing women in science and technology today, some 30 years after her first mission. I had a chance recently to speak with Karen Purcell, author of the forthcoming book Unlocking Your Brilliance: Smart Strategies for Women to Thrive in Science, Technology, Engineering, and Math, about some of the challenges facing women in these fields. Purcell's book is due out on Aug. 1. As a professional engineer and founder of award-winning engineering firm PK Electrical, she has both professional and personal insight on the topic.

PCWorld: Please describe some of what makes it difficult for women to thrive in science, technology, engineering, and math (STEM) fields.

Purcell: Women who choose to follow careers in STEM disciplines face unique and frustrating challenges. Even after we establish ourselves in our careers, we continue to encounter potential career-ending traps. Not only do women in STEM careers have higher attrition rates than do their male counterparts--especially within the first 10 years on the job--we also have higher attrition rates than women in other careers. The general belief that men outperform women in math and science fields is one of the reasons for the high attrition rate. Other reasons include cognitive gender differences, a woman's lack of interest in the STEM fields, work-life balance issues, and bias. This is an important subject to acknowledge and correct; otherwise, we will never level the playing field. In the STEM fields, conquering post-secondary education is also vastly different from conquering a career. College and university programs teach guidelines, problem solving, and time management, but they do not always prepare us for the working world and are not always the best indicator for how well we will do once we land jobs. When we start our careers, we are being paid to achieve results. Multimillion-dollar projects can ultimately fail because of our workmanship. The pressures facing young females entering the STEM workforce are humbling and extremely trying. Without much warning, these pressures can lead to high levels of dissatisfaction.

PCWorld: It's obvious why these challenges are a problem for women, but why are they also a problem for the fields themselves? Why would IT, for example, benefit from the involvement of more women?

Purcell: Many jobs within the STEM fields focus on designing products and materials that aim to advance our experiences and allow us to live safer lives. Therefore, it is critical to have a strong female presence to ensure that products and materials are developed to benefit both genders. Without the involvement of women in these fields, product designers may easily overlook needs that are specific to women. Examples of this are evident in the design of past products. For instance, when voice recognition was first becoming popular, the systems were calibrated to recognize male voices because only males were designing the products. Because of this, women's voices were unrecognized when they tried to use the various systems.

PCWorld: What do you think needs to happen in order for there to be a more balanced proportion of men and women in this area? Could it--and should it--ever be 50/50?

Purcell: Early exposure to math and science for young girls is essential.
Young adults are inquisitive and may end up in STEM fields for a variety of reasons, but early, sustained exposure to these fields and encouragement would result in more informed women making more precise college decisions. More than that, it would help young women understand that their gender shouldn't determine the career path they choose, that pursuing a STEM career doesn't make them any less feminine. Girls often fall off the STEM grid and we need to figure out how to keep them engaged and how to give them the exposure, guidance, encouragement, and resources they need, and that our male counterparts get. In engineering, only about 10% are women. Through encouragement, exposure to STEM, and support to young women, one day we may get to the 50% level.

PCWorld: What can women do to make their way easier on this career path?

Purcell: Read my book. Unlocking Your Brilliance specifically describes common hurdles that women face in a STEM career and suggests strategies to overcome those hurdles.

PCWorld: What should companies do to attract and retain more women?

Purcell: Offer mentoring programs. A mentor can make transitions into a new position or company smoother. Mentors can also be a long-lasting resource of information. Finding a mentor early can do wonders for the amount of satisfaction that anyone, male or female, finds on the job. Also, it's important for a company to listen to its employees' needs and offer flexibility. At my company, we try to be as flexible as possible to everyone's needs, not just women. I think having a fair workplace regardless of gender is also important.

PCWorld: Finally, it's not uncommon to hear the retort that no one should get special treatment or be treated differently at all by employers or peers simply because of something like their gender; some, in fact, would argue that that's just another form of discrimination. What's your answer to such charges?

Purcell: I would agree that no one should get special treatment and that everyone should be treated equally and fairly. This would include job responsibilities, following the rules, and flexibility. It would also include equal pay. Women are typically getting paid 20% less than their male counterparts. The system is broken and it needs to be fixed.

This story, "Women in IT: 'The system is broken,' author warns" was originally published by PCWorld.
Source: http://www.itworld.com/article/2724724/it-management/women-in-it---the-system-is-broken---author-warns.html
SSL Certificate Security Glossary 128 bit SSL 128 bit SSL is also referred to as strong SSL security. The 128 bit tells users that the size of the encryption key used to encrypt the data being passed between a web browser and web server is 128 bits in size (mathematically this would be 2 to the power of 128). Because the size of the 128 bit key is large it is computationally unfeasible to crack and hence is known as strong SSL security. Most web servers and web browsers support 128 bit SSL. However some versions outside of the US will only support 40 bit SSL and should be upgraded. The act of determining that a message has not been changed since leaving its point of origin. Authentication, secure authentication or secure SSL authentication of a user, is usually derived from something that the user understands, is or has. Many SSL Authentication Systems Which Provide SSL Internet Security and Online Payment System Security Are Now Shifting Toward Public Key Encryption. An Internet IPsec protocol, A field that immediately follows the IP header in an IP datagram and provides authentication and integrity checking for the datagram. Also protection against replay attacks; it secures authentication like secure SSL digital ID validation. A portable device used for authenticating a user. Authentication tokens operate by challenge/response, time-based code sequences, or other techniques. This may include paper-based lists of one-time passwords. A record containing information that can be shown to have been recently generated using the session key known only by the client and server. SSL Certificate security must be genuine and verifiable. In SSL Internet security and network security, it is imperative that authenticity is not assumed. A technology that makes it possible to identify who published a piece of software and to verify that it has not been tampered with. It also confirms that the digital certificate used to sign the code was issued by the certificate authority originally. Giving access or other rights to a user, process or program that has been authorized. A file that attests to the identity of an organization or web browser user and is used to verify that data being exchanged over a network is from the intended source. The certificate is digitally signed either by a Certificate Authority or is self-signed. There are CA certificates, client CA certificates, client certificates, and server Certificate Revocation List A list maintained by the Certificate Authority of all certificates that are revoked, but not expired. A certificate may be revoked because the user's private key is assumed to be compromised, the user is no longer certified by this Certificate Authority, or the Certificate Authorities private key is assumed to be compromised. The complete assessment of the technical and nontechnical security functions of a system and other safeguards that are made for the accreditation process, which establishes the degree to which a particular plan and implementation meet a certain set of security conditions. Certification Authority (CA) A third party organization which is used to confirm the relationship between a party to the https transaction and that party's public key. Certification authorities may be widely known and trusted institutions for internet based transactions, though where https is used on companies internal networks, an internal department within the company may fulfill this role. CPS (Certification Practice Statement) CPS is short for Certification Practice Statement. 
The CPS is a document published by the Certification Authority and outlines the practices and policies employed by the organization in issuing, managing and revoking digital certificates.

CRL (Certificate Revocation List)
CRL is short for Certificate Revocation List. The CRL is a digitally signed data file containing details of each digital certificate that has been revoked. The CRL can be downloaded and installed into a user's browser and ensures that the browser will not trust a revoked digital certificate.

CSR (Certificate Signing Request)
CSR is short for Certificate Signing Request. When applying for an SSL certificate the first stage is to create a CSR on your web server. This involves telling your web server some details about your site and your organization; it will then output a CSR file. This file will be needed when you apply for your SSL certificate. Instructions for creating a CSR are available for all popular web server software.

Digital signature
A digital signature (not to be confused with a digital certificate) is an electronic rather than a written signature. It can be used with any kind of message, whether it is encrypted or not, simply so that the receiver can be sure of the sender's identity and that the message arrived intact. A digital certificate contains the digital signature of the certificate-issuing authority so that anyone can verify that the certificate is real. Additional benefits to the use of a digital signature are that it is easily transportable, cannot be easily repudiated, cannot be imitated by someone else, and can be automatically time-stamped.

Digital Signature Algorithm (DSA)
An algorithm for producing digital signatures, developed by NIST and the NSA. To sign a message, Jean uses the DSA Sign Algorithm to encode a digest of the message using her private key. For all practical purposes, there is no way to decrypt this information. However, anyone who receives the message and accompanying digital signature can verify the signature by using the DSA Verify Algorithm to process the following information: the received signature; a digest of the received message; and Jean's public key. If the output of this algorithm matches a certain part of the digital signature, the signature is valid and the message has not changed. In contrast to RSA and other encryption-based signature algorithms, DSA has no ability to encrypt or decrypt information.

Digital Signature Standard (DSS)
A National Institute of Standards and Technology (NIST) standard for digital signatures, used to authenticate both a message and the signer. DSS has a security level comparable to RSA (Rivest-Shamir-Adleman) cryptography with 1,024-bit keys.

E-commerce
Quite simply, the act of selling over the internet. This can either be Business to Business (B2B) or Business to Consumer (B2C).

Encryption
Encryption is the process of changing data into a form that can be read only by the intended receiver. To decipher the message, the receiver of the encrypted data must have the proper decryption key. In traditional encryption schemes, the sender and the receiver use the same key to encrypt and decrypt data. Public-key encryption schemes use two keys: a public key, which anyone may use, and a corresponding private key, which is possessed only by the person who created it. With this method, anyone may send a message encrypted with the owner's public key, but only the owner has the private key necessary to decrypt it.
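To make the public-key description above concrete, here is a minimal sketch using the third-party Python cryptography package (the choice of library, message and key size are assumptions of this example, not something the glossary prescribes):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # The owner generates a key pair and publishes only the public half
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Anyone may encrypt with the public key...
    ciphertext = public_key.encrypt(b"order 1234, card ending 0042", oaep)

    # ...but only the holder of the private key can decrypt
    assert private_key.decrypt(ciphertext, oaep) == b"order 1234, card ending 0042"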
Firewall
A secured system passing and inspecting traffic between an internal trusted network and an external untrusted network, like the Internet. Firewalls can be used to discover, prevent, or mitigate certain kinds of network attack, providing Internet security and online security.

Host headers
SSL Host headers are used by IIS as a means of serving multiple websites using the same IP address. As an SSL certificate requires a dedicated IP address, host headers cannot be used with SSL. When the SSL protocol is in use the host header information is also encrypted; as a result the web server does not know which website to connect to. This is why a dedicated IP address per website must be used.

https
Browsers can connect to web servers over http and over https. Connecting over https involves entering https:// before the domain name or URL and, providing the web server has an SSL certificate, the connection will be secured and encrypted.

IIS (Internet Information Services)
IIS is short for Internet Information Services and is Microsoft's popular web server.

Password
A protected, private character string used to authenticate an identity, sometimes in combination with digital signatures and digital certificates such as 128-bit SSL digital certificates. Passwords protect a user's online security and control authorization.

Protocol
Similar to "protocol" in human communication, which involves a previously agreed upon set of rules for communicating in diplomatic settings. On the Internet, a protocol is an agreed upon method for sending and receiving information.

Private key
The key that a user keeps secret in asymmetric encryption. For any single transaction it can encrypt or decrypt data, but it cannot do both.

Public key
The key that a user allows the world to know in asymmetric encryption. For any single transaction it can encrypt or decrypt data, but it cannot do both.

Root certificate
A self signed certificate issued from a root level Certificate Authority (CA).

Secure server
A Web server that uses security protocols like SSL to encrypt and decrypt data, messages, and online payment transactions such as credit card payments, protecting them against fraud, false identification, or third party tampering. Purchasing from a secure Web server ensures that a user's credit card or personal information can be encrypted with a secret code that is difficult to break. Popular security protocols include SSL, SHTTP, SSH2, SFTP, PCT, and IPSec.

SSL (Secure Sockets Layer)
SSL is short for Secure Sockets Layer. The SSL protocol was developed by Netscape and is supported by all popular web browsers such as Internet Explorer, Netscape, AOL and Opera. For SSL to work an SSL certificate issued by a Certification Authority must be installed on the web server; SSL can then be used to encrypt the data transmitted (secure SSL transactions) between a browser and web server (and vice versa). Browsers indicate an SSL secured session by changing the http to https and displaying a small padlock. Website visitors can click on the padlock to view the SSL certificate.

SSL Key
The SSL Key, also known as a Private Key, is the secret key associated with your SSL certificate and should reside securely on your web server. When you create a CSR your web server will also create an SSL Key.
When your SSL certificate has been issued, you will need to install the SSL certificate onto your web server, which effectively marries the SSL certificate to the SSL key. As the SSL key is only ever used by the web server, it is a means of proving that the web server can legitimately use the SSL certificate. If you do not have, or you lose, either the SSL Key or the SSL certificate, then you will no longer be able to use SSL on your web server.

SSL handshake
The SSL handshake is the term given to the process of the browser and web server setting up an SSL session. The SSL handshake involves the browser receiving the SSL certificate and then sending "challenge" data to the web server in order to cryptographically prove whether the web server holds the SSL key associated with the SSL certificate. If the cryptographic challenge is successful then the SSL handshake has completed and the web server will hold an SSL session with the web browser. During an SSL session the data transmitted between the web server and web browser will be encrypted. The SSL handshake takes only a fraction of a second to complete.

SSL Port / https Port
A port is the "logical connection place" where a browser will connect to a web server. The SSL port or the https port is the port that you would assign on your web server for SSL traffic. The industry standard port to use is port 443; most networks and firewalls expect port 443 to be used for SSL. However it is possible to name other SSL ports / https ports if necessary. The standard port used for non-secure http traffic is 80.

SSL Proxy
SSL Proxy allows non-SSL aware applications to be secured by SSL. The SSL Proxy adds SSL support by being plugged into the connection between the browser (or client) and the web server.

SSL Accelerator
Ordinarily the SSL handshake and subsequent encryption of data between a browser and the web server is handled by the web server itself. However for some extremely popular sites, the amount of traffic being served over SSL means that the web server either becomes overloaded or simply cannot handle the required number of SSL connections. For such sites an SSL Accelerator can help improve the number of concurrent connections and the speed of the SSL handshake. SSL Accelerators offer the same support for SSL as web servers.

Shared SSL & Wildcard SSL
It is possible for a web hosting company to share a single SSL certificate; this allows the same SSL certificate to be used by many websites without the need to issue individual SSL certificates to each hosting customer. The recommended way to share SSL is to use a Wildcard SSL certificate. This allows the unlimited use of different sub domains on the same domain name. The Wildcard certificate allows the webhosting company to give each customer a secure sub domain, such as customer1.webhost.com, customer2.webhost.com, etc. The same can be applied for organizations wanting to secure multiple sub domains within the enterprise network.

TLS (Transport Layer Security)
TLS is short for Transport Layer Security. The TLS protocol is designed to one day supersede the SSL protocol; at present, however, few organizations use it instead of SSL.

Verification
The procedure that compares two levels of system specification for appropriate correspondence.
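Several of the entries above (https, port 443, the SSL handshake and certificates) can be seen together with Python's standard ssl module. This is a small sketch that assumes outbound network access; example.com is only a placeholder host.

    import socket
    import ssl

    host = "example.com"
    context = ssl.create_default_context()        # trusted CA roots, hostname checking

    with socket.create_connection((host, 443)) as tcp:               # the https port
        with context.wrap_socket(tcp, server_hostname=host) as tls:  # the handshake
            print("Protocol:", tls.version())      # e.g. 'TLSv1.2' or 'TLSv1.3'
            print("Cipher:  ", tls.cipher())
            cert = tls.getpeercert()                # the server certificate, parsed
            print("Subject: ", cert.get("subject"))
            print("Issuer:  ", cert.get("issuer"))  # the issuing Certification Authority
            print("Expires: ", cert.get("notAfter"))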
<urn:uuid:fb7c883d-25ad-4740-b338-ba522d88b4ba>
CC-MAIN-2017-09
https://comodosslstore.com/au/support/glossary.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172156.69/warc/CC-MAIN-20170219104612-00497-ip-10-171-10-108.ec2.internal.warc.gz
en
0.894772
2,842
3.21875
3
Imagine a six-wheeled robot about the size of a suitcase rolling down a bustling city sidewalk and delivering packages to local businesses, elderly, and young professionals short on time. Suddenly, a boy jumps in its path, and the robot veers, missing him by inches. This is not a sci-fi movie scene, rather a likely street view in the not-too-distant future. Robots are marching toward everyday life, hoping to blend into the hustle and bustle of society. But are we really safe with them around? "It is important in terms of social acceptance as well as safety that people are able to predict a robot's behavior," says Matthew Walter, an assistant professor at Toyota Technological Institute in Chicago. Robots are already appearing on sidewalks in select cities. Estonia-based tech startup Starship Technologies just tested its delivery robot four weeks ago in San Francisco. The turnover of professional service robots, such as those in healthcare and logistics, will grow to $23.1 billion for the period 2016-2019, compared to $4.6 billion in 2015, according to the latest report by the International Federation of Robotics. The unpredictable nature of sidewalks presents an especially difficult challenge for robots, from inattentive adults staring into phones to absent-minded children playing imaginary games to harried professionals rushing out of buildings. Then there are critical moments of interaction with car drivers and bicyclists. "I think probably one of the biggest challenges is dealing with pedestrians, navigating around pedestrians, and predicting their behavior," Walter says. All of which begs the question: How quickly can robots respond to encounters with humans? Starship's robot travels at pedestrian speed – about four miles per hour – and uses computer-vision technology, such as GPS and proprietary mapping, to pinpoint its location to the nearest inch. It has a sophisticated obstacle avoidance system that acts as a "bubble of awareness" around the robot preventing it from bumping into things, says Henry Harris-Burland, marketing and communications manager at Starship. "Let's say the system fails," he says. "The worst thing that can happen is the robot just comes to a slow stop. It stops in 30 centimeters, which is a very safe stop distance." Perhaps more challenging, humans need to learn how to interact with a robot, too. A pedestrian, for instance, crosses the road safely after making eye contact with a car driver because both sides have an established social contract that the pedestrian goes first. Such unspoken social contracts would be difficult to achieve with robots. That's not to say delivery robots are too dangerous for their own good. They offer an incredibly valuable service, such as delivering food to the elderly who have a tough time venturing out in bad weather (although snow-covered stop signs and traffic lights might confuse robots). Most signs point to a future with robots, which isn’t a bad thing as long as people don't get hurt in the name of progress, says at least one casual observer. "I think it would be a very efficient way to get things quickly," says Samir Patel, a law school undergraduate at Washington University in St. Louis. "Automated things are always a little bit better, but the only concern I might have is people's safety on the sidewalks."
<urn:uuid:6017a7d6-69fc-457d-b132-0ac73dc22ab8>
CC-MAIN-2017-09
http://www.ioti.com/iot-trends-and-analysis/are-we-safe-robots
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170404.1/warc/CC-MAIN-20170219104610-00441-ip-10-171-10-108.ec2.internal.warc.gz
en
0.96179
673
3.0625
3
The old and insecure MD5 hashing function hasn't been used to sign SSL/TLS server certificates in many years, but continues to be used in other parts of encrypted communications protocols, including TLS, therefore weakening their security. Researchers from the INRIA institute in France have devised several attacks which prove that the continued support for MD5 in cryptographic protocols is much more dangerous than previously believed. They showed that man-in-the-middle attackers can impersonate clients to servers that use TLS client authentication and still support MD5 hashing for handshake transcripts. Intercepting and forwarding credentials through protocols that use a TLS channel binding mechanism is also possible. The same will apply in the future to the SHA-1 hashing function which is currently being phased out from digital certificate signing. Their attacks are dubbed SLOTH, which stands for Security Losses from Obsolete and Truncated Transcript Hashes, but is also a comment on the overly slow pace of phasing out legacy and insecure algorithms like MD5 from protocols. MD5 dates back to 1992 and is used for hashing -- the process of taking an input and generating a unique cryptographic representation of it that can serve as an identifying signature. Unlike encryption, which is reversible, hashing is supposed to only work one way. It's built on the premise that no two inputs should result in the same hash, or signature. If the algorithm allows two inputs to match the same hash, then it is vulnerable to a so-called collision attack. This means that an attacker can take a legitimate file, like a certificate, modify it, and still present it as valid because it has the same hash as the original. MD5 signatures have been known to be insecure and vulnerable to practical collisions since at least 2005 and their use for signing SSL/TLS server certificates has been phased out. However, support for the algorithm was kept in other parts of the protocol where its use was still considered safe due to other factors. Most of the encrypted Web is based on server authentication, where the client verifies the server's certificate to make sure that it's talking to the right website and not a rogue one served by an attacker who can intercept and modify network traffic. But there are also implementations of TLS client authentication, where the server verifies the client's certificate, such as with certain banking applications or virtual private networks. During client authentication, the client signs a hash of the connection handshake transcript with its own certificate. In the case of TLS up to version 1.1 the transcript hash was generated using a combination of MD5 and SHA1, but starting in TLS 1.2 the client and server can negotiate the hashing algorithm based on what they support. TLS 1.2 allows stronger hash functions like SHA-256 and SHA-512, but also supports MD5. So, if a man-in-the-middle attacker can trick the client into authenticating with a server under his control, he can then impersonate that client to the real server by negotiating MD5 hashing and using what the INRIA researchers call a transcript collision attack. "We find that the TLS libraries in Java (SunJSSE) and on Akamai Servers support RSA-MD5 signatures for both client and server authentication," the INRIA researchers said in a blog post that explains their findings. 
"Even implementations that do not advertise support for RSA-MD5, such as NSS (before version 3.21), BouncyCastle (Java before version 1.54, C# before version 1.8.1), PolarSSL/mbedTLS (before 2.2.1), GnuTLS (before version 3.3.15), and OpenSSL (before version 1.0.1f) surprisingly accept RSA-MD5 signatures." The researchers determined that to find a collision for a client impersonation attack on TLS, the attacker would need to compute 2^39 hashes, which is quite practical and would take several hours on Amazon EC2 instances. In fact, during their proof-of-concept attack, they found the collision in just one hour using a workstation with 48 CPU cores. Another attack that the researchers demonstrated is credential forwarding. This defeats a mechanism known as tls-unique which binds credentials to the TLS channel used to transmit them. This means that a man-in-the-middle attacker shouldn't be able to capture credentials from a TLS connection and relay them to a legitimate server in order to authenticate, because that would open a different TLS channel. Channel binding with tls-unique is used in SCRAM, the default authentication protocol for XMPP (Extensible Messaging and Presence Protocol); Token Binding, which is designed to protect HTTP cookies and OAuth tokens; and the FIDO universal authentication framework. A generic collision that could defeat channel binding would require 2^48 keyed-hash message authentication code (HMAC) computations, the researchers found. In their proof-of-concept attack, they found the collision in 20 days on a workstation with 4 Nvidia Tesla GPUs, but they believe the time can be significantly reduced with parallelization across many more GPUs. Attacking TLS server authentication, which is what's used in most HTTPS (HTTP Secure) implementations, is also theoretically possible, but fortunately it's much harder than attacking client authentication. That's because the server signature does not cover the whole handshake transcript, at least in TLS 1.2. That's good news, because according to Internet scans, 30 percent of HTTPS servers are currently willing to send RSA-MD5 server signatures and are theoretically vulnerable. If the attack had been more practical, it would have been devastating to Web security. Due to these findings, the editors of TLS version 1.3, which is currently only a draft, have removed all use of MD5 signatures from the protocol. OpenSSL, Mozilla NSS, GnuTLS, BouncyCastle, PolarSSL/mbedTLS already have patched versions available that disable the use of such signatures. Oracle Java clients will be fixed in the next critical patch update. The researchers have warned that SHA-1, which is also known to be theoretically vulnerable to collisions, could lead to similar problems in the future if it's not removed from TLS 1.2 implementations in a timely manner. "If practical collision attacks on SHA1 appear, then many constructions in TLS, IKE, and SSH will be breakable," they said. "So, if you can afford to do so, get rid of MD5 and SHA1 in all your protocol configurations."
<urn:uuid:fc730210-dfc0-41ca-a6c1-cba9b6b42ca6>
CC-MAIN-2017-09
http://www.cio.com.au/article/591746/continued-support-md5-endangers-widely-used-cryptographic-protocols/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170992.17/warc/CC-MAIN-20170219104610-00141-ip-10-171-10-108.ec2.internal.warc.gz
en
0.945027
1,383
3.28125
3
Although many in our industry throw around terms like Big Data and Analytics, the reality is that these systems have been and continue to be static and stupid, only doing the few tasks they have been specifically programmed to perform. Step outside of this narrow focus and these systems quickly fail. (Not in the good way.) Yet on the flip side, these same systems have delivered tremendous business and societal benefits by automating tabulation and harnessing computational processing as well as programming to deliver huge increases in enterprise and personal productivity. The days of static, unresponsive computing intelligence are coming to an end. The machines and systems of tomorrow will be cognitive, adaptive and contextual, with natural language interfaces capable of making complex decisions involving extraordinary volumes of fast moving, globally dispersed data sets. Herein lie potentially the biggest opportunities for disruption. Recently many hospitals have, for example, started using data analytics to identify cardiac patients at greatest risk while reducing the number of readmissions. Those big data initiatives are tied to cost-saving and resource-conserving efforts as much as they are tied to improving patients' lives. Elsewhere in healthcare, technologies like IBM Watson and other Watson-like technologies are now assisting doctors at Memorial Sloan Kettering in diagnosing patients by providing a variety of possible causes for a set of symptoms. Watson can help doctors narrow down the options and pick the best treatments for their patients. The doctor still does most of the thinking. Watson is there to make sense of the data and help make the process faster and more accurate, a kind of sixth sense augmenting their abilities and giving them access to unimaginably vast amounts of additional information to help make better, more informed decisions. This is the promise of cognitive systems--a category of technologies that uses natural language processing and machine learning to enable people and machines to interact more naturally to extend and magnify human expertise and understanding. These systems will learn and interact to provide expert assistance to scientists, engineers, lawyers, and other professionals in a fraction of the time it now takes. Needless to say, there are still challenges to broad adoption of this type of technology; federal laws aimed at protecting the privacy of healthcare data could limit the role of big data in healthcare. Indeed, critics are complaining that data brokers may have greater access to medical data than do patients. Yet progress marches on, and the time has come for healthcare providers to call on fleets of servers to spot hidden trends and anomalies in medical images as they diagnose patients using deep learning technology. This type of computing involves training systems called artificial neural networks to understand a variety of information provided to them. The information can range from audio and images to other inputs, from which the system can automatically make inferences about what the data represents and how it should respond. In healthcare this data could be a discussion between a healthcare provider and a patient. But it could also augment these discussions with related or contextual data such as X-rays, MRIs, CT scans and 3D medical images to better help identify and diagnose treatment options. The most advanced of these technologies is found within natural language processing, or NLP. 
NLP focuses on linguistics and is especially concerned with the interactions between computers and humans, enabling computers to derive meaning from spoken or natural language input. Although it’s hard to say what the future of healthcare may look like, it’s becoming evident that advanced forms of machine intelligence will likely play a much more critical role in augmenting and potentially eliminating misdiagnoses from the healthcare world. It seems that this focus area could be a key entry point into healthcare while also enhancing our own intelligence within the space.
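As a toy illustration of the kind of NLP-driven inference described above, the sketch below maps free-text symptom notes to a candidate category with scikit-learn. The notes, labels and categories are invented for the example; a real clinical system would need far more data, rigorous validation and regulatory review.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Tiny, invented training set: symptom descriptions and candidate categories
    notes = [
        "crushing chest pain radiating to left arm, shortness of breath",
        "chest tightness on exertion, sweating, nausea",
        "fever, productive cough, difficulty breathing",
        "high fever, chest congestion, fatigue",
    ]
    labels = ["cardiac", "cardiac", "respiratory", "respiratory"]

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(notes, labels)

    new_note = "patient reports chest pain and shortness of breath"
    print(model.predict([new_note])[0])      # expected: 'cardiac' for this toy data
    print(model.predict_proba([new_note]))   # per-category probabilities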
<urn:uuid:56be5f24-55ee-4945-bbeb-f0907703314c>
CC-MAIN-2017-09
https://www.citrix.com/articles/healthcare-the-ghost-in-the-machine-is-intelligent.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171176.3/warc/CC-MAIN-20170219104611-00317-ip-10-171-10-108.ec2.internal.warc.gz
en
0.940908
727
2.53125
3
Energy Department officials confirmed that trace leaks of plutonium have been detected in the air outside the country’s only underground nuclear waste repository, located 26 miles southeast of Carlsbad, N.M. An independent monitoring organization first detected the leaks over the weekend after an air monitor within the storage facility, the Energy Department’s Waste Isolation Pilot Plant, detected radiation. The plant stores 3.2 million cubic feet of plutonium-contaminated waste in salt caverns 2,150 feet underground. The Carlsbad Environmental Monitoring and Research Center, operated by the College of Engineering at New Mexico State University, said earlier this week it had detected trace amounts of plutonium and americium, a radioactive isotope, in an air filter from a sampling station located northwest of the storage site. Joe Franco, Carlsbad field office manager for Energy, confirmed at a press conference this afternoon that the leak emanated from the underground facility where waste is packed into drums stored in the salt caverns. Franco said it will be three weeks before officials are able to return underground to assess exactly what caused the leaks. During that time, Energy will develop safety plans to deal with potential radiation, health and mining hazards. Ryan Flynn, secretary of the New Mexico Environment Department, told reporters attending the conference that “events like this should never happen . . . one event is too many.” He criticized Energy for waiting two days to inform his office that plutonium had been detected outside the facility. The levels of plutonium radiation detected are “well below” levels that would be harmful to people of the environment, Flynn said. Personnel from the center installed the air filter on Feb. 11 and removed for it analysis on Feb. 16. The analysis showed 0.64 Becquerels of americium (a Becquerel is a measure of radioactivity in which one nucleus decays per second), and 0.046 Becquerels of plutonium had been deposited on the filter. The center has three monitors located around the plant. They have detected plutonium four times in the past, eventually determined to be fallout from nuclear weapons testing and detonations that occurred during the 1940s through the 1960s. According to Russell Hardy, director of the research center, this is the first time plutonium has escaped from the facility in 15 years of operation. Don Hancock, director of the Nuclear Waste Safety Program at the Southwest Research and Information Center, a nuclear watchdog group in Albuquerque, agreed that the leak should have never occurred and pointed out that the plutonium traveled at least a mile from underground to the CEMRC air monitor site. The underground storage facility stores waste transferred by truck from Los Alamos National Laboratory, N.M., as well as Energy facilities in Idaho and Georgia. Last year, that amounted to nearly 1,000 separate shipments. The facility closed for routine maintenance on Feb. 14 and was supposed to re-open March 10, when it would again start taking in waste shipments. Due to the leak, it will not take any new shipments as previously planned on March 10, Franco said. He could not say when it was likely to reopen. This story has been updated.
<urn:uuid:30431ad3-8289-4e55-a1d2-d4e4cdbff21c>
CC-MAIN-2017-09
http://www.nextgov.com/health/2014/02/researchers-detect-plutonium-air-near-underground-nuclear-waste-plant/79133/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00493-ip-10-171-10-108.ec2.internal.warc.gz
en
0.961362
650
2.8125
3
Based on the number of prominent research projects currently in the works, 2014 could be a tipping point for the field of personalized medicine and in silico research. Last week, the Insigneo Institute at the University of Sheffield spotlighted its Virtual Physiological Human (VHP) program, which the project’s backers describe as “transcending sci fi and transforming healthcare.” The goal of the VHP project is to create an in silico replica of a living human body to enable drug testing and other medical treatments. The model will be used directly in clinical practice to improve diagnosis and outcome. Founded one year ago, the institute is celebrating the first phase of the technology. The program is funded by the European Commission, which has invested nearly €220 million since 2007 to advance in silico projects across Europe. VHP will rely on integrated computer models of the mechanical, physical and biochemical functions of a living human body that enable it to operate as a cohesive whole. In fact, a main aim for the project is to facilitate a paradigm change in which the body is seen as a single multi organ system instead of as a collection of individual organs. The project has already made a lot of headway in its first year, addressing a wide range of medical problems, including pulmonary disease, coronary artery disease, bone fractures and Parkinson’s Disease. “What we’re working on here will be vital to the future of healthcare,” stated Dr. Keith McCormack, who leads business development at the Institute. “Pressures are mounting on health and treatment resources worldwide. Candidly, without in silico medicine, organisations like the NHS will be unable to cope with demand. The Virtual Physiological Human will act as a software-based laboratory for experimentation and treatment that will save huge amounts of time and money and lead to vastly superior treatment outcomes.” The Insigneo Institute for in silico Medicine includes more than 120 academics and clinicians who are collaborating to develop computer simulations of the human body and its disease processes. The researchers expect that once the virtual human is complete, it will be the most advanced application of computing technology in healthcare.
<urn:uuid:d180fe6f-b960-4e54-9db1-156dafc2ad1e>
CC-MAIN-2017-09
https://www.hpcwire.com/2014/05/19/virtual-human-program-aims-transform-healthcare/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170993.54/warc/CC-MAIN-20170219104610-00489-ip-10-171-10-108.ec2.internal.warc.gz
en
0.939045
444
2.671875
3
For millions of Americans, the smartphone has become one of the most important tools in their lives. Your phone tracks your movements, absorbs emails and text messages and notifies you of every birthday and appointment. Every second, information floods your smartphone. Unless you switch them off, your apps are working round the clock, obeying your every setting and preference. All day long your phone is churning private data through its circuitry, and if criminals can break into your phone, they can steal all kinds of things, from banking details to compromising photos and video. These thieves don't have to steal your actual phone. They may not even be located in the same country. How do they do it? Spyware, which is kind of like a computer virus, except instead of messing up your hard drive, it enables strangers to snoop on you. Skilled hackers can install spyware on your phone without you even realizing it. Once it's on your phone, spyware can record everything you do, from sending text messages to shooting video of your family reunion. Hackers may break into private accounts, commandeer email and even blackmail their victims. Keep in mind, "spyware" is a vague and multi-faceted term, and it's not always malevolent. Some parents install a kind of spyware on their kids' smartphones in order to keep track of their activities. Managers sometimes keep tabs on their employees by watching what they do on their company computers. I don't endorse this behavior, and I think there are much healthier ways of watching kids and employees, but this kind of spyware isn't intended to ruin your life. Don't click strange links. The easiest way to avoid contracting spyware is this: Don't click strange links. If you receive an email from a suspicious stranger, don't open it. If you receive an email or text from someone you do know but the message seems peculiar, contact your friend by phone or social media to see whether the message was intended. This might sound obvious, but sometimes our curiosity gets the better of us. When a link appears, some of us struggle to avoid clicking it, just because we want to know where it leads. Other times, an authentic-looking email is actually a phishing scam in disguise. If you're the least bit doubtful, don't click. Lock your phone. Some types of phones are more susceptible to spyware than others. (More about this below). But owners can dramatically reduce their chances of infection by locking their phones. A simple PIN will deter most hackers. Also avoid lending your phone to strangers. Yes, some people honestly forget their chargers at home and urgently need to call their spouses. But a clever con artist only needs your unlocked phone for a minute to cause a lot of damage. In this case, being a Good Samaritan is risky business. Androids and spyware. The bad news is this: Android phones are particularly vulnerable to spyware. It's simple to install a spying app on any Android gadget, but only once you get past the lock screen. To protect yourself, make sure you have the lock screen turned on and no one knows the PIN, password or pattern. You can make it even harder by blocking the installation of third-party apps. To do this, go to Settings; Security and uncheck the Unknown Sources option. It won't stop a really knowledgeable snoop, but it could stump less-savvy ones. iPhones and spyware. Apple users can get pretty smarmy about their products. If you own an iPhone, you probably already know that your phone is far safer from malware than Android gadgets. 
A recent "Forbes" study showed that nearly 97 percent of all known malware threats only affect Android devices. That's good news for Mac addicts, but it can also make owners overconfident. Last August, Apple had to release an extremely critical iOS update to patch a security threat. Before the update, an attacker could take over and fully control an iPhone remotely just by clicking the right link. Investigators learned that this kind of attack was called Trident, and the spyware was called Pegasus. The latest iOS was partly designed to prevent these exploits from damaging your iPhone. This is just one reason you should keep your iPhone up to date. To get the latest version of iOS, go to Settings; General; Software Update. Your device will then automatically check for the latest version of the Apple operating system. Secondhand smartphones. Beware the secondhand smartphone. Sometimes they're handy, because a jail-broken phone is cheap and disposable and may work with many service providers. But they may also come with spyware already installed. Buying a secondhand phone is a common practice, especially if you're traveling in a foreign country or you're between contracts and just need something for the short-term. If you have any suspicions about your phone, your best tactic is to reset factory settings. It's inconvenient, but it might save you a lot of heartache down the line. © 2017 Tulsa World syndicated under contract with NewsEdge/Acquire Media. All rights reserved.
<urn:uuid:bc2aa1cf-69b7-460d-a8d4-1819123e5e1f>
CC-MAIN-2017-09
http://www.cio-today.com/article/index.php?story_id=030002OAXQAU
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171632.91/warc/CC-MAIN-20170219104611-00189-ip-10-171-10-108.ec2.internal.warc.gz
en
0.950258
1,053
2.640625
3
Predictive analytics can already help prevent churn and anticipate equipment failures, but MIT has applied it to a new realm altogether: protecting ships at sea from so-called "rogue waves." Also known as killer waves, rogue waves swell up seemingly out of nowhere and can be eight times higher than the surrounding sea. They can strike in otherwise calm waters with virtually no warning, causing untold devastation even to large ships and ocean liners. Now, MIT has developed a predictive tool it says can give ships and their crews a two- to three-minute advanced warning, allowing them to shut down essential operations on a ship or offshore platform. In the past, scientists have approached the problem by trying to simulate every individual wave in a body of water to produce a high-resolution picture of the sea state. It's proven computationally expensive and time-consuming. In this case, the MIT researchers took a different tack based on their observation that waves sometimes cluster together in a group, rolling through the ocean together. Certain wave groups, they found, end up "focusing" or exchanging energy in a way that eventually leads to a rogue wave. "These waves really talk to each other," said Themis Sapsis, the American Bureau of Shipping Career Development assistant professor of mechanical engineering at MIT. "They interact and exchange energy. It's not just bad luck. It's the dynamics that create this phenomenon." Combining ocean-wave data available from measurements taken by ocean buoys with a nonlinear analysis of the underlying water wave equations, Sapsis' team quantified the range of wave possibilities for a given body of water. They then developed a simpler and faster way to predict which wave groups will evolve into rogue waves. The resulting tool is based on an algorithm that sifts through data from surrounding waves. Depending on a wave group's length and height, the algorithm computes a probability that the group will turn into a rogue wave within the next few minutes. "It's precise in the sense that it's telling us very accurately the location and the time that this rare event will happen," Sapsis said. "We have a range of possibilities, and we can say that this will be a dangerous wave, and you'd better do something." A paper describing the tool was published this week in the Journal of Fluid Mechanics.
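To make the idea concrete, here is a deliberately simplified sketch of scoring wave groups by length and height and flagging those above a threshold. The formula, scaling factor and threshold are invented placeholders for illustration; they are not the MIT team's published algorithm.

    from dataclasses import dataclass

    @dataclass
    class WaveGroup:
        length_m: float   # spatial extent of the wave group
        height_m: float   # crest-to-trough height

    def rogue_risk(group: WaveGroup, significant_height_m: float) -> float:
        """Return a 0-1 pseudo-probability that the group focuses into a rogue wave."""
        steepness = group.height_m / max(group.length_m, 1e-6)
        amplification = group.height_m / significant_height_m
        return min(steepness * amplification * 10.0, 1.0)   # arbitrary scaling

    for group in (WaveGroup(220.0, 3.1), WaveGroup(90.0, 6.8)):
        risk = rogue_risk(group, significant_height_m=2.5)
        print(group, "->", "WARN" if risk > 0.5 else "ok", f"(risk={risk:.2f})")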
<urn:uuid:5c881d48-e9c3-40ec-9818-1281d7eb44f1>
CC-MAIN-2017-09
http://www.cio.com.au/article/594815/new-algorithm-from-mit-could-protect-ships-from-rogue-waves-sea/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171632.91/warc/CC-MAIN-20170219104611-00189-ip-10-171-10-108.ec2.internal.warc.gz
en
0.944562
538
3.03125
3
When you visit the History of Computer Chess exhibit at the Computer History Museum in Mountain View, California, the first machine you see is "The Turk." In 1770, a Hungarian engineer and diplomat named Wolfgang von Kempelen presented a remarkable invention to the court of Maria Theresa, ruler of Hungary and Austria. It consisted of a mechanical figure dressed in (what Europeans saw as) Oriental garb, presiding over a cabinet upon which a chess board sat. Full of gears ostentatiously placed in a front side drawer, The Turk was cranked up by hand, after which an opponent could sit down and play a game against the dummy. "Even among the skeptics who insisted it was a trick, there was disagreement about how the automaton worked, leading to a series of claims and counterclaims," writes author Tom Standage. "Did it rely on mechanical trickery, magnetism, or sleight of hand? Was there a dwarf, or a small child, or a legless man hidden inside it?" Well, all of the above—or below, actually. In the rear bottom interior of the box sat a flesh-and-blood operative (by necessity a small one) who followed the human contender's moves from below and maneuvered The Turk's right hand across the table board. Nonetheless, the machine became "the most famous automaton in history," Standage notes, commented on by Charles Babbage, Edgar Allan Poe, Benjamin Franklin, and Napoleon Bonaparte. More importantly, The Turk whetted the West's appetite for real devices that could do such things. Over two centuries later, this project culminated in Deep Blue—the IBM computer that bested Russian chess champion Gary Kasparov in 1997. But what's most fascinating about "Mastering the Game," the Computer History Museum's computer chess exhibit, is that it frames the rise of the automated chess playing as a debate between two philosophies of computing. One emphasized the "brute force" approach, taking advantage of algorithmic power offered by ever more powerful processors available to programmers after the Second World War. The other has foregrounded the importance of teaching chess computers to select strategies and even to learn from experience—in other words, to play more like humans. "Make a plan" The Second World War saw breathtaking innovation in mechanical calculation engines both in Britain and the United States. Code-breaking machines like Britain's Colossus and trajectory calculators like Harvard's Mark I gave theoreticians a new sense of the possible. In 1947, Alan Turing wrote a programming manual for the Ferranti Mark I computer. We know that computer chess was already on the famous cryptographer's mind by the first principle he offered to budding programmers. "Make a plan," Turing counseled. "This rather boring piece of advice is often offered in identical words to the beginner in chess." True to his hint, Turing subsequently designed what is regarded as the first program to play the game. There was no machine capable of interpreting his code, however, so Turing went through the program's algorithms with his friend Alick Glennie. Turing played the computer's role, moving in accordance with his algorithms. It was a pretty good game, actually, lasting for 30 moves with no serious blunders until Glennie pinned down Turing-the-computer's queen. At that point, his algorithm (or perhaps Turing) resigned. Meanwhile, United States mathematician Claude Shannon wrote a 1950 paper that foresaw the prospects for automated chess. 
"The thesis we will develop is that modern general purpose computers can be used to play a tolerably good game of chess by the use of a suitable computing routine or 'program,'" Shannon wrote. He outlined two possible strategies for these programs. His "Type A" program exhaustively explored all possibilities three moves ahead via ten subprograms, named T0 through T9. - T0 - Makes move (a, b, c) in position P to obtain the resulting position. - T1 - Makes a list of the possible moves of a pawn at square (x, y) in position P. - T2, ..., T6 - Similarly for other types of pieces: knight, bishop, rook, queen and king. - T7 - Makes list of all possible moves in a given position. - T8 - Calculates the evaluating function f(P) for a given position P. - T9 - Master program; performs maximizing and minimizing calculation to determine proper move. This strategy would later be dubbed "minimax lookahead." The problem with the technique, Shannon observed, was that it would produce a very slow chess machine that completed its half of a 40 move game in about... ten hours. It also confirmed the misperception that human chess masters actually take every possible variation into consideration when playing. The paper quoted the famous observation of chess expert Ruben Fine. "Very often people have the idea that masters foresee everything or nearly everything," Fine wrote in 1942. "All this is, of course, pure fantasy. The best course to follow is to note the major consequences for two moves, but try to work out forced variations as they go." That last comment informed Shannon's "Type B" strategy for a chess program: (1) Examine forceful variations out as far as possible and evaluate only at reasonable positions, where some quasi-stability has been established. (2) Select the variations to be explored by some process so that the machine does not waste its time in totally pointless variations. Practically all subsequent chess programs would follow either a "Type A" or "Type B" system, according to the Computer History Museum's exhibition literature. But the best early programs moved in a Type B direction. The earliest post-World War II chess computers included a truncated game player developed by the English programmer Dietrich Prinz. It worked out "mate-in-two" puzzles; that is, the device located the best way to checkmate an opponent in two steps. Not much of a breakthrough—but then in July of 1958, Chess Review announced the release of IBM mathematician Alex Bernstein's program for the IBM Digital 704. Bernstein's program clearly followed Shannon's "Type B" route. "In order to avoid examining the consequences of all possible moves," he explained in the article, "a set of decision routines were written which select a small number (not greater than seven) of strategically good moves." The author called this array "The Tree." The computer posited each of these "seven plausible moves," then asked itself to calculate plausible replies based on eight questions. First: was the king in check? Second: could material be lost, gained, or exchanged? Third: was castling possible? And so on. "Were the machine to have a larger memory, more questions could be asked," Bernstein conceded. The machine could be bested by advanced beginning players, but Shannon's Type B strategy was further supported that year by Allen Newell and Herbert Simon at Carnegie Mellon University and Cliff Shaw at the Rand Corporation. 
They contended that the progress of computer chess traveled parallel to the fields of artificial intelligence and heuristics—solving problems based on experience. "Alpha beta pruning" was their term for the strategy picking process: The hope is still periodically ignited in some human breasts that a computer can be found that is fast enough, and that can be programmed cleverly enough, to play good chess by brute-force search. There is nothing known in theory about the game of chess that rules out this possibility. Empirical studies on the management of search in sizable trees with only modest results make this a much less promising direction than it was when chess was first chosen as an appropriate task for artificial intelligence. We must regard this as one of the important empirical findings of research with chess programs. By the early 1960s, four MIT students began working on a chess computer that, by the time they graduated, could beat amateur players. It very consciously adopted the alpha beta heuristic approach. Another MIT programmer added fifty more heuristics to his program, playable on a Digital Equipment Corporation PDP-6 machine. In 1967, Richard Greenblatt's "MackHack VI" was the first to challenge a human player in a chess tournament. It earned a rating comparable to that of a competent high school student. Deep Blue was coming... but beating a grand master was a slow process.
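The two ideas the article keeps returning to, exhaustive minimax lookahead and alpha-beta pruning, can be sketched generically. In the snippet below, children_fn and eval_fn are stand-ins for a game's move generator and for Shannon's evaluating function f(P); this is a generic illustration, not a reconstruction of any of the historical programs.

    def alphabeta(position, depth, alpha, beta, maximizing, children_fn, eval_fn):
        """Minimax lookahead with alpha-beta pruning over a game tree."""
        children = children_fn(position)          # successor positions (legal moves)
        if depth == 0 or not children:
            return eval_fn(position)              # Shannon's evaluating function f(P)
        if maximizing:
            best = float("-inf")
            for child in children:
                best = max(best, alphabeta(child, depth - 1, alpha, beta, False,
                                           children_fn, eval_fn))
                alpha = max(alpha, best)
                if alpha >= beta:                 # opponent would avoid this line: prune
                    break
            return best
        best = float("inf")
        for child in children:
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True,
                                       children_fn, eval_fn))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

Pruning never changes the value plain minimax would return; it only skips branches that cannot affect the final choice, which is why it reads as a speed-up for the brute-force search rather than a different answer.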
<urn:uuid:6f7871f6-a010-4125-998f-a2692e45311c>
CC-MAIN-2017-09
https://arstechnica.com/gaming/2011/08/force-versus-heuristics-the-contentious-rise-of-computer-chess/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171933.81/warc/CC-MAIN-20170219104611-00365-ip-10-171-10-108.ec2.internal.warc.gz
en
0.958504
1,726
3.125
3
A WiFi hotspot allows all devices, including desktops, laptops and smart devices, to connect to the internet from a common point. You can also turn your PC into a WiFi hotspot and allow nearby devices to connect to the internet through it.

Why use a PC as a WiFi hotspot
In most homes there is a router which can be used to share the internet connection among all devices. But what if there is no wireless router and you want to share internet with other wireless devices? The solution is to simply use the wired connection to the PC and use it to share the internet wirelessly with other devices. Another scenario is when you have a good internet plan on your smartphone and want to share it with other devices. You can connect the smartphone to your PC or laptop and then share the internet from there.

The possibility of creating a WiFi hotspot dates back to Windows XP. The main condition was that you needed both a wired NIC (Network Interface Card) and a wireless NIC. The internet was connected to the PC using the wired connection and was broadcast over the wireless connection. Hotspots on XP were never popular: the feature was buggy, configuring the connection was complicated and the connection was not stable. The situation has changed a lot since XP. With the popularity of wireless and smart devices, Microsoft did a lot of work to improve the feature in later versions of Windows. In Windows 7 it is even possible to use the command prompt to configure the hotspot (an example is sketched at the end of this article), and in Windows 10 it is easier still.

Create a PC WiFi hotspot in Windows 10
- Click Start -> Click Settings
- Click Network & Internet
- Click Mobile hotspot on the left side
- Under Share my Internet connection from, select whether the connection is WiFi or Ethernet
- You can see the Network name and Network password which are automatically set by your PC. If you want something more complex or something that suits you better, click Edit.
- Once these steps are finished, you need to activate the connection. Turn on the option Share my Internet connection with other devices.

Now your Windows 10 PC should be able to share its internet connection with all other devices. If you want to turn off the WiFi hotspot, go back to Settings -> Network & Internet -> Mobile hotspot -> turn off the option Share my Internet connection with other devices.

Create a PC WiFi hotspot in Windows 8 and Windows 7
- Go to Start -> Control Panel
- Network and Internet -> Network and Sharing Center
- Click Manage Wireless Networks on the left side
- Click Add on the top left side
- Under How do you want to add a network, select Create an ad hoc network
- Click Next on Set up a wireless ad hoc network
- Type a Network name, select Security Type as WPA2-Personal, and enter a password, which will be the one used to connect to this network
- Click Next

A screen will be shown confirming the network is ready. You can click on the wireless connection icon to see the list of available wireless networks. You will see your network listed there with a different icon which says waiting for users. Your WiFi hotspot is ready to accept wireless connections.

Create a PC WiFi hotspot using third party software
There are plenty of third party programs which offer to turn your PC into a WiFi hotspot. Some of them are free and some are paid. These programs do the same thing we did above using the operating system's own options. The advantage of using third party software is additional features such as enhanced security and speed, so these programs are often preferred in companies and organisations.
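For the Windows 7/8 command-prompt route mentioned above, the built-in netsh "hosted network" feature can create the hotspot without the ad hoc wizard. The commands below are a minimal sketch: the SSID and key are placeholders, the prompt must be run as administrator, the wireless driver has to support hosted networks, and Internet Connection Sharing still needs to be enabled on the internet-facing adapter afterwards.

    :: allow and define the hosted network (choose your own name and key)
    netsh wlan set hostednetwork mode=allow ssid=MyHotspot key=ChangeThisPass123

    :: start broadcasting, check its status, and stop it when finished
    netsh wlan start hostednetwork
    netsh wlan show hostednetwork
    netsh wlan stop hostednetwork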
<urn:uuid:f56ea62d-d72a-4cef-97fb-42230f25948e>
CC-MAIN-2017-09
http://atechjourney.com/how-can-i-make-my-pc-wifi-hotspot.html/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172649.58/warc/CC-MAIN-20170219104612-00541-ip-10-171-10-108.ec2.internal.warc.gz
en
0.923778
724
2.515625
3
With robots increasingly being used on manufacturing floors, researchers are looking for ways that humans can work better with their robot coworkers. Scientists at MIT say the answer is cross-training -- humans and robots switching jobs to learn how they affect each other's work. "People aren't robots. They don't do things the same way every single time," said Julie Shah, head of the Interactive Robotics Group at MIT, in a statement. "And so there is a mismatch between the way we program robots to perform tasks in exactly the same way each time and what we need them to do if they are going to work in concert with people." Shah noted that a lot of robotics research focuses on making sure robots and humans can operate safely side by side. However, more work needs to be done to make robots smart and flexible enough to work effectively with people. Often, the issue is that robots are programmed to do the same job the exact same way time after time. Humans, however, aren't so exacting. So how do you enable robots to anticipate and react to these differences? According to Shah, at least part of the solution involves having the robots, and humans, learn what it's like to do the other's job. To get started, she worked with MIT doctoral candidate Stefanos Nikolaidis to build an algorithm that would enable the robots to learn from their role swap. This step involved enabling the robots to gain information through demonstration. By watching humans perform what are normally robotic jobs, the robots are able to learn how humans want the machines to perform their jobs. MIT reported that Shah and Nikolaidis found that humans and robots worked together as a team 71% more after cross training. They also noted that the amount of time that people spend waiting for robots to finish a task dropped by 41%. People also reported that the robots worked with them more efficiently after cross training. "This is the first evidence that human-robot teamwork is improved when a human and robot train together by switching roles, in a manner similar to effective human team training practices," said Nikolaidis in a statement. Human and robot collaboration has been a major focus of robotics research. Last summer, for instance, scientists at Harvard University announced that they were working on an Iron Man-like smart suit that could improve soldiers' endurance in war zones. The suit, which is expected to include sensors and its own energy source, is being designed to delay the onset of fatigue, enabling soldiers to travel farther in the field, while also supporting the body and protecting it from injuries when the soldier is carrying heavy loads. In late 2011, Toyota Motor Corp. reported that it planned to release a group of robots that will act as health-care aids by 2013. The robots are being designed to lift patients and to help people suffering from paralysis to walk again. NASA, which has used a robotic arm to carry astronauts in space and has a humanoid robot working on the International Space Station, said it will need humans and robots to work together to enable future, more industrious space missions. Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is email@example.com. Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center. 
This story, "MIT aids human, robot cooperation with cross-training" was originally published by Computerworld.
<urn:uuid:f4bce964-128a-4cb7-aa0b-d87ca0e3ea03>
CC-MAIN-2017-09
http://www.itworld.com/article/2711841/hardware/mit-aids-human--robot-cooperation-with-cross-training.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170569.99/warc/CC-MAIN-20170219104610-00182-ip-10-171-10-108.ec2.internal.warc.gz
en
0.964463
727
3.390625
3
How do we know an earthquake's epicenter?
Sophisticated technology and old-fashioned science make pinpoint accuracy possible
By Henry Kenyon - Jul 16, 2010

The Washington D.C. metro area was jolted awake in the early hours of Friday morning by a rare event -- an earthquake measuring 3.6 on the Richter scale. Major earthquakes don't often happen east of the Rocky Mountains, an area that is relatively geologically stable. So how does a university seismology department or the U.S. Geological Survey determine the location and strength of an earthquake?

The USGS has seismic measuring stations located across the country. The location, strength and depth of an earthquake are determined by detecting the series of waves it generates. Earthquakes always generate two types of waves, said Terry Tullis, professor emeritus of geological sciences at Brown University. Geologists and the USGS use an automated system that measures the interval between the arrival of the primary and secondary waves. The difference between the P and S wave arrival times is used to measure the distance of the earthquake from the sensor. "The farther away it is, the longer a delay is between when the P wave gets there and the S arrives," he said. It's much like the technique of estimating how far away lightning is by counting the seconds until the thunder arrives, but far more precise.

The exact location of an earthquake is quickly determined by triangulating its location between several seismic stations. Once the distance of an earthquake can be determined, its relative size can be measured as well. There are two major scales used to measure earthquakes: the Richter scale, which measures the overall strength of an earthquake; and the modified Mercalli intensity scale, which measures the amount of shaking. "A given earthquake will have a bulls-eye pattern of Mercalli intensities, the highest being right where the earthquake was, and they decay as you go away," Tullis said.

The East Coast is more geologically stable than the West Coast. However, when earthquakes do occur in the eastern part of the country, they are felt over a greater distance because the region attenuates seismic waves less. Tullis said the rocks under the eastern part of the country are older and colder than in the West, which has several large active faults. The cooler rocks underlying the eastern seaboard attenuate less than in the West. "It's a little bit like the difference between banging on a sponge and a piece of crystal. One of them rings and the other dampens it out," he said. Major eastern earthquakes, such as the New Madrid earthquakes that took place in Missouri in 1811 and 1812, were reputed to have rung church bells as far away as Boston. By comparison, a similar earthquake in California would not be felt so far from the epicenter.

Earthquakes constantly take place in the eastern United States, but most are so small as to be undetectable by humans. Tullis noted that geologists don't understand the nature of most of these earthquakes, but they appear to involve slipping on faults that are artifacts from former geological times. "Earthquakes do occur due to slips on faults in the East, it's just that when they do occur, they're not always on faults that we know, or if we know them, ones that we know very well," he said.
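A rough, self-contained illustration of the S-minus-P timing idea: assuming typical crustal wave speeds (the values below are assumptions, and real networks use travel-time tables and many stations), the delay between the two arrivals converts directly into a distance.

    VP = 6.0   # assumed average P-wave speed in shallow crust, km/s
    VS = 3.5   # assumed average S-wave speed, km/s

    def distance_from_sp_delay(delay_s: float) -> float:
        """Distance (km) to the earthquake from the S-minus-P arrival delay (s)."""
        return delay_s * (VP * VS) / (VP - VS)

    for delay in (3.0, 10.0, 30.0):
        print(f"S-P delay {delay:5.1f} s -> roughly {distance_from_sp_delay(delay):6.1f} km")

With such a distance estimate from each of three or more stations, the triangulation step described above amounts to finding where the distance circles intersect.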
<urn:uuid:474daa0a-0b0d-4209-9cc2-0c8aa61e111b>
CC-MAIN-2017-09
https://gcn.com/articles/2010/07/16/washington-dc-earthquake.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170569.99/warc/CC-MAIN-20170219104610-00182-ip-10-171-10-108.ec2.internal.warc.gz
en
0.970946
719
4.09375
4
Scientists develop chips for super fast cloud data transfers IBM recently announced that it has developed a new technology that can be used to transfer big data between clouds and data centers four times faster than current technology. As big data and Internet traffic continue to grow exponentially, future networking standards have to support higher data rates. To support this increase in traffic, scientists at IBM Research and Ecole Polytechnique Federale de Lausanne in Switzerland have been developing ultra-fast and energy-efficient analog-to-digital converter (ADC) technology that will help boost Internet speeds to between 200 and 400 gigabits/sec at extremely low power, IBM said in a statement. An ADC converts analog signals to digital, approximating the right combination of zeros and ones to digitally represent the data so it can be stored on computers and analyzed for patterns and predictive outcomes. “Most of the ADCs on the market today weren’t designed to handle the massive big data applications we are dealing with today — it’s the equivalent of funneling water through a straw from a fire hose,” said Dr. Martin Schmatz, systems department manager at IBM Research. “This is IBM’s first attempt at designing a new ADC that leverages a standard CMOS logic process, not only resulting in the most efficient ADC in its class, but also opening the possibility to add massive computation power for signal analysis on the same chip with the ADC.” For example, scientists could use hundreds of thousands of ADCs to convert the analog radio signals for their work on the Square Kilometer Array (SKA), the world's largest and most sensitive radio telescope. The SKA collects analog radio data from deep space and is expected to produce 10 times the volume of global Internet traffic. The prototype ADC could transport the signals quickly and at very low power — a critical requirement considering the thousands of antennas that will be spread over 1,900 miles. While this latest technology is only a lab prototype, a previous version of the design has been licensed to Semtech Corp., a leading supplier of analog and mixed-signal semiconductors. The company is using the technology to develop advanced communications platforms expected to be announced later this year.
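As a rough illustration of the quantization an ADC performs (a generic sketch, not IBM's converter design), an n-bit converter maps each analog sample to one of 2**n digital codes:

# Generic n-bit ADC quantization sketch (not IBM's converter): sample an
# analog waveform and map each sample to the nearest of 2**bits digital codes.
import math

def adc_sample(value, full_scale=1.0, bits=8):
    levels = 2 ** bits
    # Clamp to the input range, then round to the nearest quantization step.
    clamped = max(-full_scale, min(full_scale, value))
    step = 2 * full_scale / (levels - 1)
    code = round((clamped + full_scale) / step)      # integer code 0 .. levels-1
    return code, code * step - full_scale            # code and reconstructed voltage

# Digitize one cycle of a sine wave at 16 sample points.
for k in range(16):
    v = math.sin(2 * math.pi * k / 16)
    code, approx = adc_sample(v)
    print("sample %5.2f -> code %3d (approx %5.2f)" % (v, code, approx))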
<urn:uuid:e9b909e7-3eff-459f-b892-fa063b112274>
CC-MAIN-2017-09
https://gcn.com/articles/2014/02/25/chips.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170569.99/warc/CC-MAIN-20170219104610-00182-ip-10-171-10-108.ec2.internal.warc.gz
en
0.917799
467
3.203125
3
Mass production of incredibly powerful quantum computers may be only 10 years away thanks to researchers at the University of New South Wales who have demonstrated a quantum bit based on the nucleus of a single atom in silicon. The breakthrough is a significant step forward from the creation of the world’s first quantum bit in September last year. UNSW professor Andrew Dzurak said last year, researchers wrote and read back quantum information on an electron that was bound to an atom. “This year, we have drilled down inside the atom, writing and reading information on the nucleus of an atom, which is a million times smaller,” Dzurak said. “When we work with the nucleus, we have a more accurate quantum bit than we had in September last year. “The previous quantum bit, although demonstrated, didn’t have the accuracy necessary to do reliable calculations; now we have a quantum bit that can do that.” Dzurak said having more accurate quantum bits will enable scientists to “scale up” and make more viable quantum machines. “We have moved to a more advanced level [in quantum computing], with a [quantum bit] that is hundreds of thousands of times more accurate than previously,” he said. “We achieved a read-out fidelity of 99.8 per cent, which sets a new benchmark for qubit accuracy in solid state devices.” Dzurak said that quantum technology can be manufactured now, but commercial quantum-based machines are still 10 years away. He compared the cycle of quantum computer development to the discovery of the first transistor in a silicon chip – which was first demonstrated in 1947 – and how it took “a couple of decades” before integrated circuits and modern computers were created. He says going from one quantum computer to hundreds of thousands takes a “significant engineering life span.” Quantum bits, or qubits, are the building blocks of quantum computers and offer enormous advantages for searching databases, breaking modern encryption and modelling “atomic-scale” systems such as biological molecules and drugs. These qubits are coupled together to create massive increases in computing power. The new quantum process The new discovery was published on Thursday in Nature and describes how information is stored and retrieved using the magnetic spin of a nucleus. “We have adapted magnetic resonance technology, commonly known for its application in chemical analysis and MRI scans, to control and read-out the nuclear spin of a single atom in real-time,” said UNSW associate professor Andrea Morello. According to the researchers, the nucleus of a phosphorus atom is an extremely weak magnet, which can point in two natural directions, either “up” or “down.” In the quantum world, the magnet can exist in both states simultaneously – a feature known as “quantum superposition.” These natural positions are equivalent to the “zero” and “one” of a binary code, as used in existing classical computers, UNSW scientists said. In this experiment, the scientists controlled the direction of the nucleus, “writing” a value onto its spin and then “reading” that value out – turning the nucleus into a functioning qubit. The accuracy of this qubit rivals what many consider to be today’s best quantum bit – a single atom in an electromagnetic trap inside a vacuum chamber, the researchers said.
“Our nuclear spin qubit operates at a similar level of accuracy but it’s not in a vacuum chamber – it’s in a silicon chip and can be wired up and operated electrically like normal integrated circuits,” said Morello. “Silicon is the dominant material in the microelectronics industry, which means our qubit is more compatible with existing industry technology and is more easily scalable.”
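To get a sense of what a 99.8 per cent read-out fidelity means in practice, the toy simulation below (an illustration only, not the UNSW experiment) flips each read of a known spin state with 0.2 per cent probability and shows how the error compounds over repeated reads:

# Toy illustration (not the UNSW experiment): how often a qubit prepared in a
# known state is read back correctly when each read-out has 99.8% fidelity.
import random

FIDELITY = 0.998
random.seed(1)

def read_out(true_state):
    # Return the wrong value with probability 1 - FIDELITY.
    return true_state if random.random() < FIDELITY else 1 - true_state

trials = 100_000
correct = sum(read_out(1) == 1 for _ in range(trials))
print("single read-out accuracy: %.3f" % (correct / trials))

# Errors compound over a sequence of operations, which is why accuracy matters
# for scaling up: probability that 1,000 consecutive read-outs are all correct.
print("1,000 reads all correct: %.3f" % (FIDELITY ** 1000))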
<urn:uuid:e029815b-39b8-40d1-84ba-e23976c9f57f>
CC-MAIN-2017-09
http://www.computerworld.com.au/article/459422/scientists_demonstrate_key_component_quantum_machine/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00534-ip-10-171-10-108.ec2.internal.warc.gz
en
0.937253
813
3.734375
4
Before you can use a new Flash memory card, you must format it. Different commands are required to format and erase Flash memory depending on the type of filesystem running on the router. On Class A and Class C filesystems you issue the format command; on Class B filesystems you issue the erase command.
These platforms use the Class A filesystem:
- Cisco 12000 Series Internet Router
- Cisco 7000 Route Switch Processor (RSP)
- Catalyst 8500 Switch Route Processor (SRP)
- Cisco 7500 Series RSPs (RSP 2, RSP 4, RSP 8)
- Cisco 6400 Universal Access Concentrator (UAC)
- Catalyst 5000 and 5500 Route Switch Module (RSM)
- Multiservice Switch Route Processor (MSRP) for LightStream 1010
- ATM switch and processor for LightStream 1010 and Catalyst 5000 and 5500
These platforms use the Class B filesystem:
- Cisco 1000 series routers
- Cisco 1600 series routers: The 1600 series router has a single PC card that contains Flash memory. The 1601 to 1604 run from Flash. If you remove the PC card when the router is running, the router halts. The 1601R to 1605R run from RAM. If you remove the PC card, the router does not load the Cisco IOS Software image during the next bootup. In the 1600 series, you cannot delete the running image file or any other file unless it is in a different partition.
- Cisco 3600 series routers: The 3600 series routers traditionally use a Class B filesystem. However, with the addition of crash information file support in Cisco IOS Software version 12.2(4)T, the 3600 needs the ability to delete individual files. Consequently, the 3600 series routers with Cisco IOS Software version 12.2T and later use commands from Class B filesystems as well as commands from Class C filesystems. To activate the Class C filesystem commands on the 3600 with Cisco IOS Software Release 12.2T, issue the erase command to completely remove all files from the Flash filesystem. When the Flash is empty, issue the squeeze command against it to create a squeeze log. At this point, the 3600 Flash filesystem supports the delete and squeeze commands like a Class C filesystem.
These platforms use the Class C filesystem:
- AS5800 Dial Shelf Controller (DSC)
- Catalyst 5000 and 5500 Supervisor III Module
- Catalyst 6000 and 6500 Supervisor Engine I
- Catalyst 6000 and 6500 Supervisor Engine II
- Cisco 7000 Route Processor
- Cisco 7100 series routers
- Cisco uBR7100 series routers
- Cisco 7200 Series Network Processing Engine
- Cisco uBR7200 series routers
- Cisco 7200VXR Series Network Services Engine 1
- Cisco 7600 Series Internet Routers
- Cisco 10000 series routers (Edge Services Router (ESR))
- Cisco uBR10000 series routers
To remove the files from a compact Flash memory card previously formatted with a Class B Flash filesystem, perform one of these procedures:
- For external compact Flash memory cards, issue the erase slot0: command.
- For internal compact Flash memory cards, issue the erase flash: command.
To format an external compact Flash memory card with a Class B Flash filesystem, refer to this sample output:
Router# erase slot0:
Erasing the slot0 filesystem will remove all files! Continue? [confirm]
Current DOS File System flash card in slot0: will be formatted into Low End File System flash card! Continue? [confirm]
Erasing device...
eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee
eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee
eeeeeeeeeeeeeeeeee ...erased
Erase of slot0: complete
To remove the files from a compact Flash memory card previously formatted with a Class A or Class C Flash filesystem, perform one of these procedures:
- For external compact Flash memory cards, issue the format slot0: command.
- For internal compact Flash memory cards, issue the format flash: command.
To format an internal compact Flash memory card with a Class A or Class C Flash filesystem, refer to this sample output:
Router# format flash:
Format operation may take a while. Continue? [confirm]
Format operation will destroy all data in "flash:". Continue? [confirm]
Enter volume ID (up to 64 chars)[default flash]:
Current Low End File System flash card in flash will be formatted into DOS File System flash card! Continue? [confirm]
Format:Drive communication & 1st Sector Write OK...
Writing Monlib sectors ............................
Monlib write complete ..
Format:All system sectors written. OK...
Format:Total sectors in formatted partition:250592
Format:Total bytes in formatted partition:128303104
Format:Operation completed successfully.
Format of flash complete
For more information you may wish to refer to: Cisco PCMCIA Filesystem Compatibility Matrix and Filesystem Information
<urn:uuid:e1410dca-b846-4ceb-856c-6e21e82a00d1>
CC-MAIN-2017-09
http://www.networkworld.com/article/2343977/cisco-subnet/how-to-format-and-erase-flash-in-a-cisco-router.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00410-ip-10-171-10-108.ec2.internal.warc.gz
en
0.801404
1,078
2.515625
3
This year marks the 40th anniversary of Creeper, the world’s first computer virus. From Creeper to Stuxnet, the last four decades saw the number of malware instances boom from 1,300 in 1990, to 50,000 in 2000, to over 200 million in 2010. Besides growing in sheer quantity, viruses, which were originally academic proofs of concept, quickly turned into geek pranks, then evolved into cybercriminal tools. By 2005, the virus scene had been monetized, and virtually all viruses were developed with the sole purpose of making money via more or less complex business models. In the following story, FortiGuard Labs looks at the most significant computer viruses over the last 40 years and explains their historical significance. 1971: Creeper: catch me if you can While theories on self-replicating automata were developed by the genius mathematician Von Neumann in the early 50s, the first real computer virus was released “in the lab” in 1971 by an employee of a company working on building ARPANET, the Internet’s ancestor. Intriguing feature: Creeper looks for a machine on the network, transfers to it, displays the message “I’m the creeper, catch me if you can!” and starts over, thereby hopping from system to system. It was a pure proof of concept that ties the roots of computer viruses to those of the Internet. 1982: Elk Cloner Written by a 15-year-old as a way to booby trap his friends’ Apple II computer systems without physical access to them, Elk Cloner spread via floppy disks. Infected machines displayed a harmless poem, dedicated to the virus’ glory. Intriguing feature: Elk Cloner was the first virus ever to spread outside of the lab it was created in. Its global impact was negligible and its intent plainly geeky. 1987: Jerusalem First detected at the Hebrew University of Jerusalem, the aptly-named Jerusalem is somewhat deleterious. On every Friday the 13th, this virus deleted every single program that was run on the infected system. Intriguing feature: Jerusalem is the first example of a destructive virus to have a global impact. Of course, the sheer number of computers back then was infinitesimal, compared to today. 1992: Michelangelo: The sleeper must awaken The dormant Michelangelo virus was designed to awaken on March 6th (Michelangelo’s birthday – as in the Renaissance artist, not the Ninja Turtle) and erase critical parts of infected computers’ hard drives. Intriguing feature: The promises of destruction it carried spawned a media frenzy. In the weeks preceding March 6th, media relayed (and some may say amplified) experts’ predictions forecasting 5 million computers going down for good. Yet, on March 6th, only a few thousand data losses were reported – and public trust in AV companies’ ethics was tainted for a while. 1999: Melissa Melissa propagated via infected Microsoft Word documents and mailed itself to Outlook contacts of the contaminated user. It was virulent enough to paralyze some important mailing systems on the Internet. Its author created the bug to honor Melissa, a stripper he’d met in Florida. Whether he won her heart this way is doubtful, but one thing is sure: the malicious code earned him 20 months in jail and a $5,000 fine. Intriguing feature: Someone created a variant of Melissa that encrypted the infected files and demanded a ransom of $100 to be wired to an offshore account for decryption. The author was traced through that account.
While it remained an isolated case, it is worth noting that 6 years before the malware scene became fully monetized, someone had already started figuring out how to make money out of viruses. 2000: I LOVE YOU At the dawn of the 21st century, the I LOVE YOU worm infected tens of millions of computers. As a fairly simple worm, I LOVE YOU presented itself as an incoming email with “I love you” in its subject line and infected the machines of users who opened the attachment. It then mailed itself to all of the contacts found on the infected user’s system. Intriguing feature: While the author’s motivation clearly wasn’t about money, the damages were: When the dust settled, I LOVE YOU had cost companies around the world between $5 and $10 billion. Much of that cost can be attributed to the time spent “cleaning” infected machines. 2001: Code Red While I LOVE YOU targeted end users, Code Red infected Web servers, where it automatically spread by exploiting a vulnerability in Microsoft IIS servers. In less than one week, nearly 400,000 servers were infected, and the homepages of their hosted Websites were replaced with “Hacked By Chinese!” Intriguing feature: Code Red had a distinguishing feature designed to flood the White House Website with traffic (from the infected servers), probably making it the first case of documented “hacktivism” on a large scale. 2004: Sasser Like Code Red, Sasser spread without anyone’s help; but this time, the virus exploited a vulnerability in Microsoft Windows to propagate, which made it particularly virulent. What’s more, due to a bug in the worm’s code, infected systems turned off every couple of minutes. Intriguing feature: For the first time, systems whose function isn’t normally related to the Internet (and that mostly existed before the Internet) were severely impacted. More than one million systems were infected, AFP’s communications satellites were interrupted for hours, Delta Airlines was forced to cancel flights, the British coast guard had to go back to printed maps, and a hospital had to redirect its emergency room because its radiology department was completely paralyzed by the virus. The damage amount was estimated to be more than $18 billion. Microsoft placed a $250,000 bounty on the head of the author, who turned out to be an 18-year-old German student. When caught, the student admitted that he created the malicious code as a creative way to help his mother find a job in the computer security industry. 2005: MyTob, the turning point MyTob appeared in 2005 and was one of the first worms to combine the features of a bot (the infamous “zombies,” controlled by a remote botmaster) and a mass-mailer. Intriguing feature: MyTob marks the entry into the era of botnets and of cybercrime. Business models designed to “monetize” the many botnets appeared (some of which would grow to more than 20 million machines): installation of spyware, distribution of spam, illegal content hosting, interception of banking credentials, blackmail, etc. The revenue generated from these new botnets quickly reached several billion dollars per year, a figure that is still growing today. 2007: Storm botnet By 2007, cybercriminals already had lucrative business models in place, and they began thinking about protecting their money spinners (infected computers). Before 2007, botnets showed a serious lack of robustness: by neutralizing its single control center, a botnet could be completely neutralized, because the zombies didn’t have anyone to report to (and take commands from) anymore.
Intriguing feature: By implementing a peer-to-peer architecture, Storm became the first botnet with decentralized command and control, which made it much more robust. At the peak of the epidemic, Storm had infected between 1 and 50 million systems and accounted for 8% of all malware running in the world. 2008: Koobface Koobface (an anagram of Facebook) spreads by pretending to be the infected user on social networks, prompting friends to download an update to their Flash player in order to view a video. The update is a copy of the virus. Intriguing feature: Koobface is the first botnet to recruit its zombie computers across multiple social networks (Facebook, MySpace, hi5, Bebo, Friendster, etc). Today, it is estimated that over 500,000 Koobface zombies are online at any given time. 2008: Conficker Conficker is a particularly sophisticated virus, as it’s both a worm, much like Sasser, and an ultra-resilient botnet, which implements bleeding-edge defensive techniques. Curiously, it seems that its propagation algorithm is poorly calibrated, causing it to be discovered more frequently. Some networks were so saturated by Conficker that it caused planes to be grounded, including a number of French fighter planes. In addition, hospitals and military bases were impacted. In total, approximately 7 million systems were infected worldwide. Intriguing feature: Conficker did not infect Ukrainian IPs, nor machines configured with a Ukrainian keyboard. This suggests the authors were playing by the cybercriminal golden rule, which implicitly states, “Don’t target anything in your own country, and the arm of justice won’t be long enough to reach you.” 2010: Stuxnet, welcome to the cyber war According to most threat researchers today, only governments have the necessary resources to design and implement a virus of such complexity. To spread, Stuxnet exploited several critical vulnerabilities in Windows, which, until then, were unknown, including one guaranteeing its execution when inserting an infected USB key into the target system, even if the system’s autorun capabilities were disabled. From the infected system, Stuxnet was then able to spread into an internal network, until it reached its target: a management system for an industrial process made by Siemens. In this particular instance, Stuxnet knew the weak point of a specific controller – perhaps a cooling system – and most likely intended to destroy or neutralize the industrial system. Intriguing feature: For the first time, the target of a virus is the destruction of an industrial system (very probably a nuclear power plant in Iran). According to the trends we’re seeing, the next target for cybercriminals could be smartphones. Their widespread use and the fact that they incorporate a payment system (premium rate phone numbers) make them easy money-generating targets. Furthermore, they have location capabilities, a microphone, embedded GPS and one (or several) cameras, which potentially allow particularly invasive spying on their owners. Author: Guillaume Lovet, Fortinet.
<urn:uuid:508d53f4-3872-4776-8eed-b0ab75c18f75>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2011/03/14/40th-anniversary-of-the-computer-virus/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00410-ip-10-171-10-108.ec2.internal.warc.gz
en
0.954529
2,158
3.1875
3
Multiculturalism is a term that is everywhere these days in education. This is a way to promote educational achievement in students by using a set of strategies and materials that highlight the contributions of different cultures to society, while encouraging inclusion, democracy, inquiry and critical thinking, skill acquisition and self-reflection. Since our society is becoming increasingly pluralistic, it behooves educational institutions to prepare their students for living in and contributing to that world. This is a lot more than simply offering tacos in the cafeteria on Cinco de Mayo, or celebrating Martin Luther King Jr. Day. This is a new approach to the way students interact with the curricula and the school itself, and with each other. In the same way that the physical infrastructure of universities has been adapted to the needs of those with physical disabilities (handrails, ramps, etc.), multiculturalism aims to do something similar on a more abstracted scale. If you think about what the ultimate aims of shifting to a more inclusive focus are, you can then see ways to leverage your digital signage to support your school's multicultural program. As always when considering digital signage, you should think about what you want your audience to do, and then come up with ways to get them to take action. Eastern University's Caleb Rosado (Department of Urban Studies) identifies seven actions that come from a comprehensive multicultural educational system: recognizing, respecting, acknowledging, valuing, encouraging, enabling and empowering. Dr. James A. Banks, author of the book Educating Citizens in a Multicultural Society, defines five dimensions of multicultural education: content integration, knowledge construction, prejudice reduction, equity pedagogy, and an empowering school culture. Content integration is where most people start thinking about multiculturalism in an educational context. In the early years of shifting to a more inclusive focus, it was about putting African Americans, Latinos, Asians and members of other minority groups into the curriculum. Even in an American History class, the contributions of these groups can and should be covered. It also became important to emphasize the roles women have played through the ages in all subjects. This sort of thing can start simply – for example, putting up pictures and short bios of women who have contributed to Chemistry, or Athletics, in those departments and classrooms. Eventually, one hopes that these groups get integrated into the subjects themselves and studied in depth – not as outliers or exceptions, but as categories of people who have always been contributing to their fields. On your digital signage screens, you can certainly draw attention to specific calendar days that are significant for groups on your campus. If you have a sizable population of people from a South Korean background, you might put up messages celebrating National Foundation Day on October 3. But it might be more useful to go a step further and mark Hangul Day a week later on October 9th – a day commemorating the invention and proclamation of the hangul alphabet used in Korea. Your messages can be paired with additional ones that teach common syllabic blocks (like "hello" and "nice to see you", or "happy birthday"), and a brief explanation of how the written language is organized, and the history behind it. Or mark and educate the students about the Buddha's Birthday, which is celebrated by many countries in Asia. Coordinating some of your digital signage messages with what's happening in different classes can also reinforce and expand what students learn there.
When a physics class is delving into quantum theory, your screens can show how different people from many countries contributed to that field. And this is not only about mentioning people who are not white males – Americans usually have ancestry from somewhere else, and highlighting the contributions of Austrians, Hungarians, Irish and other groups can help your students understand that even Caucasians come from diverse backgrounds and histories. Teachers can also help students develop the tools they need to look at the overarching culture's assumptions – determine the cultural frames of reference, and understand where the assumptions that form those frames come from. Once they do that, students can start to analyze and critique the values their culture holds in high esteem. Instead of just consuming information from books and course materials, they become critical readers and thinkers, and can start participating in an ongoing dialogue about what their culture is and how it got there. This process of getting them to construct and examine knowledge also helps them remember it better. Your digital signage messages can help reinforce these ideas. Show messages with:
- Tips on using Google and other search engines effectively
- How to determine the validity of a web or news source
- Short notes on the differences between emotional and neutral language in articles
- QR tags or short URLs to fact-checking websites like Snopes, PolitiFact and TruthOrFiction
- Similar links to online dictionaries, like Merriam-Webster, and databases, like WhoWhatWhen
- Quiz questions about commonly held misconceptions, urban legends, rumors and hoaxes
- Signs of bias in any writing, including textbooks and advertising
Students today come from a variety of racial backgrounds, including many of mixed races. Different groups learn things differently, and when teachers take these different styles into account, that's pedagogical equity. For example, studies show that both African Americans and women of all races learn science better in groups than on their own, so a teacher might adapt how things are structured in the classroom to accommodate them better. This is not about learning styles – this is expanding the teachers' arsenal of techniques and teaching strategies. Cooperative learning, role-playing, guided discovery – these are just a few of the more modern techniques that can foster learning in a widely diverse student population. And everyone benefits – not just the groups that prefer one pedagogical technique over another. Teachers who are open-minded and flexible, who have lots of different ways to reach and engage their students, will have more success. This is as true in Social Studies classes as it is in Algebra. Your teachers are busy people, and showing them digital signage messages in the staff room, or other places they frequent, to motivate them about different ways to conduct things in their classes might give them some inspiration and support. Remind them to give their students time to think about things, and time to write good notes. Strategies and techniques like Think–Pair–Share, Whip Around, and asking open questions can create an atmosphere in which more people participate and are more engaged. Posting resources for tips on creating a more culturally equitable classroom, with QR tags or short URLs, can give teachers some serious food for thought; many collections of such ideas are freely available online.
Chances are, your teachers are already doing this sort of thing on their own, but allowing them to contribute their own findings and experiences, to be shared on your screens with others, can help knit your entire teaching staff together and focus their efforts as a team. Your staff is your best resource. Exposure to, and a deepening understanding of, another culture invariably leads to less prejudice towards that culture. The more students know about each other's backgrounds, the more open they are to different ways of looking at and doing things. When they can "walk in one another's shoes", they will be more inclined to see their fellow students as individuals – certainly with some different outlooks and experiences, but also with many of the same values and priorities as their own.
<urn:uuid:18765f5f-9fc1-4d94-b7ff-6b69582a6f38>
CC-MAIN-2017-09
https://techdecisions.co/video/multiculturalism-digital-signage-modern-university/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00354-ip-10-171-10-108.ec2.internal.warc.gz
en
0.949718
1,514
3.59375
4
It's possible to transmit life-threatening signals to implanted medical devices with no prior knowledge of how the devices work, researchers in Belgium and the U.K. have demonstrated. By intercepting and reverse-engineering the signals exchanged between a heart pacemaker-defibrillator and its programmer, the researchers found they could steal patient information, flatten the device's battery, or send malicious messages to the pacemaker. The attacks they developed can be performed from up to five meters away using standard equipment -- but more sophisticated antennas could increase this distance by tens or hundreds of times, they said. "The consequences of these attacks can be fatal for patients as these messages can contain commands to deliver a shock or to disable a therapy," the researchers wrote in a new paper examining the security of implantable cardioverter defibrillators (ICDs), which monitor heart rhythm and can deliver either low-power electrical signals to the heart, like a pacemaker, or stronger ones, like a defibrillator, to shock the heart back to a normal rhythm. They will present their findings at the Annual Computer Security Applications Conference (ACSAC) in Los Angeles next week. At least 10 different types of pacemaker are vulnerable, according to the team, who work at the University of Leuven and University Hospital Gasthuisberg Leuven in Belgium, and the University of Birmingham in England. Their findings add to the evidence of severe security failings in programmable and connected medical devices such as ICDs. They were able to reverse-engineer the protocol used by one of the pacemakers without access to any documentation, and this despite discovering that the manufacturer had made rudimentary attempts to obfuscate the data transmitted. Previous studies of such devices had found all communications were made in the clear. "Reverse-engineering was possible by only using a black-box approach. Our results demonstrated that security by obscurity is a dangerous design approach that often conceals negligent designs," they wrote, urging the medical devices industry to ditch weak proprietary systems for protecting communications in favor of more open and well-scrutinized security systems. Among the attacks they demonstrated in their lab were breaches of privacy, in which they extracted medical records bearing the patient's name from the device. In developing this attack, they discovered that data transmissions were obfuscated using a simple linear feedback shift register to XOR the data. At least 10 models of ICD use the same technique, they found. They also showed how repeatedly sending a message to the ICD can prevent it from entering sleep mode. By maintaining the device in standby mode, they could prematurely drain its battery and lengthen the time during which it would accept messages that could lead to a more dangerous attack. One saving grace for the ICDs tested is that, before they will accept any radio commands, they need to be activated by a magnetic programming head held within a few centimeters of the patient's skin. For up to two hours after a communications session is opened in that way, though, the ICDs remained receptive to instructions not just from legitimate programming or diagnostic devices but also the researchers' software-defined radio, making it possible to initiate an attack on a patient after they left a doctor's office. 
Until devices can be made that secure their communications better, the only short-term defense against such hijacking attacks is to carry a signal jammer, the researchers said. A longer-term approach would be to modify systems so that programmers can send a signal to ICDs putting them immediately into sleep mode at the end of a programming session, they said. Previous reports of hackable medical devices have been dismissed by their manufacturers. The researchers in Leuven and Birmingham said they had notified the manufacturer of the device they tested, and discussed their findings before publication.
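The obfuscation weakness the researchers describe, XORing traffic with the output of a linear feedback shift register, is easy to illustrate. The sketch below uses arbitrary taps, seed, and payload rather than the ICD's actual parameters; because the keystream is linear and fixed, the same routine both scrambles and unscrambles the data, which is why this construction offers no real confidentiality.

# Generic illustration of LFSR-based XOR obfuscation (arbitrary taps and seed,
# not the actual ICD parameters). A linear keystream offers no real secrecy:
# the same function both "encrypts" and "decrypts".
def lfsr_stream(seed, taps=(16, 14, 13, 11), width=16):
    state = seed
    while True:
        # Feedback bit is the XOR of the tapped positions (a Fibonacci LFSR).
        bit = 0
        for t in taps:
            bit ^= (state >> (width - t)) & 1
        state = ((state << 1) | bit) & ((1 << width) - 1)
        yield bit

def xor_obfuscate(data, seed=0xACE1):
    stream = lfsr_stream(seed)
    out = bytearray()
    for byte in data:
        key = 0
        for _ in range(8):
            key = (key << 1) | next(stream)
        out.append(byte ^ key)
    return bytes(out)

record = b"PATIENT: J. DOE, ICD MODEL X"      # hypothetical payload
obfuscated = xor_obfuscate(record)
print(obfuscated.hex())
print(xor_obfuscate(obfuscated))               # the same call recovers the plaintext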
<urn:uuid:dcdc523e-94ff-42a2-9089-d60a12be07c8>
CC-MAIN-2017-09
http://www.itnews.com/article/3146548/security/implantable-medical-devices-can-be-hacked-to-harm-patients.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00282-ip-10-171-10-108.ec2.internal.warc.gz
en
0.959293
780
3.078125
3
The depletion of Internet addresses would seem to spell relief for aged routers that are struggling to deal with the Internet’s growth, but the complicated interplay between those trends might cause even more problems. Last Wednesday, some older routers and switches stumbled when the Internet’s table of routes surpassed 512,000 entries, the maximum they could hold in a special form of memory called TCAM (Ternary Content Addressable Memory). The event drew widespread attention, though it was actually the third time in this young century that the Internet had broken through such a threshold. The number of routes exceeded 128,000 around 2003 and 256,000 in 2008, each time causing problems for some outmoded gear. Devices that don’t have room for all the routes may reboot themselves or fail to route some traffic, but the affected gear was fairly old. Cisco Systems says all the routing products it’s sold for at least the past two years have had enough room in TCAM for more than 512,000 routes. Routers designed for the cores of carrier networks surpassed that long before. Juniper Networks, Cisco’s longtime router rival, said it updated its gear for this problem more than 10 years ago. Alcatel-Lucent said its routers use a different memory architecture from the devices that got hit with the problem. Because almost all the addresses defined by IPv4 (Internet Protocol version 4) have already been handed out to Internet service providers or end users, the number of routes allocated under that system may not grow much more, according to Cisco engineers. That would be one silver lining on a cloud that’s hung over the network of networks for years. “IPv4 cannot grow forever. We already reached a certain limit, so we personally wouldn’t expect it to grow much larger,” said Sasa Rasovic, incident manager at Cisco’s Product Security Incident Response Team. However, another danger remains, and it comes from the address depletion itself. With fewer IPv4 addresses at hand, users or service providers may want to split them up into smaller routes. By common agreement among Internet engineers, the smallest accepted route on the Internet today points to a block of 256 consecutive IP addresses. (Using private addresses, companies and service providers can hook up many more devices behind those globally unique ones.) Now, some network operators want to break up those blocks so they can satisfy more customers, said Jim Cowie, chief scientist at Dyn, a traffic management company that recently acquired Internet analysis firm Renesys. Then, instead of one Internet route to reach the 256 addresses, there would be two. “People are trying to do more with less,” Cowie said. Along the way, some may also be putting profit ahead of the Internet’s ease of use. IP addresses officially are handed out free by nonprofit regional authorities, but their supplies are mostly gone. The mad dash for IPv4 addresses has led to some unseemly practices by those who already got their addresses. “As the IPv4 address space is now depleted, a few smaller routes ... are being sold to other entities. Apart from a number of other more serious issues this is causing to the Internet community at large, this also has potential to cause a growth of the routing table size,” Cisco’s Rasovic said in an email message. 
“It’s hard to predict just how fast and how big of an impact this will have in the future.” If some service providers start to split up the smallest blocks into even smaller ones, that could even affect whether all users can reach everyone else on the Internet, Dyn’s Cowie said. Other operators might filter out the smaller routes, keeping their own routing tables a more reasonable size but not offering access to some addresses, he said. And though it’s impossible to say how many new routes might result, routers would continue to face a growing number of them. Like new party guests who want a piece of the same pie, Internet address holders could cut the IPv4 address space into ever smaller pieces, and it would fall onto the routers to keep track of all the slivers. Dave Schaeffer, CEO of ISP Cogent Communications, thinks the routing tables will keep growing just from new addresses coming online. “There’s still a big, dark pool out there of IPv4 addresses in the hands of service providers that can be routed, that are not routed (yet),” Schaeffer said. Migrating to IPv6 would eliminate the address shortage, because the newer protocol has an almost unlimited supply. Few users have adopted IPv6 even though most systems and networking gear have long been equipped for it. The IPv6 routing table still only has about 20,000 routes in it, Cowie said. That’s what makes it feasible for Cisco to suggest, among other things, that network operators reassign some of the memory in their routers that was automatically set aside for IPv6 routes and give it to IPv4 routes. But the short supply of older addresses and the expected growth of the Internet of Things eventually will bring more IPv6 addresses into service, Cowie said. That will raise issues of its own. “Now that IPv6 has been introduced, more and more devices are going to be connected,” Rasovic said. “The tables are different [in IPv6], and they’re managed differently in memory.” It’s hard to know how many IPv6 routes there could eventually be, Cowie said. Those routes will all take up more memory, because an IPv6 address is much longer than one from the older version. Network engineering groups are already trying to figure out how to manage IPv6 routes, according to Cisco. The IPv4 route-table problem will be with us for a long time, according to Cogent’s Schaeffer. Some of Cogent’s customers were affected by the surge in routes last week, though on their own equipment rather than Cogent’s, Schaeffer said. Conscientious ISPs may aggregate their own routes to help bring the tables back from the limit, but the reprieve will only be temporary, he said. “We may, for a short period, fall below 512, but inexorably, the trend is larger tables.”
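The mechanics of de-aggregation can be sketched with Python's standard ipaddress module. This is an illustration, not how vendors size router memory; the bytes-per-route figure and the split fraction are assumed round numbers.

# Illustrative sketch using the standard ipaddress module: splitting the
# smallest globally routable block (a /24, 256 addresses) into two /25s turns
# one routing-table entry into two. The 256 bytes/route figure and the 10%
# split fraction are assumed values, not vendor specifications.
import ipaddress

block = ipaddress.ip_network("203.0.113.0/24")      # documentation prefix
halves = list(block.subnets(prefixlen_diff=1))
print(block, "->", ", ".join(str(h) for h in halves))

table_size = 512_000                                 # routes around the 2014 event
split_fraction = 0.10                                # assume 10% of routes get split once
extra_routes = int(table_size * split_fraction)
assumed_bytes_per_route = 256

print("extra routes from splitting: %d" % extra_routes)
print("extra table memory: ~%.1f MB" %
      (extra_routes * assumed_bytes_per_route / 1e6))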
<urn:uuid:2ef14a85-30f8-4720-a575-93941a69fa6d>
CC-MAIN-2017-09
http://www.cio.com/article/2476601/how-can-the-internet-have-too-many-routes-and-not-enough-addresses.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00458-ip-10-171-10-108.ec2.internal.warc.gz
en
0.955994
1,322
2.53125
3
As U.S. organizations become more cognizant of the need to protect the data they collect, they learn how poor the guidelines, laws, and compliance methods are. The U.S. laws are more sectoral. If you really want to adopt a holistic data privacy standard that protects your users — and your organization — look more to the European Union for useful guidance. Enterprise organizations today are very aware of the importance of IT security. Plenty of data breaches, security vulnerabilities, hacking attempts to compromise systems, and software development defects have driven home that message. As a result, most IT organizations have put practices into place to protect the organization from "bad guys" who are trying to break in. However, only recently have most large corporations (and small ones, too) paid attention to data privacy: the organization's responsibility to control access to personally identifiable information (PII) that is collected during the process of doing business. The Sectoral (and Sometimes Conflicting) Approach in the U.S. Healthcare organizations are perhaps most conscious of the data privacy issue as part of their need to protect users' information for confidentiality, integrity, and availability. Even without the influence of corporate or personal ethics, HIPAA legislation has driven healthcare IT to comply with now-well-established practices. Financial services organizations have similar motivations. However, other industries (particularly in the U.S.) are just catching up with designing systems for data privacy. Many of their IT personnel are getting confused, and for good reason. Overall, this is a complicated but important mess. The existing data privacy rules are inadequate, overwhelming, open to interpretation, and contradictory. For instance, the U.S.-supported Safe Harbor framework, aimed at "prohibit[ing] the transfer of personal data to non-European Union countries that do not meet the European Union (EU) 'adequacy' standard for privacy protection," directs US companies to deal with EU data separately. However, the Safe Harbor program conflicts with the U.S. Patriot Act. Moreover, the distinction between security and privacy sometimes is unclear. Security is about access, which — when it fails — permits a loss of information. Privacy is about data exposure at the boundaries: how much you share, knowingly and unknowingly, and who can get their hands on that data. Understanding Data Privacy To understand the difference, it might help to think of a family that owns several cars. Instead of the parents and teenagers each having a key to each car in the driveway, the family might employ a key rack in the entry hallway. Anyone who can get inside the home perimeter — the front door — can access the car keys, which are meant for the family members to use. Security prevents people from getting into the house, but the parents need to think about what information is stored and exposed once someone walks inside. So organizations need to consider questions including:
- What information is the company acquiring, and how is it controlling who can access it?
- If the intended person gets access to the data, what can he do with it?
- What business processes need to be put into place to ensure that the right people can touch personally-identifiable information, and that the wrong people cannot?
- What industry laws affect these decisions? Which should?
…and that is just to start with. The EU Data Protection Guidelines Instead of looking at the U.S.
rules and policies to guide you in designing data privacy practices, I urge you to rely on the policies developed by the European Union (EU). Frankly, the U.S. has pretty low standards of privacy — and not just at a consumer level, such as Facebook. Microsoft's new Outlook apps were blocked by the EU over privacy concerns, as just one recent business example. One key advantage of the EU guidelines is their notion of separation between data processor and data controller. That is, they distinguish between:
- Who has authority to process information, versus
- Who may decide who can process information.
That's an important distinction that affects accountability, security, and more. For instance, consider the role of an email admin: In some cases the admin might have to look at a user's email, and in other cases she absolutely should not do so. Who decides whether and when that admin should look at the email logs? Surely it should not be the admin herself. With the EU notion of data privacy, there are separate rules and task forces. Control and process are very different. The person who designs cannot create the rules. The Druva Way At Druva, we treat the European directive as a gold standard, and we build our software to comply with those goals. (So does Microsoft, incidentally: see Microsoft Adopts Cloud Privacy Standard.) We already do several things that way and are committed to doing more. This isn't the easiest way to conduct business, but it's the right way. For example, the EU directives influenced the way we build encryption, based on shared responsibility. Most companies either build encryption with the key in the cloud (so more people have access) or at the end user (so the key is completely exposed). Either way, admins have more access than they should. We follow the EU provider/admin model in several ways. We build detailed audit trails for just about everything, such as when a file is backed up. The scope for accessing those logs is limited by two things: detailed admin rights and departmental policies. The high-level admin doesn't have a lot of access, and the low-level admin doesn't have a lot of control. We also follow security policies that primarily address threat protection, though they have an effect on privacy. Among them are geo-fencing and IP-based access control. Such features ensure that, say, German data cannot be accessed in Taiwan. Data privacy is important because breaching data is expensive. Where every action triggers a chain reaction of tons of data, there's a corporate responsibility to make sure it's protected. At Druva, we're doing our best to make that job easier for our customers by building software that complies with the best-quality data privacy initiatives. Concerned about data privacy in your organization? Read our white paper, Preparing for The New World of Data Privacy: What Global Enterprises Need to Know.
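Geo-fencing and IP-based access control of the kind described above can be sketched in a few lines. This is a generic illustration rather than Druva's implementation, and the address ranges are invented for the example.

# Generic sketch of IP-based access control / geo-fencing (not Druva's actual
# implementation; the CIDR ranges below are made up for illustration).
import ipaddress

# Hypothetical policy: this tenant's data may only be touched from these ranges.
ALLOWED_RANGES = [
    ipaddress.ip_network("10.20.0.0/16"),      # corporate VPN
    ipaddress.ip_network("192.0.2.0/24"),      # EU office egress (example range)
]

def access_permitted(client_ip):
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_RANGES)

for ip in ("10.20.34.7", "198.51.100.23"):
    print(ip, "->", "allow" if access_permitted(ip) else "deny and audit-log")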
<urn:uuid:b5b4a22c-8f1c-46a0-b89e-df1e49e9acd9>
CC-MAIN-2017-09
https://www.druva.com/blog/user-data-privacy-think-globally-act-locally/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00050-ip-10-171-10-108.ec2.internal.warc.gz
en
0.948609
1,325
2.625
3
Amazon Web Services (AWS) on Thursday announced it will build and operate a 100-megawatt (MW) wind farm in Ohio that will power its current and future cloud service data centers. The project, called the Amazon Wind Farm US Central, is expected to generate about 320,000 megawatt hours (MWh) of wind power per year beginning in May 2017; that's enough electricity to power more than 29,000 U.S. homes a year. While AWS's latest wind farm is dwarfed by previously announced projects, it is still large compared to those typically built by non-utility businesses. For example, one of the largest wind farms to be completed this year was the 300MW Jumbo Road wind project located about 50 miles southwest of Amarillo, Texas. The project was commissioned by Berkshire Hathaway Energy subsidiary BHE Renewables, an electricity utility that sells power to Austin Energy. That wind farm cost more than $1 billion to build. Wind-powered energy, the second-largest category of renewable electricity generation, is expected to grow on average by 2.4% every year to become the largest power contributor by 2038. Other than hydroelectric, all other renewable forms of energy are expected to grow faster -- but they are starting from a smaller base. For example, photovoltaics (solar) power generation is expected to grow by 6.8% per year, geothermal by 5.5%, and biomass by 3.1%, according to the U.S. Energy Information Administration (EIA). Amazon has launched a handful of wind farm projects and other renewable energy initiatives over the past two years as it moves toward a goal of 100% renewable energy use. In April 2015, AWS announced that it was getting about 25% of its power from renewable energy sources; it plans to increase that level to 40% by the end of 2016. In January 2015, Amazon announced a renewable project with the Amazon Wind Farm (Fowler Ridge) in Benton County, Indiana, which is expected to generate 500,000MWh of wind power annually. In June 2015, the company announced Amazon Solar Farm US East in Virginia, which is expected to generate 170,000 MWh of solar power annually. And in July 2015, AWS announced Amazon Wind Farm US East in North Carolina, which is expected to generate more than 670,000 MWh of energy annually. Also this year, Amazon unveiled a pilot of Tesla's energy storage batteries that are designed to help bridge the gap between intermittent production, from sources like wind, and a data center's constant power demand. Along with the new Amazon Wind Farm US Central, Amazon said its renewable projects will deliver more than 1.6 million MWh of renewable energy into electric grids across the central and eastern U.S., or roughly the equivalent amount of energy required to power 150,000 homes. This story, "Amazon to build massive wind farm to power web services" was originally published by Computerworld.
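The homes-powered figure is straightforward arithmetic; the roughly 11 MWh of annual electricity use per average U.S. household is an assumed round value rather than a figure from the announcement.

# Back-of-the-envelope check of the homes-powered claim; ~11 MWh average annual
# U.S. household consumption is an assumed round figure.
annual_output_mwh = 320_000
avg_home_mwh_per_year = 11.0
print("homes powered: ~%d" % (annual_output_mwh / avg_home_mwh_per_year))  # ~29,000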
<urn:uuid:a6afa10f-215b-46d2-8a8d-2af69ff6a1f8>
CC-MAIN-2017-09
http://www.itnews.com/article/3007188/sustainable-it/amazon-to-build-massive-wind-farm-to-power-web-services.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00402-ip-10-171-10-108.ec2.internal.warc.gz
en
0.963174
614
2.59375
3
Mobile computing's rise from niche market to the mainstream is among the most significant technological trends in our lifetimes. And to a large extent, it's been driven by the bounty of Moore’s Law—the rule that transistor density doubles every 24 months. Initially, most mobile devices relied on highly specialized hardware to meet stringent power and size budgets. But with so many transistors available, devices inevitably grew general-purpose capabilities. Most likely, that wasn't even the real motivation. The initial desire was probably to reduce costs by creating a more flexible software ecosystem with better re-use and faster time to market. As such, the first smartphones were very much a novelty, and it took many years before the world realized the potential of such devices. Apple played a major role by creating innovative smartphones that consumers craved and quickly adopted. To some extent, this is where we still stand today. Smartphones are still (relatively) expensive and primarily interesting to the developed world. But over the next 10 years, this too will change. As Moore’s Law rolls on, the cost of a low-end smartphone will decline. At some point, the incremental cost will be quite minimal and many feature phones of today will be supplanted by smartphones. A $650 unsubsidized phone is well beyond the reach of most of the world compared to a $20 feature phone, but a $30 to $40 smartphone would naturally be very popular. In this grand progression, 2013 will certainly be a significant milestone for mobile devices, smartphones and beyond. It's likely to be the first year in which tablets out-ship notebooks in the US. And in the coming years, this will lead to a confluence of high-end tablets and ultra-mobile notebooks as the world figures out how these devices co-exist, blend, hybridize, and/or merge. Against this backdrop, in this two-part series, we'll explore the major trends and evolution for mobile SoCs. More importantly, we'll look to where the major vendors are likely going in the next several years. Tablet and phone divergence While phones and tablets are mobile devices that often share a great deal of software, it's becoming increasingly clear the two are very different products. These two markets have started to diverge and will continue doing so over time. From a technical perspective, smartphones are far more compact and power constrained. Smartphone SoCs are limited to around 1W, both by batteries and by thermal dissipation. The raison d’etre of a smartphone is connectivity, so a cellular modem is an absolute necessity. For the cost sensitive-models that make up the vast majority of the market, the modem is integrated into the SoC itself. High-end designs favor discrete modems with a greater power budget instead. The main smartphone OSes today are iOS and Android, though Windows is beginning to make an appearance (perhaps with Linux or BlackBerry on the horizon). Just as importantly, phone vendors like HTC must pass government certification and win the approval of carriers. There is very much a walled-garden aspect, where carriers control which devices can be attached to their networks, and in some cases devices can only be sold through a certain carrier. The business model places consumers quite far removed from the actual hardware. In contrast, tablets are far more akin to the PC both technically and economically. The power budget for tablet SoCs is much greater, up to 4W for a passively cooled device and as high as 7-8W for systems with fans. 
This alone means there is a much wider range of tablet designs than smartphones. Moreover, the default connectivity for tablets is Wi-Fi rather than a cellular modem. The vast majority of tablets do not have cellular modems, and even fewer customers actually purchase a wireless data plan. As a result, cellular modems are almost always optional discrete components of the platform. The software ecosystem is relatively similar, with Microsoft, Apple, and Google OSes available. Because tablets eschew cellular modems, the time to market is faster, and they are much more commonly sold directly to consumers rather than through carriers. In terms of usage models, tablets are much more PC-like, with reasonable-sized screens that make games and media more attractive. Looking forward, these distinctions will likely become more pronounced. Many tablets today use high-end smartphone SoCs, but the difference in power targets and expected performance is quite large. As the markets grow in volume, SoCs will inevitably bifurcate to focus on one market or the other. Even today, Apple is doing so, with the A6 for phones and the larger A6X for tablets. Other vendors may need to wait a few years to have the requisite volume, but eventually the two markets will be clearly separate. Horizontal business model evolution Another aspect of the mobile device market that is currently in flux and likely to change in the coming years is the business model for the chip and system vendors. Currently, Apple is the only company truly pursuing a vertically integrated model, where all phones and tablets are based on Apple’s own SoC designs and iOS. The tight integration between hardware and software has been a huge boon for Apple, and it has yielded superb products. Samsung is one of the few others companies that takes a vertically integrated approach to phones and tablets, although in truth its strategy seems to be ambivalent on that point. Unlike Apple, Samsung’s SoCs are readily available to third parties, and some Samsung devices, such as the S7562 Galaxy S Duos, use SoCs from competitors. More recently though, there has been a trend of Samsung devices using Samsung SoCs, at least for the premier products. For the moment, Samsung’s approach is best characterized as a hybrid, particularly as the company lacks a bespoke OS. The rest of the major SoC vendors (e.g., Intel, Qualcomm, Nvidia, TI, Mediatek, etc.) have stayed pretty far away from actual mobile devices. These companies tend to focus on horizontal business models that avoid competing with customers or suppliers. In the long term, mobile devices are likely to evolve similarly to the PC and favor a horizontal business model. The real advantage is one of flexibility; as costs drop and the market expands, it will be increasingly necessary for vendors like HTC to offer a wide range of phones based on radically different SoCs. While a vertically integrated company like Apple can focus and maintain leadership in a specific (and highly lucrative) niche, it would be very difficult to expand in many growing areas of the market. The differences between an iPhone 6 and a $20 feature phone are tremendous and would be very difficult for a single company to bridge. However, SoC vendors will attempt to reap the benefits of vertical integration by providing complete reference platforms to OEMs. Conceptually, this is a form of "optional" system integration, where the phone vendor or carrier can get the entire platform from the SoC supplier. 
This has the principal advantages of reducing time to market while also providing a baseline quality and experience for consumers. Currently, this approach has mostly been tested in emerging markets, but it's likely to become more common over time. There is a crucial distinction between reference platforms and vertical integration. Namely, OEMs can always choose to customize a platform to differentiate, and the SoC vendor avoids dealing with consumers directly. Typically, most of the customization is in terms of software on top of a base operating system.
<urn:uuid:4f127614-93a9-4ccd-b8f1-c107bf6af767>
CC-MAIN-2017-09
https://arstechnica.com/gadgets/2013/02/the-future-of-mobile-cpus-part-1-todays-fork-in-the-road/?comments=1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00402-ip-10-171-10-108.ec2.internal.warc.gz
en
0.955837
1,529
2.84375
3
Interest in model-based testing has increased quite significantly over the years, as people have started to reach the limits of traditional approaches and, at the same time, started to see and understand the benefits that applying MBT can have for the quality assurance function. In this blog post, I'm outlining what is really important when you are selecting and evaluating an MBT tool.

The first and maybe even quite surprising observation that people often make when they start to look at MBT is the variety of completely different approaches and the vastness of both academic and commercial tools. MBT actually means numerous different things and approaches. Loosely defined, model-based testing is anything that is based on computer-readable models that describe some aspects of the system to be tested in such a format and with such accuracy that they enable either completely or semi-automatic generation of test cases. The three main approaches to model-based testing are the graphical test modeling approach, environment model driven test generation, and system model driven test generation. There are also others, but these three are the main approaches.

All the model-based testing approaches above can produce the same end result – that is, they can all be used to generate executable test cases and test documentation. However, this is not the main point here. The key is what the users need to do in order to get those tests out. Graphical test modeling is the simplest of the approaches listed above and is actually nothing more than modeling the test cases themselves in a graphical notation. Environment, use case, or usage models describe the expected environment of the SUT. That is, these models describe how the system under test is used and how the environment around the system operates. These models represent the tester – not the system that we are testing. The models include testing strategies, that is, the input selection, and hand-crafted output validators, or test oracles. The third main approach to model-based testing is called system model driven test generation. Here the idea is that the model represents the actual, desired behavior of the system itself. Conformiq Designer™ is an example of a system model driven approach. Each of the approaches has its pros and cons; quite an extensive comparison of the approaches is available online at https://www.conformiq.com/wp-content/uploads/2015/02/What-is-Important-When-Selecting-MBT-Tool2.pdf.

With graphical and environment MBT approaches, the process of test design – that is, the process of deciding how to test, what to test and what not – is a manual activity. These approaches to MBT rely on manual test design, so they speed up parts of the test design process but still leave a lot of work to the manual process of thinking through all the necessary test steps and combinations. This introduces a lot of risks, such as missed test coverage, and it takes a lot of time, especially when the requirements change. As the intent of this discussion is to expose the reader to the most advanced of these MBT technologies, we will limit this discussion to system model driven approaches, since they are the only approaches that actually automate the test design. Because there are different automated test design tools, each with sometimes subtle differences from the others, it is useful to understand what is important to know when selecting an MBT tool for automating the test design.
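To make the system model driven idea a little more concrete, here is a deliberately tiny sketch in Python. It is not Conformiq's algorithm, and the login model, state names and inputs are invented for illustration; the point is only that the model describes the desired behavior of the system, and the tests fall out of it automatically by searching the model until every transition has been covered.

from collections import deque

# Hypothetical model of a login feature: (state, input) -> next state.
MODEL = {
    ("logged_out", "enter_valid_credentials"): "logged_in",
    ("logged_out", "enter_invalid_credentials"): "error_shown",
    ("error_shown", "enter_valid_credentials"): "logged_in",
    ("logged_in", "log_out"): "logged_out",
}

def generate_tests(start="logged_out", max_depth=5):
    """Breadth-first search over the model; keep any path that exercises a
    transition not yet covered by an earlier, shorter path."""
    tests, covered = [], set()
    queue = deque([(start, [])])          # (current state, inputs so far)
    while queue:
        state, path = queue.popleft()
        if len(path) >= max_depth:
            continue
        for (src, action), dst in MODEL.items():
            if src != state:
                continue
            new_path = path + [action]
            if (src, action) not in covered:
                covered.add((src, action))
                tests.append(new_path)     # this path reaches a new transition
            queue.append((dst, new_path))
    return tests

if __name__ == "__main__":
    for i, steps in enumerate(generate_tests(), 1):
        print(f"Test {i}: " + " -> ".join(steps))

Even in this toy form, the generated sequences are test cases derived from the system's intended behavior rather than hand-written scripts, which is the essence of the approach discussed in the rest of this post.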
Modeling / tool ease of use

Since all MBT tools and methods start with a model, obviously the modeling notation and environment need to be such that the end user can understand and feel comfortable working with them. Great test generation features of an MBT tool are close to useless if you cannot understand how to use them. There are many drawing and modeling tools, so it is important to select a tool that "fits your need". A tool overloaded with non-modeling functions is more complicated to learn and use than necessary, and drawing tools that don't restrict the models to the constructs used for test generation allow users to introduce non-functional notations that will need to be changed later, during the import into the test design tool. The optimal solution is a modeling tool tailored to the user's needs – in this case, system design for test generation. Also find out whether a third-party modeling tool is required or whether one comes with the test design engine. Even modest tool costs add up for larger volumes but, more importantly, you will need to continually match differing releases for compatibility and get in the middle of two vendors to get an immediate fix when a defect is found. Whether the third-party tool is worth the risk and effort is a question to ask.

The end user needs to be able to rigorously express the system behavior; an MBT tool that does not allow you to do that is quite useless. For example, does the tool support multithreaded / multicomponent modeling? Hierarchical decomposition? Concepts for model reuse? Advanced arithmetic? If these system operations can't be expressed in the models, the test design tool can't cover them. Ask which constructs your SUT needs modeled to express its behavior. There needs to be a careful balance between expressivity and ease of use, since as you simplify the modeling notation to make it easier to understand and work with, you inevitably sacrifice expressivity, and the other way around. This is the reason why Conformiq offers two alternative notations. The more traditional modeling notation, which is based on Java and UML, is highly expressive and can be used to meaningfully describe extremely complex systems, but it requires programming skills, so users need to be relatively technical. Higher-level / system-level models that do not require such expressivity can be modeled using the simpler, non-programming Conformiq Creator™ notation, which can be used by testers and Subject Matter Experts (SMEs) with a less technical background. Both modeling tools are designed to create system representations for testing without carrying the overhead of additional non-modeling complexity.

Generation of great quality tests

The quality of the test cases is by far the most important thing in quality assurance. If the quality of your tests is low, it really does not matter how fancy your testing processes are or how cool the tools you are using for test execution are. When you are about to automate the test design, you really need to assess the quality of the test cases that the MBT tool produces. One naturally needs to remember here that the quality of the output that the MBT tool generates (tests) cannot be higher than the quality of the input (model). Therefore you must also pay close attention to the quality and completeness of the model. Look for a tool that offers a great variety of different test generation heuristics, and relate these heuristics to your particular needs.
If your system is manipulating string values, the tool should have extensive support for expressing string patterns (such as regular expressions). If it is conducting a lot of numeric calculations, make sure that the tool has support for boundary value analysis and other equivalence class partitioning methods (a small sketch of these two heuristics appears a little further below). The more test design heuristics the tool supports, the more breadth of test coverage and flexibility it delivers for testing different types of designs. Ask what you need, now and in the future, for full coverage.

Test generation is a computationally very intensive task. The more complex the system, the model and the test heuristics used, the more computing resources are needed. Test engineers may devise models that are beyond the capabilities of the MBT tool – the tool simply chokes when given such a model. Unfortunately, scalability is something that is quite often ignored while running initial proof-of-concept projects, which means that organizations can easily invest in a tool that does not scale to real-world industrial problems. Models must then be simplified, meaning reduced coverage and/or more manual effort is needed. This can be a very expensive and poor investment. Conformiq has invested a significant amount of effort over the years to bring the performance of the test generation core engine to a level where we can manage real industrial problems. I actually wrote a short blog post about this some time ago (http://www.conformiq.com/2013/12/performance-of-test-generation/) where I shed some light on what happens under the hood, detailing how the tool can, for example, be deployed on a cluster or a cloud for fast test generation. It may seem easy to automatically split the model across multiple computation cores, but the real trick is to do it deterministically. This means that regardless of which cores are used and in which order, the generated test cases are always the same. I'm confident in saying that Conformiq's test generation core is an order of magnitude stronger than any other tool on the market today.

Integrations with other tools and integration APIs

The MBT tool is not best used as a standalone capability. Instead it should be tightly integrated with the other tools used to document, manage and execute test cases. It is important to note that most MBT tools do not actually execute the generated test cases but instead export them into various different test execution environments. This is primarily because many MBT users have already invested in test execution infrastructure prior to automating the test design. Therefore, instead of replacing your existing test execution system when deploying MBT, you should look for a tool that integrates with your existing infrastructure. The same approach applies if manual test execution is employed; you look for a tool that integrates with your ways of working. Test execution infrastructure is again just one piece of the overall solution, and the MBT tool should integrate with those other tools as well. These are tools like requirement management and test management systems, version control systems, and so on. If there is no out-of-the-box integration with your particular tool, the MBT tool should offer integration APIs that allow you to easily integrate with it. Investing in a tool that does not have proper integration APIs can be risky, as that can limit your freedom to upgrade your testing infrastructure in the future and support multiple test execution platforms.
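As promised above, here is a minimal sketch of boundary value analysis and equivalence class partitioning, the two numeric heuristics mentioned in the test generation section. The age range and the accepts_age function are invented for illustration and are not taken from any particular tool.

def boundary_values(low, high):
    """Classic boundary picks for an inclusive [low, high] range."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def equivalence_classes(low, high):
    """One representative value per class: below, inside and above the range."""
    return {"below": low - 10, "inside": (low + high) // 2, "above": high + 10}

# Hypothetical requirement: ages 18..65 (inclusive) are accepted.
def accepts_age(age):
    return 18 <= age <= 65

if __name__ == "__main__":
    for value in boundary_values(18, 65):
        print(f"age={value:3d} -> accepted={accepts_age(value)}")
    for name, value in equivalence_classes(18, 65).items():
        print(f"class {name:6s} (age={value}) -> accepted={accepts_age(value)}")

A good test generator applies heuristics like these automatically, wherever the model constrains a value, rather than relying on the tester to remember every edge.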
Related to the quality of the tests, the tests need to be understandable as well, so they should be very easy to review. You should not take it at face value that tools just magically create good quality tests and skip reviewing them – just the opposite. The tool should allow you to understand why every test case is needed. Everything starts from "simple things" like having understandable and meaningful names and high-level descriptions for test cases. For example, naming test cases "Test #1", "Test #2", and so on does not really help the user understand what the tests are all about, and it also forces manual renaming of test cases. In order to be sure that you have not missed anything in the test design, the tool must be able to convince the user that the generated test suite indeed meets all the requirements. That is, you must be able to assess the coverage of the test suite. You must be able to relate the tests back to the functional requirements and also to the model. You must have tools for detailed analysis of each and every test case. You must be able to walk through a test case step by step and simulate it against the model to gain full understanding of the tests, if that is deemed necessary, and so on.

Model analysis and debugging

Since system models are human-made, they may contain errors. A model that performs arithmetic can, for example, perform a division by zero, while a concurrent model can have model-level thread scheduling that causes the threads to deadlock. The bigger and more complex the models get, the more important it is for the MBT tool to provide different means of analyzing issues in the model itself, simulating the model in order to understand and identify erroneous scenarios and even lack of coverage, and then linking failed test execution results back to the model for a better understanding of the root cause of a failed test case. These again are items that often go unnoticed during the very first proof-of-concept pilots, and again, investing in a technology that does not support full-fledged model analysis can be very short-sighted. Conformiq Designer, for example, while performing the test generation, verifies that the model is internally consistent, i.e., the tool checks for the absence of internal computation errors (such as division by zero). If the model happens to contain an internal error, the tool will produce a comprehensive report that details the circumstances under which the problem occurred, graphically pinpoints the problematic location in the model and shows a full execution trace to the problem. For further analysis of the problem, users can start a "model debugger", which is an infrastructure that allows the user to analyze various issues in the model and get a better understanding of the automatically designed and generated test cases. With these tools available, model debugging and analysis is much faster and less error prone, and it streamlines the whole modeling / test generation process. Debugging real-world models can be the most time-consuming part of deploying MBT. This becomes increasingly important if MBT is used in an agile development process. Look for a solution that offers fluent model debugging and analysis capabilities.

Support and user community

System model driven MBT requires a bit of a different skill set than traditional manual testing, and some test engineers may feel slightly intimidated by these tools.
Also, working with the models requires a different mindset than traditional testing, so some of the senior testers may feel alienated by these tools. There are a lot of best practices collected over the years and a lot of documentation available on how to address these problems. The MBT tool vendor should not only provide a state-of-the-art testing solution, but also provide training, documentation and best practices around these and other questions, both before and during deployment.

Proof of Concept

Overall, PoCs are intended to do exactly what they say – prove the concept. They do not prove the capability for use on real programs. While this full testing cannot be done as a PoC, what can and should be done is to select PoCs that expose the positives and negatives of all the MBT tools being evaluated. Make the PoC really meaningful to you, or run multiple PoCs, to demonstrate tool differences and that the tool will work for you. It may be difficult to determine the real engine differences, but that understanding is THE most critical learning from running a PoC.
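Returning to the model-analysis discussion above, here is a toy sketch of the kind of internal-consistency check described there: the "model" is reduced to a handful of named update rules, and simulating every rule against sample inputs surfaces defects such as division by zero together with the offending rule and input. The rules, names and sample inputs are invented, and this is not how Conformiq Designer works internally.

RULES = [
    ("average_throughput", lambda ctx: ctx["bytes"] / ctx["seconds"]),
    ("utilisation",        lambda ctx: ctx["busy"] / (ctx["busy"] + ctx["idle"])),
]

SAMPLE_INPUTS = [
    {"bytes": 1000, "seconds": 2, "busy": 5, "idle": 5},
    {"bytes": 500,  "seconds": 0, "busy": 0, "idle": 0},   # pathological case
]

def check_model(rules, inputs):
    problems = []
    for name, rule in rules:
        for ctx in inputs:
            try:
                rule(ctx)
            except ZeroDivisionError:
                problems.append((name, ctx))   # record the rule and the triggering input
    return problems

if __name__ == "__main__":
    for name, ctx in check_model(RULES, SAMPLE_INPUTS):
        print(f"Rule '{name}' fails (division by zero) for input {ctx}")

The value of a real tool is that it finds such inputs itself and ties the failure back to a location in the model, rather than relying on a hand-picked list of sample inputs as this sketch does.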
<urn:uuid:282eb48f-5d5f-4597-84d0-5475e2c70ce8>
CC-MAIN-2017-09
https://www.conformiq.com/2013/12/what-is-important-when-selecting-an-mbt-tool/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171775.73/warc/CC-MAIN-20170219104611-00278-ip-10-171-10-108.ec2.internal.warc.gz
en
0.942249
2,963
2.890625
3
Visser I.N., Orca Research Trust | Zaeschmar J., P.O. Box 91 | Halliday J., 6 Kennedy Street | Abraham A., Care of Orca Research Trust | And 13 more authors. Aquatic Mammals | Year: 2010 The first record of killer whale (Orcinus orca) predation on false killer whales (Pseudorca crassidens) is reported here. On 25 March 2010, a group of 50 to 60 false killer whales, including approximately 15 calves and accompanied by three to five bottlenose dolphins (Tursiops sp.), were sighted in the Bay of Islands, New Zealand. Within 30 min, they were approached by a group of approximately eight killer whales. Five false killer whales were attacked, with at least three rammed from below, forcing them out of the water. After 29 min, the killer whales were milling at the surface and feeding on the carcass of a false killer whale calf, possibly the only individual killed. The killer whales had prolific fresh and healed oval wounds, which were attributed to cookie cutter shark (Isistius sp.) bites. Source
<urn:uuid:7348c84f-9469-42d0-ba74-d61319635fec>
CC-MAIN-2017-09
https://www.linknovate.com/affiliation/care-of-orca-research-trust-857917/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00454-ip-10-171-10-108.ec2.internal.warc.gz
en
0.948285
248
2.625
3
Live monthly webcasts for Inventors, Technologists, and Startups. A simple description of patents: how to turn an invention into a patent, how patents protect your rights, why you need them, and how much they cost. The talk is presented by Thad Gabara, who holds over seventy patents and is licensed to prosecute patent applications before the United States Patent and Trademark Office. The Teaching, Suggestion and Motivation (TSM) test is discussed in light of the Supreme Court case of Graham v. John Deere Co., 383 U.S. 1 (1966). A second Supreme Court case, called KSR, concerns the issue of obviousness as applied to patent claims. An examiner can reject a claim based on the common sense of a person having ordinary skill in the art (PHOSITA). Common sense is a perception, and Voltaire stated that "Common sense is not so common." If you are interested in obtaining a patent, this talk is a must see. Claims are typically partitioned into apparatus, method and "means for" type language. Some of the mystery of reading claims is uncovered. Terms covered include: antecedent basis, negative limitation, etc. Time permitting, a patent case may be presented. The basic fundamentals of patents are presented. The history, rights and types of patents are addressed. The required components of a patent application and topics that cannot be patented are detailed. Patentability issues based on basic law and a description of 35 U.S.C. §101, §102, §103 and §112 are covered.
<urn:uuid:45a103d3-789f-4deb-81b1-71b213907054>
CC-MAIN-2017-09
https://www.brighttalk.com/channel/187/intellectual-property-patents
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00398-ip-10-171-10-108.ec2.internal.warc.gz
en
0.935728
319
2.796875
3
Last May, China launched a rocket into space, supposedly to release a "cloud of barium" 10,000 km (6,213 miles) in the air to research the magnetosphere. But Brian Weeden, a space analyst at the Secure World Foundation, has rounded up a growing body of evidence to suggest the launch was in fact the test of a rocket designed to launch a "kinetic kill vehicle" to destroy enemy satellites. The internet is making what governments do in space, like everywhere else, less mysterious. Weeden's theory is based on publicly available information, including an analysis of where debris from the rocket landed, satellite photography of the launch pad, and photos of the rocket plume posted by observers in Hong Kong. Weeden says last May's rocket launch was actually a secret test of the first system that could be used to target satellites in high orbits. That's different from the anti-satellite weapons China has tested before, which analysts believe are designed to destroy satellites closer to earth. A botched 2007 test of one of these weapons created a cloud of satellite debris that inspired the movie "Gravity." The US has not publicly questioned China's anti-satellite program, though it almost certainly suspects what its rival is up to. US diplomats privately scolded China after the 2007 screw-up and, in a memorably passive-aggressive diplomatic cable released by Wikileaks, asked a number of questions like this: "The U.S. position on the relationship between missile defense, stability, and deterrence is well-known. In light of China's testing activity, how does China view missile defense and its relationship to the Asia-Pacific military balance, deterrence, and stability?" This public silence doesn't sit well with Weeden, who fears that it will set back efforts to develop peaceful norms in space. The US hasn't tested specific anti-satellite weapons in decades, but its arsenal of missile defense rockets could be easily adapted to the purpose. America even blew up one of its own failing satellites in 2008 with a modified missile, a move that has been interpreted as a response to China's 2007 effort. As long as public silence enables the US, China and Russia – and perhaps India next – to develop anti-satellite weapons on the sly, it magnifies the possibility of an armed space race and more accidents that create dangerous space debris. While the US would rather not let on how much it knows about China's testing efforts, Weeden draws a parallel to the recent revelations about NSA surveillance that have complicated America's online diplomacy. It's better, he proposes, to talk about the challenges of navigating the increasingly important infrastructure of space than to create a vacuum of uncertainty.
<urn:uuid:d9ee27c0-82b6-494c-9950-8ef97017baed>
CC-MAIN-2017-09
http://www.nextgov.com/defense/2014/03/chinas-secret-antisatellite-weapons-should-be-everyones-radar/80818/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170614.88/warc/CC-MAIN-20170219104610-00574-ip-10-171-10-108.ec2.internal.warc.gz
en
0.943111
576
2.625
3
Earlier this summer, the UK's University of Reading held a competition in memory of Alan Turing. Known as the father of artificial intelligence, Turing estimated that computers would be able to hold conversations with humans to a certain degree by the year 2000. More than a decade into the new millennium, no system has successfully passed Turing's test of computer intelligence. That may change in the near future, as one team nearly accomplished the feat back in June. The Telegraph posted an article about the near success. Alan Turing believed that if a computer could pass for human in conversation, it would be defined as intelligent. His test, which was originally titled "the imitation game," had a five-minute time limit and required the system to fool at least 30 percent of the humans it spoke with. During the university's "Turing test marathon," a program came very close to passing the mathematician's test. Named "Eugene," the application emulated a 13-year-old boy via a chat interface and fooled 29.2 percent of the humans it interacted with. Programs like Eugene require advanced language processing abilities. Without this functionality, applications would be unable to figure out the context of a message and create an appropriate response. It's this same capability that vaulted IBM's Watson supercomputer to popularity. Watson famously exhibited an ability to understand words in context when competing on Jeopardy. Combine that functionality with a massive knowledge base, and the system made quick work of all-time Jeopardy champ Ken Jennings. Of course, IBM didn't spend all that time and capital to create a game show killer. Watson has found a home in the medical industry, helping professionals at Sloan Kettering, WellPoint and other institutions. As the system ingests more medical data, its machine learning algorithms are expected to become more accurate at delivering patient care. Watson's style of learning is similar to that of humans: as people receive new input, they typically store it in memory and learn to react appropriately in the future. In this case, though, Watson's users understand they are communicating with a computer, which automatically disqualifies it from the Turing test. In the end, for a computer to pass Turing's test, it doesn't need to encapsulate human understanding and intelligence, just simulate it — or to put it another way, to become an efficient liar. Of course, that may not be so different from being human after all.
<urn:uuid:65b98182-feb9-4471-95a4-68661e68015c>
CC-MAIN-2017-09
https://www.hpcwire.com/2012/08/27/benchmarking_computer_intelligence/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171163.39/warc/CC-MAIN-20170219104611-00274-ip-10-171-10-108.ec2.internal.warc.gz
en
0.947769
520
3.296875
3
Internet Protocol Version 6 (IPv6) continues to be the focus of much work within the IETF as well as throughout the world in numerous deployment projects. The success of IPv6 depends not only on the protocol itself but also on its interaction with existing services such as the Domain Name System (DNS). In our first article, David Malone looks at some issues with DNS servers and IPv6. If you are interested in following the progress of IPv6 deployment, you might want to visit The IPv6 Forum's Website at:

A couple of years ago I signed up for GSM cellphone service and later added GPRS data service to my account. With my Bluetooth-enabled phone and laptop, I can access the Internet from almost anywhere in the world. The service is neither particularly fast nor inexpensive, but for occasional use it works very well, and has "saved the day" for me numerous times. However, GPRS is not the only wide-area wireless data network technology. Kostas Pentikousis gives an overview of the many alternatives.

The term "Internet Governance" is not well-defined, but it is being used more frequently when speaking about such organizations as the Internet Corporation for Assigned Names and Numbers (ICANN). The formation of the World Summit on the Information Society (WSIS) and its Working Group on Internet Governance (WGIG) has certainly brought the term into sharper focus. Although governance is certainly not a technical protocol issue, we still believe that it is important for our readers to follow both the debate about and the actual evolution of Internet Governance issues. However, we fully appreciate that this is an area where opinions differ, and that is why the article by Geoff Huston on this topic is labeled "Opinion."

Your feedback is very much appreciated, so drop us a line at
Ole J. Jacobsen, Editor and Publisher
<urn:uuid:02a4f164-df5d-4af4-9b41-fffe24a27801>
CC-MAIN-2017-09
http://www.cisco.com/c/en/us/about/press/internet-protocol-journal/back-issues/table-contents-31/from-editor.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172077.66/warc/CC-MAIN-20170219104612-00150-ip-10-171-10-108.ec2.internal.warc.gz
en
0.950224
395
2.578125
3
Could a smarter grid withstand storms like Sandy? - By William Jackson - Nov 07, 2012

A week after Hurricane-turned-superstorm Sandy hit the East Coast, tens of thousands remained without power, disrupting everything from the election to gasoline supplies. We can't prevent violent weather, but a smarter and stronger electric grid could help mitigate its impact by minimizing damage and speeding recovery, said Massoud Amin, professor of electrical and computer engineering at the University of Minnesota. The Energy Department is overseeing efforts to develop and implement a national smart grid power infrastructure, and the American Recovery and Reinvestment Act has pumped billions of dollars into the public-private sector effort. The result could be fewer and shorter headaches and heartaches in the wake of damaging storms.

"The current infrastructure is an amazing achievement of engineering for the 20th century," said Amin. But changes in business models, fragmented regulatory oversight and a demand for electricity that is outpacing capacity have left the grid inadequate for the 21st century. The result is an unnecessarily fragile system in which failures cascade to spread outages, cause additional damage and delay restoration. Amin, a senior member of the Institute of Electrical and Electronics Engineers who chairs the IEEE Smart Grid newsletter, has for nearly 15 years been advocating the idea of a self-healing grid that could monitor and respond quickly to conditions, minimizing damage with near-real-time adaptive behavior. It would not prevent outages. "When a storm hits a populated area you are going to need crews on the ground. What the self-healing grid would do would be to localize the area of disruption," Amin said. "It could help reduce outages dramatically, by at least an order of magnitude."

This would require end-to-end upgrades of the system, from the fuel source through generation, distribution and transmission, down to the consumer premises. But it could improve monitoring and automate decision making to provide:
- Real-time monitoring and reaction to system conditions so the system can tune itself to an optimal state.
- Measurements on the system made 20 times a second rather than the current rate of every three or four seconds.
- The ability to anticipate problems by looking for anomalies in the system and predicting their impact and the results of various responses.
- Rapid isolation of problems through automated action to limit damage.

Such a fix would not come cheap. In addition to the new technology to monitor and run our systems, North America needs an additional 42,000 miles of new high-voltage transmission lines to incorporate new capacity and integrate local and non-traditional energy sources, such as solar and electric power, into the system. "We need about $30 billion a year for 20 years to enable the smarter, stronger grid," Amin said. The results would be worth the investment, he said. Electric outages currently cost the nation from $80 billion to $180 billion a year in productivity and spoilage, not counting damage to physical infrastructure. Amin estimates that a self-healing grid could cut that figure by $49 billion a year, and that a projected improvement in system efficiency of 4.5 percent would produce additional savings of $20.4 billion a year. The effort also could create thousands of jobs.
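A quick back-of-the-envelope check of the figures quoted above can be sketched in a few lines of Python. All values are in billions of US dollars per year as cited in the article; the simple ratio is only a rough illustration and ignores timing, discounting and the wide range in the outage-cost estimate.

annual_investment = 30.0        # proposed spending, $B per year for 20 years
outage_savings    = 49.0        # estimated reduction in outage costs, $B per year
efficiency_gain   = 20.4        # savings from a 4.5 percent efficiency gain, $B per year

total_benefit = outage_savings + efficiency_gain
print(f"Estimated quantified benefit: ${total_benefit:.1f}B per year")
print(f"Benefit per dollar invested: ${total_benefit / annual_investment:.2f}")
# -> roughly $2.3 of quantified annual benefit per $1 invested, before counting
#    avoided infrastructure damage or job creation; the article's broader
#    estimate of the overall return follows below.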
Overall, Amin estimates that a self-healing grid would return about $3 to $6 for every $1 spent, and would cost only about one-tenth the amount it would require to bury distribution and transmission lines. Investment, research and implementation of the needed technologies already have begun. The American Recovery and Reinvestment Act of 2009 provided $3.4 billion for a Smart Grid Investment Grant program, and the energy industry has spent an additional $4.4 billion. Much of the money is going to installation of smart meters in customer premises to help enable better monitoring and load control. Further upstream in the system, phasor monitoring units are being installed to monitor conditions at generation and distribution facilities and lines. "We need to speed this up," Amin said. "I cannot imagine that in the United States we should have to learn to cope with blackouts." The ability to cope is useful, but as a strategy it is inadequate. William Jackson is a Maryland-based freelance writer.
<urn:uuid:8907b624-ad53-402c-8672-fcdb700a5644>
CC-MAIN-2017-09
https://gcn.com/articles/2012/11/07/could-a-smarter-grid-withstand-storms-like-sandy.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172077.66/warc/CC-MAIN-20170219104612-00150-ip-10-171-10-108.ec2.internal.warc.gz
en
0.94616
887
2.59375
3
What Is Shift Management? A Definition of Shift Management

Any company or organization that runs two or three work shifts per day handles shift management. Setting the shift schedule is one of the very first tasks businesses undertake when they begin operating, but shift management must be ongoing once a business is up and running. Shift management involves giving each worker a clear idea of his responsibilities, including his time schedule for reporting to work each week.

Determining Shifts and Other Shift Factors

The block of time during which workers report to work and perform their duties is a shift. Owners and managers often work together to determine the shift times and the frequency of shift changes. You may begin to determine shift times by considering the days and hours that you will be open for business and then set the total number of hours for your workers on a weekly basis. Part-time shifts typically are four hours, and full-time shifts may vary from about eight to twelve hours, depending on the number of shifts you set for your company and the number of hours you plan to be in operation daily and weekly. Some companies survey workers or ask for their available work times on their employee applications, in an effort to match workers to the shifts that best suit their schedules and personal needs. As your business grows, you will need to determine how many employees need to be on shift throughout the week. Once you have determined the shift schedule for your organization, it is helpful to require workers to clock in and out or use some form of a time system in order to get paid.

Challenges Associated with Shift Management and 24/7 Operations

It can be especially challenging to manage shifts if your organization operates around the clock. You will need to determine when one workday ends and the next begins. You also may need to consider how to calculate overtime if a shift spans two days, and what the pay rate will be if an employee works a shift that spans a paid holiday. These issues are all the more challenging if you have a time and attendance system that operates on a standard 9am – 5pm schedule. There are three areas of shift management that you need to keep in mind when setting your shifts and exploring shift management solutions that will fit your business needs:
- Pay periods and shifts – Typically, pay periods in a standard schedule begin on Sunday and end on Saturday. In this type of a shift schedule, shifts also normally are contained within one workday, and do not overlap from one day to the next. With a 24/7 schedule, however, shifts easily overlap pay periods and days. This becomes challenging if workers in one shift make a rate that is different from workers in another shift.
- Overtime pay – In a standard shift schedule, workers collect overtime for any work put in after the 40-hour workweek. This is fairly easy to track and manage. However, overtime pay can become a shift management nightmare in a 24/7 operations schedule. If a worker's shift straddles two different pay periods, you need to be prepared to pay the worker fairly and correctly. Some businesses pay for the worker's shift in her first pay period when the shift began, some pay in the second pay period when the shift ended, and some split the pay between the two. You need to be aware of these issues, especially if your employees are entitled to overtime pay.
- Holiday pay – Holiday pay sometimes is even more of a shift management challenge than overtime pay.
You will need to determine how you are going to handle paying employees whose shifts cross over a regular day and a holiday. If your organization gives paid holidays, will your worker get paid for the hours of the regular day and then leave the shift early, when the holiday begins? If it’s a paid holiday but the employee works, what will her compensation be? Shift management solutions can help you plan and schedule shifts for any workforce. If your organization is large, or if you are struggling with the shift management challenges mentioned above, it may be time for you to consider a shift management solution that accounts for employee availability and preferences and sets schedules based on your company’s policies and regulations.
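To make the cross-midnight problem described above a little more concrete, here is a minimal sketch in Python that splits a single shift into per-day portions, so the hours can be attributed to the correct workday and hence the correct pay period. The dates, times and attribution policy are placeholders; a real system would also need rules for overtime rates and holidays.

from datetime import datetime, timedelta

def split_shift_by_day(start: datetime, end: datetime):
    """Return a list of (date, hours) pairs, one per calendar day touched."""
    portions = []
    cursor = start
    while cursor < end:
        next_midnight = datetime.combine(cursor.date() + timedelta(days=1),
                                         datetime.min.time())
        segment_end = min(end, next_midnight)
        hours = (segment_end - cursor).total_seconds() / 3600
        portions.append((cursor.date(), hours))
        cursor = segment_end
    return portions

if __name__ == "__main__":
    # Hypothetical night shift: 22:00 Saturday to 06:00 Sunday.
    for day, hours in split_shift_by_day(datetime(2017, 2, 18, 22, 0),
                                         datetime(2017, 2, 19, 6, 0)):
        print(f"{day}: {hours:.1f} hours")

Whether those two portions are paid in the first pay period, the second, or split between them is exactly the policy decision discussed above, and it is the kind of rule a shift management solution should let you configure rather than hard-code.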
<urn:uuid:ddc32ad3-cc55-4eef-afda-0680053f153d>
CC-MAIN-2017-09
https://www.clicksoftware.com/blog/what-is-shift-management/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170380.12/warc/CC-MAIN-20170219104610-00094-ip-10-171-10-108.ec2.internal.warc.gz
en
0.94509
840
2.90625
3
· Temperature - The temperature of the battery within its environment will affect its capacity and therefore the available runtime. Capacity decreases at lower temperatures and increases at higher temperatures.
· Age of the battery - A new battery will only reach its optimum capacity after two or three discharge and charge cycles, and as it ages its capacity will decrease, typically down to approximately 80% at the end of its life.
· Temperature & battery life - Whilst the capacity of a battery will increase at higher temperatures, its life will be reduced, typically by 50% for every 10 deg Celsius rise above 25 deg Celsius.
· Recharging the battery - Whilst a battery will recharge to 90% capacity fairly quickly, it can take up to 72 hours to reach its full capacity again. This is because as the battery approaches its fully charged state it becomes more difficult to get energy back into the battery.
· The frequency and depth of discharge cycles - The frequency and depth of discharge will affect the life and capacity of the battery, both reducing with a higher number and depth of cycles.
· The load itself - In particular, the actual size of the load compared with the theoretical load.
· The load characteristic - The runtime calculation is based on tests with a resistive load and may differ with an inductive or capacitive load.
· Light load accuracy - The estimated runtime values quoted on the web are based on an algorithm which tends to be less accurate for light loads and long runtime applications.
· Storage - When a battery is stored, be that within a UPS that is not connected or as an RBC, it will gradually lose capacity through self-discharge, a situation which is accelerated at higher storage temperatures. Provided the storage period and temperature have not exceeded the typical battery manufacturer's specification, the battery will return to near full capacity, but sufficient recharge time, 48 - 72 hours, must be allowed for the capacity to be recovered. The typical maximum storage period is 12 months at 25 deg Celsius, 8 months at 30 deg Celsius and 4 months at 40 deg Celsius.
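Two of the rules of thumb above lend themselves to a quick worked sketch in Python. The 50%-per-10-degree life derating follows the list above; the 3% monthly self-discharge rate is an assumption for illustration only, not a manufacturer specification, and the exact figures vary by battery type and vendor.

def expected_life_years(design_life_years: float, ambient_c: float) -> float:
    """Service life roughly halves for every 10 deg C above 25 deg C."""
    excess = max(0.0, ambient_c - 25.0)
    return design_life_years * (0.5 ** (excess / 10.0))

def remaining_capacity(initial_capacity: float, months_in_storage: float,
                       self_discharge_per_month: float = 0.03) -> float:
    """Capacity left after storage, assuming a fixed monthly self-discharge rate."""
    return initial_capacity * (1.0 - self_discharge_per_month) ** months_in_storage

if __name__ == "__main__":
    for temp in (25, 30, 35, 40):
        years = expected_life_years(5.0, temp)
        print(f"{temp} deg C -> ~{years:.1f} years of a 5-year design life")
    print(f"After 8 months in storage: ~{remaining_capacity(100.0, 8):.0f}% capacity remaining")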
<urn:uuid:b037805e-01fa-4717-a2c6-8e7bbacb8a98>
CC-MAIN-2017-09
http://www.apc.com/eg/en/faqs/FA157077/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170925.44/warc/CC-MAIN-20170219104610-00446-ip-10-171-10-108.ec2.internal.warc.gz
en
0.932002
405
3.203125
3
Things are, literally, looking up in Japan more than a year after the magnitude-9.0 earthquake and tsunami of March 11, 2011, hit its northeastern coastline, causing the worst nuclear accident since the rupture and explosions at the Chernobyl RBMK reactor in the Ukraine in April 1986. This week, the nation known as the Land of the Rising Sun announced plans to build a solar power complex with a total generating capacity of 100 megawatts (MW) that will replace the crippled Fukushima Daiichi Nuclear Power Plant, making it the biggest solar project in Japan ever.

The Tokyo-based multinational electronics corporation Toshiba said it will spend around 30 billion yen (US$379.6 million) to construct several utility-scale solar plants in Minamisoma, about 16 miles north of the original Fukushima generation sites. Toshiba said it will start building the plants this year and start operations in 2014. The project surpasses an earlier plan by Kyoto-based solar system manufacturer Kyocera Corp., which, in partnership with two companies headquartered in Tokyo – heavy machinery maker IHI Corp. and Mizuho Corporate Bank – proposed to launch a 70-MW plant in southern Japan.

Toshiba's announcement followed closely upon the Japanese government's approval of new incentives for renewable energy that will be effective as of July 1, including the introduction of feed-in tariffs (FITs), a move that is calculated to unleash billions of dollars in clean-energy investment. Indeed, according to Reuters, Japan is poised to overtake Germany and Italy to become the world's second-biggest market for solar power, as incentives drive sales for equipment makers, from Yingli Green Energy Holdings Co. to Sharp Corp. to Kyocera Corp. To take advantage of the subsidies, Yingli, based in Baoding, China, has made plans to start operations in Japan. Under the new program, utilities will buy solar, biomass, wind, geothermal and hydro power. All costs will be passed on to consumers in the form of surcharges, which the government today said will average out at 87 yen (about US$1.00) a month per household. The government's previous average estimate was 100 yen (about US$1.25). The measures expand on a program launched in late 2009 that requires utilities to buy solar power that the generator doesn't need. That policy expanded the market for rooftop residential panels. The new incentives will encourage utility-scale projects, including those already planned by Toshiba and Kyocera. Solar stocks rallied upon release of the news.

In related news, several days ago a power company in western Japan was given the go-ahead by the government to begin work to restart two reactors in Ohi town, a process that is expected to take several weeks. Despite lingering safety concerns, the restart could speed the resumption of operations at more reactors across the country. All of Japan's 50 nuclear reactors currently are offline for maintenance or safety checks.
<urn:uuid:952e5230-1785-496f-8f7b-5e9aaf98fce5>
CC-MAIN-2017-09
http://www.iotevolutionworld.com/topics/smart-grid/articles/2012/06/20/295715-japan-utility-scale-solar-plants-rise-from-fukushima.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171834.68/warc/CC-MAIN-20170219104611-00322-ip-10-171-10-108.ec2.internal.warc.gz
en
0.93371
776
2.765625
3
GCN LAB IMPRESSIONS A few things you might not know about Memorial Day - By John Breeden II - May 26, 2011

Other than the Fourth of July, I think Memorial Day is one of the most beloved American holidays, at least among the holidays that take place during warm weather. But although shopping, barbecues, the Indianapolis 500, the beach and fireworks are all part of the fun, let's not forget what Memorial Day is all about. As holidays go, it's one of the most solemn we observe. Memorial Day used to be called Decoration Day, the day when the graves of those who died in battle were decorated. And thankfully, that tradition continues. For a long time it happened every May 30 and although that day still stands as the official date, the observation of the holiday was changed in 1968 to provide a three-day weekend.

Besides decorating the graves of the fallen, there are some solemn ways to remember those who died, traditions that you may not even know about. Did you know, for example, that the U.S. flag only flies at half-staff for half the day on Memorial Day? On that day, the flag is raised to the top of the staff at dawn and then solemnly lowered to the half-staff position, where it remains until noon. It's then raised to full-staff for the remainder of the day. Why? The half-staff position remembers more than 1 million men and women who gave their lives in service to their country. At noon, their memory is raised by the living, who promise not to let their sacrifice be in vain, but to instead rise up in their place to continue the fight for liberty and justice for all.

Probably the best way to honor the fallen is to visit a local cemetery and place flags on the graves of veterans buried there. Two places where the holiday will be observed in full force are the Gettysburg National Cemetery and the Arlington National Cemetery. At 3 p.m. on Memorial Day, everyone should pause for one minute and remember those who have given the ultimate sacrifice for this nation. Taps should be played. If you don't happen to be a bugler, you can listen to it courtesy of the Army training manual. Although there are no official words to taps, these are the most popular and beautiful ones. As such, I'd like to end my column with them. Have a safe and solemn Memorial Day, everyone.

Day is done, gone the sun,
From the hills, from the lake,
From the skies.
All is well, safely rest,
God is nigh.

Go to sleep, peaceful sleep,
May the soldier, or sailor,
God keep.
On the land, or the deep,
Safe in sleep.

Love, good night,
Must thou go,
When the day,
And the night,
Need thee so?
All is well.
Speedeth all,
To their rest.

Fades the light;
And afar,
Goeth day,
And the stars,
Shineth bright,
Fare thee well;
Day has gone,
Night is on.

Thanks and praise,
For our days,
'Neath the sun,
'Neath the stars,
'Neath the sky,
As we go,
This we know,
God is nigh.

John Breeden II is a freelance technology writer for GCN.
<urn:uuid:c407a908-de21-4bb6-a939-4b9910e4a9a2>
CC-MAIN-2017-09
https://gcn.com/articles/2011/05/26/memorial-day-the-proper-way.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172156.69/warc/CC-MAIN-20170219104612-00498-ip-10-171-10-108.ec2.internal.warc.gz
en
0.951588
701
2.796875
3
According to a recent Ponemon study, since 2010 cybercrime costs have climbed 78% and the time required to recover from a breach has increased 130%. On average, U.S. businesses fall victim to two successful attacks per week in which their perimeter security defenses have been breached. Penetration testing (pen testing), also known as "ethical hacking," is a key step in reducing the risks of a security breach because it helps provide IT staff with an accurate view of the information system from an attacker's point of view. The pen test process results in an active analysis of the system for any potential vulnerabilities that could result from poor or improper system configuration, from known and unknown hardware or software flaws, or from operational weaknesses in process or technical countermeasures. In other words, through pen testing, IT teams find the holes and vulnerabilities and quickly work to fix these areas to prevent attacks. The one thing that separates a pen tester from an outside malicious attacker is permission to gain entry to the information system. The pen tester has permission to "attack" and is thereby responsible for providing a detailed report of the results found. Examples of a successful penetration would be obtaining confidential documents, identity information, databases and other "protected" information – all without the need for passwords or other security measures.

Pen tests are a component of a full security audit. For example, the Payment Card Industry Data Security Standard (PCI DSS), a security and auditing standard, requires both annual and ongoing pen testing (after system changes). Pen tests are valuable for several reasons, including:
- Determining the risk associated with a particular set of attack vectors
- Identifying higher-risk vulnerabilities that result from a combination of lower-risk vulnerabilities exploited in a particular sequence
- Identifying vulnerabilities that may be difficult or impossible to detect with automated network or application vulnerability scanning software
- Assessing the magnitude of potential business and operational impacts of successful attacks
- Testing the ability of network defenders to successfully detect and respond to the attacks
- Providing evidence to support increased investments in security personnel and technology.

Obviously, there are a variety of ways to secure databases, applications, and networks, as there are many layers and levels to be secured. But the only way to truly assess the security of an environment is through direct testing. A good pen tester can actually replicate the types of actions that a malicious attacker would take, giving IT a more accurate view of the vulnerabilities within a network at any given time. There are a number of high-quality commercial tools available that can be implemented to ensure that both testing parameters and results are high-quality and trustworthy, but nothing replaces a hands-on direct test. Even so, the quality of pen testing can vary with the skill and thoroughness of the pen tester. Given the limited time available for testing, it is impossible to exercise all aspects of an application with all possible attack vectors. This problem is compounded in environments where secure coding practices have started to take root. The first phase of secure coding often involves limiting failure feedback to users, to limit the information a hacker has to determine whether he has discovered a flaw.
Unfortunately, these same limitations make the pen tester's job more difficult as well, which means it is highly unlikely that a pen tester will find all the security issues. To aid in finding these partially obscured vulnerabilities, it is necessary to monitor the application from within. This ensures that tests that breach the application but don't create a response the pen tester can use will still be seen, as they are still vectors that could be exploited by a dedicated hacker. Further, it's important to note that a pen test is a snapshot in time, and new vulnerabilities appear every day. Companies have to employ continuous monitoring throughout their information systems, including in the database tier, and be vigilant against attacks. For example, if a pen test was performed on a Monday, the organization may pass the pen test. But what if the next day there's an announcement of a new vulnerability in database servers that were previously considered secure? And the next week or next month another vulnerability is announced? This is a scenario that plays out on a regular basis. Companies are constantly playing catch-up applying patches. Ongoing, regular pen testing is critical and has proven to be a highly accurate method of identifying information system vulnerabilities. To get the most out of a thorough pen test, the system should be properly instrumented to log all activity at the network tier, web tier, and database tier. At the conclusion of the pen test, the logs from these instruments can provide extremely valuable insight into the system's vulnerabilities.

As with most policies and procedures, however, there still may be issues that need resolving. Many organizations feel that pen testing is an area open to abuse, most likely due to the fact that there are no firmly adhered-to rules for the pen testing procedure. It is possible for a pen tester to skirt the process. The PCI DSS regulation has 12 mandatory requirements with stringently protective guidelines, built to preserve the safety and identity of cardholder data, and section 11.3 in particular gets to the heart of the pen test, which is quite different from the former sub-section requirements. 11.3 is technically not a new requirement. Previous versions of the PCI standard assumed merchants would always conduct legitimate pen tests. Unfortunately, 11.3 is an area of the PCI DSS regulation that has been excessively abused. Companies have previously cut corners on this requirement, and many pen testers were known to conduct meaningless scans in place of real testing. The new 3.0 version of the PCI DSS regulation effectively ends this scenario, and companies will be required to develop and adopt an official methodology for testing. However, some believe that v3.0 is still lacking with regard to the precise industry-accepted methodology for pen testing that the merchant should implement. The good news is that the PCI Council has continued to follow up on this issue and is forcing new measures to be adopted by organizations around the world. PCI DSS 3.0 requires that organizations identify the scope of their card data environment and have a pen test conducted that proves the card data environment is truly segmented from the rest of their network and the open Internet. With the new rules in place with v3.0, demand for pen testers is on the increase, which is probably a good thing. The new requirements should help stop the abuse and foster policies for accurate pen testing.
These new pen testing requirements are long overdue. Merchants need to take pen testing seriously and adopt the new requirements as soon as possible to ensure they’re prepared for their first PCI DSS 3.0 assessment this year.
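As a small illustration of the kind of step a pen tester, with the written permission stressed above, might automate at the very start of an engagement, here is a minimal Python sketch that checks which common TCP ports accept connections on a target host. The host and port list are placeholders; run something like this only against systems you are explicitly authorised to test.

import socket

COMMON_PORTS = {21: "ftp", 22: "ssh", 25: "smtp", 80: "http", 443: "https", 3306: "mysql"}

def check_open_ports(host: str, ports=COMMON_PORTS, timeout=1.0):
    open_ports = []
    for port, name in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:    # 0 means the connection succeeded
                open_ports.append((port, name))
    return open_ports

if __name__ == "__main__":
    for port, name in check_open_ports("127.0.0.1"):
        print(f"Port {port} ({name}) is open")

Real engagements go far beyond a port check, of course; the point is that even the simplest reconnaissance step benefits from the logging and repeatability that the article argues should surround every pen test.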
<urn:uuid:9dd5c952-4121-4b86-9d94-72b883798f78>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2014/01/23/penetration-testing-accurate-or-abused/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170992.17/warc/CC-MAIN-20170219104610-00142-ip-10-171-10-108.ec2.internal.warc.gz
en
0.954245
1,389
2.890625
3
An A-Z guide to the technical terms used in Labs

Adware is F-Secure's classification name for software that displays advertisements on the computer or device. The advertisements may be displayed on the desktop or during a web browsing session. Adware is often bundled with free software that provides some functionality to the user. Revenue from the advertising is used to offset the cost of developing the software, which is therefore known as 'ad-supported'.

Most users on a computer system will log into a restricted 'user account', which only allows them to make setting changes to the computer that would affect their own account. Changes made to one user account may not affect settings in another account. For system administration purposes, most computer operating systems have a special, restricted account for making critical changes that may affect all accounts on the machine. Depending on the operating system, this account may be known as root, administrator, admin or similar. A user with access to this account is said to have administrative rights, or essentially total control of the computer system.

An alias is the name given by other antivirus vendor(s) for the same unique malware file or family. The differences in names for a given file or family are due to differences in the naming procedures used by various antivirus vendors. In describing a malicious file or family, aliases are usually provided to indicate that the varying names identify the same malware. For example, the worm identified by F-Secure as 'Downadup' also has the aliases 'Conficker' or 'Kido', depending on the antivirus vendor in question.

Alternate Data Stream
An extension to Microsoft's Windows NT File System (NTFS) that provides compatibility with files created using Apple's Hierarchical File System (HFS). Applications must write special code if they want to access and manipulate data stored in an alternate stream. Some applications use these streams to evade detection.

An anti-spyware program may be a standalone application, though nowadays many anti-virus programs also include anti-spyware functionality.

A program that scans for and identifies malicious files on a computer system. An antivirus program's core is the scanning engine, the module responsible for scanning every file on the computer system to find suspicious or malicious files. The scanning engine works in tandem with the program's antivirus database, a collection of virus signatures that identify known malicious files. During the scanning process, the scanning engine compares each scanned file to those in its database. If a match is found between a virus signature and a scanned file, the file is considered malicious.

A collection of virus detections or signatures used by an antivirus program during its scanning process to identify malware. When scanning a computer for malicious programs, an antivirus program compares each file inspected against the virus signatures in its database; if a match is found, this indicates that the file shares enough similarities with a known malware to be flagged. Because this type of analysis depends on the antivirus program having an accurate signature with which to perform a comparison, it is known as signature-based detection. As new malware is constantly being created, new virus signatures must continually be added to antivirus databases to identify these new threats. An antivirus program is therefore most effective if its antivirus database contains the latest updates.
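A highly simplified sketch of signature-based detection as described above: compare a hash of each scanned file against a database of known-bad hashes. Real scanning engines use far richer signatures and heuristics than plain file hashes; the "database" here is a placeholder whose single entry is intended to be the widely published SHA-256 of the EICAR test string.

import hashlib
from pathlib import Path

SIGNATURE_DB = {
    "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f": "EICAR-Test-File",
}

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(directory: str):
    """Walk a directory tree and flag any file whose hash matches the database."""
    for path in Path(directory).rglob("*"):
        if path.is_file():
            detection = SIGNATURE_DB.get(sha256_of(path))
            if detection:
                print(f"{path}: flagged as {detection}")

if __name__ == "__main__":
    scan(".")

The sketch also makes the limitation of pure signature matching obvious: a file that changes by a single byte produces a different hash, which is why up-to-date databases and additional heuristic analysis matter.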
Application Programming Interface (API)
An Application Programming Interface (API) is a defined set of instructions, specifications or protocols used to transfer commands or requests between applications. There are many APIs available, and their use is usually dependent on the programming language or software involved.
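To make the definition concrete, a small illustration (not part of the original glossary; the URL is a placeholder) is a program using a published API rather than poking at another application's internals. Python's standard library, for instance, exposes an API for making HTTP requests:

```python
# Using a published API (Python's urllib) to send a request to a web service.
# The endpoint below is a placeholder and requires network access to reach.
from urllib.request import urlopen

with urlopen("https://example.com/api/status") as response:
    # The API defines what we may ask for (a URL plus headers) and what we
    # get back (a response object with a status code and a readable body).
    print(response.status)
    print(response.read()[:80])
```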
<urn:uuid:6c15c96b-99ca-48bc-be75-66fa0f8f8a75>
CC-MAIN-2017-09
https://www.f-secure.com/en/web/labs_global/terminology
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171176.3/warc/CC-MAIN-20170219104611-00318-ip-10-171-10-108.ec2.internal.warc.gz
en
0.914733
735
3.15625
3
Can SSL and TLS be made Compatible?

We are sometimes asked if there is any way to make SSL and TLS compatible with each other. On the surface, this may seem almost nonsensical, but there are cases where such a question actually makes sense! SSL (Secure Sockets Layer) and TLS (Transport Layer Security) are fundamentally the same form of encryption – see SSL versus TLS – what's the difference. But if that is the case, doesn't that make them automatically compatible? Well, not really.

How are SSL and TLS Used?

SSL and TLS are used to secure communications over the Internet, e.g. POP, IMAP, SMTP, Web site traffic, Exchange ActiveSync traffic, API connections, and much more. Their use helps ensure that you are connecting to the proper servers and that the communications are not eavesdropped upon. The actual encryption mechanisms used by SSL and TLS are the same; the difference relates to how the encryption is initiated.

- SSL: A server expecting an SSL connection expects the user's computer to start negotiating security immediately … nothing can happen until the SSL connection is established, and the mechanism of establishing SSL is the same no matter what will go through that secure connection once it is established.
- TLS: With TLS, the server expects an unencrypted connection from the user's computer, with the computer "speaking the language" of whatever service it is trying to talk to (e.g. SMTP to send outbound email). Before anything sensitive is said, your computer can issue commands in that language to start negotiations to make the communications channel encrypted (e.g. with SMTP, your computer would issue the "STARTTLS" command and then dialog with the server to get things encrypted). Once encryption is established, all the important things like your username, password, and data are sent across safely and securely.

So with SSL, you talk security first, business second. With TLS, you start talking business first, but it's small talk. You talk security second and then important business third. The level of security is the same, and the important business is protected in both cases.

So, why are they not compatible?

If you have a program that can only talk SSL, say, and not TLS, but you need to connect to a service that only supports TLS … you can't do it. Your program wants to talk security first; their system wants to do service-specific small talk first. They don't jibe. A good example might be an outbound email program that can do TLS on port 25 (the standard SMTP port) and SSL on alternate ports (like 465) but which was never made so it could do TLS on alternate ports. Old versions of Microsoft Outlook had this quirk. If you could not connect to port 25 because your ISP was blocking you and you needed to connect securely to an alternate port, you'd better hope there was one with SSL support, because you would not be able to connect securely to an alternate TLS port.

Is there any way to make them compatible?

Well, there is no way to make a "square" SSL peg connect to the "round" TLS hole or vice versa. At least, not without putting some kind of adapter in between. The simplest solution when you need to connect to a remote server that supports SSL is usually to use a program like "stunnel". This is a program that acts like an adapter:

- It runs on your local computer or server.
- It establishes a connection using SSL to a remote system that talks SSL (it doesn't matter for what protocol).
- You connect your software insecurely to the local stunnel server. E.g.
you connect without SSL or TLS, but that is OK since you are connecting from your computer to itself.
- Your communications then go securely from your computer to the remote server over SSL due to the stunnel connection.

This works great if your program can connect without SSL or TLS and the remote server uses SSL. It is trickier if you need to connect to a TLS-only server and your program only supports SSL. There is no good simple solution for this case except for possibly:

- Updating your program to one that supports TLS.
- Contacting the service provider to see if they have any alternate SSL-supporting ports.
- Using a different provider that supports SSL.

LuxSci supports many standard and non-standard ports to address these restrictions.
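The "security first" versus "small talk first" distinction described above can also be seen directly in code. The sketch below is purely illustrative (the host name is a placeholder, and the ports shown are simply the conventional ones): Python's standard smtplib negotiates encryption immediately with SMTP_SSL, whereas plain SMTP starts an ordinary SMTP dialog and upgrades it with STARTTLS.

```python
# Illustrative sketch: "SSL" (implicit) vs. "TLS" (STARTTLS) style connections.
# mail.example.com is a placeholder host, not a real server.
import smtplib

# SSL style: security is negotiated before any SMTP commands are exchanged.
with smtplib.SMTP_SSL("mail.example.com", 465) as server:
    server.noop()            # ordinary SMTP traffic, already encrypted

# TLS (STARTTLS) style: begin with plain SMTP "small talk", then upgrade.
with smtplib.SMTP("mail.example.com", 587) as server:
    server.ehlo()            # unencrypted greeting
    server.starttls()        # negotiate encryption in-band
    server.ehlo()            # re-identify over the now-encrypted channel
    # usernames, passwords and message data would only be sent after this point
```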
<urn:uuid:ebc78ffc-0617-4262-bca6-05ff6c341d89>
CC-MAIN-2017-09
https://luxsci.com/blog/can-ssl-and-tls-be-made-compatible.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00370-ip-10-171-10-108.ec2.internal.warc.gz
en
0.943044
950
3.109375
3
Could updated analog computer technology – popular from about 1940-1970 – be developed to build high-speed CPUs for certain specialized applications? Researchers at the Defense Advanced Research Projects Agency are looking to discover -- through a program called Analog and Continuous-variable Co-processors for Efficient Scientific Simulation (ACCESS) -- what advantages analog computers might have over today's supercomputers for a large variety of specialized applications such as fluid dynamics or plasma physics.

"[Analog computers and] their potential to excel at dynamical problems too challenging for today's digital processors may today be bolstered by other recent breakthroughs, including advances in micro-electromechanical systems, optical engineering, microfluidics, metamaterials and even approaches to using DNA as a computational platform. It is conceivable that novel computational substrates could exceed the performance of modern CPUs for certain specialized problems, if they can be scaled and integrated into modern computer architectures," said Vincent Tang, program manager in DARPA's Defense Sciences Office, in a statement. "Critical equations, known as partial differential equations, describe fundamental physical principles like motion, diffusion, and equilibrium. But because they involve continuous rates of change over a large range of physical parameters relating to the problems of interest—and in many cases also involve long-distance interactions—they do not lend themselves to being broken up and solved in discrete pieces by individual CPUs. A processor specially designed for such equations may enable revolutionary new simulation capabilities for design, prediction, and discovery. But what might that processor look like?"

DARPA recently issued a Request For Information soliciting the industry for details on how such analog or hybrid analog computer systems might work. The RFI requests responses in four interrelated Technical Areas, as DARPA calls them. These include:

- Scalable, controllable, and measurable processes that can be physically instantiated in co-processors for acceleration of computational tasks frequently encountered in scientific simulation
- Algorithms that use analog, non-linear, non-serial, or continuous-variable computational primitives to reduce the time, space, and communicative complexity relative to von Neumann/CPU/GPU processing architectures
- System architectures, schedulers, hybrid and specialized integrated circuits, compute languages, programming models, controller designs, and other elements for efficient problem decomposition, memory access, and task allocation across multi-hybrid co-processors
- Methods for modeling and simulation via direct physical analogy

Analog computers solve equations by manipulating continuously changing values instead of discrete measurements. In their prime, most analog computers were designed for specific applications, like heavy-duty math or flight component simulation. "In the 1930s, for example, Vannevar Bush—who a decade later would help initiate and administer the Manhattan Project—created an analog "differential analyzer" that computed complex integrations through the use of a novel wheel-and-disc mechanism. And in the 1940s, the Norden bombsight made its way into U.S. warplanes, where it used analog methods to calculate bomb trajectories," DARPA noted.
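To illustrate the discretization point in the quote above (this sketch is a generic numerical example, not anything from the ACCESS program), a digital computer typically attacks a partial differential equation such as the 1-D heat equation by chopping space and time into discrete steps and updating every grid point, step after step:

```python
# Minimal explicit finite-difference solver for the 1-D heat (diffusion)
# equation u_t = alpha * u_xx, purely to show how a PDE gets discretized.
import numpy as np

alpha, dx, dt = 0.01, 0.1, 0.1   # diffusivity, grid spacing, time step (assumed)
u = np.zeros(101)                # temperature on 101 discrete grid points
u[50] = 1.0                      # an initial hot spot in the middle

for _ in range(1000):            # march forward in discrete time steps
    # approximate the second spatial derivative from neighbouring points
    lap = u[:-2] - 2 * u[1:-1] + u[2:]
    u[1:-1] += alpha * dt / dx**2 * lap   # update interior points only

print(round(float(u.max()), 4))  # peak temperature after diffusion
```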
<urn:uuid:ad6efb07-b176-46c5-8a80-ae4199fa841a>
CC-MAIN-2017-09
http://www.networkworld.com/article/2899923/data-center/could-modernized-analog-computers-bring-petaflops-to-the-desktop.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171933.81/warc/CC-MAIN-20170219104611-00366-ip-10-171-10-108.ec2.internal.warc.gz
en
0.915373
666
3.328125
3
The US military has successfully demonstrated what it says is the first ever in-flight refuelling of an unmanned aircraft, and in doing so completed a flight test program that leads to a new generation of military drones due to take flight in the 2020s. The refueling was conducted on April 22 in the Atlantic Test Range, off the coast of Maryland and Virginia. There, an X-47B autonomous aircraft automatically maneuvered itself to mate with the refueling drogue trailing from a K-707 tanker aircraft and receive over 4,000 pounds of fuel, the U.S. Navy said. The X-47B is an experimental drone built by Northrop Grumman as part of the Navy's Unmanned Combat Air System demonstration program, and it has been flying since 2011. During its testing, it has performed other firsts, such as a catapult launch from an aircraft carrier and a carrier landing. It has served as a testbed for technologies that will be incorporated into UCLASS (Unmanned Carrier-Launched Airborne Surveillance and Strike), a drone program the Navy is developing for use from aircraft carriers. In-flight refuelling allows aircraft, both manned and unmanned, to considerably extend how long they can stay in the sky. The X-47B has a range of 2,000 nautical miles, but that is on the low side compared to production drones in use by the US military. The RQ-4 Global Hawk, also built by Northrop Grumman, has a range of over 8,000 miles. Martyn Williams covers mobile telecoms, Silicon Valley and general technology breaking news for The IDG News Service. Follow Martyn on Twitter at @martyn_williams. Martyn's e-mail address is email@example.com
<urn:uuid:e912dc74-11cc-42fd-a3e9-1cf9a92110fe>
CC-MAIN-2017-09
http://www.arnnet.com.au/article/573482/us-drone-completes-first-in-flight-refueling/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172649.58/warc/CC-MAIN-20170219104612-00542-ip-10-171-10-108.ec2.internal.warc.gz
en
0.956929
363
2.796875
3
If you happened to be strolling through Chapman Square in Portland, Ore., this past April, you might have come across a curious sight: big, colorful “price tags” hanging from the park’s giant elm trees. Every tag said something different—one read, “This tree has given $20,000 worth of environmental & aesthetic benefits over its lifetime”—but all trumpeted the benefits of trees. Those tags were part of Portland’s first ever Arbor Month. The goal was to get Portlanders to look differently at trees, to see all the ways in which trees are good for the environment and people’s health, from decreasing stormwater runoff to reducing atmospheric carbon dioxide to improving air quality. The city declared that for every dollar spent on a tree, an estimated $3.80 worth of benefits are returned. So how did Portland come up with that figure, or the $20,000 figure for that matter? It used a modeling program called i-Tree, a suite of open-source software that allows cities, states and other users to “strengthen their urban forest management and advocacy efforts by quantifying the environmental services that trees provide.” Introduced in 2006 by the U.S. Forest Service, i-Tree is in its fifth iteration. It has inspired cities from Baltimore to New York City to Milwaukee to Portland to set ambitious tree-planting goals. The free program has been downloaded more than 10,000 times so far. With so many states and localities pruning money from parks and tree-planting programs to balance budgets, i-Tree helps public officials put a monetary value on the benefits of growing them. Take Pittsburgh. Last summer, the city approved a master plan for maintaining and expanding its tree canopy over the next 20 years. The decision came after a nonprofit group called Tree Pittsburgh used i-Tree to determine that the trees planted along sidewalks and medians throughout the city provided $2.4 million worth of environmental and aesthetic value every year. Since the city spends only $850,000 a year on street planting, that’s quite a return on investment: Pittsburgh gets about $3 in benefits for every dollar it invests in trees. i-Tree works by calculating the “leaf surface area” of a city and assigning the canopy an economic value. The value comes from the environmental services trees provide, such as how much ozone, particulates and nitrogen are removed from the air; how much carbon is stored; the effect on building heating and cooling costs; and trees’ effect on hydrology, among other factors. One especially neat feature is a module that links to Google Maps. It helps city foresters, homeowners and other users see the effects a tree would have if planted in a specific place. Researchers want the next version, which will likely be released in 2014, to enable modeling of trees and their benefits to ecosystems 30 to 50 years into the future. For now, i-Tree is just a basic calculator that helps proponents make an economic case for why trees should be in the budget. A growing body of knowledge on the benefits of trees, however, could make i-Tree’s job even easier. Research has already shown that trees increase property values. And now, a new study has found that living near trees dramatically improves health. Conducted over 18 years, research from the U.S. Forest Service has found a correlation between tree loss and human mortality. 
According to their findings, the loss of trees was associated with about seven additional deaths per year from respiratory causes and almost 17 additional deaths per year from cardiovascular causes per 100,000 adults. That, say researchers, comes out to more than 21,000 deaths in total. It seems trees have a value that goes far beyond dollars and cents. This article originally appeared in GOVERNING magazine.
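As a quick check on the Pittsburgh figures quoted earlier, the "about $3 in benefits for every dollar" claim is simply the ratio of the two numbers given in the article:

```python
# Re-deriving Pittsburgh's benefit-cost ratio from the figures in the article.
annual_benefits = 2_400_000   # dollars of environmental and aesthetic value per year
annual_spending = 850_000     # dollars spent on street planting per year

ratio = annual_benefits / annual_spending
print(f"Return per dollar invested: ${ratio:.2f}")   # about $2.82, i.e. roughly $3
```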
<urn:uuid:5e093fc1-ed3c-4041-bf5c-89dd2ee88f65>
CC-MAIN-2017-09
http://www.govtech.com/applications/iTree-App-Attaches-Monetary-Value-to-Urban-Canopies.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172649.58/warc/CC-MAIN-20170219104612-00542-ip-10-171-10-108.ec2.internal.warc.gz
en
0.957667
792
2.59375
3
Today we're using our smartphones the same way we use our computers – to surf the Web, make financial transactions, conduct business, and download software. Increasingly, we're also using the same device for personal and business use. So why aren't we protecting our phones the same way we do our computers? With rising threats of malware and other viruses, we should be. How does malware get onto your phone and what harm can it do? One common way you can be tricked into downloading infected software is through a method called "repackaging." The cyber criminals take a legitimate application and modify it to include the malware. They then distribute it to a download site, where you think it's the real thing. This was, apparently, the most common method of attack in the first two quarters of 2011. Increasingly, reports are saying that phones using the Android operating system are the most vulnerable to attack, and more recently there was a threat of juicejacking – the siphoning of information off a user's smartphone while it is being charged. Once you've downloaded infected content, there are a number of ways that it can attack. It may find a way to make charges with your phone, track your location, access your financial transaction history or look for corporate espionage opportunities by searching for business information. There is even one Android Trojan that records victims' phone conversations. While this is the bad news, there is some good news – it is relatively easy to keep your phone free and clear of malware and other viruses. Treat your phone like it's a PC – never click on links or install software from any site unless you know 100 percent that it is trustworthy. Put a password on your phone and ensure all content and information is backed up and can be deleted from afar, just in case it falls into the wrong hands. And lastly, check out some of the antivirus or other mobile security tools that are available – they are getting better all the time and will soon be commonplace on smartphones.
<urn:uuid:134efb6a-5409-45d5-93b4-df4ba388c14b>
CC-MAIN-2017-09
https://blog.gemalto.com/blog/2011/11/28/are-smartphone-viruses-a-real-threat/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171251.27/warc/CC-MAIN-20170219104611-00362-ip-10-171-10-108.ec2.internal.warc.gz
en
0.924442
419
2.78125
3
"Learn together. Parents are constantly trying to stay on top of and control online and social media use by children and adolescents. The best way to teach online safety is to rely on open and honest communication, and to learn together. Looking through a social media feed together and asking your kids to think about what might be friendly versus what might be hurtful helps your child reframe her thoughts and view photos, updates and comments through an empathic lens. Empower your kids to make safe choices by talking with them, not at them." "There is never one fast and easy piece of advice. You need to know yourself as a parent, and your child, and be comfortable with the uncomfortableness you're going to be facing. But, you need to have a strong relationship between parent and child to know that there is an open door policy both ways. Don't be afraid to tell them the 1% of the online world - the haters, the trolls, the bullies and time-wasters - and your child needs to be able to tell you about those commenters, as well. The online space is a wonderful, educational and almost mandatory world these days, and best to keep those lines of communication open." "Be sure to turn off the hidden location tracking in your child's phone. According the manufacturers this setting is turned on for targeted advertising but it tracks everywhere your child goes and stores their most frequent locations. On top of that the information goes back a year or more. Even with it turned off you can still use "Find My iPhone" or other family locator apps. For iPhones, go to Settings > Privacy > Location Services > System Services > Frequent Locations. Then slide the bar to "Off." For Androids, go to Settings > Network Connections > Location. Click "Google Location History", hit "Delete Location History" at the bottom and click "Off." "Make sure your kids know that they must have permission for anything they download or watch. Keep an eye on what your kids are doing on their devices -- have them teach you or show you." "Teach your kids how to be savvy. Here are a couple examples: 1) Teach them to look at the website's URL to see if it seems like a legitimate link to click. 2) Teach them to never give their real name or real address or real birthday." "1) Know the platform before your child starts using it. If you're going to allow your young person to use a particular social media platform it is imperative that you understand the interface and what anyone can do on that platform. This will allow you to name boundaries and expectations in an informed way. 2) They may be more tech-savvy than you but you are fully in charge. Monitor usage and be clear and concise about your expectations around their behavior and the people they engage with. "In addition to having parental controls and the appropriate safe surfing software in place, what’s more important is that parents become knowledgeable about the evolving technologies/social networks out there and how kids and their peers are using them. In the end, the most impactful way to make a difference in kids’ online usage, is to have the ongoing and open conversation about what they’re doing and seeing online, making sure they are minding their mobile manners and turning into responsible digital citizens." "Tech safeguards can be really helpful, but our kids will be even safer as we focus more on helping them develop the internal safety 'tools': empathy, resilience and the three literacies of this social media environment (digital, media and social literacy)." 
Top tip for parents: "Make digital conversations as frequent as possible with your kids. Your child's online safety is a reflection of your offline discussions." Top tip for teachers to share with students: "Social media is more than a venue to interact with friends and classmates. It's your future. Use it with care." "We always screen apps and check for content before approving for review. Parents should screen for content, too, to make sure it's a good fit for their kids." "Not everything belongs online; try to minimize your kids' digital footprint, as well." "For children just starting on digital devices, have your computer set up in a place where you can monitor them." Recognizing the importance of being *truly* involved in your children's lives, "there's no such thing as quality time, there is only quantity time (for and with kids)!"
<urn:uuid:4a87d3ad-46c9-4d5c-abbf-841f8e3b4404>
CC-MAIN-2017-09
https://www.dashlane.com/internet-security-roundup/school
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00538-ip-10-171-10-108.ec2.internal.warc.gz
en
0.957213
936
2.59375
3
Scientific hindsight shows that Google Flu Trends far overstated this year's flu season, raising questions about the accuracy of using a search engine, which Google and the media hyped as an efficient public health tool, to monitor the flu. Nature's Declan Butler reported today on the huge discrepancy between Google Flu Trends' estimated peak flu levels and data collected by the U.S. Centers for Disease Control and Prevention (CDC) earlier this winter. Google bases its numbers on flu-related searches (the basic idea being that more people Googling terms like "flu symptoms" equals more people catching viruses). The CDC, on the other hand, uses traditional epidemiological surveillance methods. Past results have shown Google to have a pretty good track record of mirroring CDC flu charts. But this time, Google's algorithms doubled the CDC's (accurate) figures — overshooting the mark in some regions by an even higher margin. There's no doubt that this year's flu season was severe. Outbreaks hit early and hard by any measure. CDC officials declared an influenza epidemic in early January, Boston's mayor called a public health emergency around the same time, and Chicago hospitals struggled to keep up with emergency room visits. Still, Google's alarming snapshot of over 10 percent of the U.S. population experiencing flu-like illness was nowhere near the actual peak of 6 percent incidence.
<urn:uuid:dfd2876a-06b4-4735-b27b-c338a09bb4c9>
CC-MAIN-2017-09
http://www.nextgov.com/health/2013/02/google-flu-trends-wildly-overestimated-years-flu-outbreak/61313/?oref=ng-relatedstories
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00238-ip-10-171-10-108.ec2.internal.warc.gz
en
0.947281
281
3
3
Near field communication (NFC) has recently popped up in the news. The technology is most closely associated with mobile phones—Google has added support for NFC in Android, and Samsung has NFC hardware in its Nexus S handset, while Apple is rumored to be adding NFC support to future iPhones. NFC is an evolution of the simple RFID technology employed in "contactless" payment systems such as MasterCard PayPass and Visa payWave. It's also similar to (and compatible with) the FeliCa system used widely in Asia for mobile payments and ticketing systems. In this article, we'll tell you what NFC is, how it works, and how it can be used.

RFID and FeliCa: NFC precursors

NFC is an evolution of the original specifications used to create RFID tags and contactless payment systems. ISO/IEC 14443 defines specifications for identification cards, contactless integrated circuit cards, and proximity cards, which for the purposes of discussion here we'll call "tags." The spec also describes two primary methods of data transmission over the 13.56MHz frequency band—Type A and Type B—using a standard connection protocol between an active "reader" device and a typically "passive" tag. Type A communication uses Miller bit coding with a 100 percent amplitude modulation. Here, the signal varies between nearly zero amplitude and full amplitude to signal low and high values. Miller, or delay encoding, works by varying the time between signal transitions depending on the sequence of bits relative to a clock signal. It also transmits data at 106kbps. Type B uses variations of Manchester encoding along with a 10 percent modulation. Here, a 90 percent signal is "low" while a 100 percent signal is "high." Manchester encoding only looks at the signal transition at the middle of a clock period, so the transition from low to high is considered a "0" while a transition from high to low is considered a "1." When two devices are brought within a range of about 4cm (the spec does allow for maximum distances of up to 20cm, though 4cm or less is common), a reader device's RF signal will cause a current to flow through the antenna built into an RFID tag or smart card, activating its circuit. In this near RF field, the antennas of the two devices act as coils of a transformer with the distance between the two acting as an air core. Changes in signal transmitted by the reader result in changes in power flow in the tag, and changes in power flow of the tag—accomplished via load modulation—result in changes in signal detected by the reader. Data is transmitted back and forth using the various coding schemes described above. Once current starts flowing, the reader will poll the tag to find out what communication method it uses. Communication is half-duplex, meaning that communication only happens in one direction at a time, so the protocol defines specific methods for two devices to ask for data and respond correctly. In effect, the reader will ask the tag to agree to a communication speed and method, and begin setting up a link once the signaling method is agreed upon. After a link is established, the reader will tell the tag what kind of applications it supports and request data or commands. The tag will respond with any data or commands that correspond to the request. To give an example, a credit card reader that works with Visa's oft-advertised "payWave" system will ask the tag embedded in a Visa credit card for the card number and expiration date.
The card, if it is designed to respond to that request, will transmit the information back to the reader. Once the requested data is transmitted, the tag or reader might request additional data or commands. As long as the reader and tag are in close proximity, these data exchanges can occur. Once data has been exchanged according to either the reader's or tag's design, the connection can be terminated. Alternately, moving the two devices away from each other will also break the connection. In our payWave example, once the reader has the requested credit card information, it will break its connection with the card and begin a transaction authorization. This type of RFID technology is employed in the MIFARE-based cards that are widely used for transit systems and secure building access, biometric passports, and contactless payment systems like payWave and Mastercard's PayPass. Sony developed a similar platform for contactless payments called FeliCa, widely used in Asia. FeliCa-based cards are used for transit systems and security access, as well as contactless payment systems. Incorporation of FeliCa technology in mobile phones in Japan led to the system becoming a de facto mobile payment standard there, with customers able to pay for parking, train fare, vending machine items, and more using a FeliCa-equipped handset. FeliCa uses a slightly different variation of Manchester coding than Type B RFID communication, but similar communication protocols are employed. Data can be transmitted at higher speeds using the FeliCa variation, either 212kbps or 424kbps. FeliCa has been accepted as a standard in Japan, dubbed JIS X 6319-4.

Evolution, not revolution

The NFC standard grew out of work by Sony and Philips (which at the time owned MIFARE) to make their standards interoperable while extending the general capabilities of the set-up-free two-way communication offered by those standards. The basic NFC communication operation became an accepted ISO standard (ISO/IEC 18092) in 2003, and later an ECMA standard (ECMA-340) as well. In 2004, Sony, NXP Semiconductors (a spin-off of Philips that manufactures MIFARE chips), and Nokia created the NFC Forum. This consortium, whose members include handset makers, mobile carriers, credit card companies, and chipmakers, works to define standards on top of the existing ISO/IEC 18092 spec to ensure compatibility across devices and NFC implementations. The NFC standard incorporates previous communication and linking protocols from both RFID Type A and B and FeliCa. It also defines four tag types and four different modes of operation. In addition, the NFC Forum has defined a standard data format and a number of common operations to facilitate certain applications, such as transferring credit card information, or extracting URLs or other information from "smart posters." However, the system is application-agnostic, so it's possible to develop applications that use the communication standard and data format in ways that the NFC Forum hasn't foreseen. The standards are there to help facilitate interoperability, particularly among similar applications like mobile payment or transit ticketing systems.
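As a footnote to the coding schemes described earlier, the toy sketch below (a simplification, not the actual ISO/IEC 14443 or FeliCa framing) encodes each bit as a pair of half-bit signal levels using the convention given above: a low-to-high transition carries a 0, and a high-to-low transition carries a 1.

```python
# Toy Manchester encoder/decoder following the convention in the article:
# low-to-high transition = 0, high-to-low transition = 1. Real tags add
# modulation depth, timing tolerances, and anti-collision on top of this.
def manchester_encode(bits):
    levels = []
    for b in bits:
        levels += [0, 1] if b == 0 else [1, 0]   # two half-bit levels per bit
    return levels

def manchester_decode(levels):
    pairs = zip(levels[0::2], levels[1::2])
    return [0 if pair == (0, 1) else 1 for pair in pairs]

data = [1, 0, 1, 1, 0]
assert manchester_decode(manchester_encode(data)) == data
print(manchester_encode(data))   # [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
```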
<urn:uuid:0b91b544-3f8f-4be7-bc65-05dc8a140540>
CC-MAIN-2017-09
https://arstechnica.com/gadgets/2011/02/near-field-communications-a-technology-primer/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174163.72/warc/CC-MAIN-20170219104614-00414-ip-10-171-10-108.ec2.internal.warc.gz
en
0.924468
1,404
3.046875
3
You are using Cascading Style Sheets (CSS) to format your Web site. You created an external style sheet to apply the same look and feel to all your pages. Which tag will you use on each page to reference your external style sheet?

Consider the following XHTML code from a Web form: How often do you want to be updated?<br/><select name="Frequency"><option> Once a week </option><option> Once or twice a month </option><option> Once a month </option><option […]

Which of the following XHTML elements is always placed within the <head> section?

Maria and her team are beginning to redesign a corporate Web site. The company owners want to keep the site's navigation icons at the top of each page, and enable linked pages to appear in another section of the same browser window. Which XHTML technique does Maria suggest?

James created an XHTML table to show his schedule for each week of the month. He organized the table in a standard calendar format, so that each day from Sunday to Saturday is the header for a vertical column, and each week of days is displayed horizontally as a row. Each scheduled activity […]

Which of the following eventually becomes a site map?

You have been asked to determine the download time for pages on your site. One page consists of the XHTML document, several image files, and a Flash presentation. These elements equal 84 kilobytes (KB) total. How long will it take for this page to be downloaded by a user with an Internet […]

Consider the following XHTML code: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html><meta http-equiv="Content-Type" content="text/html; […]

Rolf's Web site does not meet the W3C's Web Content Accessibility Guidelines (WCAG). To make his site more accessible according to WCAG, which change could he make?
<urn:uuid:446784b9-4d08-4ac3-b187-577d055d5c1e>
CC-MAIN-2017-09
http://www.aiotestking.com/ciw/category/ciw-v5-foundations-sdf-module/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170569.99/warc/CC-MAIN-20170219104610-00183-ip-10-171-10-108.ec2.internal.warc.gz
en
0.818633
445
2.796875
3
Black Box Explains...How a line driver operates

Driving data? Better check the transmission. Line drivers can operate in any of four transmission modes: 4-wire full-duplex, 2-wire full-duplex, 4-wire half-duplex, and 2-wire half-duplex. In fact, most models support more than one type of operation. So how do you know which line driver to use in your application?

The deal with duplexing. First you must decide if you need half- or full-duplex transmission. In half-duplex transmission, voice or data signals are transmitted in only one direction at a time. In full-duplex operation, voice or data signals are transmitted in both directions at the same time. In both scenarios, the communications path supports the full data rate. The entire bandwidth is available for your transmission in half-duplex mode. In full-duplex mode, however, the bandwidth must be split in two because data travels in both directions simultaneously.

Two wires or not two wires? That is the question. The second consideration is the type of twisted-pair cable you need to complete your data transmissions. Generally you need twisted-pair cable with either two or four wires. Often the type of cabling that's already installed in a building dictates what kind of line driver you use. For example, if two twisted pairs of UTP cabling are available, you can use a line driver that operates in 4-wire applications, such as the Short-Haul Modem-B Async or the Line Driver-Dual Handshake models. Otherwise, you might choose a line driver that works for 2-wire applications, such as the Short-Haul Modem-B 2W or the Async 2-Wire Short-Haul Modem. If you have the capability to support both 2- and 4-wire operation in half- or full-duplex mode, we even offer line drivers that support all four types of operation.

As always, if you're still unsure which operational mode will work for your particular applications, consult our Technical Support experts and they'll help you make your decision.
<urn:uuid:82b515c1-b0a3-4bee-81e3-fb7487cb30ab>
CC-MAIN-2017-09
https://www.blackbox.com/en-us/products/black-box-explains/black-box-explains-how-a-line-driver-operates
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174215.11/warc/CC-MAIN-20170219104614-00111-ip-10-171-10-108.ec2.internal.warc.gz
en
0.922652
458
2.6875
3
How far can virtual worlds go in improving the real one? Earth-simulation projects get increasingly fine-grained, but there is the human element - By Kevin McCaney - Jan 14, 2011

Computer simulation models are great tools for layering massive amounts of data into visual form, and they are becoming incredibly fine-grained, providing high-def views of both the forest and trees in our surroundings. But can they make people omniscient? That almost seems to be the goal of the recently announced Living Earth Simulator project, which seeks to take global modeling to a new level. Simulations now produce detailed models for everything from climate research to astrophysics. But the Living Earth Simulator is aiming for the whole enchilada, from financial systems to entire societies, all in one model. If all goes according to plan, it even could predict the future, in terms of financial crises or pandemic outbreaks. Funded by the European Commission, the simulator would draw on data and resources from around the world, including NASA and several U.S. supercomputers, to essentially model everything happening on the planet. It's expected to go online in 2022, when we'll find out how omniscient, or at least prescient, computer systems can be. The project extends an accelerating trend of using geospatial, data analysis and predictive software, along with other tools, to create detailed virtual views of the real world. Last year, the National Oceanic and Atmospheric Administration opened one climate research supercomputer and announced plans to build another. Researchers will link with those centers through a new, high-capacity network that will carry about 80TB a day in weather simulations. NASA also is strong on modeling, with several projects such as Planetary Skin, a collaborative app to collect environmental data from satellites and airborne, sea-based and land-based sensors. The agency also worked with Japan to stitch together the most detailed digital image of Earth yet produced. Not to be left out, Google is contributing to global climate modeling with Google Earth Engine, which compiles 25 years of satellite images for mapping trends in the Earth's environment. The list goes on. Having such clear pictures of the world can help in a lot of ways, and the technology will improve considerably before the Living Earth Simulator goes online in 11 years. However, predicting the future and preventing disasters could yet be troublesome because of the massively random element that is the human species. No matter how detailed the picture, can it make people wiser? Another fine-grained technology, high-definition TV, hasn't improved the behavior of the people on screen. People will still be people, so who knows what's in store. But technology will march on, and views of the Earth will become ever more precise. Human beings might not be able to prevent the end of the world, but we'll sure know what it looks like when it gets here.

Kevin McCaney is a former editor of Defense Systems and GCN.
<urn:uuid:16b3c51e-5bf6-4ecc-ac5a-ad18e4920149>
CC-MAIN-2017-09
https://gcn.com/articles/2011/01/17/technicalities-virtual-worlds.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00531-ip-10-171-10-108.ec2.internal.warc.gz
en
0.932987
622
3.265625
3
A recent Sophos poll revealed that 63 per cent of system administrators worry that employees share too much personal information via their social networking profiles, putting their corporate infrastructure – and the sensitive data stored on it – at risk. The findings also indicate that a quarter of businesses have been the victim of spam, phishing or malware attacks via sites like Twitter, Facebook, LinkedIn and MySpace. With social networking now part of many computer users' daily routine – from finding out what friends are up to, to viewing photos or simply updating their online status – Sophos experts note that unprecedented amounts of information are updated every minute. Frequent use of social networking sites makes them a prime target for cybercriminals intent on stealing identities, spreading malware or bombarding users with spam. Sophos research confirms that although one third of organizations still consider productivity issues to be the major reason for controlling employee access to social networking sites, the threat from both malware and data leakage is becoming more apparent, with one in five citing these as their top concerns.

Cyber attacks: a new frontier

Sophos experts note that four of the most popular social networking sites – Facebook, MySpace, LinkedIn and Twitter – have all experienced their fair share of spam and malware attacks during 2009, all designed to compromise PCs or steal sensitive information. From traditional 419 scams that aim to fool users into sending money to foreign destinations under the ruse that a friend is in trouble, to malware disguised as Facebook error messages, cybercriminals are using the same old techniques, but pushing them out via social media. A typical method of attack is for hackers to compromise accounts by stealing usernames and passwords – often using phishing or spyware – and then use this profile to send spam or malicious links to the victims' online friends and colleagues. Sophos research reveals that one third of respondents have been spammed on social networking sites, while almost one quarter (21 percent) have been the victim of targeted phishing or malware attacks.

Total lockdown is not necessarily the answer

With social networking behavior firmly ingrained in many employees' daily routines, Sophos experts predict that users will continue to share information inappropriately, putting their identities – and potentially the organisation they work for – at risk. Similarly, as long as users keep falling for social media scams, the fraudsters will continue to exploit social networks, commandeering identities to steal information and spread more attacks. However, banning social networking in the workplace outright may be a rash move – one that could cause more harm than good.

Top 5 tips to combat social networking perils

In order to help businesses and users stay safe in the face of social networking, Sophos has put together the following advice:

1. Educate your workforce about online risks – make sure all employees are aware of the impact that their actions could have on the corporate network.
2. Consider filtering access to certain social networking sites at specific times – this can easily be set by user group or time period, for example.
3. Check the information that your organization and staff share online – if sensitive business data is being shared, evaluate the situation and act as appropriate.
4. Review your Web 2.0 security settings regularly – users should only be sharing work-related information with trusted parties.
5.
Ensure that you have a solution in place that can proactively scan all websites for malware, spam and phishing content.
<urn:uuid:b2c8a048-8685-481d-803b-a7914d325ad4>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2009/04/28/one-in-four-companies-report-attacks-via-social-networking-sites/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00231-ip-10-171-10-108.ec2.internal.warc.gz
en
0.929934
692
2.765625
3
Different Types of UPS Systems

White Paper # 1: The Different Types of UPS Systems

There is much confusion in the marketplace about the different types of UPS systems and their characteristics. Each of these UPS types is defined, practical applications of each are discussed, and advantages and disadvantages are listed. With this information, an educated decision can be made as to the appropriate UPS topology for a given need. The varied types of UPSs and their attributes often cause confusion in the data center industry. For example, it is widely believed that there are only two types of UPS systems, namely standby UPS and on-line UPS. These two commonly used terms do not correctly describe many of the UPS systems available. Many misunderstandings about UPS systems are cleared up when the different types of UPS topologies are properly identified. Common design approaches are reviewed here, including brief explanations about how each topology works. This will help you to properly identify and compare systems. A variety of design approaches are used to implement UPS systems, each with distinct performance characteristics. The most common design approaches are as follows:

- Standby
- Line Interactive
- Standby on-line hybrid
- Standby-Ferro
- Double Conversion On-Line
- Delta Conversion On-Line

The Standby UPS

The Standby UPS is the most common type used for Personal Computers. In the block diagram illustrated in Figure 1, the transfer switch is set to choose the filtered AC input as the primary power source (solid line path), and switches to the battery / inverter as the backup source should the primary source fail. When that happens, the transfer switch must operate to switch the load over to the battery / inverter backup power source (dashed path). The inverter only starts when the power fails, hence the name "Standby."

The Line Interactive UPS

The Line Interactive UPS, illustrated in Figure 2, is the most common design used for small business, Web, and departmental servers. In this design, the battery-to-AC power converter (inverter) is always connected to the output of the UPS. Operating the inverter in reverse during times when the input AC power is normal provides battery charging. When the input power fails, the transfer switch opens and the power flows from the battery to the UPS output. With the inverter always on and connected to the output, this design provides additional filtering and yields reduced switching transients when compared with the Standby UPS topology. In addition, the Line Interactive design usually incorporates a tap-changing transformer. This adds voltage regulation by adjusting transformer taps as the input voltage varies. Voltage regulation is an important feature when low voltage conditions exist; otherwise the UPS would transfer to battery and then eventually shut down the load. This more frequent battery usage can cause premature battery failure. However, the inverter can also be designed such that its failure will still permit power flow from the AC input to the output, which eliminates the potential for single-point failure and effectively provides two independent power paths. This topology is inherently very efficient, which leads to high reliability while at the same time providing superior power protection.

Standby On-Line Hybrid

The Standby On-Line Hybrid is the topology used for many of the UPS systems under 10kVA that are labeled "on-line." The standby DC-to-DC converter from the battery is switched on when an AC power failure is detected, just like in a standby UPS.
The battery charger is also small, as in the standby UPS. Due to capacitors in the DC combiner, the UPS will exhibit no transfer time during an AC power failure. This design is sometimes fitted with an additional transfer switch for bypass during a malfunction or overload. Figure 3 illustrates this topology.

The Standby-Ferro UPS

The Standby-Ferro UPS was once the dominant form of UPS in the 3-15kVA range. This design depends on a special saturating transformer that has three windings (power connections). The primary power path is from AC input, through a transfer switch, through the transformer, and to the output. In the case of a power failure, the transfer switch is opened, and the inverter picks up the output load. In the Standby-Ferro design, the inverter is in the standby mode, and is energized when the input power fails and the transfer switch is opened. The transformer has a special "Ferro-resonant" capability, which provides limited voltage regulation and output waveform "shaping". The isolation from AC power transients provided by the Ferro transformer is as good as or better than any filter available. But the Ferro transformer itself creates severe output voltage distortion and transients, which can be worse than a poor AC connection. Even though it is a standby UPS by design, the Standby-Ferro generates a great deal of heat because the Ferro-resonant transformer is inherently inefficient. These transformers are also large relative to regular isolation transformers, so Standby-Ferro UPS systems are generally quite large and heavy. Standby-Ferro UPS systems are frequently represented as On-Line units, even though they have a transfer switch, the inverter operates in the standby mode, and they exhibit a transfer characteristic during an AC power failure. Figure 4 illustrates this Standby-Ferro topology. The principal reason why Standby-Ferro UPS systems are no longer commonly used is that they can be fundamentally unstable when operating a modern computer power supply load. All large servers and routers use "Power Factor Corrected" power supplies which present a negative input resistance over some frequency range; when coupled with the relatively high and resonant impedance of the Ferro transformer, this can give rise to spontaneous and damaging oscillations.

The Double Conversion On-Line UPS

This is the most common type of UPS above 10kVA. The block diagram of the Double Conversion On-Line UPS, illustrated in Figure 5, is the same as the Standby, except that the primary power path is the inverter instead of the AC main. Wear on the power components reduces reliability compared with other designs, and the energy consumed by the electrical power inefficiency is a significant part of the life-cycle cost of the UPS. Also, the input power drawn by the large battery charger is often non-linear and can interfere with building power wiring or cause problems with standby generators. In the Double Conversion On-Line design, failure of the input AC does not cause activation of the transfer switch, because the input AC is NOT the primary source, but is rather the backup source. Therefore, during an input AC power failure, on-line operation results in no transfer time. The on-line mode of operation exhibits a transfer time when the power from the primary battery charger / battery / inverter power path fails. This can occur when any of the blocks in this power path fail.
The inverter power can also drop out briefly, causing a transfer, if the inverter is subjected to sudden load changes or internal control problems. Double Conversion On-Line UPS systems do exhibit a transfer time, but under different conditions than a standby or line interactive UPS. While a Standby and Line Interactive UPS will exhibit a transfer time when a blackout occurs, a double conversion on-line UPS will exhibit a transfer time when there is a large load step or inrush current. This transfer time is the result of transferring the load from the UPS inverter to the bypass line. Generally, this bypass line is built with dual Silicon Controlled Rectifiers (SCRs). These solid state switches are very fast, so similar to the Standby and Line Interactive UPS, the transfer time is very brief, usually 4-6 milliseconds. Both the battery charger and the inverter convert the entire load power flow in this design, which causes reduced efficiency and increased heat generation. The Delta Conversion On-Line UPS This UPS design, illustrated in Figure 6, is a new technology introduced to eliminate the drawbacks of the Double Conversion On-Line design and is available in the range of 5kVA to 1 MW. Similar to the Double Conversion On-Line design, the Delta Conversion On-Line UPS always has the inverter supplying the load voltage. However, the additional Delta Converter also contributes power to the inverter output. Under conditions of AC failure or disturbances, this design exhibits behavior identical to the Double Conversion On-Line. A simple way to understand the energy efficiency of the delta conversion topology is to consider the energy required to deliver a package from the 4th floor to the 5th floor of a building as shown in Figure 7. Delta Conversion technology saves energy by carrying the package only the difference (delta) between the starting and ending points. The Double Conversion On-Line UPS converts the power to the battery and back again whereas the Delta Converter moves components of the power from input to the output. Figure 7: Analogy of Double Conversion vs. Delta Conversion In the Delta Conversion On-Line design, the Delta Converter acts with dual purposes. The first is to control the input power characteristics. This active front end draws power in a sinusoidal manner, minimizing harmonics reflected onto the utility. This ensures optimal conditions for utility lines and generator systems and reduces heating and system wear in the power distribution system. The second function of the Delta Converter is to charge the battery of the UPS by drawing power and converting it to the appropriate DC charging voltage. The Delta Conversion On-Line UPS provides the same output characteristics as the Double Conversion On-Line design. However, the input characteristics are extremely different. With full Power Factor Correction, the delta conversion on-line design provides both input power control and output power control. The most important benefit is a significant reduction in energy losses. The input power control also makes the UPS compatible with all generator sets and reduces the need for wiring and generator over sizing. Delta Conversion On-Line technology is the only core UPS technology today protected by patents and is therefore not likely to be available from a broad range of UPS suppliers. Summary of UPS Types The following table shows some of the characteristics of the various UPS types. Some attributes of a UPS, like efficiency, are dictated by the choice of UPS type. 
Since implementation and manufactured quality more strongly impact characteristics such as reliability, these factors must be evaluated in addition to these design attributes.

| UPS Type | Practical Power Range (kVA) | Voltage Conditioning | Cost per VA | Efficiency | Inverter always operating |
|---|---|---|---|---|---|
| Standby | 0-0.5 | Low | Low | Very High | No |
| Line Interactive | 0.5-3 | Design Dependent | Medium | Very High | Design Dependent |
| Standby On-Line Hybrid | 0.5-5 | High | High | Low | Partially |
| Standby Ferro | 3-15 | High | High | Low | No |
| Double Conversion On-Line | 5-5000 | High | Medium | Low | Yes |
| Delta Conversion On-Line | 5-5000 | High | Medium | High | Yes |

Use of UPS types in the industry

The current UPS industry product offering has evolved over time to include many of these designs. The different UPS types have attributes that make them more or less suitable for different applications, and the APC product line reflects this diversity as shown in the table below:

| UPS Type | Use in APC products | Benefits | Limitations | APC's Findings |
|---|---|---|---|---|
| Standby | Back-UPS | Low cost, high efficiency, compact | Uses battery during brownouts, impractical over 2kVA | Best value for personal workstations |
| Line Interactive | Smart-UPS, Back-UPS Pro, and Matrix | High reliability, high efficiency, good voltage conditioning | Impractical over 5kVA | Most popular UPS type in existence due to high reliability; ideal for rack or distributed servers and/or harsh power environments |
| Standby On-Line Hybrid | Not used by APC | Excellent voltage conditioning | Low efficiency, low reliability, high cost, impractical over 5kVA | Line Interactive provides better reliability and similar conditioning at a better value |
| Standby Ferro | Not used by APC | Excellent voltage conditioning, high reliability | Low efficiency, unstable in combination with some loads and generators | Limited application because low efficiency and instability issues are a problem, and N+1 On-Line design offers even better reliability |
| Double Conversion On-Line | Symmetra | Excellent voltage conditioning, ease of paralleling | Low efficiency, expensive under 5kVA | Well suited for N+1 designs |
| Delta Conversion On-Line | Silcon, Symmetra MW series | Excellent voltage conditioning, high efficiency | Impractical under 5kVA | High efficiency reduces the substantial life-cycle cost of energy in large installations |

Different UPS types are appropriate for different applications, and there is no single UPS type that is ideal for all applications. With the variety of UPS topologies on the market today, these guidelines will help clear confusion about how each topology operates and the advantages and disadvantages of each. There are significant differences in UPS design between available products on the market, with theoretical and practical advantages for different approaches. Nevertheless, the basic quality of design implementation and manufactured quality are often dominant in determining the ultimate performance achieved in the customer application.
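To see why the efficiency column above matters over a system's life, here is a rough, purely illustrative calculation; the load, tariff, and efficiency figures are assumptions chosen for the example, not APC data:

```python
# Illustrative only: yearly electricity cost of UPS losses at an assumed
# 50 kW load and $0.10/kWh tariff, for two assumed efficiency figures.
load_kw = 50.0
hours_per_year = 8760
price_per_kwh = 0.10

def annual_loss_cost(efficiency):
    input_kw = load_kw / efficiency      # power drawn to deliver the load
    loss_kw = input_kw - load_kw         # power dissipated as heat in the UPS
    return loss_kw * hours_per_year * price_per_kwh

for name, eff in [("96% efficient topology", 0.96), ("88% efficient topology", 0.88)]:
    print(f"{name}: about ${annual_loss_cost(eff):,.0f} per year in losses")
```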
<urn:uuid:89447247-d35a-4647-8bbf-b9ad15b70659>
CC-MAIN-2017-09
http://www.apc.com/bi/en/faqs/FA157448/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171416.74/warc/CC-MAIN-20170219104611-00407-ip-10-171-10-108.ec2.internal.warc.gz
en
0.900211
2,737
3.046875
3
The responsibility to protect sensitive private information is now legally mandated and has become a key focus for many regulations within multiple industries. Information security is vital to the success of an organisation's day-to-day operations and must be managed as a proactive and strategic business process throughout the entire enterprise – not an intermittent or point-in-time event for technology staff alone. Love them or loathe them, log files play a central role in this. Logs are the lifeblood. They tell us the Who, the What, the Where, and the When. They give us insight. They give us answers. Very occasionally they might even make us laugh when the computer jargon points out the very obvious or makes a simple fault sound incredibly serious.

Because of the widespread deployment of networked servers, workstations, and other computing devices, and the ever-increasing number of threats against networks and systems, the number, volume, and variety of computer security logs have increased greatly. This has created the need for computer security log management, which is the process for generating, transmitting, storing, analysing, and disposing of computer security log data. Log files are critical to the successful investigation and prosecution of security incidents, therefore best practices recommend logging all events. However, enforcing such a policy can often overwhelm already overworked system administrators. The last thing you want is information overload. But it is true to say that logging only subsets is a risk. There are emerging solutions that do indeed gather a log for every event that takes place on the network, and provide an easy way to retrieve specific information if and when required.

Log files generally fall into one of three categories. Security software logs primarily contain computer security-related information, while operating system logs and application logs typically contain a variety of information, including computer security-related data.

- Anti-Virus Software
- Intrusion Detection & Protection
- Remote Access Software
- Web Proxies
- Vulnerability Management Software
- Authentication Servers
- Network Devices

Operating systems (OS) for servers, workstations, and networking devices (e.g., routers, switches) usually log a variety of information related to security. The most common types of security-related OS data are:

System Events. System events are operational actions performed by OS components, such as shutting down the system or starting a service. Typically, failed events and the most significant successful events are logged. The details logged for each event also vary widely; each event is usually timestamped, and other supporting information could include event, status, and error codes; service name; and user or system account associated with an event.

Audit Records. Audit records contain security event information such as successful and failed authentication attempts, file accesses, security policy changes, account changes (e.g., account creation and deletion, account privilege assignment), and use of privileges.

Operating systems and security software provide the foundation and protection for applications, which are used to store, access, and manipulate the data used for the organization's business processes. Some applications generate their own log files, while others use the logging capabilities of the OS on which they are installed. Applications vary significantly in the types of information that they log.
Account information, such as successful and failed authentication attempts, account changes (e.g., account creation and deletion, account privilege assignment), and use of privileges, is one commonly logged category. In addition to identifying security events such as brute force password guessing and escalation of privileges, it can be used to identify who has used the application and when each person has used it.

Usage information, such as the number of transactions occurring in a certain period (e.g., minute, hour) and the size of transactions (e.g., e-mail message size, file transfer size), is another. This can be useful for certain types of security monitoring (e.g., a ten-fold increase in e-mail activity might indicate a new e-mail-borne malware threat; an unusually large outbound e-mail message might indicate inappropriate release of information).

In determining which data is sufficient and appropriate to collect, organisations should implement processes that:

- Identify components and events that warrant logging.
- Establish the amount of data to be logged.
- Identify and establish mandated log retention timeframes.
- Implement policies for securely handling and analysing log files.

The issue of retention has become a difficult one for many organisations. Satisfying the reporting demands of government regulations and corporate security policies requires the retention of vast amounts of security data. Not only must you collect log and event data from security products like firewalls and identity management systems, auditors must also be able to go back several years to trace security violations. One effect of government regulations is that security information, including event logs and transaction logs, has now become legal records that must be produced when requested by legal authorities. This could potentially stretch data retention periods to the duration of the litigation process.

Penalties for non-compliance include monetary fines, civil liability and executive accountability. In some cases, such as with Sarbanes-Oxley, the statutes allow for fines that may reach into the millions of dollars. However, the largest penalties for non-compliance are likely to be the market-driven costs of having the company name associated with a security breach, and not being able to demonstrate reasonable security precautions with an acceptable compliance statement. The damaged trust relationship affects customer satisfaction, consumer confidence, and the organization's ability to compete in the marketplace.

On top of retention requirements, log files must be secured, with access restricted and monitored. In an attempt to conceal unauthorised access or attempted access, intruders will try to edit or delete log files. Efforts to secure log files should include:

- Encryption of data residing in databases and in transit where necessary.
- Segregation of logged data to an independent server.
- Collection of data on Write Once Read Many (WORM) disks or drives.
- Secure storage of backups and secure destruction of log files.

Secure log files also assist in the effective and timely identification of and response to security incidents, and in monitoring and enforcing policy compliance.
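As one way to make log tampering evident after the fact, the sketch below chains an HMAC over each appended line so that any later edit or deletion breaks verification. The key handling and file layout are illustrative assumptions, not a prescription from the article.

    import hmac
    import hashlib

    def append_line(path, line, key, prev_tag=b""):
        """Append a log line together with an HMAC that chains in the previous tag."""
        tag = hmac.new(key, prev_tag + line.encode(), hashlib.sha256).hexdigest()
        with open(path, "a") as out:
            out.write(f"{line}\t{tag}\n")
        return bytes.fromhex(tag)   # feed this into the next append

    def verify(path, key):
        """Recompute the chain; returns False as soon as a line has been altered or removed."""
        prev_tag = b""
        with open(path) as log:
            for record in log:
                line, _, tag = record.rstrip("\n").rpartition("\t")
                expected = hmac.new(key, prev_tag + line.encode(), hashlib.sha256).hexdigest()
                if not hmac.compare_digest(expected, tag):
                    return False
                prev_tag = bytes.fromhex(tag)
        return True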
A good log management solution should provide a scalable and centralised process that can collect, normalise, aggregate, compress and encrypt log data from disparate sources such as routers, switches, firewalls, IDS/IPS, AV, SPAM/spyware, Windows, UNIX, and Linux systems to identify security breaches, hacker intrusion or any other activity that could potentially cripple valuable corporate assets. A good log management solution should also automate the process of producing reports, with relevant information that will indicate an anomaly or glitch. Having the system email these reports to your inbox at set intervals can save trouble and, most importantly, time.

A solution that automatically mines and manages that data can provide immediate insight into network activity, helping IT departments respond rapidly to security events and other network availability problems. Additionally, with stricter requirements imposed by best-practice frameworks and regulatory legislation, companies must find more reliable ways of managing and securely archiving complete log data for compliance purposes and legal protection. Reporting requirements for security information are going to increase, and regulations are sure to call for log data from additional sources. Plan now for the performance to handle growing streams of security information without impacting application performance, and for storage capacity that offers efficient growth paths as enterprise storage requirements grow.

Log files may not be pretty, but they make fantastic partners, working tirelessly in the background, never complaining, always on top! Sometimes they can be difficult to make sense of. A centralised log management system will undoubtedly help.
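To make the normalise-and-report idea concrete, here is a minimal sketch that maps records from two differently formatted sources into one common schema and rolls them up for a summary report. The field names and record formats are invented for illustration.

    import json
    from collections import Counter

    def normalise(raw, source):
        """Map a raw record from a given source into one common event schema (illustrative formats)."""
        if source == "firewall":                    # e.g. "2017-01-29T10:14:07 DROP 203.0.113.9 10.0.0.5 443"
            ts, action, src, dst, port = raw.split()
            return {"time": ts, "source": source, "event": action, "peer": src}
        if source == "windows_json":                # e.g. '{"TimeCreated": "...", "EventID": 4625, "Ip": "..."}'
            rec = json.loads(raw)
            return {"time": rec["TimeCreated"], "source": source,
                    "event": str(rec["EventID"]), "peer": rec.get("Ip", "-")}
        raise ValueError(f"unknown source: {source}")

    def summary(events):
        """Count events per (source, event) pair, the kind of roll-up a scheduled report would email out."""
        return Counter((e["source"], e["event"]) for e in events)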
<urn:uuid:26a046eb-8eb6-458a-b25e-a7eca1dfa71f>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2007/01/29/log-management-lifeblood-of-information-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00283-ip-10-171-10-108.ec2.internal.warc.gz
en
0.929325
1,556
2.796875
3
Nvidia believes its advances in graphics hardware could pave the way for brain-like computing, which could lead to the creation of intelligent computers that can learn and make smarter decisions. The company on Tuesday outlined new graphics products that it said could speed up machine learning processes and make them less expensive. It announced the Titan Z, a US$2,999 graphics card, which has 5,760 CUDA cores, 12GB of memory and offers 8 teraflops of performance.

Titan Z, which fits in a standard desktop, is the "most powerful GPU we've ever built" and it offers "beastly performance," said Nvidia CEO Jen-Hsun Huang during a keynote on Tuesday at the company's GPU Technology Conference, which was webcast from San Jose, California.

The company also unveiled the development kit called Jetson TK1, which Huang called the "world's tiniest supercomputer." It is a prototype board based on the Tegra K1 processor. It will come with Linux, programming tools and samples. Developers can use it to write applications designed to recognize objects and identify structures. Nvidia also hopes the development kit will give more mobile developers access to Nvidia's proprietary CUDA parallel programming tools.

Artificial intelligence will get better with faster graphics processors, which could help computers learn and spit out results faster, Huang said. Clusters of graphics cards could process vast amounts of data for image recognition, face recognition and video search, and provide results faster. For example, a machine-based learning experiment called "Google Brain" was deployed to recognize cats in YouTube videos. The experiment established a neural network of 1 billion connections spread over 16,000 cores. That level of computing now could cost $12,000 with three computers configured with Titan Z and draw just 2,000 watts of power, Huang said.

Adobe is already using machine learning to tune its cloud services closer to users' needs, and China-based Baidu is using GPUs for speech recognition and real-time translation on mobile phones, which Huang said could bring to life the concept of a universal language translator from Star Trek.

The Titan Z has two 2,880-core Kepler GPUs and 12GB of dedicated memory, and can handle 5K gaming. Beyond supercomputing, the Titan Z could also be used for the "ultimate ultra-high definition gaming rig," Nvidia said in a blog entry on Tuesday.

Researchers have struggled in bringing brain-like functionality to chips, and millions of dollars have been poured into building new types of chips and computers that could learn, process in parallel and dynamically rewire. Nvidia did not announce specific availability dates for Titan Z and the Jetson TK1 mobile development board.
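A quick back-of-the-envelope check of the comparison quoted above, treating the article's figures as given and ignoring real-world scaling losses:

    # Figures as quoted in the article; everything else is rough illustration.
    titan_z_tflops = 8.0          # single-precision teraflops per Titan Z
    cards = 3                     # one card in each of three machines
    system_cost_usd = 12_000      # quoted cost of the three-machine setup
    system_power_w = 2_000        # quoted power draw

    total_tflops = titan_z_tflops * cards
    print(f"Aggregate throughput: {total_tflops:.0f} TFLOPS")
    print(f"Cost per TFLOPS:      ${system_cost_usd / total_tflops:,.0f}")
    print(f"Power per TFLOPS:     {system_power_w / total_tflops:.0f} W")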
<urn:uuid:2e9ed698-877b-43ea-9489-76fe30c716fc>
CC-MAIN-2017-09
http://www.itworld.com/article/2697584/hardware/nvidia-pushes-brain-like-computing-with-new-graphics-products.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00459-ip-10-171-10-108.ec2.internal.warc.gz
en
0.944575
596
2.59375
3
Big Data A Big Backup Challenge

Backing up Big Data requires a system that is fast, cost effective, and reliable. These are conflicting terms in the world of storage. Big Data is, well, big, and size is not the only challenge it places on backup. It also is a backup application's worst nightmare because many Big Data environments consist of millions or even billions of small files. How do you design a backup infrastructure that will support the Big Data realities?

First, examine what data does not have to be backed up at all because it can be easily regenerated from another system that is already being backed up. A good example is report data generated from a database. Once this data is identified, exclude it.

Next, move on to the real problem at hand--unique data that can't be re-created. This is often discrete file data that is fed into the environment via devices or sensors. It is essentially point-in-time data that can't be regenerated. This data is often copied within the Big Data environment so that it can be safely analyzed. As a result, there can be a fair amount of redundancy in the Big Data environment. This is an ideal role for disk backup devices. They are better suited for the small file transfers and, with deduplication, can eliminate redundancy and compress much of the data to optimize backup capacity. Effective optimization is critical since Big Data environments are measured in the 100's of terabytes and will soon be measured in the dozens of petabytes.

It is also important to consider just how far you want to extend disk backup's role in this environment. Clearly deduplicated disk is needed, but it probably should be used in conjunction with tape--not in replacement of it. Again, often a large section of this data can't be regenerated. Loss of this data is permanent and potentially ruins the Big Data sample. You can't be too careful and, at the same time, you have to control capacity costs so that the value of the decisions that Big Data allows is not overshadowed by the expense of keeping the data that supports them.

We suggest a Big Data backup strategy that includes a large tier of optimized backup disk to store the near-term data set for as long as possible, seven to 10 years' worth of data being ideal, then using tape for the decades' worth of less frequently accessed data. Alternatively you could go with the suggestion we made in a recent article "Tape's Role in Big Data" and combine the two into a single active archive--essentially a single file system that seamlessly marries all of these media types. This would consist of fast but low capacity (by Big Data standards) primary disk for data ingestion and active analytical processing, optimized disk for more near-term data that is not being analyzed at that moment, and tape for long-term storage. In this environment data can be sent to all tiers of storage as it is created or modified so that fewer or even no backups need to be done.

Big Data is a big storage challenge, not only to store the data but to put it on a fast enough platform that meaningful analytics can be run while, at the same time, being cost effective and reliable. These are conflicting terms in the world of storage. Resolving that conflict is going to require a new way of doing things.

Follow Storage Switzerland on Twitter

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Storage Switzerland's disclosure statement.
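The deduplication idea discussed above can be illustrated with a minimal content-hash sketch: chunk each file, store each unique chunk once, and keep only references for repeats. The chunk size and storage layout here are arbitrary choices for illustration.

    import hashlib

    CHUNK = 4 * 1024 * 1024   # 4 MB fixed-size chunks; real products use smarter, variable-size chunking

    def dedupe(paths):
        """Return (unique chunk store, per-file chunk references)."""
        store = {}       # sha256 digest -> chunk bytes (stands in for the backup appliance's store)
        manifests = {}   # path -> ordered list of digests needed to rebuild the file
        for path in paths:
            refs = []
            with open(path, "rb") as src:
                while True:
                    chunk = src.read(CHUNK)
                    if not chunk:
                        break
                    digest = hashlib.sha256(chunk).hexdigest()
                    store.setdefault(digest, chunk)   # stored once, no matter how often it recurs
                    refs.append(digest)
            manifests[path] = refs
        return store, manifests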
<urn:uuid:b41e8301-b0c2-4118-9a1d-285c5e599390>
CC-MAIN-2017-09
http://www.darkreading.com/database-security/big-data-a-big-backup-challenge/d/d-id/1098260?piddl_msgorder=asc
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171078.90/warc/CC-MAIN-20170219104611-00579-ip-10-171-10-108.ec2.internal.warc.gz
en
0.957573
712
2.59375
3
Security (and Privacy) From HTML5

Most HTML5 security checklists rehash the recommendations and warnings from the specs themselves. It's always a good sign when specs acknowledge security and privacy. Getting to that point isn't trivial.

There were two detours on the way to HTML5. WAP was a first stab at putting the web on mobile devices when mobile devices were dumb. And one of its first failings was the lack of cookie support. XHTML was another blip on the radar. Its only improvement over HTML seemed to be that mark-up could be parsed under a stricter XML interpreter so typos would be more easily caught. XHTML caught on as a cool thing to do, but most sites served it with a text/html MIME type that completely negated any difference from HTML in the first place. Herd mentality ruled the day on that one.

CSRF and clickjacking are called out as security concerns in the HTML5 spec. For some developers, that may have been the first time they heard about such vulns even though they're fundamental to how the web works. They're old, old vulns. The good news is that HTML5 has some design improvements that might relegate those vulns to history.

The <video> element doesn't speak to security; it highlights the influence of non-technical concerns for a standard. The biggest drama around this element was choosing whether an explicit codec should be mandated.

WebGL is an example of pushing beyond the browser into graphics cards. The hardware for these cards doesn't care about Same Origin Policy or even security, for that matter. Early versions of the spec had two major problems: Denial of Service and Information Leakage. It was refreshing to see privacy (information leakage) receive such attention. As a consequence of these risks, browsers pulled support. Early implementation allowed researchers to find these problems and improve WebGL. Part of its revision included attachment to another HTML5 security policy: Cross Origin Resource Sharing (CORS).

Like WebGL, the WebSocket API is another example where browsers implemented an early draft, revoked support due to security concerns, and now offer an improved version. For example, WebSockets include a handshake and masking to prevent the kind of cross-protocol attacks that caused early web browsers to block access to ports like SMTP and telnet.

These examples show us a few things. One, we shouldn't be surprised at the tensions from competing desires during the drafting process. Two, secure design takes time. (Remember PHP?) And three, browser developers are pushing the curve on security.

So is there really an HTML5 injection? What terrible flaws does the new standard contain that its predecessors did not? Not much. An important improvement from HTML5 is that parsing HTML documents is codified with instructions on order of operations, error handling, and fixup steps. A large portion of XSS history involves payloads that exploit browser quirks or bizarre parsing rules.

There's a cynical perspective that HTML5 will bring a brief period of worse XSS problems by developers who embrace HTML5's enhanced form validation while forgetting to apply server-side validation. There's nothing misleading about HTML5's approach to this. More pre-defined <input> types and client-side regexes improve the user experience. It's not intended to be a security barrier. It's a usability enhancement, especially for browsers on mobile devices.

The Content Security Policy (CSP) introduces design-level countermeasures for vulns like XSS.
CSP moved from a Mozilla project to a standards track for all browsers to implement. A smart design choice is providing monitor and enforcement modes. Its implementation will likely echo that of early web app firewalls. CSP complexity has the potential to break sites. Expect monitor mode to last for quite a while before sites start enforcing rules. The ability to switch between monitor and enforce is a sign of design that encourages adoption: make it easier for devs to test policies over time.

HTML injection deserves emphasis since it's the most pervasive problem for web apps. But it's not the only problem for web apps. Other pieces of HTML5 have equally serious concerns.

The Web Storage API adds key-value storage to the browser. It's effectively a client-side database. Avoid the immediate jump to SQL injection whenever you hear the word database. Instead, consider the privacy implications of Web Storage. We must be concerned about privacy extraction, not SQL injection. Web Storage has already been demonstrated as yet another tool for insinuating supercookies into the browser. In an era when developers still neglect to encrypt passwords in server-side databases, consider the mistakes awaiting data placed in browser databases: personal information, credit card numbers, password recovery, and more. And all of this just an XSS away from being exfiltrated. XSS isn't the only threat. Malware has already demonstrated the inclination to scrape hard drives for financial data, credentials, keys, etc. An unencrypted store of 5MB (or more!) of data is an appealing target. Woe to the web developer who thinks Web Storage is a convenient place to store a user's password.

The WebSocket API entails a different kind of security. The easy observation is that it should use wss:// in favor of ws://, just like HTTPS should be everywhere. The subtler problem lies with the protocol layered over a WebSocket connection. Security from controls like HTTPS, Same Origin, and session cookies doesn't automatically transfer to WebSockets. For example, consider a simple chat protocol. Each message includes the usernames for sender and recipient. If the server just routes messages based on usernames without verifying that the sender's name matches the WebSocket they initiated, then it'd be trivial to spoof messages. Or consider if the app does verify the sender and recipient, but users' session cookies are used to identify them. If the recipient receives a message packet that contains the sender's session ID — well, I hope you see the insecurity there.

If there's one victim of the HTML5 arms race, it's the browser exploit. Not that they've disappeared, but that they've become more complex. A byproduct of keeping up with (relatively) quickly changing drafts is that modern browsers are quicker to update. More importantly, self-updating brings with it a set of features like plugin sandboxing, process separation, and even rudimentary XSS protection. Whatever your choice of browser, the only version number you need any more is HTML5. That's the desire. In practice, accelerating browser updates isn't going to adversely affect the pwn-to-own and exploit communities any time soon. IE6 refuses to disappear from the web. Qualys' BrowserCheck stats show that browsers still tend to be out of date. But worse, the plugins remain out of date even if the browser is patched. In other words, Flash and Java deserve fingerpointing for being responsible for exposing security holes. When was the last time Adobe released a non-critical Flash update?
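Tying back to the monitor-versus-enforce discussion of CSP above, here is a minimal Flask-style sketch of serving a policy in report-only mode first and flipping to enforcement later. The policy string and report endpoint are placeholder choices, not a recommendation from the article.

    from flask import Flask, request

    app = Flask(__name__)
    ENFORCE = False   # start in monitor mode; flip once violation reports quiet down
    POLICY = "default-src 'self'; script-src 'self'; report-uri /csp-report"

    @app.after_request
    def add_csp(response):
        header = "Content-Security-Policy" if ENFORCE else "Content-Security-Policy-Report-Only"
        response.headers[header] = POLICY
        return response

    @app.route("/csp-report", methods=["POST"])
    def csp_report():
        # Browsers POST a JSON violation report here in either mode.
        print(request.get_json(force=True, silent=True))
        return ("", 204)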
The browser needs the complicity of sites in order for a feature like X-Frame-Options to matter. It’s one thing to scrutinize the design of a half-dozen or so web browsers. It’s quite another to consider the design of millions and millions of web sites.
<urn:uuid:741304d0-48cb-4889-8f92-bf472db48c04>
CC-MAIN-2017-09
https://deadliestwebattacks.com/2012/05/28/html5-unbound-part-3-of-4/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171775.73/warc/CC-MAIN-20170219104611-00279-ip-10-171-10-108.ec2.internal.warc.gz
en
0.906771
1,558
2.640625
3
Maintaining the Active Directory Environment These questions are based on 70-649 – TS. Objective: Maintaining the Active Directory Environment Sub-Objective: Configure backup and recovery Single Answer, Multiple Choice You are the systems administrator for your company. You install Windows Server 2008 on a computer and configure it as a file server, named FileSrv. The FileSrv computer contains four hard disks that are configured as basic disks. You want to configure Redundant Array of Independent Disks (RAID) 0+1 on FileSrv for performance and fault tolerance of data. To achieve this, you need to convert the basic disks in FileSrv to dynamic disks. Which command should you use? You should use the Diskpart.exe command. RAID is commonly implemented for both performance and fault tolerance. There are various RAID levels that you can choose from to provide fault tolerance, performance or both. RAID 0 uses disk striping and offers the fastest read and write performance, but it does not offer any fault tolerance. If a single disk in a RAID 0 array is lost, all data is lost and will need to be recovered from backup. RAID 1 uses disk mirroring with two disks. This configuration produces slow writes, but relatively quick reads, and it provides a means to maintain high data availability on servers because a single disk can be lost without any loss of data. RAID 0+1 combines RAID 0 and RAID 1 and offers the performance of RAID 0 and the fault tolerance of RAID 1. To be able to configure RAID 0+1, you must have dynamic disks. If your disks are configured as basic disks, you can convert them to dynamic disks with the Diskpart.exe utility. The Diskpart utility enables a superset of the actions that are supported by the Disk Management snap-in. You can use the Diskpart convert dynamic command to change a basic disk into a dynamic disk. The Chkdsk.exe command cannot be used to convert a basic disk to dynamic disk. Chkdsk.exe is a command-line utility that creates and displays a status report for a disk based on the file system. The Chkdsk utility also lists and corrects errors on the disk. You should not use the Fsutil.exe command. Fsutil.exe is a command-line utility that can be used to perform many FAT and NTFS file system related tasks, such as managing reparse points, managing sparse files, dismounting a volume or extending a volume. The Fsutil utility cannot be used to convert a basic disk to dynamic disk. The Fdisk.exe command cannot be used to convert a basic disk to dynamic disk. Fdisk.exe is a command-line utility that can be used to partition a hard disk. You can use the Fdisk utility to create, change, delete or display current partitions on the hard disk and to assign a drive letter to each allocated space on the hard disk. Windows Server TechCenter > Windows Server 2003 Technical Library > Windows Server 2003: Deployment > Windows Server 2003 Deployment Guide > Planning Server Deployments > Planning for Storage > Planning for Fault Tolerance > Achieving Fault Tolerance by Using RAID
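As a sketch of how the Diskpart conversion described above might be scripted rather than typed interactively, the snippet below writes a diskpart script and runs it with diskpart /s. The disk number is an assumption, the command needs an elevated prompt, and converting back to basic later requires deleting volumes, so treat this purely as illustration.

    import subprocess
    import tempfile

    DISK_NUMBER = 1   # hypothetical disk number; confirm with "list disk" before converting

    # diskpart reads its commands from a script file when invoked with /s
    commands = f"select disk {DISK_NUMBER}\nconvert dynamic\n"

    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as script:
        script.write(commands)
        script_path = script.name

    # Requires an elevated prompt on Windows; test on a non-production disk first.
    subprocess.run(["diskpart", "/s", script_path], check=True)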
<urn:uuid:b5695127-84fb-487b-a03b-3eaf95edd2bf>
CC-MAIN-2017-09
http://certmag.com/maintaining-the-active-directory-environment/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00455-ip-10-171-10-108.ec2.internal.warc.gz
en
0.854718
666
2.515625
3
'Janitor satellite' would clean up space junk
- By Kevin McCaney - Feb 24, 2012

Swiss scientists are building a "janitor satellite" to help clean up the burgeoning problem of space junk, but unless the project catches on and attracts a lot of customers, it might not make much of a dent in the problem.

A team at the Swiss Space Center at EPFL in Lausanne is developing CleanSpace One, the first of what they plan to be a family of space junk-cleaning satellites that will capture debris orbiting the Earth and bring it back to burn up in the atmosphere.

The $11 million project is dealing with a real concern. Inoperable satellites, broken pieces of spacecraft and other debris have been gathering in the upper atmosphere for 50 years. NASA tracks more than 16,000 pieces larger than 10 centimeters in diameter and estimates there are 500,000 pieces smaller than that.

At orbiting speeds near 17,500 mph, even small pieces can cause damage. In February 2009, a U.S. Iridium satellite was destroyed when it collided with a defunct Russian satellite. Crews in the International Space Station frequently must adjust their orbit to avoid incoming junk and in June 2011 had to take shelter inside two Soyuz capsules when debris got a little too close. In September 2011, a railroad car-sized UARS satellite fell to Earth, though it landed safely in the ocean.

That same month, the U.S. National Research Council of the National Academies issued a report saying that the space junk problem had reached a "tipping point" at which low-Earth orbit was becoming too risky. Even though spacecraft can maneuver to avoid large pieces of debris, orbital paths are becoming dangerously cluttered with debris of all sizes, the report states.

In trying to tackle the problem, scientists at the Swiss Space Center are starting small, although they have some technical challenges to work out. CleanSpace One, which in EPFL illustrations looks like a storage trunk, will be 30-by-10-by-10 centimeters (a foot long and about four inches on each of the other sides), according to EPFL's announcement of the project. EPFL hopes to launch it within three to five years, and its first target will be one of two out-of-service Swisscube picosatellites launched in 2009 and 2010.

After its launch, CleanSpace One will have a tricky job. It will have to adjust its trajectory, using a new type of ultra-compact motor designed for use in space, to match its target's orbital plane while at orbiting speed, EPFL said. It will then reach out with a gripping claw, grab the target and pull it back in. Then CleanSpace One will "de-orbit" the target by returning to the Earth's atmosphere, where they will both burn up.

One down, 499,999 to go — or so it might seem at first. But although the maiden voyage might seem like cleaning up after Mardi Gras by taking trash to the landfill one piece at a time, EPFL is using the first flight to develop the prototype for a fleet of space janitors it expects to be "designed as sustainably as possible [and] able to de-orbit several different kinds of satellites," Swiss Space Center Director Volker Gass said in the center's announcement. The center is hoping to sell such satellites once they get into larger scale production and, if customers find the price is right, it could mark a step toward cleaning up the orbital paths.

The Swiss aren't the only ones looking to solve the problem, though.
NASA is researching a couple of ideas: using solar sails to power orbiting cleanup crafts, and using lasers to slow down debris enough that it would re-enter the atmosphere and burn up. And the Defense Advanced Research Projects Agency is working on the possibility of using space-borne robots that would salvage parts from old satellites to build new ones while in orbit. In the meantime, NASA and the Air Force have been working on improving their ability to track debris, in order to better avoid space traffic accidents. Kevin McCaney is a former editor of Defense Systems and GCN.
<urn:uuid:a3e8c172-060d-455b-ac8a-f5b55ed5e7c0>
CC-MAIN-2017-09
https://gcn.com/articles/2012/02/24/swiss-janitor-satellite-to-clean-up-space-junk.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170614.88/warc/CC-MAIN-20170219104610-00575-ip-10-171-10-108.ec2.internal.warc.gz
en
0.94335
899
2.703125
3
Last Tuesday, President Barack Obama drank from a Lincoln cup to invoke the 16th president, and took the oath of office on the Lincoln Bible. If Obama had wanted to invoke another past president, say, Bill Clinton, perhaps he should have traded in his BlackBerry for an Apple Newton MessagePad. Or if the president wanted to reference Lyndon B. Johnson's presidency, he could have lugged an IBM 2361 Storage Unit onto the Capitol steps. It might not be as emotionally powerful (or compact) as the Lincoln Bible, but at least it would resonate for IT personnel. Every president has taken office amidst new technological innovations, from the creation of the supercomputer to the success of the BlackBerry and the iPhone. And with the inauguration of each president, it is up to the government to decide which innovations will be utilized by the public sector and which will be left by the wayside. Here are IT highlights during the time of each presidential inauguration starting with Kennedy, along with a few other historical facts. Year of Inauguration: 1961 Information Technology Highlight: IBM Stretch Computing System. The computer pioneered advanced systems such as pipelining, the transistor and the byte, making multiprogramming possible. Originally priced at $13.5 million, it was the fastest computer until 1964. Government Use of IT: The U.S. National Security Agency used the IBM Stretch as its main central processing unit (CPU), while agencies such as the Weather Bureau and the Navy also utilized the system. Size of Federal Government Budget: $100 billion No. 1 Movie: West Side Story No. 1 Song: Bobby Lewis, Tossin' and Turnin' Year of Inauguration: 1965 Information Technology Highlight: IBM 2361 Storage Unit. The IBM 2361 was the largest computer memory ever built, with 16 times the capacity of any previous IBM memory. Government Use of IT: NASA Space Station used IBM 2361s to process large quantities of information used by NASA-based flight controllers for the Apollo and Gemini missions. Size of Federal Government Budget: $120 billion No. 1 Movie: The Sound of Music No. 1 Song: Sam the Sham and the Pharaohs, Wooly Bully Year of Inauguration: 1969 Information Technology Highlight: ARPANET (Advanced Research Project Agency network). ARPANET, the predecessor of the Internet, was developed as a military computer network. Government Use of IT: ARPANET sites were created by the Department of Defense and the Federal Reserve Board. Size of Federal Government Budget: $180 billion No. 1 Movie: Midnight Cowboy No. 1 Song: The Archies, Sugar, Sugar Year of Inauguration: 1974 Information Technology Highlight: Cray-1. Developed in 1974 but not installed until 1976, this supercomputer became one of the best and most successful in history. Government Use of IT: Agencies such as the U.S. Department of Energy, as well as various university laboratories, used the Cray-1A for modeling climate and severe storms. Size of Federal Government Budget: $270 billion No. 1 Movie: The Godfather Part II Song: Barbra Streisand, The Way We Were Year of Inauguration: 1977 Information Technology Highlight: Apple II. This system was the first successful microcomputer produced for the public market. The computer originally cost $1298. Government Use of IT: The Apple II became the standard computer used in the public education system, particularly with the release of the spreadsheet VisiCalc in 1979. Size of Federal Government Budget: $410 billion No. 1 Movie: Annie Hall No. 
1 Song: Rod Stewart, Tonight's the Night (Gonna Be Alright) Year of Inauguration: 1981 Information Technology Highlight: Osborne I; IBM PC. The dawn of personal computing in the workplace was about to sweep the country. Government Use of IT: Government was slow to embrace the "microcomputer" era, instead sticking with mainframes for the most part; some smaller governments started to use minicomputers. Size of Federal Government Budget: $680 billion No. 1 Movie: Chariots of Fire No. 1 Song: Kim Carnes, Bette Davis Eyes Year of Inauguration: 1989 Information Technology Highlight: World Wide Web; Desktop Publishing. Two tech acronyms -- WWW and WYSIWYG -- come to symbolize the fast-paced promise of change brought on by computers. Government Use of IT: Although e-mail had been around in proprietary formats, the combination of the Internet and commercial services, such as CompuServe, significantly increased its use in both the public and private sectors. Size of Federal Government Budget: $1.1 trillion No. 1 Movie: Rain Man No. 1 Song: Chicago, Look Away Year of Inauguration: 1993 Information Technology Highlight: Apple Newton MessagePad. This early PDA had handwriting recognition, fax and printing support, and other capabilities. Commercially it was a dud. Government Use of IT: Client-server began its rapid rollout in government, decentralizing data and data centers to agencies and departments. Size of Federal Government Budget: $1.4 trillion No. 1 Movie: Schindler's List No. 1 Song: Whitney Houston, I Will Always Love You Year of Inauguration: 2001 Information Technology Highlight: BlackBerry. The BlackBerry smartphone supports e-mail, text messaging, Web browsing and mobile phone service. Millions of business and government executives embrace (and pray to) this new wireless tool. Government Use of IT: The number of e-government applications grows as the public sector attempts to put as much information (and the occasional service) as possible on the Web. Size of Federal Government Budget: $1.9 trillion No. 1 Movie: A Beautiful Mind No. 1 Song: Lifehouse, Hanging by a Moment Year of Inauguration: 2009 Information Technology Highlight: Cloud Computing. This new innovation involves storing data and obtaining resources "as a service" through a shared "cloud" of Internet servers. Government Use of IT: Government consolidates data centers, shares services and uses business analytic tools, all in an effort to reduce costs while improving IT's public value. Size of Federal Government Budget: $3.1 trillion No. 1 Movie: Slumdog Millionaire No. 1 Song: Beyonce, Single Ladies (Put a Ring on It)
<urn:uuid:3a401beb-ecc1-4b8f-acb7-435c5ec49758>
CC-MAIN-2017-09
http://www.govtech.com/pcio/From-Kennedy-to-Obama-Inaugural-Technology.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170914.10/warc/CC-MAIN-20170219104610-00099-ip-10-171-10-108.ec2.internal.warc.gz
en
0.88474
1,364
3.03125
3
This week has been laser week at the Defense Advanced Research Projects Agency, with two very different laser-based programs hitting major milestones: an inexpensive array of lasers on a single chip that can be used as sensors on drones and robots, and a killer laser system that could blow up missiles, shells, and possibly vehicles and people.

Yesterday, DARPA announced the successful test of a single-chip laser detection and ranging system that makes it possible to build inexpensive, lightweight, short-range "phased array" LADAR that could be mounted on small unmanned aircraft, robots, and vehicles. The technology could bring low-cost, solid-state, high-resolution 3D scanning to a host of devices in the near future.

Called SWEEPER (Short-range Wide-field-of-view Extremely agile Electronically steered Photonic EmitteR), the sensor technology embeds thousands of laser-emitting dots microns apart on a silicon chip—creating a "phased array" optical scanning system that can scan rapidly across a 51-degree arc without the need for mechanical rotation. In the latest test, the system was able to scan back and forth across that entire arc more than 100,000 times per second. Like the phased-array radars the military uses on Aegis missile ships and other air tracking systems, SWEEPER doesn't require hardware stabilization. And because of its miniaturization and low power, it could find its way into a wide range of commercial applications—everything from self-driving cars to ultra-high-speed data transmission. It could also turn small surveillance drones into real-time tactical sensors, mapping the battlefield for troops and alerting them of movement—or even be mounted on troops themselves.

Reach out and zap someone

And for those wanting a little more pew-pew in their lasers, DARPA announced that HELLADS—the High-Energy Liquid Laser Area Defense System, developed by General Atomics—had hit benchmarks for laser power and beam quality and was ready for field tests at White Sands Missile Range this summer. HELLADS will be used in ground tests against "rockets, mortars, vehicles and surrogate surface-to-air missiles," a DARPA spokesperson said in a prepared statement. Depending on the results, it could move to vehicle-based tests by next year.

The system combines multiple 75-kilowatt laser modules to create a 150 to 300 kW tactical laser weapon. For comparison's sake, the Navy's Laser Weapon System (LaWS), currently deployed on the USS Ponce, is capable of generating a beam of up to 30 kW—enough to explode warheads, fry electronics, and overheat engines on drones, small aircraft, and small boats. The HELLADS beam is 5 to 10 times more powerful. Unlike other directed-energy weapons, HELLADS' modules can be powered by lithium-ion batteries, which makes it possible to eventually mount it on vehicles or aircraft. The system's desired size is less than three cubic meters—too big for a small drone, but easily mounted on a ship, ground vehicle, or larger aircraft. The batteries are the only real limiting factor on HELLADS' operation; in a shipboard scenario, they could be quickly recharged by the ship's generators or draw power directly from them—a design consideration of the all-electric USS Zumwalt (DDG-1000) class.
And while it's not nearly the output of the 1-megawatt chemical-laser-based Airborne Laser tested aboard a Boeing 747 by the Missile Defense Agency (or the assassination laser Val Kilmer built in Real Genius), 300 kW could be enough to cut through a ship's superstructure and fry more than some electronics. General Atomics executives have said that the company is working on future generations of the lasers used in HELLADS for other Army and Navy laser weapon programs. An improved 150-kW version is planned to be tested aboard an Arleigh Burke (DDG-51) class destroyer as an improved point defense system against aircraft, missiles, and small boats in 2018, and another version is being packaged as a contender for Boeing's High Energy Laser Mobile Demonstrator (HEL MD) for the Army. The third-generation version of the laser system has been designed to fit aboard the upcoming Predator C Avenger drone.
<urn:uuid:efab32c0-b063-491e-bfcc-03f840068517>
CC-MAIN-2017-09
https://arstechnica.com/information-technology/2015/05/darpa-laser-research-boosts-airborne-death-rays-tiny-laser-scanners/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170914.10/warc/CC-MAIN-20170219104610-00099-ip-10-171-10-108.ec2.internal.warc.gz
en
0.935698
886
2.890625
3
Johnson O.W.,Montana State University | Fielding L.,BYU Hawaii | Fisher J.P.,U.S. Fish and Wildlife Service | Gold R.S.,BYU Hawaii | And 10 more authors. Wader Study Group Bulletin | Year: 2012 We used light level archival geolocators (data loggers) to track annual migrations of Pacific Golden-Plovers Pluvialis fulva at non-breeding grounds on American Samoa and Saipan, and at nesting grounds near Nome in W Alaska. Among wintering birds, we deployed loggers in spring 2010 and recovered them during the 2010-2011 nonbreeding season when the site-faithful birds had returned; deployment on breeding birds was in summer 2009 and 2010, logger recovery in each group was one year later when the plovers were again nesting. Logger archives from American Samoa and Nome birds revealed a clockwise, circular transoceanic pattern (previously unknown in this species) consisting of three lengthy movements: 1. southward from Alaska in autumn via the mid-Pacific Flyway (American Samoa birds wintered at the same sites where they had been captured, Nome birds wintered variously at Christmas Island, Marshall Islands, Gilbert Islands, Fiji, and Fraser Is., Queensland); 2. in spring, the plovers traveled north-westward to Japan (the track from Fraser Is. was via Taiwan) where they made stopovers averaging about three weeks; 3. from Japan and Taiwan, the final segment was north-eastward to nesting grounds in Alaska. Great circle distances along this annual clockwise journey varied with location of wintering grounds ranging from about 16,000 to 26,700 km. Flights on each of the three segments appeared to be mostly nonstop at estimated mean ground speeds of 59-78 kph over periods of about 3-8 days. Three individuals made transoceanic passages at apparent record-setting ground speeds in excess of 100 kph. In spring, the Saipan birds followed the East Asian-Australasian Flyway with stopovers in Japan and elsewhere in Asia before arriving at nest sites in Chukotka and Kamchatka. Two Saipan birds made long over-water flights from Japan to W Alaska. One of them traveled from the Seward Peninsula to Chukotka and nested there. Where the other bird nested is uncertain because its geolocator failed. In fall, the individual that had reached Chukotka via Alaska backtracked and made a flight from Alaska across the western Pacific to Saipan. The other Saipan birds returned via mainland Asia and Japan. Our findings indicate that Japan is a key stopover (especially in spring when plovers from widely separated areas of the winter range converge there), and demonstrate that Alaska hosts a breeding population of Pacific Golden-Plovers comprised of birds from most if not all of the Pacific winter range. Source
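A minimal sketch of the great-circle arithmetic behind distance and ground-speed estimates like those reported above; the coordinates and flight duration below are made-up placeholders, not the study's data.

    from math import radians, sin, cos, asin, sqrt

    EARTH_RADIUS_KM = 6371.0

    def great_circle_km(lat1, lon1, lat2, lon2):
        """Haversine distance between two latitude/longitude fixes, in kilometres."""
        p1, p2 = radians(lat1), radians(lat2)
        dphi = radians(lat2 - lat1)
        dlmb = radians(lon2 - lon1)
        a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
        return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

    # Placeholder example: roughly Nome, Alaska to a central Pacific wintering site.
    distance = great_circle_km(64.5, -165.4, 1.9, -157.4)
    flight_hours = 4 * 24          # assume a four-day nonstop flight
    print(f"{distance:.0f} km at about {distance / flight_hours:.0f} km/h ground speed")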
<urn:uuid:b8fff626-c4b2-4581-8b82-cf3d15483efd>
CC-MAIN-2017-09
https://www.linknovate.com/affiliation/byu-hawaii-2004967/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170914.10/warc/CC-MAIN-20170219104610-00099-ip-10-171-10-108.ec2.internal.warc.gz
en
0.93199
607
2.734375
3
Infrastructure as a Service (IaaS) In the most basic cloud-service model providers of IaaS offer computers – physical or (more often) virtual machines – and other resources. IaaS clouds often offer additional resources such as a virtual-machine disk image library, raw block storage, and file or object storage, firewalls, load balancers, IP addresses, virtual local area networks (VLANs), and software bundles. IaaS-cloud providers supply these resources on-demand from their large pools installed in data centers. For wide-area connectivity, customers can use either the Internet or carrier clouds (dedicated virtual private networks). To deploy their applications, cloud users install operating-system images and their application software on the cloud infrastructure. In this model, the cloud user patches and maintains the operating systems and the application software. Cloud providers typically bill IaaS services on a utility computing basis: cost reflects the amount of resources allocated and consumed. When you're looking to break into the basic levels of cloud service, look to Abacus for the following IaaS options: Platform as a Service (PaaS) In the PaaS models, cloud providers deliver a computing platform, typically including operating system, programming language execution environment, database, and web server. Application developers can develop and run their software solutions on a cloud platform without the cost and complexity of buying and managing the underlying hardware and software layers. - Abacus i Cloud (Enterprise i Cloud with optional Operations Management Upgrade) A remote, online, or managed backup service, sometimes marketed as cloud backup or backup-as-a-service, is a service that provides users with a system for the backup, storage, and recovery of computer files. Online backup providers are companies that provide this type of service to end users (or clients). Such backup services are considered a form of cloud computing. Online backup systems are typically built around a client software program that runs on a schedule, typically once a day, and usually at night while computers aren't in use. This program typically collects, compresses, encrypts, and transfers the data to the remote backup service provider's servers or off-site hardware. Abacus can offer a secure and reliable backup service for your business data needs with: Disaster Recovery as a Service (DRaaS) Disaster Recovery as a Service (DRaaS) is the replication and hosting of physical or virtual servers by a third-party to provide failover in the event of a man-made or natural catastrophe. Typically, DRaaS requirements and expectations are documented in a service-level agreement (SLA) and the third-party vendor provides failover to a cloud computing environment, either through a contract or pay-per-use basis. In the event of an actual disaster, an offsite vendor will be less likely than the enterprise itself to suffer the direct and immediate effects, allowing the provider to implement the disaster recovery plan even in the event of the worst-case scenario: a total or near-total shutdown of the affected enterprise. In the event of an emergency, you need a reliable partner to get your business back online. Abacus can offer you the following: Software as a Service (SaaS) In the business model using software as a service (SaaS), users are provided access to application software and databases. Cloud providers manage the infrastructure and platforms that run the applications. 
SaaS is sometimes referred to as "on-demand software" and is usually priced on a pay-per-use basis or using a subscription fee.
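A minimal sketch of the collect-compress-encrypt-transfer cycle described in the backup section above, using a symmetric key purely for illustration; the paths, key handling and upload step are assumptions rather than any particular vendor's behaviour.

    import tarfile
    from pathlib import Path
    from cryptography.fernet import Fernet   # third-party "cryptography" package

    def backup(source_dir, archive_path, key):
        """Collect and compress a directory, then encrypt the archive with the given key."""
        with tarfile.open(archive_path, "w:gz") as tar:       # collect + compress
            tar.add(source_dir, arcname=Path(source_dir).name)
        ciphertext = Fernet(key).encrypt(Path(archive_path).read_bytes())   # encrypt
        encrypted_path = str(archive_path) + ".enc"
        Path(encrypted_path).write_bytes(ciphertext)
        return encrypted_path

    # key = Fernet.generate_key()  -> store this safely; without it the backup cannot be restored
    # upload(encrypted_path)       -> hypothetical transfer step (SFTP, S3, a provider's API, ...)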
<urn:uuid:e38a2ab5-2a07-41c7-85b7-e13621a9a835>
CC-MAIN-2017-09
http://abacusllc.com/Cloud/Cloud-101.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171166.18/warc/CC-MAIN-20170219104611-00623-ip-10-171-10-108.ec2.internal.warc.gz
en
0.917606
733
2.859375
3
The developers of the Neutrino exploit kit have added a new feature intended to thwart security researchers from studying their attacks. The feature was discovered after Trustwave's SpiderLabs division found computers they were using for research couldn't make a connection with servers that delivered Neutrino. "The environment seems completely fine except for when accessing Neutrino," wrote Daniel Chechik, senior security researcher. Exploit kits are one of the most effective ways that cybercriminals can infect computers with malware. They find vulnerable websites and plant code that transparently connects with another server that tries to exploit software vulnerabilities. If the server finds a hole, the malware is delivered, and the victim is none the wiser. Exploit kits are also sometimes delivered by malicious online ads in attacks known as malvertising. Malware writers and cyberattackers have long used methods to try to stop security engineers from studying their methods. For example, some malware programs are designed to quit if they're running in a virtual machine. Chechik wrote that Trustwave tried changing IP addresses and Web browsers to avoid whatever was causing the Neutrino server to not respond, but it didn't work. But by fiddling with some data traffic that Trustwave's computers were sending to the Neutrino server, they figured out what was going on. Neutrino has been engineered to use passive OS fingerprinting, which is a method to collect and analyze data packets without the entity that is sending the packets knowing their computers are being profiled. In this case, the computer sending the packets is a security researcher's system that's probing the hackers' server. Passive OS fingerprinting captures "traffic coming from a connecting host going to the local network," according to a post on the SANS blog. "The fingerprinting can then be conducted without the remote host being aware that its packets are being captured." Michal Zalewski, a security researcher who works for Google, wrote a tool specifically for passive OS fingerprinting. It offers attackers the advantage of stealth, since active OS fingerprinting -- which involves sending direct traffic to another network -- can trigger alerts from firewalls and intrusion detection equipment. Chechik wrote that Neutrino appears to be using passive OS fingerprinting in order to shut down connections coming from Linux machines, which are likely to be used by security researchers. "This approach generally reduces their exposure to anything from automated scans to unwanted security researchers," he wrote. It's a smart move by Neutrino's developers, because if a server doesn't respond, it's generally considered down. "It is very likely that this behavior would simply be written off as a dead server and Neutrino would achieve its goal of being left alone by anyone who isn't a potential victim," Chechik wrote.
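To make the passive-fingerprinting idea concrete, here is a minimal sketch (not Neutrino's actual code) that watches TCP SYN packets with Scapy and applies a crude, p0f-style heuristic based on the initial TTL; real tools combine many more signals, and the values here are illustrative defaults only.

    from scapy.all import sniff, IP, TCP   # requires the scapy package and capture privileges

    def guess_os(pkt):
        """Very rough OS guess from the initial TTL of a SYN packet (p0f uses many more signals)."""
        ttl = pkt[IP].ttl
        if ttl <= 64:
            return "likely Linux/Unix (default TTL 64)"
        if ttl <= 128:
            return "likely Windows (default TTL 128)"
        return "likely network gear / other (default TTL 255)"

    def on_syn(pkt):
        if pkt.haslayer(TCP) and pkt[TCP].flags & 0x02:   # SYN bit set
            print(pkt[IP].src, "window", pkt[TCP].window, "->", guess_os(pkt))

    # Passive: we only listen; the connecting host never sees a probe.
    sniff(filter="tcp[tcpflags] & tcp-syn != 0", prn=on_syn, store=0)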
<urn:uuid:8e6a2f74-f532-4c6c-b987-e5c688711fb8>
CC-MAIN-2017-09
http://www.itnews.com/article/3030417/the-neutrino-exploit-kit-has-a-new-way-to-detect-security-researchers.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171834.68/warc/CC-MAIN-20170219104611-00323-ip-10-171-10-108.ec2.internal.warc.gz
en
0.972545
586
2.984375
3
LIDAR mapping projects explore the fourth dimension
- By Patrick Marshall - Mar 14, 2013

Project managers appear to be in consensus about the lessons learned from the first generation of light detection and ranging (LIDAR) mapping, which uses the data gathered from bounced light to create detailed, 3D maps. For one thing, it's not just three-dimensional. The impacts of man and nature cause changes in topography, including underwater topography. So it's important to rescan periodically, to see how things have changed across the fourth dimension, time. What's more, since the hardware and software continue to improve, and since agencies have come to realize the value of the data, rescans are even more in demand.

"We started our collection in 2007, flying at 8,000 feet," said Charles Fritz, director of the International Water Institute, which runs the Red River Basin Decision Information Network. The RRBDIN is mapping chunks of North Dakota, Minnesota and central Canada. "Our spec was we wanted at least one ground point every 1.4 meters of landscape at plus or minus 15 centimeters. With the new technology we can easily get 10 or 11 points per square meter. Think of it as putting on a better and better pair of cheater glasses."

The U.S. Army Corps of Engineers currently is on its second LIDAR data-gathering effort covering the nation's shorelines, and has seen how improved technology has broadened the scope of the project.

Infrastructure inspections via LIDAR

The Army Corps of Engineers manages over 1,000 coastal navigation structures, such as the Kaumalapau Harbor breakwater in Hawaii. General monitoring techniques include LIDAR or photogrammetric surveys, bathymetric sonar surveys, conventional ground surveys, walking inspections, and damage surveys that are more comprehensive than typical field inspections, according to the Corps' Coastal and Hydraulics Laboratory. The data is compared to historical data and to standard design methods in order to improve designs.

Employing both topographic and bathymetric (working under water) LIDAR in aircraft, the Corps' National Coastal Mapping Program scans the shoreline — including Hawaii, Alaska and the Great Lakes — in a swath 500 meters inland and 1,000 meters offshore. At current funding levels, the team can cover the entire shoreline every five to six years.

Chris Macon, technical lead for the program, said that the primary purpose of the program has been to track the movement of sand to ensure safe navigation of the country's waterways. "We're finding out how much sand there is, where it is, where is it moving to along the coast and how it is impacting federal navigation projects," Macon said. The airborne bathymetric LIDAR delivers 25-30 centimeters of vertical accuracy, and its maximum penetration is roughly 50 meters in crystal-clear waters, he said.

Navigation issues are still the priority, but as LIDAR scanning and analysis has gotten more accurate and applications have proliferated, federal, state and local agencies are asking for more coverage inland. "As our capabilities have grown, adding topographic LIDAR, adding true color imagery and adding hyperspectral imagery, people want more coverage inland," Macon said. In addition to navigation issues, he said, the data is being employed for invasive species mapping, impacts on wetlands and post-hurricane assessments.

Beyond collecting better data, LIDAR pioneers agree on the importance of educating and working closely with those who can make the best use of the LIDAR data.
"We spend a lot of time talking with our local stakeholders and developing relationships with people throughout the state, letting people know when flights are happening, who can gain from them," said John English, LIDAR data coordinator for Oregon's Department of Geology. "We travel throughout the state on a regular basis, giving presentations and talking about the technology. It's going out to local constituents in different places with different needs and concerns and addressing them directly." Finally, implementers agree that right now the pressing need is for more applications that can make effective use of the data that has already been collected. "The sensor technology to collect the data has reached a point where we have very dense data,” Macon said. "Some people can use the point data and go drive their own products and information from it, but a lot of people don't want to have to do all the analysis and digging into the data to get the information out. That's where we try to help evolve products and provide more information to the users." PREVIOUS: When LIDAR came down to Earth, mapping projects took off NEXT: LIDAR-equipped robots map a city’s interior.
<urn:uuid:fe4c5c31-dd3c-404f-bb04-74d4223b50ff>
CC-MAIN-2017-09
https://gcn.com/articles/2013/03/14/lidar-mapping-explores-fourth-dimension.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171834.68/warc/CC-MAIN-20170219104611-00323-ip-10-171-10-108.ec2.internal.warc.gz
en
0.94692
988
2.6875
3
The dark side of the Internet of Everything

We're all learning about the dangers of big data. You buy something on Amazon and suddenly all your contacts know about your purchase. Now it turns out that the simple act of turning on your cell phone exposes you to being tracked across the Internet – even if you have switched off location-based services.

Researchers at the University of Illinois' College of Engineering have discovered that, due to imperfections in manufacturing processes, cell phone sensors have unique characteristics that can be used to identify the phone. If, for example, you're driving to the gym and use your phone to check on traffic, that app reads data from the phone's accelerometer. When you arrive at the gym, data from the accelerometer is also used by an application on the treadmill that measures your gait and distance traveled. If those two transmissions of accelerometer sensor data are compared, an analyst can determine that they came from the same phone.

In short, your cell phone transmits data as you move through the digital universe, leaving "fingerprints" that give information not only about your location but about your activities. While the researchers at the University of Illinois focused on accelerometer data, they say there's no reason to believe other sensors in phones – gyroscopes, microphones, proximity sensors, pressure sensors, cameras, etc. – don't leave similar fingerprints. So the potential is there to link, identify and track an incredible amount of activity.

What's more, says principal researcher Nuripam Roy, the degree of expertise required to tap into this data is not very high. "Anybody with a little knowledge of statistics and a little knowledge of computer science can do it," Roy said. "As it can be done pretty easily, it's a big concern."

That's especially true since laws and regulations haven't kept up with the technologies. "There is no regulation, no need to get permission to approach this kind of data," Roy said. "What we have shown is that this apparently benign-looking data can be as good as sharing the MIN [mobile identification number] number of a phone."

The team tried to find ways to mask the identifying characteristics, without success. "We couldn't find any reasonable way to wipe this off the sensor data," Roy said. "We need to keep in mind that the sensor data has functionality, so we can't process it too much or may become useless for the application."

Roy added that there may be some ways to take advantage of the sensor fingerprints, such as using cell phones as identity cards. But doing so in a secure way requires more research. In the meantime, he said, manufacturers – and governments that regulate them – may want to consider the implications. "We think that these idiosyncrasies in the sensors are coming from the factory production line," Roy said. "It may be that there will be more concern about those processes." Can the manufacturing processes be changed to eliminate the fingerprints? "We need more research," Roy said.

Posted by Patrick Marshall on May 13, 2014 at 6:23 AM
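To illustrate how little statistics the fingerprinting described above actually requires, the sketch below condenses a batch of raw accelerometer samples into a handful of summary features that tend to stay stable for a given handset. The feature set is a simplified stand-in for what the researchers used, not their published method.

    import numpy as np
    from scipy import stats

    def sensor_fingerprint(samples):
        """Summarise one axis of accelerometer readings (m/s^2) into a small feature vector."""
        samples = np.asarray(samples, dtype=float)
        return {
            "mean": samples.mean(),            # per-device offset (bias) from imperfect calibration
            "std": samples.std(),              # per-device noise level
            "skew": stats.skew(samples),       # asymmetry of the noise distribution
            "kurtosis": stats.kurtosis(samples),
        }

    def same_device(fp_a, fp_b, tol=0.05):
        """Crude comparison: two captures 'match' if every feature agrees within a tolerance."""
        return all(abs(fp_a[k] - fp_b[k]) <= tol for k in fp_a)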
There's an inherent problem with Low Earth Orbit (LEO) satellites of the kind used for remote observation, such as border security and disaster monitoring. The problem is that because of their low orbit—they're a few hundred miles above earth's surface, rather than 22,300 miles as found with Geostationary (GEO) satellites—they can't see their ground station at all times. They can see the earth more clearly, so they are good for monitoring; they are cheap to deploy because they don't need such a big rocket to get it up there; and they don't suffer from as much packet latency as GEO satellites because the distances are shorter. However, they aren't visible from any given point on earth at all times—they're not stationary, and they're low-down. This issue has been a problem over the years. Data has had to be stored on the LEO satellite while it makes its orbit, and then that data has to be sent down to earth as the LEO passes over the base station. The delay means that data is old by the time it's processed. LEO satellites often have an orbit of over 90 minutes. The smuggler it could detect is usually long gone in 90 minutes. The idea is that the LEO does its thing, collecting data, as it does well. But instead of storing it, it sends it immediately up to a GEO satellite, much higher above earth. That stationary, higher GEO satellite has a view of its base station at all times. Thus it can immediately relay the data to earth for processing. This combination of satellites, LEO and GEO, eliminates most delay. And if it works right, for the first time ever, LEOs will have almost real-time data exchange with ground stations. And that's the theory. Making it work is a bit more complicated because you've got to get the data from the LEO to the GEO. EDRS plans to use laser for this, and it has recently had some success with it. Airbus, which is developing the project, said that it had obtained speeds of 600 Mbps sending an image over a 45,000-km link between satellites. And it has said that it is possible to obtain speeds of 1.8 Gbps with its optical laser-based system. Combination of satellites SpaceDataHighway's plan is to use a combination of GEO and LEO satellites, but in fact, you don't need to use LEOs. Any airborne manned or unmanned platform would substitute. And that would include space-crafts and Unmanned Aerial Vehicles (UAVs), commonly known as drones. Interestingly, you could also use the constellation to control beyond-line-of-sight drones. A fully operational SpaceDataHighway payload will be launched in mid-2015, according to Tereza Pultarova, writing in Engineering and Technology Magazine about the project. It's not easy to line the satellites up to make the laser link work, says Chris Wood in Gizmag, but eventually it will become automated, he thinks. So smugglers and other brigands, you've got a bit more time to complete your deeds. But your days are numbered. This article is published as part of the IDG Contributor Network. Want to Join?
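A rough propagation-delay calculation makes the latency point concrete. The GEO altitude and the 45,000-km inter-satellite figure come from the article; the 500-km LEO altitude is an assumed typical value, and real systems add processing and queuing delays on top of these numbers.

```python
C = 299_792.458  # speed of light in km/s

def one_way_delay_ms(distance_km):
    """One-way propagation delay over a straight-line path."""
    return distance_km / C * 1000

leo_altitude = 500      # km, a typical low Earth orbit (assumption)
geo_altitude = 35_786   # km, geostationary orbit
relay_link = 45_000     # km, LEO-to-GEO laser link tested by Airbus

print(f"LEO to ground:    {one_way_delay_ms(leo_altitude):6.1f} ms")
print(f"GEO to ground:    {one_way_delay_ms(geo_altitude):6.1f} ms")
# Relaying LEO data through a GEO satellite adds the inter-satellite leg:
print(f"LEO->GEO->ground: {one_way_delay_ms(relay_link) + one_way_delay_ms(geo_altitude):6.1f} ms")
```

The point is that even the longer relayed path costs a fraction of a second, versus waiting up to a full orbit of 90 minutes or more for the LEO satellite to pass back over its ground station.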
Windows Antivirus Machine is a computer infection from the Rogue.FakeVimes family of rogue anti-spyware programs. This program is considered a rogue because it deliberately displays false scan results, fake security warnings, and hijacks your computer so that you are unable to launch your normal Windows programs. This rogue is distributed using three methods. The first method is hacked web sites that contain malicious code that exploit vulnerabilities on your computer to install the infection without your knowledge or permission. The second method is web sites that display advertisements that pretend to be online anti-malware scanners. These advertisements pretend to scan your computer and then state you are infected and should use Windows Antivirus Machine to fix your computer. The last method is Trojans that pretend to be software necessary to view online videos. When Windows Antivirus Machine is installed it will be set to start automatically when you login to Windows. It will also create hundreds of ImageFileExecutionOption Registry keys that make it so you are unable to launch many legitimate programs, including anti-virus programs that could potentially help you remove this infection. Once started, the program will perform a fake scan of your computer and then state it is infected with a variety of malware. If you attempt to remove any of these so-called infections, though, it will state that you first need to purchase it before being allowed to do so. As this is a scam, please do not be concerned by what this program displays. While running, Windows Antivirus Machine will also display fake security warnings that are designed to make you think your data and computer are at serious risk. Some of the messages that you may see include: Software without a digital signature detected. Your system files are at risk. We strongly advise you to activate your protection. Trojan activity detected. System data security is at risk. It is recommended to activate protection and run a full system scan. Keylogger activity detected. System information security is at risk. It is recommended to activate protection and run a full system scan. Just like the scan results, all of these security alerts are fake and should be ignored. Without a doubt, Windows Antivirus Machine is a scam that was created to scare you into thinking that there was a problem with your computer in the hopes that you would then purchase the rogue program. With this said, for no reason should you purchase this program, and if you have, you should contact your credit card company to dispute the charges stating that the program is a scam and a computer virus. To remove Windows Antivirus Machine and related malware, please follow the steps in the removal guide below. Self Help Guide How to remove Windows Antivirus Machine: - Step 1: Print out instructions before we begin. - Step 2: Reboot into Safe Mode with Networking - Step 3: Remove proxy servers from installed web browsers. - Step 4: Use Rkill to terminate suspicious programs. - Step 5: Use Malwarebytes AntiMalware to clean infections. - Step 6: Use HitmanPro to scan your computer for badware - Step 7: Run Secunia PSI to find outdated and vulnerable programs. This removal guide may appear overwhelming due to the amount of the steps and numerous programs that will be used. It was only written this way to provide clear, detailed, and easy to understand instructions that anyone can use to remove this infection for free. 
Before using this guide, we suggest that you read it once and download all necessary tools to your desktop. After doing so, please print this page as you may need to close your browser window or reboot your computer. Reboot your computer into Safe Mode with Networking using the instructions for your version of Windows found in the following tutorial: When following the steps in the above tutorial, select Safe Mode with Networking rather than just Safe Mode. When the computer reboots into Safe Mode with Networking make sure you login with the username you normally use. When you are at your Windows desktop, please continue with the rest of the steps. This infection changes your Windows settings to use a proxy server that will not allow you to browse any pages on the Internet with Internet Explorer or update security software. Regardless of the web browser you use, for these instructions we will first need to fix this problem so that we can download the utilities we need to remove this infection. Please start Internet Explorer, and when the program is open, click on the Tools menu and then select Internet Options as shown in the image below. You should now be in the Internet Options screen as shown in the image below. Now click on the Connections tab as designated by the blue arrow above. You will now be at the Connections tab as shown by the image below. Now click on the Lan Settings button as designated by the blue arrow above. You will now be at the Local Area Network (LAN) settings screen as shown by the image below. Under the Proxy Server section, please uncheck the checkbox labeled Use a proxy server for your LAN. Then press the OK button to close this screen. Then press the OK button to close the Internet Options screen. Now that you have disabled the proxy server you will be able to browse the web again with Internet Explorer. To terminate any programs that may interfere with the removal process we must first download the Rkill program. Rkill will search your computer for active malware infections and attempt to terminate them so that they won't interfere with the removal process. To do this, please download RKill to your desktop from the following link. When at the download page, click on the Download Now button labeled iExplore.exe. When you are prompted where to save it, please save it on your desktop. Once it is downloaded, double-click on the iExplore.exe icon in order to automatically attempt to stop any processes associated with Windows Antivirus Machine and other malware. Please be patient while the program looks for various malware programs and ends them. When it has finished, the black window will automatically close and a log file will open. Please review the log file and then close it so you can continue with the next step. If you have problems running RKill, you can download the other renamed versions of RKill from the rkill download page. All of the files are renamed copies of RKill, which you can try instead. Please note that the download page will open in a new browser window or tab. Do not reboot your computer after running RKill as the malware programs will start again. At this point you should download Malwarebytes Anti-Malware, or MBAM, to scan your computer for any infections or adware that may be present. Please download Malwarebytes from the following location and save it to your desktop: Once downloaded, close all programs and Windows on your computer, including this one.
Double-click on the icon on your desktop named mb3-setup-1878.1878-22.214.171.1249.exe. This will start the installation of MBAM onto your computer. When the installation begins, keep following the prompts in order to continue with the installation process. Do not make any changes to default settings and when the program has finished installing, make sure you leave Launch Malwarebytes Anti-Malware checked. Then click on the Finish button. If MalwareBytes prompts you to reboot, please do not do so. MBAM will now start and you will be at the main screen as shown below. We now need to enable rootkit scanning to detect the largest amount of malware that is possible with MalwareBytes. To do this, click on the Settings button on the left side of the screen and you will be brought to the general settings section. Now click on the Protection tab at the top of the screen. You will now be shown the settings MalwareBytes will use when scanning your computer. At this screen, please enable the Scan for rootkits setting by clicking on the toggle switch so it turns green. Now that you have enabled rootkit scanning, click on the Scan button to go to the scan screen. Make sure Threat Scan is selected and then click on the Start Scan button. If there is an update available for Malwarebytes it will automatically download and install it before performing the scan. MBAM will now start scanning your computer for malware. This process can take quite a while, so we suggest you do something else and periodically check on the status of the scan to see when it is finished. When MBAM is finished scanning it will display a screen that displays any malware that it has detected. Please note that the infections found may be different than what is shown in the image below due to the guide being updated for newer versions of MBAM. You should now click on the Remove Selected button to remove all the selected malware. MBAM will now delete all of the files and registry keys and add them to the program's quarantine. When removing the files, MBAM may require a reboot in order to remove some of them. If it displays a message stating that it needs to reboot, please allow it to do so. Once your computer has rebooted, and you are logged in, please continue with the rest of the steps. You can now exit the MBAM program. If your computer is still in Safe Mode with Networking, you can reboot your computer back to normal mode. Once your computer is rebooted and you are back at the desktop, you can proceed with the rest of the instructions. Now you should download HitmanPro from the following location and save it to your desktop: When you visit the above page, please download the version that corresponds to the bit-type of the Windows version you are using. Once downloaded, double-click on the file named HitmanPro.exe (for 32-bit versions of Windows) or HitmanPro_x64.exe (for 64-bit versions of Windows). When the program starts you will be presented with the start screen as shown below. Now click on the Next button to continue with the scan process. You will now be at the HitmanPro setup screen. If you would like to install the 30 day trial for HitmanPro, select the Yes, create a copy of HitmanPro so I can regularly scan this computer (recommended) option. Otherwise, if you just want to scan the computer this one time, please select the No, I only want to perform a one-time scan to check this computer option. Once you have selected one of the options, please click on the Next button.
HitmanPro will now begin to scan your computer for infections. When it has finished it will display a list of all the malware that the program found as shown in the image below. Please note that the infections found may be different than what is shown in the image. You should now click on the Next button to have HitmanPro remove the detected infections. When it is done you will be shown a Removal Results screen that shows the status of the various infections that were removed. At this screen you should click on the Next button and then if prompted you should click on the Reboot button. If HitmanPro does not prompt you to reboot, please just click on the Close button. Once your computer has restarted or you pressed the Close button, you should now be at your Windows desktop. As many malware and unwanted programs are installed through vulnerabilities found in outdated and insecure programs, it is strongly suggested that you use Secunia PSI to scan for vulnerable programs on your computer. A tutorial on how to use Secunia PSI to scan for vulnerable programs can be found here: Your computer should now be free of the Windows Antivirus Machine program. If your current security solution allowed this program on your computer, you may want to consider purchasing the full-featured version of Malwarebytes Anti-Malware to protect against these types of threats in the future. If you are still having problems with your computer after completing these instructions, then please follow the steps outlined in the topic linked below: Purchase the full-featured version of Malwarebytes Anti-Malware, which includes real-time protection, scheduled scanning, and website filtering, to protect yourself against these types of threats in the future!
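This is not part of the guide above, but for readers who want to understand or double-check the Image File Execution Options hijack described earlier: the rogue works by registering a bogus "Debugger" value for legitimate program names, so Windows launches the malware instead of the program you asked for. A minimal Python sketch (assuming Python is available on the affected machine; run it after cleanup, and remember that some Debugger entries are legitimate developer tools) simply lists such entries for review:

```python
import winreg

IFEO = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options"

def list_debugger_entries():
    """List programs that have a 'Debugger' value set under IFEO."""
    results = []
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, IFEO) as root:
        index = 0
        while True:
            try:
                name = winreg.EnumKey(root, index)
            except OSError:
                break  # no more subkeys
            index += 1
            try:
                with winreg.OpenKey(root, name) as sub:
                    debugger, _ = winreg.QueryValueEx(sub, "Debugger")
                    results.append((name, debugger))
            except OSError:
                continue  # this program has no Debugger value set
    return results

for program, debugger in list_debugger_entries():
    print(f"{program} -> {debugger}")
```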
A research team is demonstrating tables that form local networks among the devices laid upon their surfaces, while also providing wireless charging, at the Ceatec electronics show in Japan. The concept is meant to support ad-hoc networks that are more secure and local than current Wi-Fi networks, without the need for cables. The team from Japan's prestigious University of Tokyo that is working on the technology envisions the tables being used in business meetings or classrooms, where temporary, instant connections are useful. (See a demonstration from the show floor on YouTube.) The tables are made using a thin sheet made of small mesh panels, which can contain electromagnetic waves in two dimensions, while also carrying a small electric current. Devices that interact with the sheet must generally be equipped with a special coupler, although team members said it is also possible to use traditional Wi-Fi antennas for Internet in some cases. "In standard wireless connections, electromagnetic waves are sent through the air, but here connections are made by making contact with the surface," said Akihito Noda, a doctoral student at the University of Tokyo. "Requiring surface contact creates a lot of restrictions, but on the other hand there are also some benefits. For instance, if you don't set devices on the surface, no false connections can be made, so there are security benefits." At a demonstration on the exhibition floor, tablets laid on a table connected easily to a networked router on the same table, avoiding the morass of competing Wi-Fi signals on the show floor. The table also provided enough power to slowly charge a mobile phone as well as run small fans and LED lights. All devices displayed were equipped with the special couplers, and the low power supply was an obvious limit on the types of gadgets that could be shown. Noda said Internet connections run at Wi-Fi speeds. For charging, 60-centimeter-square sheets have been tested as safely taking 10 watts of power without any ill effects on users. Devices laid on such sheets charge at about 4 watts. The team, which is working with large Japanese corporations like NEC to develop the technology, is also planning to incorporate it into home furniture, to create surfaces where users can lay their gadgets to automatically charge and join their personal network. The Ceatec exhibition, Japan's largest electronics show, runs this week at Makuhari, just outside of Tokyo.
The creators of the Roboy robot wanted it to move as much like a human as possible, using a skeleton of 3D-printed bones and joints, tendons -- and coiled springs in muscles. The springs are there to give Roboy's movements fluidity. One reason most humanoid robots still move, well, robotically is that their movements are too stiff. Human muscles are springy, so if we are nudged or bumped out of the way, we gently bounce back, and if we jump or fall, we can absorb the shock, said Rafael Hostettler, manager of the Roboy project at the Swiss Federal Institute of Technology Zurich. It's possible to model that springiness in software, so a robot can use sensors to detect when there is resistance to its movements, and modify the force applied -- but there's inevitably a lag in the processing of such inputs, so the movement will not be as natural as ours, Hostettler said at the Cebit trade show in Hanover, Germany. The springiness of our muscles also allows us to put more energy into movements such as throwing, building up the tension in one muscle by pulling against another, and then suddenly letting go. That's harder to model in software, and so in their robot's muscles the Zurich researchers decided to use real springs -- thick, coiled ones of the kind seen in a car's suspension, only smaller. Roboy's musculature is much like our own, with paired actuators operating in opposition at each joint and wires in place of ligaments. Other robots might use a single motor able to pull in either direction. The resistance offered by the 12 motors in each of Roboy's arms feels almost human when shaking hands -- although the grip of the hand, with only one motor, is less natural. The work is part of a wider European research project called Myorobotics, which aims to create robots that are cheaper to build and safer to be around -- the idea is that being struck by a springy robot will be less dangerous than a hit from a solid one. If Roboy could stand, it would be 1.42 meters tall, but its leg and foot muscles are not strong enough to balance its 30-kilogram weight, so it can only sit. The most striking thing about Roboy is its enormous head, with glowing eyes that, in a cartoon touch, turn red when the robot is simulating anger. A projector inside the head also animates the lips and can give the impression the robot is blushing. Roboy took around nine months to build. Beyond its role as a proving ground for robotics technologies, the researchers have other applications in mind. "We see it as a training tool for doctors to learn standard tests for stroke diagnosis," said Hostettler. That's something difficult to learn from videos or books, he said, as it requires doctors to get a feeling for the way patients react to physical stimuli. The robot's ability to show different facial expressions will be useful for that project, but its main purpose is as a means to communicate about robot research, he said. The crowds stopping to stare and shake hands with Roboy at Cebit prove his point. Peter Sayer covers open source software, European intellectual property legislation and general technology breaking news for IDG News Service. Send comments and news tips to Peter at email@example.com.
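The trade-off Hostettler describes, simulating springiness in software versus building it in with real springs, comes down to feedback lag. The toy simulation below is an illustration with made-up parameters, not Roboy's controllers: a motor tries to act as a virtual spring-damper using a position reading that arrives a few milliseconds late, and as the lag grows the joint rings longer and eventually the loop goes unstable, which a physical spring never does.

```python
def simulate_virtual_spring(delay_steps, steps=2000, dt=0.001):
    """One joint of mass m pulled toward x=0 by a software spring-damper.
    The control force is computed from a measurement that is `delay_steps`
    control ticks old, mimicking sensing/processing lag (1 tick = 1 ms)."""
    m, k, d = 1.0, 400.0, 10.0             # kg, N/m, N*s/m -- made-up values
    x, v = 0.05, 0.0                        # start 5 cm from the target
    history = [(x, v)] * (delay_steps + 1)
    tail = []
    for step in range(steps):
        xd, vd = history[-(delay_steps + 1)]   # stale measurement
        force = -k * xd - d * vd               # virtual spring-damper law
        v += force / m * dt
        x += v * dt
        history.append((x, v))
        if step >= steps - 200:                # record the last 0.2 s
            tail.append(abs(x))
    return max(tail)

for delay in (0, 20, 50):   # milliseconds of feedback lag
    swing = simulate_virtual_spring(delay)
    print(f"{delay:2d} ms lag -> residual swing after 2 s: {swing * 100:.2f} cm")
```

With no lag the joint settles almost completely; with 20 ms it is still visibly ringing after two seconds; with 50 ms the delayed feedback pumps energy into the joint instead of removing it and the number blows up.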
Although not originally designed for telephones or tablets, the Linux kernel is now getting more contributions than ever from mobile and portable device vendors, whose input is driving a heretofore unseen rate of development for the open source project. "You see this tremendous acceleration of code happening due to an incredibly broad device enablement," said Jim Zemlin, executive director at The Linux Foundation, referring to how makers of tablets, smartphones, wearable computers and sensor devices are all using the kernel. "Nobody is making anything these days without Linux, unless your name is Microsoft, Apple or BlackBerry," Zemlin said. Collectively, the mobile-focused Linaro, Samsung and Texas Instruments increased their contributions to the kernel in the past year to 11 percent of all contributions, up from 4.4 percent the year before, according to the Linux Foundation's latest yearly report on who contributes to the Linux kernel. Google, historically a strong contributor, also provided significantly more changes to Linux this year, thanks in part to Google's Android operating system, which uses the kernel. The report "Who Writes Linux" details who works on the Linux kernel, which the Linux Foundation calls the largest collaborative project in the history of computing. The report covers development work done between Linux 3.3, released in March 2012, and 3.10, finished in June. More than 1,100 developers from 225 companies contributed to the kernel in this time period. Linux is developed on a community model, in which developers and companies voluntarily contribute changes to the kernel, still overseen by its creator Linus Torvalds. Linux itself is not an operating system, though it can be used as the core of one. The rate of contributions is increasing, according to the foundation. On average, 7.14 changes are now made to the kernel each hour, or about 171 changes every day. Zemlin attributes this acceleration of growth to the growing diversity of hardware devices that are being developed, from tablets to smart sensors. Manufacturers are using Linux as the basis of their own software, rather than developing an operating system kernel from scratch. In turn, many will contribute their own enhancements and corrections to the kernel, in order not to have to maintain their own version of Linux. Red Hat, Intel, Texas Instruments, Linaro, Suse, IBM, Samsung and Google were the top contributors to the kernel in this time period, determined by the number of their contributions. Microsoft, which previously appeared on the list of top contributors thanks to its work on preparing Linux for Microsoft's Hyper-V virtualization environment, has dropped from the list entirely this year. The Foundation released the paper on Monday, in conjunction with the annual LinuxCon conference, being held this week in New Orleans.
Cybercriminals continue to respond with lightning speed when they see an opportunity to exploit a national or global news story to spread malware. In fact criminals are inventing “breaking news” that appears to relate to high-profile current events. The Commtouch Security Lab continually analyzes malicious campaigns that exploit breaking news using the CNN name and other prominent news outlets to lure email recipients to malicious sites. The average time between an actual news event and its exploitation hovered around 22 hours during the last three months. On Friday, September 6, malware distributors invented fake news designed to take advantage of public interest in the possibility of a U.S. airstrike against Syria. The emails used the subject line, “The United States Began Bombing,” and were crafted to appear as a legitimate CNN news alert. It is an example of the cybercriminal community harnessing the interest and anxiousness about current events to increase the success of their malicious campaigns. Prior to the Syria-related example, the average start time for a virus attack was already decreasing. In March 2013, when the new Pope was elected, the first malware and phishing attacks began after 55 hours. In April 2013, after the Boston Marathon bombing, it took 27 hours to see the first related attacks exploiting interest in the event. Further examples include the newborn royal baby and news about the NSA whistleblower Edward Snowden. But examples such as the recent Syria-related campaign in September show that spammers are not waiting around – they are becoming even “faster” than the events themselves.
A $2 billion device attached to the outside of the International Space Station has found particles that could be the building blocks of dark matter. CERN, also known as the European Organization for Nuclear Research, reported today that it is collecting and analyzing data that could offer the first glimpse of dark matter -- mysterious and so-far elusive matter that is thought to make up a quarter of the universe. Scientists know that dark matter, which neither emits nor absorbs light, is there because of its gravitational influence on the rest of the universe. Beyond that, they know little about what it is. If scientists can understand dark matter, it could offer valuable clues as to how the Milky Way will evolve and whether the universe will stop expanding at some point or if it will expand until it collapses. "Over the coming months, [the Alpha Magnetic Spectrometer] will be able to tell us conclusively whether these positrons are a signal for dark matter, or whether they have some other origin," said MIT's Nobel-winning physicist Samuel Ting, who leads the team studying the data. Ting expects the scientists to be able to identify the newly found particles within a few months. The Alpha Magnetic Spectrometer, or AMS, went into space in May 2011 on board the space shuttle Endeavour. The 15,251-pound particle detector was attached to the backbone of the station. The device has been orbiting the Earth with the space station ever since, sifting through cosmic particles for data to help scientists answer fundamental questions of physics related to antimatter and dark matter. The main goal of the research is to understand the origin of the universe and what makes it up. The particle detector is made up of a ring of powerful magnets and ultra-sensitive detectors built to track, though not capture, cosmic rays. It is operated remotely from Earth. By studying these cosmic rays with its highly sensitive monitors, the AMS should be able to identify a single particle of antimatter or dark matter among a billion other particles. The experiment is sponsored by the U.S. Department of Energy. CERN, which runs the Large Hadron Collider, has become a major player in the physics world. The collider, which now is shut down for a two-year-long upgrade, has found a particle that scientists are nearly sure is the elusive Higgs boson. The Higgs boson, which is believed to give everything in the universe weight, could be a key component of everything from humans to stars and planets, as well as everything in the universe that is invisible. Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center. This story, "Particle hunter on space station may have found dark matter" was originally published by Computerworld.
Police departments nationwide have been using data and statistics to drive policing since the 1990s, when an approach pioneered by the NYPD, named CompStat, was credited with dramatic reductions in crime and increases in efficiency. CompStat, a process and philosophy rather than a single technology or software, uses databases and GIS to record and track criminal and police activity and identify areas that are lagging or need more attention. While it provides much more information than "primal policing", CompStat has advanced little beyond simple spreadsheets and mapping software. Inspired by recent innovations in Big Data and Apache Hadoop and businesses like Walmart or Amazon using analytics to determine future demand, departments across the country and worldwide are looking to take this approach to the next level and go from tracking crime to predicting it. The first department to adopt this strategy was Santa Cruz through its city-wide, six-month Predictive Policing Experiment, named one of Time Magazine's 50 best innovations of 2011. The large scope of the experiment, the statistically average crime rate and the challenges faced by the department make it a good example for other cities. Like most police departments right now, SCPD has a declining budget and shrinking police force. On top of that, in the first 6 months of 2011 it saw an unprecedented crime wave. Driven to do more with less, the department signed on to work with researchers at UCLA to test a new method of modelling crime. UCLA mathematician George Mohler noticed that, over time, crime maps resemble other natural phenomena and modified algorithms used to predict aftershocks to instead predict future property crimes from past data. Using seismologists' models for crime isn't as crazy as it sounds, since they've already been adopted in epidemiology and finance. Mohler's approach is supported by two popular modern theories on crime: the rational choice model, which states that criminals, like consumers, pick their targets rationally based on value, cost, effort, and risk, and the Broken Window theory, popularized by the same NYPD Commissioner who implemented CompStat, Bill Bratton. Though the Broken Window theory is typically applied to vandalism, the essence is that petty crime leads to major crime and that crime is self-reinforcing by setting norms and making areas seem poorly controlled. Past crimes can be predictive of future crimes because they indicate that an environment is target-rich, convenient to access for a criminal, vulnerable, or simply seems like a good place to strike due to a pattern of crime and poor control. Mohler's algorithm is different from the CompStat approach, which simply identifies "hot spots" where crime is clustered. To predict the most likely type, location, and time of future crimes, Mohler must compare each past crime to the others and generate a massive amount of metadata. For the Santa Cruz Experiment, he went back to 2006, looking at roughly 5,000 crimes; comparing each crime with every other one works out to roughly 5,000 x 4,999 / 2, or about 12.5 million, pairwise comparisons. When he compared his method to traditional CompStat maps for the LAPD's archives, he found that it predicted 20 to 95 percent more crimes. The experiment was recently concluded, and the department believes that its predictive policing program was a success. Despite having fewer officers on the force, SCPD reversed the crime wave and lowered crime by 11% from the first half of the year to 4% below historical averages for those months.
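The aftershock-style algorithms Mohler adapted are self-exciting point processes: every past event temporarily raises the expected rate of new ones nearby. The snippet below is only a toy, time-only illustration of that idea with made-up parameters and fabricated example data; Mohler's actual models also use a spatial kernel and parameters fitted to the crime record.

```python
import math

def predicted_rate(t, past_times, background=0.2, boost=0.5, decay_days=7.0):
    """Toy self-exciting (Hawkes-style) intensity: a constant background rate
    plus a contribution from every past event that decays exponentially."""
    rate = background
    for ti in past_times:
        if ti < t:
            rate += boost * math.exp(-(t - ti) / decay_days)
    return rate

# Burglaries (in days) reported near one location -- fabricated example data.
events = [1.0, 2.5, 3.0, 20.0]

for day in (4, 10, 21, 40):
    print(f"day {day:2d}: expected incidents/day = {predicted_rate(day, events):.2f}")
```

The output shows the rate spiking just after a cluster of break-ins and relaxing back toward the background level as time passes, which is exactly the near-repeat pattern the predictive maps try to capture.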
Still, from that information alone, it's difficult to tell how effective Mohler's strategy was, and we will have a better indication when the LAPD concludes a similar but even larger study in May, that includes a control group. Elsewhere, other departments in the United States and abroad are adopting a Big Data approach to policing as well. Sponsored by the Bureau of Justice Assistance , the Smart Policing Initiative is exploring predictive policing in over a dozen departments and agencies nationwide, including Boston PD, Las Vegas PD, and the Michigan State Police. Some have already yielded results, such as in Richmond, where software using a combination of business intelligence, predictive analysis, data mining, and GIS has contributed to a drastic drop in crime. Predictive policing is also being tried in the UK where, in the single ward of the Greater Manchester area studied, burglary decreased by 26% versus 9% city-wide, prompting follow-up studies in Birmingham. While predictive policing is showing promise and, in limited trials, results, the practice is still in its infancy with plenty of room to grow. Much more metadata can be generated and factors included into the predictive algorithms. For example, Santa Cruz could only predict property crime, as violent crime depends less on targets and opportunities and more on events and interpersonal interactions. In business and counter-terrorism, however, tools like social network analysis and social media monitoring have been used successfully to get a better feel for social dynamics. As predictive policing gets more attention and is adopted more widely, we can expect to see these and other Big Data solutions applied to law enforcement. - Tackling Big Data on Police Use of Force (ctovision.com) - An End to the Law Enforcement Social Media Free Lunch? (ctovision.com) - Crunching Data to Stop Baddies Before They Strike (wired.com) - Government Big Data Solutions Award Nominee: Wayne Wheeles (Sherpa Surfing) (ctolabs.com) - More DC Area Big Data User Groups (ctolabs.com) - Hadoop is an Open Source Revolution: Federal Computer Week Interview (ctolabs.com)
It will be months before we know the true damage brought about by super typhoon Haiyan. The largest death tolls now associated with the storm are only estimates. Aid workers from across the world are now flying to the island nation, or they just recently arrived there. They—and Filipinos—will support survivors and start to rebuild. But they will be helped by an incredible piece of technology, a worldwide, crowd-sourced humanitarian collaboration made possible by the Internet. What is it? It’s a highly detailed map of the areas affected by super typhoon Haiyan, and it mostly didn’t exist three days ago, when the storm made landfall. Since Saturday, more than 400 volunteers have made nearly three quarters of a million additions to a free, online map of areas in and around the Philippines. Those additions reflect the land before the storm, but they will help Red Cross workers and volunteers make critical decisions after it about where to send food, water, and supplies. These things are easy to hyperbolize, but in the Philippines, now, it is highly likely that free mapping data and software—and the community that support them—will save lives. The Wikipedia of maps The changes were made to OpenStreetMap (OSM), a sort of Wikipedia of maps. OSM aims to be a complete map of the world, free to use and editable by all. Created in 2004, it now has over a million users. I spoke to Dale Kunce, senior geospatial engineer at the American Red Cross, about how volunteer mapping helps improve the situation in the Philippines. The Red Cross, internationally, recently began to use open source software and data in all of its projects, he said. Free software reduces or eliminates project “leave behind” costs, or the amount of money required to keep something running after the Red Cross leaves. Any software or data compiled by the Red Cross are now released under an open-source or share-alike license. While Open Street Map has been used in humanitarian crises before, the super typhoon Haiyan is the first time the Red Cross has coordinated its use and the volunteer effort around it. How the changes were made The 410 volunteers who have edited OSM in the past three days aren’t all mapmaking professionals. Organized by the Humanitarian OpenStreetMap Team on Twitter, calls went out for the areas of the Philippines in the path of the storm to be mapped. What does that mapping look like? Mostly, it involves “tracing” roads into OSMusing satellite data. The OSM has a friendly editor which underlays satellite imagery—on which infrastructure like roads are clearly visible—beneath the image of the world as captured by OSM. Volunteers can then trace the path of a road, as they do in this GIF, created by the D.C.-based start-up, Mapbox: Volunteers can also trace buildings in Mapbox using the same visual editor. Since Haiyan made landfall, volunteers have traced some 30,000 buildings. Maps, on the ground How does that mapping data help workers on the ground in the Philippines? First, it lets workers there print paper maps using OSM data which can be distributed to workers in the field. The American Red Cross has dispatched four of its staff members to the Philippines, and one of them—Helen Welch, an information management specialist—brought with her more than 50 paper maps depicting the city of Tacloban and other badly hit areas. Those maps were printed out on Saturday, before volunteers made most of the changes to the affected area in OSM. 
When those, newer data are printed out on the ground, they will include almost all of the traced buildings, and rescuers will have a better sense of where “ghost” buildings should be standing. They’ll also be on paper, so workers can write, draw, and stick pins to them. Welch landed 12 hours ago, and Kunce said they “had already pushed three to four more maps to her.” The Red Cross began to investigate using geospatial data after the massive earthquake in Haiti in 2010. Using pre-existing satellite data, volunteers mapped almost the entirety of Port-au-Prince in OSM, creating data which became the backbone for software that helped organize aid and manage search-and-rescue operations. That massive volunteer effort convinced leaders at the American Red Cross to increase the staff focusing on their digital maps, or geographic information systems (GIS). They’ve seen a huge increase in both the quality and quantity of maps since then. But that’s not all maps can do. The National Geospatial-Intelligence Agency (NGA), operated by the U.S. Department of Defense, has already captured satellite imagery of the Philippines. That agency has decided where the very worst damage is, and has sent the coordinates of those areas to the Red Cross. But, as of 7 p.m. Monday, the Red Cross doesn’t have that actual imagery of those sites yet. The goal of the Red Cross geospatial team, said Kunce, was to help workers “make decisions based on evidence, not intuition.” The team “puts as much data in the hands of responders as possible.”What does that mean? Thanks to volunteers, the Red Cross knows where roads and buildings should be. But until it gets the second set of data, describing the land after the storm, it doesn’t know where roads and buildings actually are. Until it gets the new data, its volunteers can’t decide which of, say, three roads to use to send food and water to an isolated village. Right now, they can’t make those decisions. Kunce said the U.S. State Department was negotiating with the NGA for that imagery to be released to the Red Cross. But, as of publishing, it’s not there yet. When open data advocates discuss data licenses, they rarely discuss them in terms of life-and-death. But, every hour that the Red Cross does not receive this imagery, better decisions cannot be made about where to send supplies or where to conduct rescues. And after that imagery does arrive, OSM volunteers around the world can compare it to the pre-storm structures, marking each of the 30,000 buildings as unharmed, damaged, or destroyed. That phase, which hasn’t yet begun, will help rescuers prioritize their efforts. OSM isn’t the only organization using online volunteers to help the Philippines: MicroMappers, run by a veteran of OSM efforts in Haiti, used volunteer-sorted tweets to determine areas which most required relief. Talking to me, Kunce said the digital “commodification of maps” generally had contributed to a flourishing in their quantity and quality across many different aid organizations. “If you put a map in the hands of somebody, they’re going to ask for another map,” said Kunce. Let’s hope the government can put better maps in the hands of the Red Cross—and the workers on the ground—soon.
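For readers who want to see what the traced data looks like, one way to pull it out of OpenStreetMap is the public Overpass API. The sketch below counts building footprints inside a rough bounding box around Tacloban; the coordinates are approximate, the endpoint is a volunteer-run public server with rate limits, and this is an illustration rather than anything the Red Cross team is described as using.

```python
import requests

# Rough bounding box around Tacloban City: south, west, north, east (approximate).
BBOX = (11.17, 124.93, 11.32, 125.07)

query = f"""
[out:json][timeout:60];
way["building"]({BBOX[0]},{BBOX[1]},{BBOX[2]},{BBOX[3]});
out count;
"""

resp = requests.post("https://overpass-api.de/api/interpreter", data={"data": query})
resp.raise_for_status()
# The response contains a single "count" element summarizing matching ways.
print(resp.json()["elements"][0])
```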
The means to go green. COVER STORY: How to eke out energy savings in servers, processors, networks and storage systems. Sidebar: Supercomputing: Low and slow will be the way to go. Sidebar: Industry cuts power use at its own data centers. By Rutrell Yasin - Nov 16, 2007. Editor's note: This report is part of a broader 360-degree joint reporting effort by Government Computer News, Federal Computer Week, Washington Technology and the 1105 Government Information Group. GCN covers the technology developments propelling green IT, FCW focuses on the policy and management aspects of green IT and Washington Technology looks at its effect on contractors and suppliers. The full collection of stories is available here. Want to make your data center more environmentally friendly? Or just cut the power bills for your agency? The path to either goal is the same: greater energy efficiency. With the general public becoming more aware of energy efficiency and with the cost of the kilowatt hour creeping up, the government data center certainly could stand to sharpen its energy usage profile. For one thing, President Bush has mandated reductions in energy usage. In January, the White House issued an executive order calling for each agency to improve energy efficiency and reduce greenhouse gas emissions either by 3 percent annually through the end of fiscal year 2015 or 30 percent by the end of fiscal 2015, depending on its current energy use profile (GCN.com/872). Data centers would be a good place to start. They can be as much as 40 times more energy-intensive than conventional office buildings, according to a study by Lawrence Berkeley National Laboratory for the American Council for an Energy-Efficient Economy. And a lot of that power is consumed by information technology equipment. Server sprawl is taking up precious data center space and consuming a lot of power, resulting in high utility bills. The combination of memory, disks and network interfaces can exceed the power consumption of a CPU. Plus, excess hardware capacity can lead to significant energy waste. The good news is that many technologies commercially available or soon to be available could improve the energy efficiency of data centers. Advances in virtualization technology allow data centers to pool multiple applications, servers and storage into a single source of shared resources, saving space and power. Multicore processors offer better power management and can handle more workload in parallel than a single-core chip. Ongoing work in national laboratories and standards bodies will pave the way for more energy-efficient Ethernet networks and other networking equipment. Meanwhile, storage devices are expected to become more efficient because of a shift to smaller hard drives and increased use of Serial Advanced Technology Attachment drives, according to a report issued to Congress on server and data center energy efficiency by the Environmental Protection Agency and industry stakeholders. And improved management of storage resources could foster significant data center energy savings. Here is how manufacturers and researchers are developing ways to boost energy efficiency across all the major data center components, including servers, microprocessors, networking and storage. Servers: Utilize more, cool better. The underuse of servers is often cited as a reason for subpar energy efficiency in data centers. Efforts to get more out of existing servers could have a significant effect on energy savings in many U.S.
data centers and server installations, experts say. 'Servers aren't fully utilized,' said Joe Wagner, senior vice president and general manager of system resources at Novell. The typical volume server runs at between 15 percent and 30 percent of use, compared with 70 percent to 80 percent on a mainframe system, he said. Virtualization is one way to pool and share resources to reduce costs and optimize utilization. Users can virtually collapse workloads, Wagner said. For instance, they can merge three single servers, each running at 15 percent capacity, into one server running at 45 percent to 50 percent capacity. Novell's SUSE Linux Enterprise Server 10 with built-in Xen open-source virtualization software lets users consolidate Microsoft Windows and Linux workloads onto a single server, Wagner said. This can reduce power, cooling and space requirements. Although virtualization also adds a new layer of complexity, Novell's ZENworks Orchestrator and virtual machine management software provides an automated, policy-based solution that can simplify virtualization operations. It can also boost energy efficiency by shutting down machines when they're not in use in addition to distributing virtual and physical workloads across the data center for maximum efficiency. Meanwhile, major computer manufacturers are moving toward the production and marketing of more energy-efficient servers. Several key features are the use of multicore processors with power management and virtualization capabilities, high-efficiency power supplies, and internal variable-speed fans for on-demand cooling, the EPA report to Congress states. Dell, for example, has incorporated the Energy Smart technology used in its desktop computers into Dell PowerEdge Servers to decrease power consumption and overall operating costs. Dell PowerEdge Servers also can work with Emerson Network Power's Liebert XD and DS, two cooling modules that use advances in refrigerants and compressors to improve the energy efficiency of the cooling process. In developing Energy Smart, Dell took a close look at its own data center to determine which equipment was consuming the most power, said Jon Weisblatt, senior manager of solutions marketing at Dell. 'The majority of that was IT equipment.' Sixty percent of the IT power consumption was directly attached to server usage, he said. EPA's Energy Star program has focused on data centers by supporting development of energy performance measures for servers. 'Energy efficiency is the cornerstone of what you can do to make things greener,' said Jack Pouchet, director of green initiatives at Emerson Network Power. 'Data center managers need to assess where they are today. If not, they have no idea where they are going.' The Energy Department is working with other industry stakeholders such as the Green Grid consortium to develop assessment tools within the next 18 months. DOE has assembled the expertise to develop metrics, measurements and tools with the goal of empowering data center decision-makers, said Paul Scheihing, who works at DOE's Office of Energy Efficiency and Renewable Energy in the Industrial Technology Program. DOE is interested in an overarching set of tools that will help data centers profile energy use and gather and quantify metrics, he said. Processors: More cores. The two big commodity chip-makers, Advanced Micro Devices and Intel, have been developing and improving multicore chips.
Further energy savings can be attributed to the development of dynamic frequency and voltage scaling in addition to virtualization capabilities, experts said. Multicore processors contain two or more processing cores on a single die, which run at slower clock speeds and lower voltages than the cores in single-core chips but can handle more work than a single-core chip. For example, AMD's Quad Core Opteron processor with Direct Connect Architecture provides fast input/output throughput by directly connecting input/output to the CPU, said Rick Indyke, federal business development manager at AMD. An integrated memory controller decreases power by removing external memory controller requirements. AMD PowerNow technology with Optimized Power Management dynamically reduces processor power based on workload, giving users power savings of as much as 75 percent, AMD officials said. The Quad Core Opteron also offers advanced Silicon-on-Insulator technology for faster transistors and reduced power leakage. AMD Virtualization technology, which is hardware-based, lets virtualization software run multiple operating systems and applications on a single physical AMD Opteron processor-based server. Earlier this month, Intel launched the new Quad-Core Intel Xeon processors using 45-nanometer technology that offers reduced idle power levels to maximize efficiency. It does this through a combination of 45-nm low-leakage and system-transparent energy smart technology. A reduction in a processor's idle power usage helps to lower average server power consumption over time during normal server operation, said Nigel Ballard, government marketing manager at Intel. Intel VT FlexPriority, a new VT extension available in the latest Intel Xeon processors, optimizes virtualization software by improving interrupt handling. Intel claims it can boost virtualization performance by as much as 35 percent for 32-bit guests. Networking: Only what you need. Servers are not the only components in the data center that draw power. Three efforts are under way at the Lawrence Berkeley National Laboratory (LBNL) to make Ethernet networks more energy efficient. Adaptive Link Rate technology, or Energy Efficient Ethernet, focuses on letting Ethernet data links adjust their speed, and with it their power draw, to traffic levels, said Bruce Nordman, a researcher in the lab's Environmental Energy Technologies Department. Ethernet links do not vary the rate at which data is transmitted even if little data is moving along the link. Higher data rates require a lot of power, so more energy is being used to transmit small amounts of data, LBNL researchers said. Some computers can change the speed of a link when they are in sleep mode or turned off, but the process is too slow when they are idle or active. So the solution is to change the network link speeds quickly in response to the amount of data being transmitted. LBNL is working with the Ethernet Alliance and the Institute of Electrical and Electronics Engineers' 802 standards committee to develop Adaptive Link Rate into a standard, said Mike Bennett, an LBNL researcher and chairman of the IEEE Energy Efficient Ethernet Task Force. Another project aims to develop proxying specifications that would let PCs and other devices sleep while other equipment maintains their network presence. There are many reasons users might need to stay connected to a network while they are not at their desktops.
'I hear of a lot of government agencies where you need to leave your machine on at night to [receive] updates such as security patches,' Nordman said. If the desktop PCs are allowed to stay in sleep mode but still are accessible, data centers could save millions of dollars on energy a year, he added. A proxy can provide a solution, he said. There are three ways to implement proxying, according to a white paper written by Nordman and University of South Florida Professor Ken Christensen: - Self-proxying puts the functionality within hardware, such as a network interface card. 'The key is to not require the power-intensive main processor, memory and most buses to be active during sleep,' the paper states. - Switch proxying puts the functionality into the immediately adjacent network switch so that the end device doesn't have to be changed. Other devices on the network are not aware the end device is asleep. - Third-party proxying puts the functionality somewhere in the network other than the device or adjacent switch. It might be good to have proxying referenced as a standard, but it is not a linchpin for moving forward and implementing some of these approaches in products, Nordman said. 'Proxying involves what a device does when it is not on.' However, Adaptive Link Rate focuses on both ends of the Ethernet network, he added, so for that to operate successfully, there has to be a standard approach for industry. The third project LBNL is working on would establish energy efficiency specifications to help manufacturers develop and users buy network equipment that consumes less electricity. Storage: Get smart. A lot of energy-saving effort understandably goes to hardware. But for some observers, green IT begins with green data. The ever-increasing volume of data in storage systems could make these devices the top power hogs in the data center, said Jon Toigo, chairman at the Data Management Institute and founder of the Green Data Project. He noted that research firm IDC projects a 300 percent increase in storage devices purchased between 2006 and 2010. But there are technology and management strategies for saving power on storage, such as storage virtualization, data deduplication, storage tiering and moving archival data to storage devices that can be shut down when not in use, said Sateesh Narahari, senior manager of marketing at Symantec. The company offers a handful of products to manage and use storage more efficiently, such as Symantec NetBackup, Symantec Veritas Cluster Server and Symantec Veritas CommandCentral Storage. Still, Toigo questions whether efforts such as data deduplication and storage tiering, all good initiatives, are more tactical than strategic. A strategic approach requires knowing what's on your server drives, he said. Typically, about 40 percent of the data is inert; 30 percent is well used; 15 percent is allocated but unused; 10 percent is orphaned, meaning the owners of the data are no longer with the organization; and 5 percent is inappropriate, Toigo said. The figures come from a study he conducted with Randy Chalfant, Sun Microsystems' chief technology officer. Data center managers need to employ intelligent archiving, which gives users more specific information about the content and context of data stored on systems. Then they must deploy storage resource management technology that has thin provisioning functions so they can reclaim unallocated space, Toigo said. Archiving selectively and intelligently is the strategic approach, he said.
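Toigo's breakdown also gives a quick way to size the opportunity: everything other than the well-used 30 percent is, at least in principle, a candidate for archiving, reclamation or deletion. A back-of-the-envelope sketch follows; the 100 TB array size is just an example figure, not drawn from the study.

```python
# Toigo's estimated breakdown of data on a typical array (percent of capacity).
breakdown = {"inert": 40, "well used": 30, "allocated but unused": 15,
             "orphaned": 10, "inappropriate": 5}

array_tb = 100  # example array size in terabytes (assumption)
reclaimable = sum(share for kind, share in breakdown.items() if kind != "well used")
print(f"Candidate for archiving or reclamation: {reclaimable}% "
      f"(~{array_tb * reclaimable / 100:.0f} TB of a {array_tb} TB array)")
```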
'Intelligent archiving, storage resource management and data hygiene will bring in a lot greener environment than anything on the hardware level,' Toigo said.

DOE lab aims to save power by using slower processors, but more of them

Last month, when the Energy Department's Argonne National Laboratory made one of its smaller supercomputer purchases, it went with a largely unknown company, SiCortex. One of the deciding factors was the energy efficiency of SiCortex's SC5832. 'It's a very interesting machine in many ways,' said Ewing Lusk, director of the math and computer science division at the laboratory. 'We see it as a wave of the future in many ways, including the low power.' Although Argonne has plenty of power on tap, the lab is looking at how to factor energy efficiency into tomorrow's supercomputer work. Generally speaking, processors aren't getting any faster, so large supercomputing systems such as those run by Argonne are using more processors and splitting the work across all of them. This approach, however, leads to concerns about power and cooling usage. The heat that processors give off is not only lost energy; dissipating it requires additional data center cooling, and therefore even more electricity. So interest has been high in 'bringing the power budget down by using slower processors that don't get so hot,' Lusk said. The performance penalty from using slower chips is made up by using more of them. Unlike other cluster vendors, SiCortex has developed its own processor cores, which are packaged six per chip. In recent years, third-party chip fabrication plants and improvements in chip design software have made it easier for new system suppliers to develop their own microprocessors, said John Goodhue, vice president of marketing at the company. The company didn't develop a cutting-edge microprocessor but rather built a simple processor tweaked for fast interconnect communications and low power usage. The SC5832 units house 5,832 64-bit processors. Although each one runs at only 1 GHz, about half the lowest clock speeds offered by Advanced Micro Devices and Intel, together they can perform 6 trillion floating-point operations per second (6 teraflops). Each node also features interprocessor communications logic, DDR2 memory controllers and PCI Express input/output logic. As for energy usage, the machine is impressive on paper. A node in the SiCortex cluster consumes 15 watts, much less than the 250 watts or so consumed by a typical server in a cluster, the company said. Overall, the rig draws about 20 kilowatts of power. It's not all good news, though. As Lusk points out, 'Now we have to program the beast.' The SC5832 will execute your typical DOE lab number crunching for fields such as astrophysics, climate modeling and seismic research. Some of the programs that handle these tasks may have to be rewritten to work in parallel processor environments, and 'that is a little bit of a challenge for some applications,' Lusk said. But that was part of the reason the lab purchased the SC5832, Lusk said. Argonne wanted 'to get people working on that target early.' Earlier this month, the lab contracted with IBM for a much larger, 556-teraflop machine, called BlueGene/P, which will have 163,840 processors. The work on the SC5832 will help in understanding how to use BlueGene/P as efficiently as possible.
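As a rough back-of-the-envelope check of those figures, the performance-per-watt comparison can be computed directly. The SC5832 numbers below are the ones quoted above; the per-server gigaflops figure for a conventional cluster node is an illustrative assumption, not a number from the article:

    # Back-of-the-envelope efficiency comparison using the figures quoted above.
    sicortex_flops = 6e12          # ~6 teraflops peak for the SC5832
    sicortex_watts = 20e3          # ~20 kilowatts for the whole rig

    typical_node_watts = 250       # typical clustered server, per the article
    typical_node_flops = 20e9      # assumed ~20 gigaflops per server (illustrative guess)

    sicortex_eff = sicortex_flops / sicortex_watts          # ~300 megaflops per watt
    typical_eff = typical_node_flops / typical_node_watts   # ~80 megaflops per watt

    print(f"SC5832: {sicortex_eff / 1e6:.0f} Mflops/W")
    print(f"Typical cluster node: {typical_eff / 1e6:.0f} Mflops/W (assumed)")

Even with a generous assumption for the conventional node, the slower-but-denser design comes out several times ahead on work done per watt, which is the trade Lusk describes.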
Can agencies learn anything from private-sector energy-efficiency efforts?

Industry vendors such as Cisco, Hewlett-Packard, Sun Microsystems and others are struggling with the same power, cooling and space issues that government agencies have. Several vendors have launched initiatives to consolidate data centers and implement technology that will reduce power consumption, save space and enable data center managers to better utilize processing and storage functions. Here's a snapshot look at how three of them are moving toward energy-efficient data centers.

Cisco: Green in Texas

Cisco Systems' new data center under construction in Richardson, Texas, will be as green as company officials can make it. Richardson was chosen as a site because it is relatively free from potential natural disasters, said Steve Picot, Cisco's regional manager for Federal Data Center Solutions. The company has two primary data centers now: one in California in the heart of earthquake country and the other in Florida on hurricane alley. The data center will contain 29,000 square feet of raised floor space, divided into four halls that support the needs of Cisco IT and the company's Linksys, Scientific Atlanta and Cisco government services groups, Cisco officials said. The data center, which is nearing completion, will adopt Cisco's information technology service-oriented model, in which processing power, storage and communications can be drawn from one big pool of resources only when needed. The data center will use a number of Cisco tools for application load balancing and server and storage management that are deployed in its current data centers, Picot said. For example, the company's MDS 9000 series switches have a feature called Virtual Storage Area Network (VSAN) built in. Instead of having separate storage devices working together, VSANs provide a way to group storage into a logical fabric using the same physical hardware infrastructure. IT equipment is being moved into the new facility now, and the first applications should start to arrive at the beginning of next year, with successive waves over the next few years, according to Cisco.

Hewlett-Packard: Dynamic cool

HP is in the second year of a three-year project to consolidate 85 data centers into six. Three of the centers are primary, and three are redundant. When the project is completed in June 2008, there will be two each in Atlanta, Houston and Austin, Texas. HP has designed the six facilities to be lights-out data centers, capable of being managed remotely. This will be enabled through the use of the company's adaptive infrastructure solutions, said Pat Tiernan, HP's vice president of social and environmental responsibility. HP also is implementing smart cooling technologies that optimize airflow for cooling of the data centers. Dynamic Smart Cooling, developed in HP Labs, consists of advanced software residing in an intelligent control node that continuously adjusts air conditioning settings in a data center based on real-time air-temperature measurements from a network of sensors deployed on IT racks. The technology actively manages the environment to deliver cooling where it is needed most. Additionally, the sensors can tell when a server is heating up and then direct cold air at that server. HP wants to make it easier for users of HP equipment to be green, from the desktop to the data center, by building energy efficiency into products from the start, Tiernan said.
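Dynamic Smart Cooling is proprietary HP software, but the basic feedback idea it describes, adjusting cooling output from live rack-inlet temperatures, can be sketched in a few lines. Everything below (the sensor reader, the setpoint, the adjustment step) is hypothetical and illustrative, not HP's actual algorithm:

    # Illustrative sensor-driven cooling loop, loosely modeled on the description
    # of Dynamic Smart Cooling above. All names and values are made up.
    TARGET_INLET_C = 25.0   # desired rack inlet temperature
    DEADBAND_C = 1.0        # ignore small fluctuations

    def read_rack_sensors():
        # Placeholder: a real deployment would poll the rack sensor network here.
        return {"rack-01": 24.2, "rack-02": 27.8, "rack-03": 25.4}

    def adjust_cooling(rack, delta_c):
        # Placeholder for commanding the air handler that serves this rack.
        direction = "more" if delta_c > 0 else "less"
        print(f"{rack}: send {direction} cold air ({delta_c:+.1f} C off target)")

    def control_step():
        for rack, inlet_c in read_rack_sensors().items():
            error = inlet_c - TARGET_INLET_C
            if abs(error) > DEADBAND_C:
                adjust_cooling(rack, error)

    control_step()

The point of the sketch is simply that cooling follows measured heat rather than a fixed room-wide setpoint, which is where the energy savings come from.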
Sun Microsystems: Green is destiny

Sun Microsystems recently opened new data centers in Santa Clara, Calif.; Blackwater, United Kingdom; and Bangalore, India, that were built using innovative designs and next-generation energy-efficient systems, power and cooling. Officials estimate the company's data center efforts will save nearly 4,100 tons of CO2 per year and trim one percent from Sun's total carbon footprint. The data centers were put into operation between January and June of this year. Santa Clara's is the largest, at 76,000 square feet. Efforts to save energy at that facility began with a three-month hardware consolidation and refresh project that increased computing power by more than 450 percent and is expected to save $1.1 million in energy costs a year, Sun officials said. 'Green is a destiny; energy efficiency is a reality,' said Dean Nelson, director of Sun's Global Lab and Datacenter Design Services. If companies strive to make data centers more energy efficient, they will turn green, he said.
<urn:uuid:2da5f87c-641f-4ff0-bdce-0665256548ca>
CC-MAIN-2017-09
https://gcn.com/articles/2007/11/16/the-means-to-go-green.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00060-ip-10-171-10-108.ec2.internal.warc.gz
en
0.933703
4,336
2.59375
3
With the kids on-board for the “Can we learn how to code together?” project, we rushed head-long into the Scratch language. The project, which was developed in 2003 at the MIT Media Lab, is designed for 8- to 16-year-olds, as well as their parents. It introduces a bunch of programming concepts and logic in a fun and easy-to-learn manner. It’s free to use, and you can also then sign up for free with a username/password. This is where our first moment of drama arrived with the kids. They don’t have a lot of experience yet in trying to come up with usernames, especially ones that ask you not to use their real names. My son, of course, wants us to have a username that includes the word “fart”, while my younger daughter just wants a reference to either Elsa or Anna from Frozen. I suggested “TeamShaw” as a username, but my son pointed out that we didn’t want to have our last name involved. In the end, we picked favorite characters from movies, and we ended up with a combination of Woody from Toy Story and Star-Lord from Guardians of the Galaxy. The Scratch site has a neat video on its home page that shows a bunch of the different projects that its users have created, so I played that for the kids to get them motivated and excited, to see that it was more than just a paint program or animation tool. I think the biggest problem in trying to get the kids started with programming is that they have the 50,000-foot idea (such as “I want to invent a game where you can fight your friends”), but haven’t yet figured out the basic parts just yet. Fortunately, after the video was done we could jump right into the on-site tutorial, which lets you create a program with the tools and sort of shows you the interface. Scratch uses a series of visual blocks to represent commands, and when you click and drag them into your “Scripts” area, you can connect several blocks together to start creating your program, which is then “acted out” on the interface’s “Stage”. On the stage is where the program’s “Sprites” appear, and each sprite that you create can include multiple scripts running on them, depending on whether you need the end user to perform input (click a flag, click a spot on the stage, etc.). You can also have multiple sprites doing things at the same time, or reacting to each other, etc. Other cool features include the ability to modify the sprites with color (through a simple Paint-like interface), as well as add sounds or photos through the computer’s microphone and webcam. It was this area that the kids really enjoyed -- instead of creating a soundtrack to their initial game with the provided background music, we recorded them singing their own song, which we were able to play in a loop. After we got through the Scratch tutorial provided on the site, we jumped into the first project in our DK Publishing book, called “Escape the Dragon!”. It’s a simple program where your initial sprite (always a cat, the unofficial mascot of the language) is being chased by a dragon, and the user controls the direction of the cat by moving their mouse around. As you get deeper into the project, you can direct the cat by moving a third Sprite around (in the book, it was a donut). We decided to go off the board a bit and change the sprites from a dragon chasing a cat to a mouse chasing a Mom. We replaced the donut with a bowl of cheese puffs (closest thing to cheese for our mouse). 
The flexibility of letting kids choose their own sprite designs or come up with their own is very cool -- it reminds me of several different kid-themed painting and rubber stamping apps on the iPad. The combinations may not make sense to the logic-driven parent, but kids are just fine with a mouse chasing a wizard on the moon, for example. The instructions on creating the different scripts were explained well within the book, and after about 60-90 minutes we had our finished “game” that we could start playing (basically, the game’s object is to see how long you can run around the screen before the mouse catches you.) At this point it was time for the kids to go to bed. My oldest daughter (age 8.5) seems to understand the logic within the scripts, but I think for the next project we’re going to switch positions and I’m going to have her doing the click-and-drag portions. For this one, she was reading along in the book and then watching me build the script. My son is more interested in the visual part of the stages, sprites and colors, rather than the logic, such as the “if/then” statements or figuring out how to move. My youngest is just happy to be a part of the team. I also downloaded the “ScratchJr” app for the iPad, which is even more basic since it’s geared to the 5- to 7-year-old crowd. The app has the same ideas in terms of a stage, Sprites, sounds, colors and backgrounds, but the scripts are created by moving visual icon blocks instead of descriptions via words. It helps that we already knew the basics of Scratch to know what icon we needed to start the program (the Green Flag), but it feels like we can get more complex with Scratch once we get a lot of the basics down. Next up: more script work and examples with the Scratch language through the book, and we’re going to see if we can build a simple dice-based game (or even figure out if we can duplicate a Rock, Scissors, Paper program).
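For readers curious how the "Escape the Dragon!" logic looks outside the block editor, here is a rough, non-Scratch sketch of the same chase loop in Python. The speeds, positions and the random stand-in for "follow the mouse pointer" are all made up; the point is just that the chaser's script boils down to "forever: point towards the player, move a few steps":

    import math, random

    # Positions on a 480x360 "stage", roughly matching Scratch's coordinate system.
    player = {"x": 0.0, "y": 0.0}
    chaser = {"x": -200.0, "y": -150.0}
    CHASER_SPEED = 5.0   # "move 5 steps" each tick (made-up value)

    def step_toward(sprite, target, speed):
        # The same two blocks the book's script uses: "point towards" + "move N steps".
        dx, dy = target["x"] - sprite["x"], target["y"] - sprite["y"]
        dist = math.hypot(dx, dy)
        if dist > 0:
            sprite["x"] += speed * dx / dist
            sprite["y"] += speed * dy / dist
        return dist

    ticks = 0
    while True:  # Scratch's "forever" loop
        # Stand-in for mouse input: the player wanders randomly each tick.
        player["x"] += random.uniform(-8, 8)
        player["y"] += random.uniform(-8, 8)
        if step_toward(chaser, player, CHASER_SPEED) < 10:
            print(f"Caught after {ticks} ticks!")
            break
        ticks += 1

Seeing the loop written out this way can help the older kids connect the visual blocks to the "if/then" and "repeat" ideas underneath them.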
<urn:uuid:87ec6b18-bc9a-49b0-8f14-1c84b934eb3b>
CC-MAIN-2017-09
http://www.itworld.com/article/2694942/development/can-keith-code--lesson-1--scratching-the-surface.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00532-ip-10-171-10-108.ec2.internal.warc.gz
en
0.967988
1,244
3.109375
3
It doesn’t require advanced computer programming skills to enter the world of cybercriminal activity. Ransomware kits are a new malware tool that allows criminals without programming skills to target victims and extort payments. And it’s relatively easy to track these kits down — they are already being marketed underground for Windows-based systems. Ransomware is a piece of malware that hijacks the victim’s ability to access data, communicate, or use their system at all. One way ransomware is different from other types of malware — such as backdoors, keyloggers, and password stealers — is that attackers do not rely on their victims using the infected systems for financial transactions. Instead, these criminals essentially hold a system hostage. The victims are faced with either losing their data or paying a ransom in the hope of regaining access.

Windows Threats Spread

Ransomware is a fast-growing criminal enterprise, especially on PCs. Infections on Windows PCs more than tripled during 2012 alone. The large amount of Windows-based malware owes its existence to the easy availability of these ready-to-go malware kits in the underground market. In 2013, ransomware kits will most likely take the lead from malware kits as the most popular turnkey kits.

Mobile Ransomware Poised to Grow

Attackers have already developed ransomware aimed at holding mobile devices hostage. Although most malware authors do not target mobile devices because more users transact business on desktop PCs, making those systems more indispensable to users, this is changing. McAfee Labs researchers anticipate that the convenience of portable browsers will lead to more people making transactions on the go and keeping valuable data on a device — and those smartphones and tablets will be an even more critical tool to users. As mobile device use continues to rise exponentially, and as businesses rely on mobile devices as a core part of day-to-day operations, mobile ransomware is expected to increase considerably in 2013.
<urn:uuid:10a3bf31-5e38-40d1-90e0-75b980705691>
CC-MAIN-2017-09
https://www.mcafee.com/in/security-awareness/articles/new-threat-ransomware-kits.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00532-ip-10-171-10-108.ec2.internal.warc.gz
en
0.919103
389
2.65625
3
A group of researchers from Technion and Tel Aviv University have demonstrated new and unexpected ways to retrieve decryption keys from computers. Their research is “based on the observation that the ‘ground’ electric potential in many computers fluctuates in a computation-dependent way.” “An attacker can measure this signal by touching exposed metal on the computer’s chassis with a plain wire, or even with a bare hand. The signal can also be measured at the remote end of Ethernet, VGA or USB cables,” they explained. “Through suitable cryptanalysis and signal processing, we have extracted 4096-bit RSA keys and 3072-bit ElGamal keys from laptops, via each of these channels, as well as via power analysis and electromagnetic probing.” Their attacks have been leveraged against GnuPG, and they used several side channels to do it. They measured fluctuations of the electric potential on the chassis of laptop computers by setting up a wire that connected to an amplifier and digitizer. They also found a way to measure the chassis potential via a cable with a conductive shield that is attached to an I/O port on the laptop. Most surprisingly, the signal can also be measured after it passes through a human body. “An attacker merely needs to touch the target computer with his bare hand, while his body potential is measured,” they explained, adding that the measuring equipment is then carried by the attacker. Finally, they also succeeded in extracting the keys by measuring the electromagnetic emanations through an antenna and the current draw on the laptop’s power supply via a microphone. The bad news is that each of these attacks can be performed easily and quickly without the user being any the wiser (the researchers included realistic, everyday scenarios in the paper). More information about the attacks can also be found here.
<urn:uuid:8438d46c-aae7-414b-ba26-be6121fccd39>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2014/08/22/extracting-encryption-keys-by-measuring-computers-electric-potential/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00232-ip-10-171-10-108.ec2.internal.warc.gz
en
0.964176
388
3.125
3
While scientists are engaged in an all-out, worldwide scramble to avert the energy and climate change crises, the biggest discoveries could come from a surprising quarter: a modest redwood home on a wooded, 5-acre tract in rural Maryland, where a lone inventor toils day and night. Ronald Ace lacks hefty academic credentials or any of the billions of dollars that have flowed to other researchers. That hasn’t diminished his zeal in a years-long crusade to accomplish what many scientists deem unattainable. If the 73-year-old inventor is right, he is on the brink of two historic breakthroughs. If his novel ideas are validated and take hold, they could change the world. Those are big ifs. Ace has applied for patents for two inventions that he believes could speed a dramatic transition to cheap and abundant clean energy, shrink oil consumption to a gurgle and reduce greenhouse gas emissions to a smidgen of today’s levels. His “Solar Trap,” first reported by McClatchy in May 2013, has gained some credence from a former solar engineer at the Sandia National Laboratories in Albuquerque, N.M., who did a confidential review and found “no apparent deficiencies.” Ace calls his flat-panel trap “a fundamental scientific and environmental discovery” and contends that it could collect sunlight at high enough temperatures to shatter the barriers to a solar age. The device can capture more than 90 percent of the rays that hit it, as much as 10 times more than sun-tracking photovoltaic panels being installed around the globe, he said. More recently, Ace filed a second patent application, for an invention that he touts as likely able to transform heat into electricity with nearly 100 percent efficiency, 20-fold that of comparable devices in the clean-as-you-can-get field of thermoelectrics. His claim is especially astounding, because it seems to defy the second law of thermodynamics, a pillar of physics that emanated from the work of French scientist Nicolas Leonard Sadi Carnot 190 years ago. But Carnot’s equation dictating heat loss in steam engines has long been interpreted too broadly and doesn’t apply to this device, said Ace, who as a young man worked for more than a decade in the University of Maryland’s molecular physics laboratory. Andrew Masters, who has spent over 20 years as a U.S. thermoelectrics industry engineer, came to the same conclusion after confidentially reviewing Ace’s patent application and Carnot’s work. Masters, who has built prototypes for world-renowned research institutions, said he “couldn’t find a flaw” in Ace’s concept. In a letter to Ace last month, Masters said he’s seen hundreds of proposals to harness the waste heat in thermoelectric devices but never one so “conceptually simple and yet potentially game changing.” Another review came from John Darnell, a recently retired congressional energy adviser who privately analyzed both inventions and concluded, before becoming critically ill, that each would far surpass today’s technology. Ace’s problem is that neither of his inventions has been validated in customary ways, such as in published, peer-reviewed papers or by constructing prototypes, for which he needs financing. The costly patent applications, filed in 148 countries, are still secret and will remain so for nine months or more. So despite his rare blend of expertise in physics, optics and thermal energy, Ace’s expansive claims are sure to draw skepticism, if not disbelief, from the scientific community. 
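For reference, the Carnot limit mentioned above caps the fraction of heat that any conventional heat engine can convert to work when it operates between a hot source at absolute temperature T_h and a cold sink at T_c:

    \eta_{\mathrm{Carnot}} = 1 - \frac{T_c}{T_h}

At ordinary power-plant temperatures this works out to well under 100 percent, which is why a claim of near-total conversion draws the scrutiny described here; Ace's argument, discussed below, is that his device is not a heat engine in Carnot's sense and so the bound does not apply.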
To grasp the dimensions of what Ace is proposing, consider this: If the thermoelectric invention works as he hopes, it could offer a way to build affordable electric cars that don’t require recharging and can travel up to 300 miles on two gallons of gas. If both ideas were to perform as envisioned, the devices could be combined to create power plants that have no moving parts, emit no greenhouse gases, deliver electricity for no more than a third of today’s cost and do so reliably with more than 90 percent efficiency — a feat heretofore considered beyond reach. Ace’s chances of success hinge not only on the validity of his scientific conclusions, but also on whether investors, governments, institutions or potential patent licensees put money behind his tiny company, Pinnacle Products LLC, which promotes his energy-saving ideas on the website H2OPE.US. He hopes that the comments from expert reviewers will help attract financing. Ace retained Nathan Siegel, the former Sandia solar engineer who is now an assistant professor of mechanical engineering at Bucknell University, to analyze whether his Solar Trap could overcome the obstacles that for decades have stunted progress in solar energy. A central issue is long-term energy storage, so that solar plants can reliably provide power at night and during long bouts of cloudy weather. Battery storage for photovoltaic panels, which directly convert solar energy to electricity, is expensive and generally measured in hours, not days. Storing solar thermal energy has also been a problem, because scientists haven’t found a way to operate a solar power plant cost-effectively at temperatures above roughly 900 degrees Fahrenheit. Higher temperatures are needed to ensure that enough energy can be stored in cheap materials, such as sand, to fuel a plant when there’s no sunlight. Siegel, in conclusions shared with McClatchy, wrote that in 10 years in the solar field he’s never encountered an approach like Ace’s trap, designed to concentrate energy at 2,400 degrees Fahrenheit and higher for affordable long-term storage. Known as “angular selectivity,” the design capitalizes on largely unnoticed properties of sunlight: it arrives in almost perfectly parallel rays, but when heated, radiates at angles spanning 180 degrees, Ace said. Precisely how the invention works will remain secret until a patent is published. But Ace said he designed it to geometrically ensnare solar rays to prevent diffuse radiation and conduction losses as temperatures hit extremes — a vexing problem for the solar industry. “In my opinion, this approach is unique” and worthy of more comprehensive analysis, Siegel wrote, noting that the device has the potential to collect solar energy at soaring temperatures without a hugely expensive field of thousands of mirrors like those that concentrate sunlight in federally subsidized solar plants. Siegel did not extensively analyze Ace’s lower-temperature, rooftop Solar Trap, designed to power homes and businesses by collecting energy at up to 1,000 degrees Fahrenheit, hot enough to meet much of the world’s energy needs. However, he said Ace’s predictions that it could perform at greater than 90 percent efficiency “are likely accurate.” Discovery of the second possible breakthrough, in thermoelectrics, grew from Ace’s determination to find a way to convert solar energy into electricity in homes and offices without the roar of a turbine or engine. 
He also sought an alternative to the back-end design of most power plants, which waste 60 percent or more of the heat generated from burning fossil fuels or from nuclear fuel. He lit on thermoelectrics, a little-used process in which an electric current is created when heat flows through certain materials, most commonly the substance bismuth telluride. The thermoelectric industry manufactures the devices primarily for exotic purposes, such as producing electricity in spacecraft. Conventional thinking is that these devices are limited by Carnot’s equation governing unavoidable heat waste. However, when Carnot died in 1832 at age 36, the thermoelectrics phenomenon was little more than a curiosity. As Ace tossed around ideas, he questioned why the waste heat had to be lost. He scoured a book about Carnot, published in 1897, that traced a debate over his equation between two famed physicists: William Thomson of Scotland, known later in life as Lord Kelvin, and James Prescott Joule of England. Joule never accepted Carnot’s argument that a heat engine couldn’t be 100 percent efficient and sowed similar doubts in Kelvin. Neither of the two, however, could produce a heat engine that debunked Carnot. Ace reasoned that Carnot’s theory is ironclad when applied to solids, liquids and gases that expand when heated, as does steam, but that his own thermal-electric design doesn’t expand or contract and “is not a Carnot engine.” Thus, while typical thermal-electric devices waste 95 percent of the heat run through them, Ace said that his can regenerate nearly all of it. Masters, a vice president of Custom Thermoelectric LLC in Bishopville, Md., wrote Ace: “I have gone over your heat exchanger concept many times looking for the flaw that collapses its premise. I cannot find it. … This concept could offer system conversion efficiencies far in excess of anything we have seen to date.” In a phone interview, Masters said that Ace’s concept allows heat to pass through a conductive material not just once, as thermoelectric devices do now, but “over and over again. That’s what makes it unique.” “I believe he really, really has something,” Masters said. While Ace has yet to draw a large investor, he hasn’t applied for an Energy Department grant, mainly because federal grant money cannot be used to finance patents. Nor has he gotten a dime from charitable foundations doling out funds for technology that cuts carbon emissions. Former Republican Rep. Roscoe Bartlett of Maryland, who quietly championed Ace’s efforts until losing his seat in Congress in 2012, said he suspects some investors have balked because Ace is “somewhat eccentric, and what he’s proposing seems too good to be true,” perhaps prompting them to dismiss him as “loony.” But Bartlett, a scientist himself, calls Ace a “national treasure.” Ace said he’s now inviting potential investors to support his work by financing consumer devices he’s invented, fearing that his high-tech energy solutions and planetary-scale talk might frighten people. “America doesn’t have an energy solution,” Ace said. “The world doesn’t have an energy solution yet. That’s my dream. It has always been my dream, but I scare people because I think too big. Somebody has to. We can’t get there from here thinking small.” ©2014 McClatchy Washington Bureau
<urn:uuid:90e9cc07-1db5-4ed8-a1e6-530fc8f9e68d>
CC-MAIN-2017-09
http://www.govtech.com/federal/A-Maryland-Inventors-Big-Energy-Ideas-Have-Promise-and-Big-Ifs.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00108-ip-10-171-10-108.ec2.internal.warc.gz
en
0.952876
2,241
3.03125
3
For a long time only “structured” data could be analyzed in databases. But what about data that doesn’t come in convenient rows and columns, like the Human Genome data? For a long time, it was only so-called “structured” data that could be analyzed in databases. This kind of data comes in rows and columns (you know, like you find in spreadsheets) and has a predictable format. Most accounting data is like this and the science of business over the last 30 years has been revolutionized by using high-speed, high-volume SQL databases to get insight from that structured data. However, what about other sciences? What about data that doesn’t come in convenient rows and columns? I’ve been looking recently at Human Genome data. This definitely doesn’t come into the category of “structured” data, although there is some element of structure to it. For example, human DNA can be expressed as a long sequence of 4 different letters (A, T, C and G). These letters are read in groups of three (codons), and of the 64 possible combinations of these letters, 61 code for specific amino acids and the other three act as stop signals marking the end of a gene’s protein-coding sequence. The human genome can be represented by around 3 billion letters, which would equate to around 800 MB of data (unzipped), although the actual size of a file containing a human genome is much larger due to the way that sequencing is done and because of data quality issues. The science of physically “sequencing” these 3 billion letters from a sample of DNA is now well established. Unfortunately, the science of analyzing that data is rather less well understood. The good news is that anyone can acquire this data. The “1000 genomes project” data, for example, has been available in the AWS cloud for some time and consists of >200 terabytes for just 1700 participants. You can imagine the data volumes associated with more contemporary projects, such as the Million Human Genomes project. But while you can get your hands on genome data, how do you go about analyzing it? The problem is that there’s a lot of it, and it’s very difficult to interpret. Well, you certainly wouldn’t have to start from scratch; geneticists have written a number of libraries to calculate various common metrics of interest. For example, the “Pybedtools” library for Python allows you to identify genes that show a given genetic variation. You could become a Python developer and write a few million lines of code on a big server to make use of this library. Alternatively, you could use EXASOL’s in-memory analytic database (in the cloud or on your own servers) and import these genomic libraries so that you could build User Defined Functions around them. The upshot of this second approach is that you can run database queries that are “in-memory” and parallel and are therefore extraordinarily fast. You also have the benefit of being able to blend this “unstructured” genetic data with, for example, structured patient data and use the SQL language and mainstream business intelligence tools (such as Tableau) to give you great visualizations of the data without requiring lots of computer code. More and more, we are talking to organizations with data requirements that extend well beyond traditional accounting data. Genetics is a growing area of interest, but our system is designed, through the use of our User Defined Function framework, to support any kind of data at all. Why not have a look for yourself? You’ll be surprised at the kinds of analytics you can do with EXASOL.
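As an illustration of the kind of library call mentioned above, here is a minimal Pybedtools sketch that intersects a set of gene annotations with a set of variant positions. The file names are hypothetical placeholders; you would substitute your own BED files, and a production pipeline would of course do far more:

    import pybedtools

    # Hypothetical inputs: gene annotations and variant calls in BED format.
    genes = pybedtools.BedTool("genes.bed")
    variants = pybedtools.BedTool("variants.bed")

    # Genes that overlap at least one variant (u=True reports each gene once).
    genes_with_variants = genes.intersect(variants, u=True)

    for interval in genes_with_variants:
        print(interval.chrom, interval.start, interval.end, interval.name)

Wrapped inside a User Defined Function, the same logic can run in parallel next to the structured patient data it is being blended with, which is the point of the in-database approach described above.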
<urn:uuid:08cffd5f-e9ec-4119-a1aa-ed66447fc3a9>
CC-MAIN-2017-09
http://www.exasol.com/en/blog/2015-08-12-gene-genie-genomic-data/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00404-ip-10-171-10-108.ec2.internal.warc.gz
en
0.941888
775
2.953125
3
The U.S. Food and Drug Administration announced last month that it will classify metal-on-metal hip implants as high-risk devices. That comes after the artificial joints were found to have failed at high rates, causing disability and requiring additional surgery for thousands of people. But hundreds of other potentially high-risk medical devices remain in use without what many consider to be adequate testing. The FDA's standards for reviewing all high-risk medical devices before allowing them on the market are rooted in the Medical Device Amendments of 1976, which aimed to bring the agency's oversight of devices more closely in line with its regulation of prescription drugs. But it left holes. The legislation created varying safety standards for devices that the FDA would deem as low, medium, and high risk. Medium-risk products, like surgical stitches, could be sold without first being tested on people under most circumstances, provided the device was "substantially equivalent" to one already on the market. For high-risk devices, like artificial hearts, companies were generally required to test their products in people and demonstrate to FDA that the products were safe and effective.
<urn:uuid:726eddc4-f806-4c37-9bc1-6b5a90c1fa9b>
CC-MAIN-2017-09
http://www.nextgov.com/health/2013/02/loophole-keeps-precarious-medical-devices-use/61552/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00404-ip-10-171-10-108.ec2.internal.warc.gz
en
0.97647
227
2.828125
3
Let us face it, modern e-mail communication relying on SMTP is fundamentally broken – there is no sender authentication. There are a lot of countermeasures in the form of filtering and add-on authentication, but none of them has proved to be 100% successful (that is, a 100% hit ratio with 0% false positives). Spammers always find new ways of confusing filters with random noise, bad grammar, hidden HTML code, padding, bitmap-rendered messages etc. Our world is becoming an overloaded and unusable mailbox of spam. This article will nevertheless try to cover some of the spam problems and possible solutions, but bear in mind that all of these are no more than a temporary fix. Product spam, financial spam, frauds, scams, phishing, health spam, Internet spam, adult spam, political spam, you-name-it spam. Despite Bill Gates' brave promise in 2004 ("Two years from now, spam will be solved"), e-mail spam has significantly increased worldwide in the last two years in both volume and size, making up over 70% of total e-mail traffic. According to the First MAAWG Global Spam Report from Q1 2006, around 80% of incoming e-mail was detected as abusive. A bit later, in Q3 2006, various Internet service providers around the world reported an alarming increase in unsolicited e-mail over a very short period due to the range of new spamming techniques involved. At the end of 2006 the estimated volume of the world's total spam is around 85 billion messages per day (obviously this number is rather approximate) – and it is increasing exponentially. We all know how much it is going to cost (quick spam calculator). Spammers have undoubtedly adapted and evolved: until recently they used a single-IP setup for delivering their unwanted e-mail, usually hopping from one dialup to another. They have used open proxies, open mail relays and other similar easy-to-track sources. Unfortunately, this has changed – current spamming methods include huge networks (called botnets) of zombie computers used for distributed spam delivery and denial-of-service attacks. Various new viruses and worms are targeting user computers, eventually turning them into huge spam clusters. Not only Microsoft Windows PCs are hacked; more and more Unix and Linux servers are affected too. And it is not for the fame and the glory, but to enable crackers to install and run scripts for remote-controlled spamming. In the meantime, nobody knows how many spambots are currently harvesting the Web in search of new e-mail addresses, their new victims. There is nothing sophisticated in their attacks, only brute force and numbers. Spammers earn a living by making and delivering spam and they do it darn well.

Reality check, 123

Due to the troublesome nature of the Internet today, spammers and script kiddies can easily put an anti-spam provider out of a job by simply DDoSing them to death (and doing a lot of collateral damage) – and that is exactly what happened to Blue Security with their successful but quite controversial Blue Frog service. A person known as PharmaMaster took their campaign as an open declaration of war and wiped them off the face of the Internet within a single day. Lessons have been learned: the spammers are to be taken seriously and it seems they cannot be dealt with by a single uniform blow nor by a single anti-spam provider.

What can we do about spam?

There are numerous commercial solutions against unsolicited e-mail (SurfControl, Websense, Brightmail, IronPort, etc.) and some of them are rather expensive.
Depending on the available budget, requirements and resources at hand, an Open Source solution could be substantially cheaper and possibly just as effective as its commercial counterpart. There is a whole range of readily available Open Source solutions for each of the popular anti-spam techniques for e-mail receivers. Some of them are at the core of even the most advanced commercial solutions. As most readers probably know, anti-spam solutions are most effective when different methods are combined, forming several layers of analysis and filtering. Let us name a few of the most popular.

DNS blacklisting is a simple and cheap way of filtering remote MTA (Mail Transfer Agent) peers. For every remote peer the SMTP service will reverse its IP and check the forward ("A") record in the BL domain of the DNSBL system. The advantage of the method is its low processing overhead: checking is usually done in the initial SMTP session and unsolicited e-mail never hits the incoming queue. Due to the spam-zombie attacks coming from hundreds of thousands of fresh IP addresses every day, this method is significantly less effective today than it used to be, and no more than 40% of total inbound spam can be filtered using it. There are a lot of free DNSBL services in the world, but it is probably best to use well-known and reliable providers (and there are even subscription-based DNSBL services) which do not list half of the Internet overnight. Some of the most widely recognized are Spamhaus and SpamCop, for instance. Almost all FLOSS (Free/Libre/Open-Source Software) SMTP daemons have full RBL support, including Postfix, Exim, Sendmail, etc. For the SMTP services which do not support DNSBL out of the box it is possible to use the DNSBL tests in SpamAssassin, but that usually means no session-time checking. Another variant, which Spfilter uses, is to store several DNSBL exports in the form of local blacklists for faster processing. Of course, such a database needs to be synchronized manually from time to time, preferably on a daily basis.

The greylisting method is a recent but fairly popular method which slightly delays e-mail delivery from any unknown SMTP peer. A server with greylisting enabled tracks a triplet of information for every e-mail received: the IP address of the MTA peer, the envelope sender address and the envelope recipient address. When a new e-mail has been received, the triplet gets extracted and compared with a local greylisting database. For every yet-unseen triplet the MTA will reject the remote peer with a temporary SMTP failure error and log it into the local database. According to the SMTP RFC, every legitimate SMTP peer should try to reconnect after a while and redeliver the failed messages. This method usually requires minimum time to configure and has rather low resource requirements. As a side benefit it rate-limits the incoming SMTP flow from unknown sources, lowering the cumulative load on the SMTP server. There are still some misconfigured SMTP servers which do not retry the delivery since they interpret the temporary SMTP failure as a permanent error. Secondly, the impact of the initial greylisting of all new e-mail is substantial for any company that treats e-mail communication as a near-real-time service, since all of the initial e-mail correspondence will be delayed at least 300 seconds or more, depending on the SMTP retry configuration of the remote MTA peers.
Finally, greylisting does not work well with the big SMTP providers which have large pools of mail exchangers (i.e. more than a /24). The problems can be fixed by manually whitelisting each of the affected domains or network blocks. Regarding software, almost every Open Source MTA has several greylisting implementations available: Emserver, Postgrey, Milter-greylist, etc.

Sender verify callout

SMTP callback verification, or the sender verify callout, is a simple way of checking whether the sender address found in the envelope is a really deliverable address or not. Unfortunately, verification probes are usually blocked by the remote ISP if they happen too often. Further, a remote MTA does not have to reject unknown destinations (e.g. the Qmail MTA usually responds with "252 send some mail, i'll try my best"). To conclude: it is best to do verification only for known spammer source domains, which can be easily extracted from the results of the other methods (such as content analysis). Sender verification is supported in most FLOSS MTAs: Postfix, Exim, Sendmail (via a milter plugin), etc.

Content-based filtering is probably the core of most anti-spam filters available. It usually consists of several subtypes, so let us state a few. Static filtering triggers e-mail rejection on special patterns ("bad" words and phrases, regular expressions, blacklisted URIs, "evil" numbers and similar) typically found in the e-mail headers or the body of the e-mail itself. False positives are quite possible with this method, so this type is best used in conjunction with policy-based systems (often called heuristic filters) such as SpamAssassin and Policyd-weight. Such filters use the weighted results of several tests, typically hundreds of them, to calculate a total score and decide whether the e-mail is spam or ham. In this way, a failure in a single test does not necessarily decide the fate of an e-mail. At least several tests have to indicate spam content to accumulate a spam score high enough for an e-mail to be flagged as spam, which results in a more reliable system. Of course, a weighted/scoring filter can incorporate all of the other filter types among its scoring methods. The next type of content analysis is statistical filtering, which mostly uses the naive Bayesian classifier for frequency analysis of word occurrences in an e-mail. Such filtering, depending on the implementation, requires initial training on already presorted content and some retraining (albeit on a much smaller scale) later on to obtain maximum efficiency. Bayesian filtering is surprisingly efficient and robust in all real-life examples. It is implemented in the very popular SpamAssassin and DSPAM solutions, as well as Bogofilter, SpamBayes, POPFile and even in user e-mail clients such as Mozilla Thunderbird. Some implementations such as SpamAssassin use the output of other spam-filtering methods for retraining, which gradually improves the hit/miss ratio. Most of the implementations (DSPAM, SpamAssassin) have a Web interface which allows a per-user view of the quarantined e-mail as well as per-e-mail retraining. This improves the quality of either the global dictionary (a database of learned tokens) or the individual per-user dictionaries.
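To make the statistical approach concrete, here is a stripped-down sketch of Bayesian token scoring in Python. Real engines such as DSPAM or SpamAssassin's Bayes module are far more sophisticated (token hashing, chi-squared combining, per-user dictionaries), so treat this purely as an illustration of the idea, with made-up training samples:

    import math, re
    from collections import Counter

    spam_tokens, ham_tokens = Counter(), Counter()

    def tokenize(text):
        return re.findall(r"[a-z0-9$]+", text.lower())

    def train(text, is_spam):
        (spam_tokens if is_spam else ham_tokens).update(tokenize(text))

    def spam_probability(text):
        # Combine per-token likelihood ratios (Laplace-smoothed) into one score.
        log_odds = 0.0
        s_total = sum(spam_tokens.values()) + 1
        h_total = sum(ham_tokens.values()) + 1
        for tok in tokenize(text):
            p_spam = (spam_tokens[tok] + 1) / s_total
            p_ham = (ham_tokens[tok] + 1) / h_total
            log_odds += math.log(p_spam / p_ham)
        return 1.0 / (1.0 + math.exp(-log_odds))

    train("cheap pills buy now", True)
    train("meeting agenda for monday", False)
    print(spam_probability("buy cheap pills"))   # well above 0.5, so likely spam

The dictionaries built by train() are exactly the "database of learned tokens" mentioned above, and retraining on misclassified messages is what keeps them accurate over time.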
DSPAM, for instance, supports a whole range of additional features such as combining extracted tokens to obtain better accuracy, tunable classifiers, initial training sedation, automatic whitelisting, etc. Another popular solution is CRM114, which is a superior classification system featuring six different classifiers. It uses Sparse Binary Polynomial Hashing with Bayesian Chain Rule evaluation, with full Bayesian matching and Markov weighting. CRM114 is both a classifier and a language. DSPAM and CRM114 are currently the two most popular and most advanced solutions in this field, and they are easily plugged into most SMTP services. Note that plain Bayesian filters can be fooled by the quite common Bayesian white-noise attacks, which usually look like random nonsensical words (also known as a hashbuster) in the form of a simple poem. Such words are randomly chosen by the spammer's mailer software to mimic personal e-mail correspondence and therefore thwart the classifier. Most of the modern content analysis filters do detect such attacks; so do SpamAssassin and DSPAM.

A small but significant amount of unsolicited e-mail is the same for every recipient. A checksum-based filter strips all the usual varying parts of an e-mail and calculates a checksum from the leftovers. Such a checksum is then compared to a collaborative or distributed database of all known spam checksums. Unfortunately, spammers usually insert poisoning content (the already mentioned hashbusters) unique to every e-mail. This causes the checksums to change, and the e-mail is no longer recognized as known spam. Two of the most popular services for this type are Distributed Checksum Clearinghouse and Vipul's Razor, which both have their own software and are both supported in third-party spam-filtering software such as SpamAssassin.

Finally, we are left with several methods of authentication that basically try to ensure the identity of a remote sender via some kind of automated process. The identification makes it possible to reject all e-mail from known spam sources, to negatively score or even deny e-mail with identified sender forgeries, and to whitelist e-mail which is valid and comes from known reputable domains. This method should minimize the possibility of false positives, because a valid e-mail should get higher positive scores (used by the policy filter) right from the start or even completely bypass the spam filters, which can be made more sensitive in return. There are several similar authentication mechanisms available: SPF (Sender Policy Framework), CSV (Certified Server Validation), SenderID and DomainKeys. They are mostly available as third-party plugins for the most popular OSS MTAs, usually in the form of Perl scripts available at CPAN. Unfortunately, none of them is a solution recognized widely enough to be used in every SMTP service in the world. DomainKeys and the enhanced DKIM (DomainKeys Identified Mail) protocol use a digital signature to authenticate the domain name of the sender as well as the content of a message. By using the sender domain name and the received headers, a receiving MTA can obtain the public key of such a domain through simple DNS queries and validate the signature of the received message. A successful check proves that the e-mail has not been tampered with and that the sender domain has not been forged. SPF comes in the form of DNS TXT entries in each SPF-enabled Internet domain.
These records can be used to authorize any e-mail in transit from such a domain. SPF records publish the policy on how to handle e-mail forgeries or successful validation, as well as the list of addresses from which the domain's e-mail may legitimately originate. If none of those match the sender address in the received e-mail, the e-mail is probably forged and the receiver can decide on the future of such e-mail depending on the SPF qualifiers (SOFTFAIL, FAIL, NEUTRAL, PASS) from the SPF policy. The problem is that SPF breaks e-mail forwarding to other valid e-mail accounts if the domain administrator decides to use the SPF FAIL policy (hard fail), although in the future SRS (Sender Rewriting Scheme) could eventually help. SenderID is a crossover between SPF and Caller ID with some serious standardization issues, and it does not work well with mailing lists (which require Sender or Resent-Sender headers). CSV is about verifying the SMTP HELO identity of the remote MTA by using simple DNS queries to check whether the domain in question is permitted to use the remote IP address seen in the current SMTP session and whether it has a good reputation with a reputable Accreditation Service.
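A hedged sketch of the two DNS-based checks described above, a DNSBL lookup and an SPF TXT record fetch, might look like this in Python. The DNSBL part uses only the standard library; the TXT lookup assumes the third-party dnspython package is installed, and the zone, domain and IP shown are just examples:

    import socket

    def dnsbl_listed(ip, zone="zen.spamhaus.org"):
        # Reverse the octets and query <reversed-ip>.<zone>; an A record means "listed".
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)
            return True
        except socket.gaierror:
            return False

    def spf_record(domain):
        # Requires the dnspython package (pip install dnspython).
        import dns.resolver
        for answer in dns.resolver.resolve(domain, "TXT"):
            txt = b"".join(answer.strings).decode()
            if txt.startswith("v=spf1"):
                return txt
        return None

    print(dnsbl_listed("127.0.0.2"))      # the conventional DNSBL test address
    print(spf_record("example.org"))      # prints the domain's SPF policy, if any

In a real MTA these checks run during the SMTP session itself, which is what keeps listed or forged mail out of the incoming queue in the first place.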
<urn:uuid:a2752ca4-f8f1-4fa7-8442-bdc66029cff3>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2007/07/16/open-source-filtering-solutions-and-the-spam-problem/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173761.96/warc/CC-MAIN-20170219104613-00632-ip-10-171-10-108.ec2.internal.warc.gz
en
0.937236
3,205
2.65625
3
Ford’s current autonomous research vehicles already have LiDAR built into them for 3D mapping, but those sensors are big, spinning spikes on the roof of the car. At CES in Las Vegas last week, Ford showed a new LiDAR sensor that will be built into the company’s next generation of test cars, which will triple the automaker’s total fleet to 30 cars. This new LiDAR sensor, Velodyne’s Solid State Hybrid Ultra Puck, looks slightly taller than a real hockey puck but has shrunk enough in size to fit into the sideview mirror of a car. This means Ford’s new generation of autonomous research vehicles will look a little less bizarre, and it also means LiDAR will have no problem fitting into the semi-autonomous and fully autonomous cars that will ply our roads someday. LiDAR plays a special role among the many sensors and cameras these cars will need. LiDAR can draw a 3D map of the world around it in real time, helping a car understand its ever-changing surroundings.It does this by sending out thousands of signals per second to bounce off everything within several hundred feet of the car. LiDAR isn’t all-seeing. Heavy fog can stymie its sensors, as can dense forest canopies. That’s why it’s just part of the equipment autonomous vehicles will need to get around in the world—but an important one. Radar sensors can tell if something’s nearby but not what it is. Cameras can show what it is but can only suggest distance or topography. It’s up to LiDAR to fill in those last two kinds of data, and it can also help identify an object by delineating its shape. You can see streets with a map, but LiDAR can tell the car what else is on and around those streets, from pedestrians and other cars to trees and buildings. LiDAR maps look like brightly colored line drawings that change constantly as objects flow by the car. If I could stick one of these LiDAR sensors into my sideview mirror right now I would, just to get that mesmerizing 3D image. But it has a bigger mission than entertaining me, and it’s coming soon to a Ford autonomous research vehicle near you. This story, "How Ford's autonomous test vehicles make 3D LiDAR maps of the world around them" was originally published by PCWorld.
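The geometry behind those 3D maps is straightforward: each return is a measured distance plus the direction the pulse was fired in, which converts to an x/y/z point in the sensor's frame. A minimal sketch, with made-up sample values and no correction for vehicle motion:

    import math

    def lidar_return_to_point(distance_m, azimuth_deg, elevation_deg):
        # Convert one range/angle return into Cartesian coordinates (sensor frame).
        az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
        x = distance_m * math.cos(el) * math.cos(az)   # forward
        y = distance_m * math.cos(el) * math.sin(az)   # left/right
        z = distance_m * math.sin(el)                  # up/down
        return (x, y, z)

    # A few hypothetical returns: (distance in meters, azimuth, elevation)
    returns = [(12.4, 0.0, -2.0), (7.8, 45.0, 0.5), (30.1, -90.0, 1.0)]
    cloud = [lidar_return_to_point(*r) for r in returns]
    for pt in cloud:
        print("%.2f %.2f %.2f" % pt)

Repeat that conversion hundreds of thousands of times per second and you get the constantly shifting line-drawing of the street that the article describes.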
<urn:uuid:d0ce675f-fd4a-4a1a-95ce-3d7c542b7ea2>
CC-MAIN-2017-09
http://www.itnews.com/article/3020407/ces/how-fords-autonomous-test-vehicles-make-3d-lidar-maps-of-the-world-around-them.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00400-ip-10-171-10-108.ec2.internal.warc.gz
en
0.958115
516
2.625
3
Big Data is a phrase we hear over and over again. Yes it's obvious, Big Data means, well, big data, lots of it. We all get that Facebook, Twitter and the other mega-web apps generate literally tons and tons of data. But beyond the mega web apps, what really is Big Data? What can we do with it and why does everyone get so excited by it? For help with this I went to my friends at LexisNexis, makers of HPCC Systems. When we speak about big data, the problem is not amassing a lot of data; it is the analysis of the data to make something of value out of it that is the real trick. The folks at LexisNexis have been doing this for a long time. HPCC Systems is LexisNexis's own in-house big data solution, which they open sourced about a year ago. For purposes of this article, though, whether we are speaking about HPCC or Hadoop or any other big data solution is not important. I wanted to illustrate what you can do with good analysis of big data. I am going to share a case study by HPCC Systems on a proof of concept they did for the Office of the Medicaid Inspector General (OMIG) of a large Northeastern state. HPCC Systems was given a large list of names and addresses. Overlaying their own publicly available data, they sought to identify social clusters of Medicaid recipients living in expensive houses and driving expensive cars. Of course, it helps if you have 50TB of public data and lots of experience building social graphs. In any event these are the kind of tasks that HPCC and big data solutions are built for. Comparing Medicaid rolls with purchases of cars and homes revealed some interesting results. Here is a map that was generated: Not only did the analysis turn up lots of likely Medicaid fraud, but it also turned up connections that could be indicative of money laundering and mortgage fraud. This kind of result simply would not be possible without the power of a big data analysis engine like HPCC Systems. I had a chance to speak with Jo Prichard of LexisNexis, who showed me some other examples of big data analysis. One involved taking the total page views of Wikipedia for the year, along with public mentions of specific personalities. So, for example, tracking mentions of Whitney Houston against her Wikipedia page views. Again, the results were pretty extraordinary. Another example was drug prescription abuse. Again, overlaying public data on the initial data set produced some eye-opening results. This really only scratches the surface of what you can do with big data if you have the horsepower and analysis to use it. In this case it is HPCC Systems, but it could be Hadoop (though the LexisNexis folks say not as easily as you can with HPCC) or another big data solution. This kind of insight is what gets people really excited about big data beyond the Facebook-Twitter crowd.
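HPCC Systems expresses this kind of analysis in its own ECL language over terabytes of data, but the underlying idea, joining a benefits roll against property and vehicle records on shared keys and flagging outliers, can be sketched at toy scale with pandas. Every column name and value here is made up purely for illustration; it is not the OMIG data or HPCC's actual query:

    import pandas as pd

    # Toy stand-ins for the real datasets (columns and values are hypothetical).
    medicaid = pd.DataFrame({
        "name": ["J. Smith", "A. Jones", "M. Lee"],
        "address": ["12 Oak St", "9 Elm Ave", "4 Pine Rd"],
    })
    property_recs = pd.DataFrame({
        "address": ["12 Oak St", "4 Pine Rd"],
        "assessed_value": [950_000, 180_000],
    })
    vehicle_recs = pd.DataFrame({
        "name": ["J. Smith", "A. Jones"],
        "vehicle_value": [85_000, 9_000],
    })

    linked = (medicaid
              .merge(property_recs, on="address", how="left")
              .merge(vehicle_recs, on="name", how="left"))

    # Flag recipients whose linked assets look out of line with eligibility.
    flags = linked[(linked["assessed_value"] > 500_000) | (linked["vehicle_value"] > 50_000)]
    print(flags)

The hard parts at real scale are fuzzy name/address matching and building the social graph across billions of records, which is exactly where a purpose-built engine earns its keep.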
<urn:uuid:86993f33-5c90-414b-8482-9e4b3c9b1caa>
CC-MAIN-2017-09
http://www.networkworld.com/article/2222660/opensource-subnet/so-what-exactly-do-you-do-with-all-of-that-data-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00400-ip-10-171-10-108.ec2.internal.warc.gz
en
0.955397
616
2.578125
3
When you merge a partition with its parent partition, the chosen partition and its replicas combine with the parent partition. You do not delete partitions — you only merge and create partitions to define how the directory tree is split into logical divisions, as shown in the following illustration.

Figure 6-2 Before and After a Partition Merge

There are several reasons you might want to merge a partition with its parent:
- The directory information in the two partitions is closely related.
- You want to delete a subordinate partition, but you don’t want to delete the objects in it.
- You’re going to delete the objects in the partition.
- You want to delete all replicas of the partition. Merging a partition with its parent is the only way to delete the partition’s master replica.
- After moving a container, which must be a partition root with no subordinate partitions, you don’t want the container to be a partition anymore.
- You experience changes in your company organization, so you want to redesign your directory tree by changing the partition structure.

Consider keeping partitions separate if the partitions are large and contain hundreds of objects, because large partitions slow down network response time. The root-most partition in the tree cannot be merged because it is the top partition and has no parent partition to merge with. The partition is merged when the process is completed on the servers. The operation could take some time to complete depending on partition sizes, network traffic, server configuration, etc.

IMPORTANT: Before merging a partition, check the partition synchronization of both partitions and fix any errors before proceeding. By fixing the errors, you can isolate problems in the directory and avoid propagating the errors or creating new ones. Make sure all servers that have replicas (including subordinate references) of the partition you want to merge are up before attempting to merge a partition. If a server is down, eDirectory won’t be able to read the server’s replicas and won’t be able to complete the operation. If you receive errors in the process of merging a partition, resolve the errors as they appear. Don’t try to fix the error by continuing to perform operations — doing so only results in more errors.

To merge a child partition with its parent partition:
1. In NetIQ iManager, click the button.
2. Specify the name and context of the partition you want to merge with its parent partition, then click.
A consortium of psychiatrists, neurobiologists and scientists will pool resources to devise accurate ways to detect post-traumatic stress disorder, to reduce the number of war veterans who go undiagnosed, Technology Review reports. By examining civilians and military personnel previously involved in automobile accidents, the scientists will draw from genetic data, brain imaging, and other physiological measurements to identify patterns in PTSD sufferers. Roughly 9 percent of American accident survivors develop PTSD. The goal of the consortium is to develop quantitative biomarkers -- for instance, levels of chemicals in blood or brain scan patterns -- that will help hospitals diagnose the disorder more precisely. Their findings could help the growing number of combat veterans with PTSD who don’t get treatment because they haven’t been appropriately identified with the disorder. Nearly 20 percent of military service members who have returned from Iraq and Afghanistan report symptoms of PTSD or major depression, according to the think tank Rand Corp. The researchers also plan to do experiments with animals to learn about the biochemical changes associated with the condition. Nonprofit research organization Draper Labs will integrate the findings by the different research teams involved, the report said. A Massachusetts General Hospital representative is leading the clinical research efforts.
When researchers in Germany sat down nearly a decade ago to create a brand new parallel file system for HPC clusters, they had three goals: maximum scalability, maximum flexibility, and ease of use. What they came up with was the Fraunhofer Parallel File System (FhGFS), which is now in use on supercomputers.

The initial design considerations and inner workings of FhGFS are described in a ClusterMonkey paper on the file system by Tobias Götz, a researcher at the Fraunhofer Institute for Industrial Mathematics (ITWM). Götz, who now lives and works in Berkeley, California, says ITWM researchers were frustrated with the limitations of existing parallel file systems. "There has to be a better way!" was the rallying cry of a group led by Franz-Josef Pfreundt, head of ITWM's Competence Center High-Performance Computing (CC-HPC).

Pfreundt's team started from scratch to create an ideal file system that used a "scalable, multi-threaded architecture that distributes metadata and doesn't require any kernel patches, supports several network interconnects including native InfiniBand, and is easy to install and manage," Götz writes.

The distributed metadata architecture is a key component of FhGFS, and contributes to the high level of scalability and flexibility that FhGFS was designed to provide HPC applications. "The metadata is distributed over several metadata servers on a directory level, with each server storing a part of the complete file system tree. This approach allows much faster access on the data," he writes. Similarly, the storage system breaks the storage content into "chunks" and distributes them across several storage servers using striping, according to Götz's paper. The size of the chunks can be defined by the file system administrator.

There is no requirement in FhGFS to have dedicated hardware for the file and metadata servers. In fact, they can reside on the same physical server if necessary, Götz writes. This virtual approach also enables users to add as many storage and metadata servers as needed, without requiring any downtime. Administrators can easily create a new FhGFS instance over a set of nodes, which makes it easy to set up a new test environment, either on physical hardware or in the cloud. A Java-based GUI is provided for management and monitoring tasks. The FhGFS file system itself runs on the Linux kernel, and is commercially supported by Fraunhofer.

FhGFS was officially unveiled in November 2007 at the SC07 conference in Reno, Nevada. Since then, it has been put to use on several systems, including a Top500-listed system at Goethe University in Frankfurt, Germany. Benchmark tests for FhGFS show near-linear scalability (94 percent of maximum) on read/write operations on clusters of up to 20 storage servers. Tests of the metadata server demonstrate the capability to generate up to 500,000 files per second; in other words, the creation of 1 billion files would take about half an hour. In head-to-head competition against Lustre and GPFS on 37-mile and 250-mile 100 Gigabit Ethernet test tracks in Dresden, Germany, the group backing FhGFS was one of the few to publish results. In those tests, FhGFS demonstrated throughput of 89.6 percent of theoretical maximum on the 250-mile track in bi-directional mode, and 99.2 percent of maximum in uni-directional mode, according to Götz's paper.
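To picture the striping scheme described above, here is a minimal, hypothetical sketch of how a file might be cut into administrator-sized chunks and spread round-robin across storage servers. FhGFS's real target selection, on-disk layout, and metadata handling are more involved; this only illustrates the basic idea of chunked striping.

```python
CHUNK_SIZE = 512 * 1024  # 512 KiB chunks; chunk size is administrator-tunable

def chunk_layout(file_size, storage_servers):
    """Map each chunk index of a file to the server that stores it (round robin)."""
    num_chunks = (file_size + CHUNK_SIZE - 1) // CHUNK_SIZE
    return {i: storage_servers[i % len(storage_servers)] for i in range(num_chunks)}

layout = chunk_layout(file_size=3 * 1024 * 1024,
                      storage_servers=["stor01", "stor02", "stor03"])
print(layout)  # {0: 'stor01', 1: 'stor02', 2: 'stor03', 3: 'stor01', 4: 'stor02', 5: 'stor03'}
```

Because each chunk lands on a different server, a large read or write fans out across all of them at once, which is where the near-linear scaling in the benchmarks comes from.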
As the HPC community moves towards exascale computing, the folks behind FhGFS think the new file system can provide part of the solution, especially with regard to power consumption, fault tolerance, and software scalability. "Fraunhofer has experience that can be used to attack the exascale problem from several directions, the parallel file system being one of them," Götz writes.
The 100-Million-Mile Network
By David F. Carr | Posted 2004-02-06

Think your network is hard to manage? Try remote diagnosis and repair when you're relying on radio signals from Mars.

Listening anxiously for any sign of life were navigators at the Jet Propulsion Laboratory (JPL) in Pasadena, Calif. They had to fix a broken interplanetary communications link that reached more than 100 million miles (and counting; the distance keeps growing as the orbits of Earth and Mars draw apart). "The most difficult thing is to know how to talk to the spacecraft when you're getting no response from it," says Douglas J. Mudgway, a former National Aeronautics and Space Administration (NASA) engineer who managed communications with the Viking landers in the 1970s and helped save the Galileo mission in the early 1990s.

Spirit was exploring the Gusev crater on Mars on Jan. 21, and was already sending back spectacular photographic images. The wandering robot had rolled out of its landing nets and had approached a rock to take measurements using an appendage called the Rock Abrasion Tool. Diagnosing what was wrong with Spirit depended on interpreting squawks, tones and other sounds traveling along a conduit dubbed the Deep Space Network.

Operators of this interplanetary signaling system send commands to and listen for data from "nodes" such as Spirit and its twin rover, Opportunity, using three facilities spaced roughly one-third of the way around Earth from each other. These communications complexes are in Goldstone, Calif.; near Madrid, Spain; and near Canberra, Australia. This geographic separation means that, as the Earth rotates, at least one of these listening posts will be able to point its antennae toward the spacecraft being tracked at any given moment. Designed much like radio telescopes, the antennae are parabolic dishes as large as 70 meters in diameter (although the trend for the future is to use arrays of smaller antennas).

During normal operations, the rovers communicate directly with Earth when receiving instructions or sending back diagnostic information. They send back the bulk of their scientific data and photographs by using NASA's Mars Odyssey and Mars Global Surveyor probes as relay stations. These unmanned craft orbit the red planet carrying cameras, high-gain and ultra-high-frequency (UHF) antennae along with other scientific instruments. The omnidirectional mast antenna sticking up from each rover's top like a dorsal fin knows when to transmit by listening for a signal that one of the orbiters is passing overhead. The orbiter then uses its more-powerful antenna to send as many as one million bits of data per second back to Earth. While fairly fast for an attenuated radio connection, that's only about a tenth of the speed of a cable-modem connection for the average home-computer user.

The rover-to-orbiter link uses UHF radio (the same basic technology used for broadcasting channels 14 and higher to television sets in the United States), while long-haul communications to Earth use X-band radio, which is a higher frequency (about 8 gigahertz) and easier to focus into a tight beam. For critical commands, the rovers do communicate directly with Earth over X-band. Each rover has directional antennae that provide relatively strong signals that make it easier for the ground stations on Earth to filter out space noise and terrestrial interference. The omnidirectional antenna can also send and receive X-band when the directional one is not aimed properly.
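The distances involved put hard numbers on why these operations are so painstaking. The back-of-envelope calculation below uses the figures already quoted in this article (a roughly 100-million-mile separation and a link of about one million bits per second); the 8 MB image size is a made-up example.

```python
SPEED_OF_LIGHT_MPS = 186_282        # miles per second
distance_miles = 100_000_000        # approximate Earth-Mars separation at the time

one_way_delay_s = distance_miles / SPEED_OF_LIGHT_MPS
print(f"One-way signal delay: ~{one_way_delay_s / 60:.1f} minutes")        # ~8.9 minutes
print(f"Command/ack round trip: ~{2 * one_way_delay_s / 60:.1f} minutes")  # ~17.9 minutes

# Relay throughput: roughly one million bits per second via the orbiters.
image_bits = 8 * 1024 * 1024 * 8    # a hypothetical 8 MB image
print(f"One 8 MB image: ~{image_bits / 1_000_000:.0f} seconds of relay time")  # ~67 seconds
```

A command and its acknowledgement take the better part of twenty minutes to make the round trip, which is why operators cannot simply "log in" and interactively debug a silent rover.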
Despite all this radio power, it's not unusual for a connection to be lost, at least temporarily. When Spirit landed the night of Jan. 3, the cheering in the JPL control room (over a series of simple radio tones indicating the lander had survived its fiery descent and dropped to the surface within a protective cluster of airbags) abruptly ended with the announcement, "We currently do not have signal from the spacecraft."
Though true artificial intelligence remains futuristic in terms of practical applications, use of computers to augment our own perception of the world is pushing more prominently into view, with commercials already suggesting ways to overlay information on what we see. With augmented reality (AR) being developed to take advantage of cloud, mobile, big data and social technology -- Gartner's "nexus of forces" -- is it possible AR could become a fifth component of the nexus?

AR is generally defined as a direct or indirect view of a real-world environment that is augmented in some way by computer-generated input. This means your view of the world around you can be enhanced by external information as desired.

[ IN PICTURES: 10 augmented reality technologies you should know about ]

The concept itself is not really new. In fact, most people are familiar with some common uses. In football on TV, for example, the yellow first-down line you see on the screen is an example of AR that has been in use for several years. However, this is not the kind of AR that promises to change the world as we know it. AR relies on different aspects of developing technologies such as GPS, computer vision and object recognition. As such, as we see advancements in these technologies, AR stands to benefit along with them.

Mobile, cloud, big data and social tech

Intel researchers have been working on new processors for smartphones and tablets partially in anticipation of demand for AR capabilities and the power they will require. As technology makes its push into cloud computing, however, this may not even be necessary. Google recently released the Google Goggles application, which allows users to search the Web based on an image captured using the camera in their smartphone. While this does not exactly constitute the sort of real-time AR that has the tech world talking, it does show AR can make strides toward its true potential through the cloud.

As with many consumer technologies these days, mobility is the key to success. Devices supporting AR will have to be light on hardware to appeal to a mobile market, which means that the heavy lifting and storage will have to be accessed via network. The Google Glass project -- a computer worn as glasses -- may be the general public's introduction to cloud-based AR. Consumer models of the glasses are expected to make their debut sometime in 2014. Users will wear a small headband with a clear display positioned over one eye. It will record things from your environment such as conversations and images and store them in Google's cloud. From this input, Google can provide relevant information from its search engine or Google+.

[Also see: "Google Glass: A lot of hype but little information"]

However, if many people used this, the amount of data generated would be astounding. The development of big data capabilities over the next decade predicted by IT researchers will play an important role in these grand-scale AR projects as providers seek to store increasingly data-rich media from the input. On the back end, the size of the database required to provide relevant information in enough contexts for AR overlays to have mass appeal will not be modest. Image recognition for something as simple as a company logo on the Web requires scanning through petabytes of data. Already requiring several petabytes, AR endeavors like Google Glass could quickly push storage requirements into the next few data measurement units: exabytes, zettabytes or even yottabytes.
Google is not the only contender in AR, though. Other companies are looking at ways to integrate AR by utilizing cloud and social technology. For example, NEC Biglobe and Vuzix teamed up to develop AR glasses focused on recognizing people's faces and pairing the information with their Facebook and Twitter accounts. AR applications in social technology like this will appeal to the masses, but businesses will also likely find interest as they increasingly utilize less public social technologies such as Salesforce's Chatter.

It may be too early to say how large a role AR will play in the next few years, but tools that can boost profits are bound for success. AR developers are certainly keeping big business in mind. As AR develops, the most visible utilization will be in commerce. AR can facilitate a 3D view of a product traditionally advertised in 2D. Lego has already been using AR to allow people to get a preview of what is inside the boxes on shelves. Several other retailers are also looking at ways to integrate AR content into catalogues and magazines. Retailers may also use AR to supplement what customers see in their stores with additional online options. Details and specifications for products can also be made readily available through AR.

In the office, AR could be used to increase the effectiveness of collaborative efforts by allowing teams to meet in person or virtually while viewing and manipulating a single set of data. Companies like Gravity Jack have already developed an indoor AR office. If this could be accessed via the cloud, it could potentially take the bring your own device (BYOD) revolution to an entirely new level. Augmented reality business cards are also becoming more common as people find them an engaging and more useful way to share business information (the amount of information you can make available this way is vastly greater). An AR business card has an image that, when read by a mobile device with a camera, can display everything from a headshot to a resume, LinkedIn account information, personalized video, etc.

[Also see: "Slideshow: Techie business cards"]

AR has yet to prove itself in business software, but with the growing BYOD trend and the natural tendency for businesses to incorporate software that increases efficiency, AR will likely be considered as long as its progression stays on track with its promise.

Nichols is a systems analyst with a passion for writing. His interest in computers began when Deep Blue beat Garry Kasparov in a regulation chess tournament. When Nichols isn't drawing up diagrams and flow charts, he writes for BMC, a leading supplier of cloud software solutions.
Big data is all data

What is "big data?" One convenient litmus test to answer the question is: when the volume, velocity or variety of data becomes too great to handle with conventional data processing tools and techniques, you know you're dealing with big data. Indeed, the technologies most often associated with big data, such as Hadoop and MapReduce, are important because they make handling data characterized by the "three Vs" of volume, velocity and variety more cost efficient and effective. However, this way of looking at the issue limits big data to a technical challenge and misses what has become the real significance of big data: finding new ways to use data to create business value.

So if an organization is setting aside certain types of data (such as sensor data captured from their delivery vehicles, or clickstreams from a heavily accessed website) to the exclusion of other data, they're missing the real benefit of big data. Think of it this way: for every physical device generating machine data or customer sharing information about themselves in social media, there is a network of data that defines the business context and enriches any analytics that we might perform on the subject.

Consider, for example, a delivery vehicle that generates geo-location and temporal data, as well as sensor readings from its mechanical systems such as engine performance, temperature and fuel consumption. A company might use that data to perform analytics to optimize routing, delivery schedules, service agreements, staffing and more. While we can marvel today at the amount of sensor data emanating from a modern delivery vehicle, in fact a lot of useful data about a vehicle (or any other major piece of equipment) is already percolating through many other systems. For example, purchase information about the vehicle, along with technical specifications, might be stored in an ERP system; information about the driver (training curriculum, years of experience, driving record, etc.) could be in an HR system; maintenance records could be in another system. I could go on with examples, but you likely get the picture that most of the things generating new or raw data types are also referenced in many other systems around your enterprise. The same principle applies to clickstreams on websites, machine log files and other things. Being able to connect to these systems and augment analytics with additional business context can add a very powerful element.

And you shouldn't ignore so-called unstructured content when considering data to include in a big data project. While social media analytics is a popular and widely explored use case in the big data world, there is really a whole universe of human-generated content to be mined. Most organizations manage vast amounts of human-generated content ranging from mundane things like operating manuals to more interesting things like message archives, wikis, lab reports, customer interaction summaries, comments in survey responses and strategy documents; the list is endless. This data is usually stored in content management systems and other secured repositories that don't lend themselves to easy access with typical big data analytical tools.

Here's a simple taxonomy, or checklist, you should be considering when deciding what types of data to include in your next big data project (a small sketch of the enrichment idea follows the list).
Not all types of data are available or relevant for every project, but it may be helpful to go through the step of considering these categories:

- Sensor and machine data, which reflects the physical world or the performance of devices across the "Internet of Things"
- Business applications and systems of record, which contain the transactional records of the organization as well as information about business practices
- Human-generated language and content of all kinds, whether in formal documents, message logs, reports or internal discussions
- And finally, the data outside the organization, such as social media content

For some more thoughts on how organizations can expand the sources of data they incorporate into their big data projects, I invite you to listen to my podcast on the topic. Also review the big data exploration use case, as well as other use cases that the big data team at IBM has identified to help organizations like yours.
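To make the delivery-vehicle example above concrete, here is a small, hypothetical sketch of what "adding business context" can look like in code: raw sensor readings joined against records the business already keeps in its ERP and HR systems. The file and column names are invented for illustration only.

```python
import pandas as pd

sensor = pd.read_csv("vehicle_sensor_readings.csv")    # vehicle_id, timestamp, fuel_rate, engine_temp
fleet = pd.read_csv("erp_fleet_master.csv")            # vehicle_id, model, purchase_date, service_plan
drivers = pd.read_csv("hr_driver_assignments.csv")     # vehicle_id, driver_id, years_experience

# Attach the business context: ERP facts about the vehicle, HR facts about the driver.
enriched = (sensor
            .merge(fleet, on="vehicle_id", how="left")
            .merge(drivers, on="vehicle_id", how="left"))

# With context attached, the analysis can ask business questions rather than
# purely mechanical ones, e.g. which drivers' vehicles burn the most fuel.
by_driver = enriched.groupby("driver_id")[["fuel_rate", "engine_temp"]].mean()
print(by_driver.sort_values("fuel_rate", ascending=False).head())
```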
What Is Write Caching?

Most caching solutions only serve recently read data from the cache memory. Read caching is popular because of its safety: all the data in cache is also on disk. Basically, when new data is written or existing data is changed, data is written to the hard disk layer and acknowledged back to the application. It is then later promoted to cache when accessed enough times to make it "cache worthy." This means that in the event of a cache failure, or in virtualized environments when a VM migration occurs, the data is still on a shared disk and the application should continue to work, albeit more slowly.

[ Here's why software-defined storage won't drive established storage companies out of business: Software-Defined Storage Vs. Traditional Storage Systems. ]

There is also a technique called write-through caching, where data is written to cache and to disk at the same time but still has to be acknowledged from the hard disk tier. The value in this technique is that data does not have to be written to disk and then moved to cache memory as a separate step. It works on the very accurate theory that what was recently accessed is also most likely to be accessed next, and it eliminates the extra step of determining cache worthiness.

A true write cache writes data only to the cache memory area, acknowledging to the application at that point that the write has been secured. Because flash memory is much faster than HDDs, even on writes, the application should see a significant performance increase. This is especially true in most database environments and virtual desktop environments. Interestingly, write caching can make HDDs more efficient because writes can now be coalesced and written in bigger blocks, which means less data being written to disk, and written in such a way that it is more read efficient.

The big problem with write caching is that if there is a cache failure, there is no copy on disk. As a result, if there is a cache failure or an improperly handled VM migration, data will be lost. The migration issue has largely been handled by virtualization-aware caching solutions that are signaled of an impending migration and flush the cache before migrating the VM. The big risk is cache failure, and server-side caches are particularly exposed because all the cache data is siloed in the server.

The Write Value Of Read Caching

Read caching and write-through caching already make writes more efficient. First, most environments are more read heavy than they are write heavy; a mix of 70% to 80% reads is not uncommon. Let's assume the caching solution achieves 95% read accuracy (which is actually a very poor result). With server-side caching, that means roughly 65% to 75% of all I/O no longer traverses the storage network or requires the storage system to handle it, freeing most of the storage network to handle write I/O.

Also, the performance delta between hard disk and flash is not as great on write I/O as it is on reads; flash writes data much more slowly than it reads it. The good news is that read caching will improve overall write performance. The bad news is those writes are happening at hard drive speeds. Flash would be faster and further optimize the environment, especially where the write I/O percentage is larger than what I list above. There is a role for write caching, but it has to be done safely.
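To make the distinction concrete, here is a minimal, hypothetical Python sketch of the two write policies. It is not how any particular vendor implements caching, and it ignores eviction, cache sizing and crash protection; it only shows where the acknowledgement happens and why the write-back (true write cache) case carries data-loss risk until it is flushed.

```python
class Disk:
    """Stand-in for the (slow) hard disk tier."""
    def __init__(self):
        self.blocks = {}
    def write(self, key, value):
        self.blocks[key] = value

class WriteThroughCache:
    """Write goes to cache AND disk; the ack waits on the disk write."""
    def __init__(self, disk):
        self.disk, self.cache = disk, {}
    def write(self, key, value):
        self.cache[key] = value
        self.disk.write(key, value)   # slow path; acknowledgement waits on this
        return "ack"

class WriteBackCache:
    """Write is acknowledged from cache; disk is updated later on flush()."""
    def __init__(self, disk):
        self.disk, self.cache, self.dirty = disk, {}, set()
    def write(self, key, value):
        self.cache[key] = value
        self.dirty.add(key)           # fast ack, but the data exists only in cache
        return "ack"
    def flush(self):
        # Coalesce dirty blocks into fewer, larger writes (e.g., before a VM
        # migration); a cache failure before flush() loses the dirty data.
        for key in sorted(self.dirty):
            self.disk.write(key, self.cache[key])
        self.dirty.clear()
```

The flush step is also where the coalescing benefit mentioned above comes from: many small dirty blocks can be written to the HDD tier as fewer, larger, more sequential operations.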
We will discuss some of the techniques that we are seeing vendors implement in our next column.
A ransomware infection is a program that ransoms the data or functionality of your computer until you perform an action. This action is typically to pay a ransom in the form of Bitcoins or another payment method.

When a computer is infected with ransomware, the effects can be either a nuisance or devastating, depending on what the infection does. For example, many ransomware infections simply lock you out of your computer, which can easily be fixed with the right tools. Other ransomware, such as crypto ransomware, is much more devastating, as it will actually encrypt the data on your computer and require you to pay a ransom in order to decrypt your files. In short, the effects of an infection range from a temporary lockout to the permanent loss of your files.

Once you pay the requested ransom, the criminals may send you a code that you can input into the ransomware program that then allows you to use your computer or decrypt your data. In some situations, though, even if you do pay the ransom, the criminals will just take your money and run, leaving your problem unresolved.

Though the loss of your data and computer can be devastating, sending the ransom could be even more so. Depending on how the criminals want you to pay, the ransom payment could put you at risk of identity theft, because the information you send may contain personal details. Therefore, we suggest that you never pay a ransom unless it is absolutely necessary for data recovery. For screen lockers you should never pay a ransom, as there are always ways to remove these infections without paying anything. Last, but not least, it is important to remember that paying the ransom only continues to fuel the release of new variants of these types of programs.
This content is excerpted from the new 3rd Ed. of 'A Practical Guide to Linux: Commands, Editors, and Shell Programming', authored by Mark Sobell, ISBN 013308504X, published by Pearson/Prentice Hall Professional, Sept. 2012, Copyright 2013 Mark G. Sobell. For more info please visit www.sobell.com or the publisher site, www.informit.com

The C Shell history mechanism uses an exclamation point to reference events. This technique, which is available under tcsh, is frequently more cumbersome to use than fc but nevertheless has some useful features. For example, the !! command reexecutes the previous event, and the shell replaces the !$ token with the last word from the previous command line.

You can reference an event by using its absolute event number, its relative event number, or the text it contains. All references to events, called event designators, begin with an exclamation point (!). One or more characters follow the exclamation point to specify an event. You can put history events anywhere on a command line. To escape an exclamation point so the shell interprets it literally instead of as the start of a history event, precede it with a backslash (\) or enclose it within single quotation marks.

An event designator specifies a command in the history list. Table 8-8 lists event designators.

!! (reexecute the previous event)

You can reexecute the previous event by giving a !! command. In the following example, event 45 reexecutes event 44:

44 $ ls -l text
-rw-rw-r--. 1 max pubs 45 04-30 14:53 text
45 $ !!
ls -l text
-rw-rw-r--. 1 max pubs 45 04-30 14:53 text

The !! command works whether or not your prompt displays an event number. As this example shows, when you use the history mechanism to reexecute an event, the shell displays the command it is reexecuting.

!n (event number)

A number following an exclamation point refers to an event. If that event is in the history list, the shell executes it. Otherwise, the shell displays an error message. A negative number following an exclamation point references an event relative to the current event. For example, the command !-3 refers to the third preceding event. After you issue a command, the relative event number of a given event changes (event -3 becomes event -4). Both of the following commands reexecute event 44:

51 $ !44
ls -l text
-rw-rw-r--. 1 max pubs 45 04-30 14:53 text
52 $ !-8
ls -l text
-rw-rw-r--. 1 max pubs 45 04-30 14:53 text

!string (event text)

When a string of text follows an exclamation point, the shell searches for and executes the most recent event that began with that string. If you enclose the string within question marks, the shell executes the most recent event that contained that string. The final question mark is optional if a RETURN would immediately follow it.

68 $ history 10
    59  ls -l text*
    60  tail text5
    61  cat text1 text5 > letter
    62  vim letter
    63  cat letter
    64  cat memo
    65  lpr memo
    66  pine zach
    67  ls -l
    68  history
69 $ !l
ls -l
...
70 $ !lpr
lpr memo
71 $ !?letter?
cat letter
...