Dataset column schema (type and value range):
text: string, length 234 to 589k
id: string, length 47
dump: string, 62 distinct values
url: string, length 16 to 734
date: string, length 20
file_path: string, length 109 to 155
language: string, 1 distinct value
language_score: float64, 0.65 to 1
token_count: int64, 57 to 124k
score: float64, 2.52 to 4.91
int_score: int64, 3 to 5
"In 1983, IBM developed a private packet radio data network called DCS (Data Communication System) for the use of its field engineers. This was used for call dispatch and call reporting and some other data applications. The terminal device consisted of a hand held data terminal about the size and shape of a house brick, by which name it was consequently known. The radio infrastructure was provided by Motorola and operated at 4.8 Kbps on a carrier frequency of 800 MHz. The network covered the major cities of the United States but the FCC licence did not allow the service to be sold to other users. At the same time, Motorola was building a network using the same technology for public access. In 1990, the IBM and Motorola networks were joined to form a public access network known as ARDIS (Advanced Radio Data Information System). The original implementation of ARDIS used a protocol known as MDC4800, but a new protocol called RD-LAP is being introduced alongside with the advantage of operating at 19.2 Kbps." SG24-4465-01 page 14
<urn:uuid:7c04704a-b8ee-4a27-94bf-8a112aeea3ec>
CC-MAIN-2017-04
http://archive.midrange.com/midrange-l/200904/msg00430.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00520-ip-10-171-10-70.ec2.internal.warc.gz
en
0.958779
235
3.140625
3
LED lights, depending on the model, can last for years, while incandescent lights last for months. San Jose already has LED traffic lights, but wants to go a step further, powering each with a solar harvesting device. Overall, the city expects to invest in a diverse combination of emerging renewable technologies to make the 100-percent goal possible. "We'll do some things with fuel cell technology and the electric chemical technology that's coming out," O'Mara said. "There are a lot of opportunities with water throughout the coastal area. We're not saying that 100 percent of it has to be derived onsite in the city, but the power that we're buying will come from renewable sources." San Jose already has one of the highest recycling rates in the country: Sixty-two percent of its garbage is recycled. The city plans to ramp up those efforts with a campaign encouraging residents to purchase easily recyclable products. San Jose plans to convert to energy anything remaining in the city landfills, helping it reach another green goal: converting 100 percent of San Jose landfill waste to energy. "The idea is to make it a continuous string where we're diverting and recycling as much as possible. The little bit that's left on the biosolid side, we're converting to energy. That's what we're talking about when we say 'waste-to-energy.' We're not talking about incinerators. The problem with those is they create power, but they pollute," O'Mara said. Most government data centers consume huge amounts of power. Many local governments pursuing green initiatives include data center overhauls, which consolidate servers and deploy more efficient cooling systems. San Jose was ahead of the game on green IT. In 2005, San Jose built a new city hall building and relocated several departments to it. Before the move, those departments occupied several buildings, each with its own data center. Sharing one data center enabled those agencies to slash power consumption. The facility also uses a cooling system that sucks in the cold air from outdoors at night to naturally cool the equipment. "By mixing [cooler] outside air with the chilled water, we're able to reduce the amount of water we need to chill," said Vijay Sammeta, division manager for IT in San Jose. The city also is working to further reduce data center power consumption with server virtualization technology. This allows the work of up to 10 physical servers to be done on a single machine by running each server as software. The city recently switched to more energy-efficient desktops and laptops. It also mandated that all the city's electronic IT hardware must be approved by the Electronic Product Environmental Assessment Tool (EPEAT). The EPEAT is a set of energy-efficiency criteria created by the nonprofit Zero Waste Alliance through a grant from the U.S. Environmental Protection Agency. Many vendors view it as the strictest standard to meet for green products. "We've seen about 40 [percent] to 50 percent less energy use coming out of those [new computers] than the ones we were buying five to six months ago," Sammeta said. "We are now redoing our desktop contract to incorporate green requirements and calculating energy savings as part of the total cost of ownership." He said he hoped the city would implement a five-year replacement cycle for that equipment as part of its green agenda. That would enable the city's IT to keep up-to-date with energy-efficient hardware, making the city greener.
But Sammeta said persuading city leaders to fund that replacement cycle has been difficult because San Jose is struggling with a tight budget right now. "The opportunity is right because manufacturing from different vendors, especially the big players, has really gotten on board," he said. "It means investing in those technologies, getting on a PC replacement cycle on a four- or five-year cycle, as opposed to 10 years."
<urn:uuid:61e47e53-fcc4-46c0-ada6-9f27731311be>
CC-MAIN-2017-04
http://www.govtech.com/e-government/Green-Technology-and-Renewable-Energy-are.html?page=2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00060-ip-10-171-10-70.ec2.internal.warc.gz
en
0.961906
801
2.96875
3
ZF Linux produces a low-cost microprocessor called the MachZ and said that, by using this chip along with the Z-Port blueprints, ISPs can manufacture a PC for $250 (£170). It said the cost is low enough to re-ignite the "free PC" schemes piloted by US ISPs last year. The value of the PC was meant to be recouped through online charges but the high cost of PCs made the schemes too expensive to run. ZF Linux said it has developed more methods to simplify PC design than other companies and believes it will succeed for three reasons: the Z-Port can be configured with a free version of Linux, rather than Windows; it is licensing the PC design for free; and the MachZ processor is an integrated "system on a chip" that cuts costs by combining input and output controllers on the same piece of silicon as the microprocessor.
<urn:uuid:b3debe14-4bc7-40dc-ae46-f69c7b6f3bec>
CC-MAIN-2017-04
http://www.computerweekly.com/news/2240039759/ZF-Linux-offers-PC-design-licence-free-to-ISPs
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00511-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960304
211
2.625
3
Token Ring is the second most widely used Local Area Network (LAN) protocol after Ethernet. In a Token Ring network, all computers are connected in a ring or star topology, and a token-passing scheme is used to prevent collisions between two computers that want to send messages at the same time. The Token Ring network was originally developed by IBM in the 1970s. It is still IBM’s primary LAN technology, and it was eventually standardized as IEEE 802.5. The IEEE 802.5 specification is almost identical to and completely compatible with IBM’s Token Ring network. In fact, the IEEE 802.5 specification was modeled after IBM Token Ring, and it continues to shadow IBM’s Token Ring development. The term Token Ring is generally used to refer to both IBM’s Token Ring network and IEEE 802.5 networks. Token Ring (IEEE 802.5) is a deterministic, star-wired ring architecture in which the sequence in which users gain access is predetermined. The controlling station, called the active monitor, generates a special signaling sequence called a token that controls the right to transmit. This token is continually passed around the network from one node to the next. When a host has something to send, it captures the token and changes it into a frame, setting its status to busy. It then adds the frame header, information and trailer fields. The header includes the address of the host that will copy the frame. All nodes read the frame as it is passed around the ring to determine whether they are the recipient of a message. If they are, they copy the data and retransmit the frame to the next host on the ring. When the frame returns to the originating station, that station removes the frame and reissues a free token, which can then be used by another host. The token-access control scheme thus allows all hosts to share the network bandwidth in an orderly and efficient manner. The advantages of Token Ring, in addition to being deterministic, are excellent throughput and efficiency under high load. The major drawback is the presence of a centralized monitor function, which is a critical component. Another disadvantage is that the ring is broken whenever one host is down or the cable breaks. Stations on a Token Ring LAN are logically organized in a ring topology, with data transmitted sequentially from one ring station to the next and a control token circulating around the ring to control access. This token-passing mechanism is shared by ARCNET, token bus, 100VG-AnyLAN (802.12) and FDDI, and has theoretical advantages over the stochastic CSMA/CD of Ethernet.
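The token-passing access method described above can be illustrated with a small simulation. The following is a minimal, hypothetical Python sketch rather than real IEEE 802.5 framing: the station names and the simple three-field frame are invented, but the sequence matches the description, with a free token circulating until a station seizes it, the destination copying the data, and the source stripping the frame and reissuing a free token.

```python
# Minimal sketch of token passing on a ring (illustrative only; not IEEE 802.5 framing).
from dataclasses import dataclass, field

@dataclass
class Station:
    name: str
    outbox: list = field(default_factory=list)   # frames waiting to be sent
    inbox: list = field(default_factory=list)    # payloads received

def run_ring(stations, rounds=3):
    """Pass a free token around the ring; a station holding data seizes it."""
    token_free = True
    frame = None  # (source, destination, payload) while the token is "busy"
    for _ in range(rounds):
        for st in stations:
            if token_free and st.outbox:
                frame = st.outbox.pop(0)            # seize the token, convert it into a frame
                token_free = False
            elif not token_free:
                src, dst, payload = frame
                if st.name == dst:
                    st.inbox.append(payload)        # recipient copies the data and passes the frame on
                if st.name == src:
                    frame, token_free = None, True  # source strips the frame, reissues a free token
            # otherwise the free token simply circulates to the next station

stations = [Station("A"), Station("B"), Station("C"), Station("D")]
stations[0].outbox.append(("A", "C", "hello"))
run_ring(stations)
print(stations[2].inbox)   # ['hello']
```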
<urn:uuid:f43f874a-e346-405c-9893-62da258a178f>
CC-MAIN-2017-04
http://www.fs.com/blog/token-ring.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00237-ip-10-171-10-70.ec2.internal.warc.gz
en
0.926988
546
3.890625
4
Understanding Logical Processors Logical processors subdivide a server's processing power to enable parallel processing. Shown here is a server with two physical processors, with a view of how the OS recognizes the resulting logical processors. A physical processor—also referred to as a CPU, a socket, or occasionally as a package—is a chip that is visible on a computer's circuit board. Most modern physical processors have two or more cores, which are independent processing units. Typical servers will have multiple physical processors with at least four and as many as 10 cores in each. A logical processor is perceived by Windows as a processor, and each logical processor is capable of executing its own stream of instructions, so the OS can in turn assign each one an independent unit of work. Windows Server enables each core to appear as a logical processor, so the server shown here, which has two quad-core physical processors, can have eight logical processors. Some processors support a technology called simultaneous multithreading (which Intel calls "Hyper-Threading"), which enables a core to execute two independent instruction streams simultaneously. If the technology were enabled here, the result would be 16 logical processors. While SQL Server 2012 offers licensing that is per-core, that licensing is based on physical cores. The number of logical cores is irrelevant to the per-core licensing costs when licensing physical servers, and instead only plays a role in the number of logical processors that Windows and SQL Server can technically support. Virtual machines (VMs) are licensed based on the concept of a "virtual core," which is a processor as viewed by the VM guest OS. Logical processors have a potential effect on VM licensing, as Microsoft has stated that assigning a virtual core to more than one thread at a time (two or more logical processors) or assigning a logical processor to more than one virtual core at a time may incur additional core license charges.
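To see the distinction on a real machine, here is a minimal Python sketch that reports both counts. It assumes the third-party psutil package is installed for the physical-core count; Python's built-in os.cpu_count() only reports logical processors.

```python
# Sketch: report logical vs. physical processor counts on the local machine.
# Assumes the third-party 'psutil' package is installed; os.cpu_count() alone
# only reports logical processors.
import os
import psutil

logical = os.cpu_count()                    # processors as the OS schedules them (includes SMT threads)
physical = psutil.cpu_count(logical=False)  # physical cores, the basis of per-core licensing

print(f"Physical cores:     {physical}")
print(f"Logical processors: {logical}")
if physical and logical and logical > physical:
    print("Simultaneous multithreading appears to be enabled "
          f"({logical // physical} logical processors per core).")
```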
<urn:uuid:3236b2fb-6966-4875-b17c-2882318feb60>
CC-MAIN-2017-04
http://www.directionsonmicrosoft.com/licensing/30-licensing/3420-sql-server-2012-adopts-per-core-licensing-model.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00475-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951695
383
3.953125
4
In the decade after measles was largely eradicated in the United States in 2000, the number of reported cases of the highly contagious disease hovered around 60 each year. But since 2010, the annual number has shot up to 155, according to the Centers for Disease Control and Prevention. And in just the first three months of 2014, 106 cases have been reported across the country. Health officials are worried. CDC issued a travel warning last month after several unvaccinated children returned to the states with measles infections from the Philippines, where the disease is still relatively common. Among physicians, some pediatric infectious-disease specialists have begun pleading with the American public to vaccinate their children. Measles is an airborne viral infection that affects the skin and respiratory and immune systems, starting with a rash and high fever. It can be prevented with a blanket vaccination that also protects against mumps and rubella. Before widespread vaccination efforts took root, the U.S. saw about 500,000 measles cases each year, which led to 48,000 hospitalizations and 500 deaths. Right now, two outbreaks are slowly spreading on opposite sides of the country. In New York City, the number of reported cases in an outbreak that began in February rose to 26 last week. In California, 49 measles cases have been reported this year, compared with four at this time last year. Most current measles cases have been linked to foreign sources, such as the Philippines. But the rate of U.S. parents choosing not to vaccinate their children has increased in recent years, resulting in a higher incidence of the illness. More than 90 percent of young children are vaccinated against measles in the U.S., but laws requiring immunization for schoolchildren vary by state. California is one of 19 states that allow parents to opt out of immunizations for young schoolchildren on the basis of personal beliefs. In these states, the rate of unvaccinated children is higher. And when unvaccinated children are clustered in one region, the risk of an outbreak from an imported infection is higher. New York does not allow such an exemption. A considerable share of American physicians, especially young ones, have never seen measles because of its virtual elimination 14 years ago. But as the number of reported cases continues to climb, that will likely change. And it's only April. For more information about measles and its prevention, visit cdc.gov/measles.
<urn:uuid:0ad48b7b-b104-4fba-a7e1-fbd1c1b681a1>
CC-MAIN-2017-04
http://www.nextgov.com/health/2014/04/number-measles-cases-year-already-troubling-and-its-only-april/83206/?oref=ng-relatedstories
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00475-ip-10-171-10-70.ec2.internal.warc.gz
en
0.966744
497
3.234375
3
The Internet of Things (IoT) may be introducing many new possibilities into our work and domestic lives, but there are fears it could lead to mass layoffs in the future. A report published last week by consulting firm Zinnov claims that IoT will impact a staggering 120,000 jobs in India by 2021, although up to 94,000 redundancies could be made. Meanwhile, only 25,000 jobs will be created within the next few years. The main cause of this will be increased automation, whereby humans are replaced by technologies capable of handling the same job. IoT causes mass job losses Connected chips, gadgets and machinery will have detrimental effects in areas such as office work, support and maintenance, it’s thought. Only more skilled employees – such as network engineers and robotics coordinators – will keep their jobs. The connected tech market will make up 5 percent of the global technology industry and will make $15,000 within the next few years, according to Nasscom. India currently contributes $1.6 billion to the global IoT industry, says Zinnov, but this will reach $7.3 billion by 2021. The country’s corporate sector is driving the industry, making up 80 percent of the country’s overall contribution. The government only funds 20 percent of the domestic IoT market, perhaps due to a lack of understanding and interest in the area. Firms, on the other hand, are looking for ways to save money and streamline their operations. Automation to affect whole tech sector Recently, US tech analyst HfS Research said the technology sector in India will lose more than 4 million jobs by 2021. This, it claimed, will come down to companies automating low-skilled jobs. Hardik Tiwari, engagement lead at Zinnov, told the Economic Times that thousands of jobs will be affected in India. However, countries like the UK and US will also be impacted in similar ways. “Internet-of-things technology will impact 120,000 jobs in the country by 2021. 94,000 jobs will be eliminated, and 25,000 jobs will be created in the five-year period,” he said. Tiwari explained that service companies are reaching out to IoT companies with unique products, something that’ll help the Indian IoT industry develop into a global leader. “There is a lot of demand from service providers for niche internet-of-things players with intellectual property and platforms. This will help increase the industry’s market share.” Automation needs care Nitin Rakesh, CEO and president of global IT and business solutions provider Syntel, told Internet of Business that companies experimenting with automation need to ensure they have “robust” strategies in place. “A robust and holistic approach to enterprise automation provides a central backbone that empowers companies to modernise so they can survive and thrive in the two-speed world and harness the capabilities of the new IoT paradigm,” he said. “As the IoT tidal wave gathers strength, a gap is emerging between companies reliant on ageing legacy systems and the growing demand for digital connectivity by consumers. This digital disconnect will create unprecedented challenges for companies across many sectors, including banking, insurance, healthcare and manufacturing. “In order for companies to become IoT ready, they must find a way to unlock the data within their legacy systems whilst upgrading to more modern digital platforms that support the constant stream of real-time data that IoT-connected devices generate.”
<urn:uuid:ec4545b3-9c0b-4552-9946-d09a8a6195ca>
CC-MAIN-2017-04
https://internetofbusiness.com/iot-result-94000-job-losses/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00043-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941544
730
2.671875
3
DOE machines top supercomputer list Energy Department labs run seven of the 10 fastest supercomputers on the Top 500 list. - By Joab Jackson - Nov 18, 2008 Energy Department labs run seven of the top 10 supercomputers on the latest installment of the Top500, a biannual ranking of the world's most powerful supercomputers. The latest iteration of the list was posted Nov. 17. Los Alamos National Laboratory's Roadrunner, an IBM machine, topped the list, achieving 1.1 petaflops, followed closely by Oak Ridge National Laboratory's Cray-supplied Jaguar, which clocked in at slightly faster than 1.05 petaflops. Lawrence Livermore National Laboratory, Argonne National Laboratory, Lawrence Berkeley National Laboratory and Sandia National Laboratories also had machines in the top 10 spots. Overall, U.S. government agencies or academic institutions run nine of the top 10 supercomputers. NASA's Ames Research Center debuted its new SGI-supplied Pleiades machine at 487 teraflops. And the University of Texas' Texas Advanced Computing Center rounded out the country's showing in the top 10, with its Sun Microsystems-supplied Ranger achieving 433 teraflops. The federal government's dominance of the Top500 lists represents a concerted effort on the part of DOE to beef up the country's supercomputer power. Earlier in the decade, lawmakers were concerned that the United States was losing its technological edge and feared that the country would fall behind in industrial development and academic research. Much of the computational power at DOE's disposal is used for its Innovative and Novel Computational Impact on Theory and Experiment program, which grants computer time, on a peer-reviewed basis, to other government agencies, universities and industry. Overall, the United States leads all other countries in the number of supercomputers on the list: 291 of the top 500 supercomputers are on U.S. soil. Participation in the Top 500 list is voluntary. Organizations submit their benchmarks for inclusion. Researchers at the University of Mannheim, Germany; Lawrence Berkeley National Laboratory; and the University of Tennessee, Knoxville, compile the list. Joab Jackson is the senior technology editor for Government Computer News.
<urn:uuid:2f828e16-0f64-4dfe-a7ad-9f9e83c9e247>
CC-MAIN-2017-04
https://gcn.com/articles/2008/11/18/doe-machines-top-supercomputer-list.aspx?sc_lang=en
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00163-ip-10-171-10-70.ec2.internal.warc.gz
en
0.876509
470
2.8125
3
Latest Worms Exploiting IM The vulnerability is caused by a flaw in the Windows operating system that allows hackers to exploit the "plug and play" capability of the Windows system. The vulnerability can be exploited by an infected machine to launch a denial-of-service (DoS) attack on other vulnerable machines. By leveraging a chat channel, the initiating hacker gains access to a host machine and then uses it to attack other networked machines. Once successfully executed, the exploit allows a hacker to impact a number of systems, from stealing system information to, most damaging, forcing an infected computer into a continual reboot. To learn more about the Zotob and IRCbot worms visit the IMlogic IM and P2P Threat Center. Initially rated a low risk by most security industry threat centers, the rapid propagation of the Zotob and IRCbot worms motivated most providers to increase the risk level. The worm appears to lie quiet on an infected machine until prompted into action by the hacker. The messaging channel opened by the worm appears to await direction before disrupting system activity or propagating itself on the network.
<urn:uuid:943fc54b-3b84-4b00-92f0-2acd41b100fd>
CC-MAIN-2017-04
http://www.cioupdate.com/print/news/article.php/3528106/Latest-Worms-Exploiting-IM.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00557-ip-10-171-10-70.ec2.internal.warc.gz
en
0.895144
224
2.75
3
I would rather understand subnetting than memorize charts. References that show you every combination are useful, and I’ll include one below, but the focus of this resource is to help you understand subnetting so that such charts will no longer be needed. I’ll give a hint: It’s all about the binary. Full image available here. Most subnetting charts, and even tutorials, just give you the answer. The concept for the chart above is to reinforce understanding of how we arrive at the answers. And the approach is remarkably simple: you just use the binary steps at the top as your guide for each lateral movement. For example: For a /25 you start in the 128 place (one extra bit to the right), and that’s your subnet. You have 2 networks, each with 126 hosts in them. Fair enough. If you want to go another bit to the right (/26), you’re going to add the binary number for that column to 128, which is 64. So, 128 + 64 = 192. That’s your subnet. And since you’ve gone to the right one—and you were at 2 networks—you now have double that number of networks (remember, each slide is a power of two, i.e., a doubling or halving). So you now have 4 networks, each with half the number of hosts in them (64 – 2 = 62). So that’s the secret. You slide back and forth on the binary scale; as you go to the right you go up one power of 2 in networks, while simultaneously going down one power of 2 in hosts. Another way to say that is that for each bit slide to one side you double either the number of networks or hosts while halving the other. So at one position your networks are 2 and your hosts are 128, and when you go to the right your networks are 4 and your hosts are 64 (-2). If you went to the left instead you’d be at the /24 mark, and guess what? Hosts go UP from 128 to 256 (doubled), while the networks go DOWN from 2 to 1 (halved). One network of 256 (-2) hosts, just like we would expect. So the next time you’re having trouble with doing subnets in your head or on paper, try drawing my chart first (binary, bits, subnets, networks, hosts) and remember the binary sliding concept before looking at the full reference chart shown above. I think you’ll find it enjoyable to grok it rather than look it up. And as always, if you have any questions or comments you can contact me here.
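For readers who prefer to see the slide as arithmetic, here is a minimal Python sketch of the same idea (the function name and the /25 to /30 range are chosen purely for illustration): each extra prefix bit doubles the number of subnets carved out of a /24 and halves the usable hosts, and the standard library's ipaddress module is used to cross-check one row.

```python
# Sketch of the "binary slide": each extra prefix bit doubles the number of
# subnets and halves the hosts. Shown here for subnets carved out of a /24.
import ipaddress

def slide(prefix: int):
    """Return (mask_last_octet, subnets_per_24, usable_hosts) for /25../30."""
    host_bits = 32 - prefix
    block = 2 ** host_bits                 # size of each subnet in addresses
    return 256 - block, 2 ** (prefix - 24), block - 2

for prefix in range(25, 31):
    mask, nets, hosts = slide(prefix)
    print(f"/{prefix}: last mask octet {mask:>3}, {nets:>2} subnets, {hosts:>3} usable hosts")

# Cross-check one row with the standard library:
net = ipaddress.ip_network("192.168.1.0/26")
print(net.netmask, len(list(net.hosts())))   # 255.255.255.192 and 62
```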
<urn:uuid:f3d1e077-1026-4f21-aa17-452bea700258>
CC-MAIN-2017-04
https://danielmiessler.com/study/subnetting/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00191-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945896
564
3.34375
3
There is a certain level of skill to creating an IPv6-capable network. There is even more skill to creating an IPv6-capable network correctly. To help confirm an IPv6-capable network has been configured correctly and that “upstream” IPv6 connectivity is correct, there are several Web sites which offer basic insights into the quality of IPv6 connectivity. Such sites have been around in one form or another since at least 2000. The most famous early “test” Web site was perhaps “www.kame.net” – if the turtle (“kame” in Japanese) moved, the site was being reached via IPv6. The openly available “ipv6calc” software included a CGI that allowed one to confirm not only what IP version one was reaching the Web site with, but also information about the address. To verify basic IPv6 functionality, a good starting point is the Web site “http://test-ipv6.com”. This Web site provides an IPv6 readiness score from 0 to 10 and measures both client and network IPv6 readiness. In addition to testing for basic IPv6 capabilities, it reports on a sampling of IPv6-enabled destinations that can be reached, as well as IPv6 DNS and large packet support. The site is based on open-source software and source code can be found at https://github.com/falling-sky/source/wiki. There are also numerous mirrors around the world. Another IPv6 test Web site is http://ipv6-test.com, not to be confused with http://test-ipv6.com mentioned above. This site offers Path MTU tests. http://ipv6-test.com is a bit less specific than http://test-ipv6.com/ about exactly which tests it performs, and http://ipv6-test.com does not appear to provide source code or mirrors, as far as I can tell. A site which offers more comprehensive testing of both IPv4 and IPv6 connectivity is the ICSI “Netalyzr.” This site was funded via a grant from the National Science Foundation and operated by the International Computer Science Institute at the University of California, Berkeley. It performs a variety of IPv4 and IPv6 tests, including checks for open network ports, fragmentation functionality, path MTU discovery, general DNS functionality and DNSSEC. Netalyzr information is accessible via a Web browser or by running a Java application on the command line. On initial inspection and after an unsuccessful attempt to reach the creators of the site, it appears source code is not available. Certainly, these Web sites only provide a small amount of data in the quest to fully understand the network that one is connected to, and there are other diagnostic tools one should seek to obtain a more comprehensive understanding of the characteristics, connectivity and performance of a network. However, these tools provide an excellent overview of basic IPv6 connectivity and network capabilities. According to Jason Fesler, developer and maintainer of the test-ipv6.com Web site and the source code, the goal of test-ipv6 was not only to provide basic IP address information, but to help visitors identify certain failure conditions. Said Fesler, “Having no IPv6 is one thing; but having misconfigured IPv6 is a very different problem with very negative user experience issues. Today, the browsers mostly work around those user experience issues; which create a different headache for system administrators — hiding the problems.”
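As a rough complement to those browser-based tests, a host's basic IPv6 reachability can also be checked from a script. The following Python sketch is a minimal illustration only (the target hostname and port are just examples, and a successful TCP connect says nothing about path MTU, DNS behavior or large-packet support): it asks the resolver for IPv6 addresses and then tries to open an IPv6 connection.

```python
# Minimal sketch: check whether this host can resolve and reach a site over IPv6.
# A rough local check, not a substitute for the browser-based test sites above;
# the target hostname is only an example.
import socket

def has_ipv6_path(host: str = "test-ipv6.com", port: int = 443, timeout: float = 5.0) -> bool:
    try:
        infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
    except socket.gaierror:
        return False                      # no AAAA record or no IPv6 resolver support
    for family, socktype, proto, _, addr in infos:
        try:
            with socket.socket(family, socktype, proto) as s:
                s.settimeout(timeout)
                s.connect(addr)           # succeeds only if an IPv6 route exists
                return True
        except OSError:
            continue
    return False

print("IPv6 connectivity:", "yes" if has_ipv6_path() else "no / broken")
```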
<urn:uuid:31c25f29-9988-418a-9227-f99ea753d994>
CC-MAIN-2017-04
https://resources.arbornetworks.com/h/i/19165913-state-of-ipv6-web-sites-now-offer-easy-ipv6-connectivity-tests
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00007-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933294
735
2.875
3
“Big data” isn’t anything new. You’d be excused for thinking otherwise, given the enormous resurgent interest in the mining of staggering amounts of data—which is how we could define big data at its most basic—catalyzed by an incoming tsunami of “big data” tools and technologies, which are not particularly novel either. The fact is, governments and large international organizations have been dealing with big data for decades: think census surveys and the research behind UNESCO and UNICEF programs to direct aid and alleviate poverty, for example. Also consider the ongoing revelations about the National Security Agency’s surveillance programs (more about that in a moment). Until recently, those kinds of projects occurred in relative obscurity. Now there is no question that we are seeing tremendous momentum in reporting and understanding of big data use cases at the large-organization level, driven partly by our ability to process big data (bigger and faster servers and storage, the ubiquity of technologies like Hadoop and, of course, the sheer availability of big data) and partly by a rapidly growing awareness of the need to gain insight from this data. This in turn is throwing light on the opportunities leveraged, misused and just plain missed. It is clear that these use cases have both lessons and ramifications for all of us as business professionals and private citizens. A very interesting case study was recently reported by The New York Times regarding the recently completed German census. According to this announcement by the German Statistisches Bundesamt, the federal Bureau of Statistics, the country had 1.5 million fewer people than expected. The census findings were a “double whammy”: a lower (and aging) population means fewer able-bodied workers and less tax collection; plus, who would pay off future debts? Now, one simple reason for the discrepancy was that Germans really weren’t sure what to expect: their last census was, astonishingly, a quarter of a century ago, before East and West Germany were reunified. Unlike, say, the United States, where the populace—at least, those of us fortunate enough to be here legally—happily submits to being counted every 10 years like clockwork, the Germans, it seems, view a census as an invasion of privacy, and are hence reluctant to submit to being counted. And to exacerbate things, German history does not provide a strong supporting case for state monitoring—during the 1930s and 1940s, the Nazis were reputedly using the census as a tool to identify Jews. Subsequent investigations revealed an intriguing fact: most of the missing population consisted of migrants, and unusually healthy foreigners at that. “Demographers were trying to explain the healthy-migrant effect, why they were living to be 110 years old,” the Times reported. A Classic Data Management Issue Turns out, it was a classic process data quality problem: foreigners were required to register on arrival in the country, but, of course, when they move out, they seldom bother to “deregister.” As a result, migrants continued to be counted long after they had left the country. The “big data” lessons here are undeniable: overcoming the logistical challenges of gathering voluminous data across a large “catchment” area, scrubbing it, integrating it, and making sense of it so that the resulting analytics do not mislead, with some master data management (MDM) aspects thrown in the mix.
For example, I am curious about the use, if any, of MDM-like de-duping techniques used to identify migrants (or, for that matter, even citizens) who resided at multiple locations over their lifetimes—by no means a rare pattern. This project also underscores the importance of data profiling, the process of previewing the data collected in order to assess its quality and correctness, which would have led to early detection of the data outliers, such as the unusual aging profile of migrants. Back home in the meanwhile, the recent furor over the National Security Agency (NSA) collecting phone records from Verizon and others on a mass scale spotlights the age-old conflict between the need of governments to gather enough information in order to govern effectively—a big part of which is maintaining national security—and avoid trampling the rights of citizens. And, unfortunately for us hapless citizens, it does not appear that the two are mutually exclusive. This has a correspondence in the world of business, too. Replace “government” and “citizens” with “companies” and “customers”, and we have the makings of a growing corporate dilemma: in our quest to know as much as possible about our customers and their lifestyles and preferences, at what point does the information we collect stop being benign and begin to get intrusive? Take the example of a grocery store or a Web store that derives increasingly sophisticated insight into consumer preferences—past as well as future—in order to serve the customer better and improve on both sales and margin. The line between sophisticated analytics and intrusion of privacy is mighty fine indeed, as demonstrated by a report that appeared in The Times early last year. One startling and much-publicized case mentioned was how predictive analytics led to identifying a teenage girl who was pregnant before her family was aware of it. The original aim of the software was to identify women pregnant in their second trimester in order to influence their purchases and lock them in for years with targeted marketing. This was accomplished by means of a “pregnancy score” computed from the purchasing records for a basket of about 25 products. When the girl began to receive coupons appropriate to her condition, the secret was out. This sort of predictive analytics (in this case in order to cultivate closer customer connections) is one of many emerging cases in which big data plays a role that we were not aware of. It is worth considering the implications of such use cases as we continue to exploit data. Even as many of us bask in the sunlight of what we like to call “free society”, there are, of course, countries where the very concepts of privacy and individual freedom are viewed as subversive by the state and where citizens have come to accept (albeit with reluctance and resignation, perhaps) that it is indeed the right of the state to delve deep into, and wield control over, their lives. There is an old adage, “knowledge is power when put into action.” As big data continues to propel increasingly deep forays into population analytics by countries and corporations alike, the question of “actionable insight versus actionable intrusion” (pun on “actionable” intended) will only grow in magnitude and complexity. Rajan Chandras, a senior-level practitioner in enterprise data management and a freelance technology columnist, is employed at a major healthcare insurance firm in the New York region. You can reach him at rchandras at gmail dot com.
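To make the de-duplication and profiling ideas concrete, here is a small, purely illustrative Python sketch using pandas; the registry records and column names are invented and are not drawn from the German census.

```python
# Illustrative sketch (invented data, pandas assumed): simple profiling and
# de-duplication checks of the kind discussed above, e.g. spotting implausible
# ages and people registered at more than one address.
import pandas as pd

registry = pd.DataFrame({
    "person_id": [1, 1, 2, 3, 4],
    "name":      ["A. Khan", "A. Khan", "B. Meier", "C. Rossi", "D. Novak"],
    "address":   ["Berlin", "Hamburg", "Munich", "Cologne", "Bremen"],
    "age":       [34, 34, 29, 112, 41],
})

# Profiling: basic distribution stats flag outliers such as a 112-year-old entry.
print(registry["age"].describe())
print("Suspicious ages:\n", registry[registry["age"] > 105])

# MDM-style de-duplication: the same person registered at more than one address.
dupes = registry.groupby("person_id").filter(lambda g: g["address"].nunique() > 1)
print("Registered at more than one address:\n", dupes)
```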
<urn:uuid:bb791c2a-fce3-4e69-bd99-dcacce883647>
CC-MAIN-2017-04
http://data-informed.com/big-data-big-governments-big-business-and-us/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00217-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964997
1,445
2.546875
3
Designing a Secret Weapon This is where the 5,000-ride database of the Trek Bicycle Corp. comes into play. Using a combination of three-dimensional modeling software from Alias, once a unit of hardware maker Silicon Graphics; mechanical design software from SolidWorks; and low-cost, high-performance personal computers running Opteron processors from Advanced Micro Devices, senior industrial designer Michael Sagan and a project team of 12 worked from December 2002 to April 2003 to simultaneously design bikes that would give Armstrong an edge in two consecutive Tours. The first, which became the basis of Trek's Madone line of bikes, was Armstrong's "daily drive" in the race pack, a.k.a. peloton. A second is to be his secret weapon in this year's Tour, a version of the Madone called the SSL that is specially designed to race uphill. The ride database helps Sagan and the Project Orion team compute "fluid dynamics," to understand what happens to the "dirty air" that flows past and through Armstrong's always chopping legs as well as surrounding tubes, cranks and pedals. The data also allows the team to perform "finite element analysis," showing them the exact locations of stress on the carbon fibers that make up the frame, and where layers of carbon fiber can be reduced. To this end, the Orion team, which included composite engineers Scott Nielson and Brian Schumann and carbon fiber frame pioneer Jim Colgrove, produced a breakthrough in the company's drive to develop ever-thinner sheets of carbon fibers. This year, some layers of carbon will weigh just 55 grams, or a little less than 2 ounces, per square meter. That's only slightly more than three times the weight of the plastic that wraps a deck of playing cards (15 grams per square meter). It's also about a third of the weight of the production model Trek bike that Armstrong used in his initial Tour victory in 1999. In that bike, the carbon weighed 150 grams per square meter. Even this year, the bulk of Armstrong's latest specially made bike will use sheets of carbon fibers that weigh 110 grams per square meter. But here's where the sensors and strain gauging pay off. Nielson and Schumann looked carefully at the results of stresses placed on every finite element of the frame and were able to replace 110-gram sheets with 55-gram sheets in locations such as the socket that joins tubes together near the handlebars, the rear fork and the seat post. Once any identifiable weight is shaved off, the design of any Armstrong bike then must factor in Armstrong's own preferences. After all, this is a fellow who can instantly tell if the wheelbase has been altered by 3 millimeters. To provide the desired stiffness, the team will rely on benchmarks from tests performed on the flexibility of the rear load-bearing arms of the bike known as the chain stay. To achieve comfort, the team relies on measures of the stiffness of the frame itself. And to predict jitter, the uncomfortable feeling that the bike is out of control on a serious descent, the Trek team relies on results of frontal impact deliberately entered into the database from crash tests. The reason: The key stage in the 2004 Tour de France is likely to occur on July 21. That's when each racer will face cycling's "race of truth," an individual time trial. Each rider departs two minutes apart and races against the clock, without the protection or aid of any teammates. And this year, the key time trial is not on flat ground or mild inclines.
It will be a 15-kilometer race up the legendary 1,780 meters (5,840 feet) of L'Alpe d'Huez, described by cyclists as "21 hairpins of pain."
<urn:uuid:ce0b71a1-6db1-432a-8d31-b4d915544796>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Database/Trek-Bicycle-Corp-Tour-de-Force/1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00153-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943763
764
2.828125
3
Tech Glossary – U to W UPnP (Universal Plug and Play) Plug and Play describes devices that work with a computer system as soon as they are connected. UPnP is an extension of this idea that expands the range of Plug and Play devices to networking equipment. Universal Plug and Play uses network protocols to allow a wide range of devices to be interconnected and work seamlessly with each other. USB (Universal Serial Bus) USB is the most common type of computer port used in today’s computers. It can be used to connect keyboards, mice, game controllers, printers, scanners, digital cameras, and removable media drives, just to name a few. VGA (Video Graphics Array) VGA is the standard monitor or display interface used in most PCs. Therefore, if a monitor is VGA-compatible, it should work with most new computers. Virtualization can refer to a variety of computing concepts, but it usually refers to running multiple operating systems on a single machine. While most computers only have one operating system installed, virtualization software allows a computer to run several operating systems at the same time. Wi-Fi refers to wireless networking technology that allows computers and other devices to communicate over a wireless signal. It describes all network components that are based on one of the 802.11 standards, including 802.11a, 802.11b, 802.11g, and 802.11n. These standards were developed by the IEEE and adopted by the Wi-Fi Alliance, which trademarked the name “Wi-Fi”.
<urn:uuid:43acfd05-1738-43d6-8e0b-b3578c9f3998>
CC-MAIN-2017-04
http://icomputerdenver.com/tech-glossary/tech-glossary-u-w/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00061-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933287
314
3.5
4
If you follow Google relatively closely, you probably know about Project Loon, an effort to provide Internet access to those in rural areas or affected by natural disasters using LTE-equipped balloons. According to a new report from The Guardian, Project Loon has some company. As part of another project, called Project Skybender, Google is experimenting with a fleet of solar-powered drones equipped with a new, super-high-speed wireless broadband technology that’s up to 40 times faster than LTE. Project Skybender has been shrouded in secrecy until now, with tests being conducted high above the New Mexico desert. As part of the tests, Google is using an experimental new technology known as “millimeter-wave radio,” which, according to The Guardian, “could underpin next generation 5G wireless internet access.” The story behind the story: Although Project Skybender is new to us, Google’s interest in millimeter-wave radio technology is not. In 2014, the company filed paperwork with the FCC in which Google laid out plans to test the up-and-coming wireless technology in San Mateo, California. Google had apparently been looking into millimeter-wave radio tech for at least two years prior to that, our Mark Hachman reported at the time. Millimeter-wave technology has a long way to go until it’s ready for prime time, though—its range is limited compared to LTE, as The Guardian notes in its report. Fruit of the (Project) Loon? The Guardian notes that Project Loon and Project Skybender both fall under the purview of Google Access, the same group responsible for Google Fiber, which ReCode profiled back in November. Beyond that, the relationship between the two projects is unclear: Is Skybender the eventual successor to Project Loon? Will the two projects eventually become one? For most of us, the answers to those questions don’t really matter. Google’s work toward expanding broadband’s reach does matter, however, and it’ll be interesting to see how it develops from here. This story, "Google's Project Skybender uses drones to bring super fast Internet to the skies" was originally published by PCWorld.
<urn:uuid:a0f83c09-5036-45d6-99a9-8741b8795d0b>
CC-MAIN-2017-04
http://www.itnews.com/article/3028023/tech-events-dupe/googles-project-skybender-uses-drones-to-bring-super-fast-internet-to-the-skies.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00182-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947183
463
2.625
3
October 9th, 2015 - by Walker Rowe Virtualization killed the hardware business for companies like Sun (now Oracle). What virtualization did was make popular the idea of replacing physical servers with virtual ones. That drives down the price and increases standardization, which drives down operating costs. You can even see this in the name, as people call virtual servers “commodity hardware.” That means all of those expensive Solaris servers have been replaced with virtual servers running on high-end PCs. It also means that RISC and other processors have been replaced with 8088-based CPUs, which are plentiful and inexpensive. Therefore, management, who pays the bills, asks, “Why not do the same thing with networking?” They question, “Why does every network function need to have its own proprietary device? Why do we need one person to configure routers and another person to configure the firewall? Why can’t there be some open standards to make it easier to manage all this?” That is what SDN does, or attempts to do. The idea is to use Linux servers and push networking functions onto those, for the control plane or even the data plane. The data plane, which is where the switches live, can be virtual or physical devices in the SDN model. By moving to virtual switches, a company can replace some of its Cisco, Palo Alto, F5, and Alcatel-Lucent equipment with something open and less expensive. What’s needed to make all of this work are standards, and products built using those standards. Here we present some of those. First we’ll look at some open source products. Then we’ll look at some products that, while they are not open source, support open-sourced interfaces. Open vSwitch OpenFlow is a standard for the interface between the control and forwarding layers in the SDN architecture. The forwarding layer includes switches and routers, both physical and virtual; that could be the same as the data plane, depending on what definition you want to use. (To review: a router forwards traffic between different networks, while a switch connects segments within a LAN. Each looks in its forwarding table to send network traffic toward its destination.) Here is a graphic from the Open Networking Foundation to illustrate the SDN abstraction: As you can see, OpenFlow sits between the control layer and the forwarding layer, which they call the Infrastructure Layer in this graphic. At the control layer, the devices are abstracted, meaning the specific implementation details are coded elsewhere. In this model, if the device is physical and if the device vendor supports the OpenFlow standard, their physical device can be controlled from the control layer. The control plane is run on virtual machines. The switches can be physical devices or virtual ones. A physical switch might be needed when a Linux server cannot handle a large enough volume of traffic, because the physical device is built for that specific routing task. Yet some vendors are focusing on the problem of how to boost throughput from Linux servers to meet such switching demands. xDPd (eXtensible OpenFlow Data Path daemon) xDPd is an open source switch based on the OpenFlow standard. It has a control module, hardware abstraction layer, and platform driver. As you can see from the graphic below, the platform driver is a set of APIs that the hardware vendor would have to implement to make it work with this virtual switch. For example, to plug a Cisco switch into this model, Cisco would have to program to this API.
Open vSwitch says their solution solves the problems with the L2 bridge built into Linux that is designed to connect virtual machines to other networks, which they say does not scale. They say their open source product “is targeted at multi-server virtualization deployments, a landscape for which the previous stack is not well suited.” The Open Networking Foundation lists some SDN products here. Below we take a look at a few. The products are grouped into these categories: optical networking, APIs, application acceleration, switches, routers, network virtualization, WAN optimization controllers, network operating systems, security products, and firewalls. All the products listed support OpenFlow. Some are SDN products, and the rest are physical hardware. As you will see, most of what the traditional network vendors are calling switches are not software-based switches running on Linux, but hypervisor-based control plane software. So there is some confusion about where to draw the border with the definition of SDN. 6Wind This one is a genuine software-based virtual switch running on Linux. 6Wind says their packet processing software provides a 10x boost to network performance over Linux-based VMs. This includes routing, IPsec VPN, NAT, firewall, TCP, and UDP. This is done by using Fast vNIC drivers. Cisco Application Virtual Switch The Cisco Application Virtual Switch is a hypervisor-based virtual switch. Cisco does not want to get left out of the SDN market, but you could say that the very goal of SDN is to do just that. For many years Cisco has had almost a monopoly in the switch market, but not anymore. So SDN could be a threat to them if it gains traction among very large network operators. Cisco Nexus 1000V Switch for Microsoft Hyper-V The Microsoft virtual machine operating system is called Hyper-V. It's not clear how Cisco plans to make money from this, since the product is free. Except that it is designed to work with Cisco switching hardware, too. Juniper Networks OCX1100 This is an open source hardware switch that uses the Open Compute Project (OCP) hardware design. Juniper says their product is optimized for the cloud. It is designed for large cloud service providers who are building virtual machines and storage per the OCP specifications. The switch uses the Junos operating system. To say that it is open yet uses the proprietary Juniper OS means that it can be driven from OpenFlow control plane software. NEC ProgrammableFlow PF1000 Virtual Switch NEC is another one of the big networking companies. This product provides a network control layer on Microsoft 2012 Hyper-V virtual machines. The PF1000 Virtual Switch sits between the hypervisors and the switch hardware. The focus here is on centralizing the provisioning of networks and services, or what they call "automation and control." It also provides network monitoring. What appears to be missing from the effort to move networking to standards is standardization at the API level. In other words, OpenFlow includes specifications for packet forwarding. But there is no single set of commands for operations like add route, delete route, add firewall rule, set gateway, and so forth. Those details are implemented by each vendor. But what you can definitely say is that SDN lets organizations run the control layer on virtualized servers. So companies can use those servers for orchestration, provisioning, monitoring, and control instead of buying more costly, proprietary equipment from Cisco and others.
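To illustrate the missing piece the article points to, here is a hypothetical Python sketch of a vendor-neutral control-layer interface; every class and method name is invented for illustration and does not correspond to any real controller or vendor API. The point is only the shape of the abstraction: the control layer calls one set of operations, and per-vendor drivers translate them for specific devices.

```python
# Hypothetical sketch of a vendor-neutral control-layer abstraction of the kind
# the article says is missing: one interface for "add route" style operations,
# with per-vendor drivers filling in the details. All names here are invented.
from abc import ABC, abstractmethod

class SwitchDriver(ABC):
    """What each vendor (or a virtual switch such as Open vSwitch) would implement."""
    @abstractmethod
    def add_flow(self, match: dict, action: str) -> None: ...
    @abstractmethod
    def delete_flow(self, match: dict) -> None: ...

class LoggingDriver(SwitchDriver):
    """Stand-in driver that simply records what a real device would be told to do."""
    def __init__(self):
        self.flows = []
    def add_flow(self, match, action):
        self.flows.append((match, action))
    def delete_flow(self, match):
        self.flows = [f for f in self.flows if f[0] != match]

class Controller:
    """Control layer: talks to abstract drivers, not to specific boxes."""
    def __init__(self, drivers):
        self.drivers = drivers
    def add_route(self, dst_cidr, out_port):
        for d in self.drivers:
            d.add_flow({"ipv4_dst": dst_cidr}, f"output:{out_port}")

ctl = Controller([LoggingDriver(), LoggingDriver()])
ctl.add_route("10.0.0.0/24", 2)
print(ctl.drivers[0].flows)   # [({'ipv4_dst': '10.0.0.0/24'}, 'output:2')]
```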
<urn:uuid:c3b15fb8-a16f-4c59-b0ab-41a3f8e246a1>
CC-MAIN-2017-04
https://anturis.com/blog/software-defined-networking-standards-and-products/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00576-ip-10-171-10-70.ec2.internal.warc.gz
en
0.938735
1,462
2.859375
3
With the UK Carbon Reduction Commitment (CRC) currently in a state of flux, and the US Congress fighting over whether or not to block the US Environmental Protection Agency from regulating carbon emissions, what's happening at a more local level? Is the real answer to tackle our emissions at the state level or do we have to wait for government to lead us by the hand? Ten north-eastern and mid-Atlantic states capped and began reducing carbon dioxide emissions in January 2009. That Regional Greenhouse Gas Initiative is working to reduce CO2 emissions from the power sector 10 per cent by 2018. In California they claim to have one of the most comprehensive schemes, which kicks off in 2012. Another group of six states and one Canadian province have been working since 2007 in the Midwestern Greenhouse Gas Reduction Accord to form their own system. And then there is the Western Climate Initiative of seven states, including California, and four Canadian provinces. It's the Californian scheme that seems to be attracting attention at the moment, as it's one of the most aggressive. At the moment it seems to be running into trouble. Just today a San Francisco Superior Court judge ruled that the California Air Resources Board violated state environmental law in 2008 when it adopted a comprehensive plan to reduce greenhouse gases and again last year when it passed cap-and-trade regulations. Although only a tentative decision, if it is made final, California would be barred from implementing its ambitious plan to combat global warming. So as you can see there is much resistance to these types of schemes even at the local level. In fact the states that are not yet moving to cap-and-trade are those most dependent upon fossil fuels and major carbon emitters – the southeast and the lower Midwest. That includes states like Texas, which not only are not moving on cap-and-trade but are among those actively fighting to prevent the EPA from regulating greenhouse gas emissions within their borders. As governments across the world prevaricate over which scheme to implement (if at all), and the economic downturn provides fuel to those who argue that carbon reduction legislation may damage fragile recovery, we seem to be at a point where nothing much is happening. Hopefully though we will see more action at the state level and in some cases even at the city or town level. Just because the world is waiting doesn't mean that you have to. We really don't have too much time to waste.
<urn:uuid:88b88817-2823-4084-b2c3-45e6b09acb2c>
CC-MAIN-2017-04
https://www.1e.com/blogs/2011/02/04/some-us-states-act-locally-on-carbon-emissions/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00118-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954137
493
2.609375
3
Belhabib D., University of British Columbia | Mendy A., Sub Regional Fisheries Commission | Subah Y., Bureau of National Fisheries | Broh N.T., Bureau of National Fisheries | and 5 more authors. Environmental Development | Year: 2015 Fisheries catches from the Exclusive Economic Zones of three West African countries, each representing a Large Marine Ecosystem (LME), were reconstructed to cover all fishing sectors over a 6-decade period. While there are strong differences in terms of target species and the ecosystems within which they are embedded, there are similarities in the manner that domestic catches in The Gambia, Liberia and Namibia are under-reported. For The Gambia, catches by all domestic sectors were assessed to be twice the data supplied by The Gambia to the Food and Agriculture Organization of the United Nations (FAO). This is similar to the trend observed for Liberia, representing the Guinea Current LME, and where there is a larger margin for improvement, as the Liberian small-scale sector on its own is twice the entirety of the catches reported to the FAO. In Namibia, representing the Benguela Current LME, reconstructed catches were only 9% higher than the data reported by the FAO on behalf of Namibia, implying that the drastic measures implemented by Namibia to prevent illegal fishing are bearing fruit. Important lessons can be drawn from the present study for other countries in West Africa, notably for combating illegal foreign fishing, still rampant in the CCLME and the GCLME, and in managing small-scale fisheries, which require, as a starting point, a detailed knowledge of their catch. These latter points are illustrated, finally, by catch time series for these three LMEs, reconstructed by taking account of the other 20 West African countries. © 2015.
<urn:uuid:d67435eb-11bb-441b-ad1d-2bbc4f8e02f4>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/bureau-of-national-fisheries-1797919/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00420-ip-10-171-10-70.ec2.internal.warc.gz
en
0.937829
374
2.75
3
The Open System Interconnection (OSI) protocol suite is comprised of numerous standard protocols that are based on the OSI reference model. These protocols are part of an international program to develop data-networking protocols and other standards that facilitate multivendor equipment interoperability. The OSI program grew out of a need for international networking standards and is designed to facilitate communication between hardware and software systems despite differences in underlying architectures. The OSI specifications were conceived and implemented by two international standards organizations: the International Organization for Standardization (ISO) and the International Telecommunication Union-Telecommunication Standardization Sector (ITU-T). This chapter provides a summary of the OSI protocol suite and illustrates its mapping to the general OSI reference model. Figure 32-1 illustrates the entire OSI protocol suite and its relation to the layers of the OSI reference model. Each component of this protocol suite is discussed briefly in this chapter. The OSI routing protocols are addressed in more detail in "Open Systems Interconnection (OSI) Routing Protocol." The OSI protocol suite supports numerous standard media-access protocols at the physical and data link layers. The wide variety of media-access protocols supported in the OSI protocol suite allows other protocol suites to exist easily alongside OSI on the same network media. Supported media-access protocols include IEEE 802.2 LLC, IEEE 802.3, Token Ring/IEEE 802.5, Fiber Distributed Data Interface (FDDI), and X.25. The OSI protocol suite specifies two routing protocols at the network layer: End System-to-Intermediate System (ES-IS) and Intermediate System-to-Intermediate System (IS-IS). In addition, the OSI suite implements two types of network services: connectionless service and connection-oriented service. In addition to the standards specifying the OSI network-layer protocols and services, the following documents describe other OSI network-layer specifications: OSI connectionless network service is implemented by using the Connectionless Network Protocol (CLNP) and Connectionless Network Service (CLNS). CLNP and CLNS are described in the ISO 8473 standard. CLNP is an OSI network-layer protocol that carries upper-layer data and error indications over connectionless links. CLNP provides the interface between the Connectionless Network Service (CLNS) and upper layers. CLNS provides network-layer services to the transport layer via CLNP. CLNS does not perform connection setup or termination because paths are determined independently for each packet that is transmitted through a network. This contrasts with Connection-Mode Network Service (CMNS). In addition, CLNS provides best-effort delivery, which means that no guarantee exists that data will not be lost, corrupted, misordered, or duplicated. CLNS relies on transport-layer protocols to perform error detection and correction. OSI connection-oriented network service is implemented by using the Connection-Oriented Network Protocol (CONP) and Connection-Mode Network Service (CMNS). CONP is an OSI network-layer protocol that carries upper-layer data and error indications over connection-oriented links. CONP is based on the X.25 Packet-Layer Protocol (PLP) and is described in the ISO 8208 standard, "X.25 Packet-Layer Protocol for DTE." CONP provides the interface between CMNS and upper layers. 
It is a network-layer service that acts as the interface between the transport layer and CONP and is described in the ISO 8878 standard. CMNS performs functions related to the explicit establishment of paths between communicating transport-layer entities. These functions include connection setup, maintenance, and termination, and CMNS also provides a mechanism for requesting a specific quality of service (QOS). This contrasts with CLNS. OSI network-layer addressing is implemented by using two types of hierarchical addresses: network service access-point addresses and network-entity titles. A network service-access point (NSAP) is a conceptual point on the boundary between the network and the transport layers. The NSAP is the location at which OSI network services are provided to the transport layer. Each transport-layer entity is assigned a single NSAP, which is individually addressed in an OSI internetwork using NSAP addresses. Figure 32-2 illustrates the format of the OSI NSAP address, which identifies individual NSAPs. Two NSAP Address fields exist: the Initial Domain Part (IDP) and the Domain-Specific Part (DSP). The IDP field is divided into two parts: the Authority Format Identifier (AFI) and the Initial Domain Identifier (IDI). The AFI provides information about the structure and content of the IDI and DSP fields, such as whether the IDI is of variable length and whether the DSP uses decimal or binary notation. The IDI specifies the entity that can assign values to the DSP portion of the NSAP address. The DSP is subdivided into four parts by the authority responsible for its administration. The Address Administration fields allow for the further administration of addressing by adding a second authority identifier and by delegating address administration to subauthorities. The Area field identifies the specific area within a domain and is used for routing purposes. The Station field identifies a specific station within an area and also is used for routing purposes. The Selector field provides the specific n-selector within a station and, much like the other fields, is used for routing purposes. The reserved n-selector 00 identifies the address as a network entity title (NET). An OSI end system (ES) often has multiple NSAP addresses, one for each transport entity that it contains. If this is the case, the NSAP address for each transport entity usually differs only in the last byte (called the n-selector). Figure 32-3 illustrates the relationship between a transport entity, the NSAP, and the network service. A network-entity title (NET) is used to identify the network layer of a system without associating that system with a specific transport-layer entity (as an NSAP address does). NETs are useful for addressing intermediate systems (ISs), such as routers, that do not interface with the transport layer. An IS can have a single NET or multiple NETs, if it participates in multiple areas or domains. The OSI protocol suite implements two types of services at the transport layer: connection-oriented transport service and connectionless transport service. Five connection-oriented transport-layer protocols exist in the OSI suite, ranging from Transport Protocol Class 0 through Transport Protocol Class 4. Connectionless transport service is supported only by Transport Protocol Class 4. Transport Protocol Class 0 (TP0), the simplest OSI transport protocol, performs segmentation and reassembly functions. TP0 requires connection-oriented network service. 
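To make the NSAP layout described above more concrete, the following sketch splits a hex-encoded address into the IDP (AFI and IDI) and DSP parts and inspects the n-selector. It is only an illustration: the field widths and the sample address are assumptions for demonstration, since real NSAP encodings vary with the AFI value and the administering authority.

```python
def parse_nsap(nsap_hex: str) -> dict:
    """Split a hex-encoded NSAP address into IDP (AFI + IDI) and DSP fields."""
    raw = bytes.fromhex(nsap_hex)
    afi = raw[0]            # Authority Format Identifier (assumed 1 octet)
    idi = raw[1:3]          # Initial Domain Identifier (assumed 2 octets here)
    dsp = raw[3:]           # Domain-Specific Part (the remainder)
    selector = dsp[-1]      # final octet: the n-selector
    return {
        "AFI": afi,
        "IDI": idi.hex(),
        "DSP": dsp.hex(),
        "n_selector": selector,
        # The reserved n-selector 00 marks a network entity title (NET).
        "is_NET": selector == 0x00,
    }

# Hypothetical 20-octet address, purely for illustration.
print(parse_nsap("470005801234567890abcdef00000112aabbcc00"))
```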
Transport Protocol Class 1 (TP1) performs segmentation and reassembly and offers basic error recovery. TP1 sequences protocol data units (PDUs) and will retransmit PDUs or reinitiate the connection if an excessive number of PDUs are unacknowledged. TP1 requires connection-oriented network service. Transport Protocol Class 2 (TP2) performs segmentation and reassembly, as well as multiplexing and demultiplexing data streams over a single virtual circuit. TP2 requires connection-oriented network service. Transport Protocol Class 3 (TP3) offers basic error recovery and performs segmentation and reassembly, in addition to multiplexing and demultiplexing data streams over a single virtual circuit. TP3 also sequences PDUs and retransmits them or reinitiates the connection if an excessive number are unacknowledged. TP3 requires connection-oriented network service. Transport Protocol Class 4 (TP4) offers basic error recovery, performs segmentation and reassembly, and supplies multiplexing and demultiplexing of data streams over a single virtual circuit. TP4 sequences PDUs and retransmits them or reinitiates the connection if an excessive number are unacknowledged. TP4 provides reliable transport service and functions with either connection-oriented or connectionless network service. It is based on the Transmission Control Protocol (TCP) in the Internet Protocols suite and is the only OSI protocol class that supports connectionless network service. The session-layer implementation of the OSI protocol suite consists of a session protocol and a session service. The session protocol allows session-service users (SS-users) to communicate with the session service. An SS-user is an entity that requests the services of the session layer. Such requests are made at Session-Service Access Points (SSAPs), and SS-users are uniquely identified by using an SSAP address. Figure 32-4 shows the relationship between the SS-user, the SSAP, the session protocol, and the session service. Session service provides four basic services to SS-users. First, it establishes and terminates connections between SS-users and synchronizes the data exchange between them. Second, it performs various negotiations for the use of session-layer tokens, which must be possessed by the SS-user to begin communicating. Third, it inserts synchronization points in transmitted data that allow the session to be recovered in the event of errors or interruptions. Finally, it allows SS-users to interrupt a session and resume it later at a specific point. Session service is defined in the ISO 8326 standard and in the ITU-T X.215 recommendation. The session protocol is defined in the ISO 8327 standard and in the ITU-T X.225 recommendation. A connectionless version of the session protocol is specified in the ISO 9548 standard. The presentation-layer implementation of the OSI protocol suite consists of a presentation protocol and a presentation service. The presentation protocol allows presentation-service users (PS-users) to communicate with the presentation service. A PS-user is an entity that requests the services of the presentation layer. Such requests are made at Presentation-Service Access Points (PSAPs). PS-users are uniquely identified by using PSAP addresses. Presentation service negotiates transfer syntax and translates data to and from the transfer syntax for PS-users, which represent data using different syntaxes. The presentation service is used by two PS-users to agree upon the transfer syntax that will be used.
When a transfer syntax is agreed upon, presentation-service entities must translate the data from the PS-user to the correct transfer syntax. The OSI presentation-layer service is defined in the ISO 8822 standard and in the ITU-T X.216 recommendation. The OSI presentation protocol is defined in the ISO 8823 standard and in the ITU-T X.226 recommendation. A connectionless version of the presentation protocol is specified in the ISO 9576 standard. The application-layer implementation of the OSI protocol suite consists of various application entities. An application entity is the part of an application process that is relevant to the operation of the OSI protocol suite. An application entity is composed of the user element and the application service element (ASE). The user element is the part of an application entity that uses ASEs to satisfy the communication needs of the application process. The ASE is the part of an application entity that provides services to user elements and, therefore, to application processes. ASEs also provide interfaces to the lower OSI layers. Figure 32-5 portrays the composition of a single application process (composed of the application entity, the user element, and the ASEs) and its relation to the PSAP and presentation service. ASEs fall into one of the two following classifications: Common-Application Service Elements (CASEs) and Specific-Application Service Elements (SASEs). Both of these might be present in a single application entity. Common-Application Service Elements (CASEs) are ASEs that provide services used by a wide variety of application processes. In many cases, multiple CASEs are used by a single application entity. The following four CASEs are defined in the OSI specification: Specific-Application Service Elements are ASEs that provide services used only by a specific application process, such as file transfer, database access, and order-entry, among others. An application process is the element of an application that provides the interface between the application itself and the OSI application layer. Some of the standard OSI application processes include the following: Posted: Wed Dec 8 13:58:24 PST 1999 Copyright 1989-1999©Cisco Systems Inc. Copyright © 1997 Macmillan Publishing USA, a Simon & Schuster Company
<urn:uuid:b480235a-4730-462f-ad78-431ea7e6f922>
CC-MAIN-2017-04
http://www.cisco.com/cpress/cc/td/cpress/fund/ith2nd/it2432.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00292-ip-10-171-10-70.ec2.internal.warc.gz
en
0.877794
2,622
3.25
3
About NCrypt Ransomware
NCrypt is a relatively new ransomware-type virus that encrypts files. During the encryption process, this ransomware adds the .ncrypt file extension to encrypted files so that victims are aware which files have been taken hostage. If victims want to get their files back, they must pay the criminals and then they will supposedly get the decryption key. Paying ransomware developers is always risky because most just take your money and give you nothing in return. The same could happen with NCrypt Ransomware, so we advise against paying. The reason you got infected is that you probably opened a spam email attachment, used Torrents or fell for a fake software update. Unfortunately, there currently is no way to decrypt the files. If you have a backup, you must remove NCrypt Ransomware and only then restore your files. If you use your backup before you delete NCrypt Ransomware, you could end up encrypting those files as well.
How does NCrypt File Virus spread?
Ransomware usually spreads via spam emails, peer-to-peer networks (Torrents) and fake software updates. Always be very cautious when opening spam emails; in fact, we suggest you don’t open them at all. The hackers could be posing as government organizations to pressure you into opening the infected attachment, and once you do, the ransomware downloads onto your computer. You could have also gotten NCrypt Ransomware if you used Torrents. Torrents are not safe to use anyway because you can never know what you are downloading. Fake software updates are also a good way to spread ransomware. Never update software from unofficial websites.
Why is NCrypt File Virus so dangerous?
When NCrypt Ransomware downloads onto your computer, it will encrypt your files and then drop a ransom message onto your desktop. It informs victims that their files have been encrypted and that the only way to recover the files is to pay them 0.2 Bitcoin (around $125). The victims are given an email address, which they are supposed to use to send the payment ID after they pay. Supposedly, the victims would then receive the decryption key. Unfortunately, when the hackers say that the only way to recover the files is to use their decryption key, they are telling the truth. But that does not mean you should pay. Experience shows that paying the ransom rarely leads to the decryption key. You might be ignored by the hackers after you make payment and end up just supporting their future projects. Remove NCrypt Ransomware from your computer and then invest in proper backup so that you won’t lose files again.
NCrypt Ransomware removal
In order to erase NCrypt Ransomware, you must use anti-malware software, as manual NCrypt Ransomware removal could be damaging for your computer. If you already have anti-malware software, update it to make sure it can delete NCrypt Ransomware.
Automated Removal Tools
Download Removal Tool to remove NCrypt File Virus. Use our recommended removal tool to uninstall NCrypt File Virus. The trial version of WiperSoft provides detection of computer threats like NCrypt File Virus and assists in its removal for FREE. You can delete detected registry entries, files and processes yourself or purchase a full version. While the creators of MalwareBytes anti-malware have not been in this business for a long time, they make up for it with their enthusiastic approach. Statistics from websites such as CNET show that th ...
WiperSoft Review Details
WiperSoft (www.wipersoft.com) is a security tool that provides real-time security from potential threats.
Nowadays, many users tend to download free software from the Intern ...
How Kaspersky Lab Works?
Without a doubt, Kaspersky is one of the top anti-viruses available at the moment. According to computer experts, the software is currently the best at locating and destroyin ...
Step 1. Delete NCrypt File Virus using Safe Mode with Networking.
Remove NCrypt File Virus from Windows 7/Windows Vista/Windows XP
- Click on Start and select Shutdown.
- Choose Restart and click OK.
- Start tapping F8 when your PC starts loading.
- Under Advanced Boot Options, choose Safe Mode with Networking.
- Open your browser and download the anti-malware utility.
- Use the utility to remove NCrypt File Virus.
Remove NCrypt File Virus from Windows 8/Windows 10
- On the Windows login screen, press the Power button.
- Tap and hold Shift and select Restart.
- Go to Troubleshoot → Advanced options → Start Settings.
- Choose Enable Safe Mode or Safe Mode with Networking under Startup Settings.
- Click Restart.
- Open your web browser and download the malware remover.
- Use the software to delete NCrypt File Virus.
Step 2. Restore Your Files using System Restore
Delete NCrypt File Virus from Windows 7/Windows Vista/Windows XP
- Click Start and choose Shutdown.
- Select Restart and OK.
- When your PC starts loading, press F8 repeatedly to open Advanced Boot Options.
- Choose Command Prompt from the list.
- Type in cd restore and tap Enter.
- Type in rstrui.exe and press Enter.
- Click Next in the new window and select the restore point prior to the infection.
- Click Next again and click Yes to begin the system restore.
Delete NCrypt File Virus from Windows 8/Windows 10
- Click the Power button on the Windows login screen.
- Press and hold Shift and click Restart.
- Choose Troubleshoot and go to Advanced options.
- Select Command Prompt and click Restart.
- In Command Prompt, input cd restore and tap Enter.
- Type in rstrui.exe and tap Enter again.
- Click Next in the new System Restore window.
- Choose the restore point prior to the infection.
- Click Next and then click Yes to restore your system.
2-remove-virus.com is not sponsored by, owned by, affiliated with, or linked to the malware developers or distributors referenced in this article. The article does not promote or endorse any type of malware. We aim to provide useful information that will help computer users detect and eliminate unwanted malicious programs from their computers. This can be done manually by following the instructions presented in the article or automatically by implementing the suggested anti-malware tools. The article is only meant to be used for educational purposes. If you follow the instructions given in the article, you agree to be bound by the disclaimer. We do not guarantee that the article will present you with a solution that removes the malign threats completely. Malware changes constantly, which is why, in some cases, it may be difficult to clean the computer fully by using only the manual removal instructions.
<urn:uuid:5a3b7faf-5e1c-46d7-ab48-e6e47f672cba>
CC-MAIN-2017-04
http://www.2-remove-virus.com/remove-ncrypt-file-virus/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00320-ip-10-171-10-70.ec2.internal.warc.gz
en
0.887334
1,416
2.84375
3
Mass Notification For Government
Government agencies have enormous responsibilities to protect the populace, and mass notification can help. How are some agencies using Intelligent Notification to protect resources, save lives and meet regulations?
- Large wildfires rage out of control across four counties; the state must alert and mobilize dispersed firefighters
- A bomb threat is called into a city center; emergency services must act quickly to evacuate and protect citizens
- A dam threatens to burst; thousands must be notified and a large area must be quickly and efficiently evacuated
- A chemical manufacturing plant suffers a hazardous waste spill; nearby residents must be alerted
- An infectious disease breaks out; citizens must be alerted to take precautions and seek help if infected
- A prisoner has escaped; townspeople are alerted and deputized, and the perpetrator is apprehended
<urn:uuid:13e1cf93-22fa-4d26-bea0-f6f42d62f979>
CC-MAIN-2017-04
http://www.mir3.com/industries/government/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00136-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931913
161
2.984375
3
Public-key cryptography is also called asymmetric cryptography. It uses a private key that must be kept secret from unauthorized users and a public key that can be made known to anyone. The public key and the private key are mathematically linked: data encrypted with the public key can be decrypted only with the private key, and data signed with the private key can be verified only with the public key. The public key can be published to anyone. Both keys are unique to the communication session. Public-key cryptographic algorithms use a fixed buffer size, whereas private-key cryptographic algorithms use a variable-length buffer. Public-key algorithms cannot be used to chain data together into streams the way private-key algorithms can; with private-key algorithms, only a small block size is processed at a time, typically 8 or 16 bytes.
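To make the mathematically linked key pair concrete, here is a toy, textbook-style RSA sketch with deliberately tiny numbers. It is for intuition only: real systems use keys of 2048 bits or more together with padding schemes, and the primes and exponents below are illustrative assumptions.

```python
# Toy RSA: shows why data encrypted with the public key can only be
# decrypted with the private key, and vice versa for signatures.
p, q = 61, 53                 # two small primes (kept secret)
n = p * q                     # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                        # public exponent (public key is (e, n))
d = pow(e, -1, phi)           # private exponent (private key is (d, n))

def encrypt(m: int) -> int:   # anyone with the public key can do this
    return pow(m, e, n)

def decrypt(c: int) -> int:   # only the private-key holder can do this
    return pow(c, d, n)

message = 42
assert decrypt(encrypt(message)) == message

# Signing is the mirror image: transform with d, verify with e.
signature = pow(message, d, n)
assert pow(signature, e, n) == message
```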
<urn:uuid:cb9e4ff4-2e07-413d-b825-d94946da1212>
CC-MAIN-2017-04
http://www.cgisecurity.com/owasp/html/ch13s03.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00256-ip-10-171-10-70.ec2.internal.warc.gz
en
0.90782
161
4
4
A major component of IT security is determining who is allowed into your structure, both physically and logically, and what they can do once they have gained access. Access control determines who has how much access. To get control, organizations must lock down their systems, including hosts, networks, applications, data stores, and data flows, and address the following:
- Communication Security
- Logging and Monitoring
- Penetration Testing
- Remote Access
Communication security protects the pathways across which voice and data traverse. The goals of communication security include prevention of eavesdropping to protect confidentiality, assurances of integrity, and the maintenance of availability of the connection itself. All communication channels, whether between devices on the same network, across a VPN, over a remote connection, or wirelessly over radio waves, must be protected. A significant portion of communication security requires appropriate encryption. Encryption is used to protect the data itself while in storage and transit and to provide a digital means of authentication. Without proper security, communication is subject to interception, manipulation, or denial of service. Communication security also includes planning for protection as new technologies and data flow patterns are incorporated into the workplace. Cryptography is the science of obfuscation and is used to protect data while in transit or in storage. Data encryption includes three common sub-divisions: symmetric ciphers, asymmetric ciphers, and hashing. Symmetric cryptography is used for bulk data encryption, protecting information while in transit or in storage. Asymmetric cryptography is used to prove the identity of endpoints (e.g., digital signatures) or provide secure symmetric key exchange (e.g., digital envelopes). Hashing is used to detect alterations or verify the integrity of communications and stored data. Intrusion Detection Systems (IDS) are designed to notify administrators of suspect activities in the computing environment. Intrusion Prevention Systems (IPS) detect suspect activities and alter the environment in an attempt to thwart those activities. New Intrusion Detection and Prevention (IDP) solutions can perform deep packet inspection on cloud traffic. These tools supplement the security provided by firewalls, proxies, malicious code scanners, and other typical security mechanisms. IDS/IPS/IDP may be able to detect violations based on pattern matching, anomaly detection, and behavior analysis. However, these tools require expertise for proper deployment, configuration, and tuning.
Logging and Monitoring
Logging and monitoring, in addition to auditing, are essential parts of keeping track of all of the events that occur within an organization’s infrastructure. Each and every piece of equipment that can record a log file should be configured to do so, especially firewalls, proxies, DNS servers, DHCP servers, routers, and switches. Plus, every OS and application that can log events should have logging enabled as well. The more extensive the logging, monitoring, and auditing, the more evidence will be collected about benign and malicious situations. Other important issues related to event tracking include historical log archival, securing logs, time synchronization, monitoring performance, vector tracking, maintaining accuracy, and complying with rules of evidence and chain of custody. Penetration testing is the third major phase in security assessment and management.
Penetration testing is used to stress test a mature environment for issues that cannot be discovered by automated tools or by typical administrators. Penetration testers are skilled in the methods and tools of criminal attacks and the art of reconnaissance, and are masters of systems, protocols, and other aspects of IT from the perspective of malicious hackers. Testers craft exploits, modify code, decompile executables and applications, debug scripts, uncover covert channels, and more. These are essential skills of the members of a penetration testing team. A complete understanding of the benefits and the mechanisms of black box security testing will enable an organization to benefit fully from hiring an ethical hacking consultant or developing its own in-house testing team. Remote access is convenient, can reduce costs, and can make work tasks more flexible, but it also increases risk for an organization. Once remote connectivity of any type is enabled for valid user access to a private network, the benefits of physical security are greatly reduced. As soon as authorized outsiders can establish valid connections to internal resources, hackers from across the globe gain the ability to attempt to intrude into those same remote access channels. Remote access includes traditional PSTN modems, VPN connections over the Internet, wireless connections, and more. Remote access often benefits from the implementation of AAA (authentication, authorization, and accounting) servers exclusively for remote users. Adding filters and rigorous oversight, such as with auditing and IDS/IPS/IDP solutions, is essential. Secure remote connectivity is possible, but it is more challenging and involved than most organizations realize when first launching telecommuting or remote access projects.
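As a small illustration of the integrity-checking role of hashing mentioned above, the sketch below records a SHA-256 digest for a file and later compares it to detect alteration. The file path and the idea of a stored baseline are assumptions for the example; a real file-integrity tool would add secure storage of baselines, scheduling, and alerting.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def unchanged(path: Path, recorded_digest: str) -> bool:
    """True if the file still matches the digest recorded earlier."""
    return sha256_of(path) == recorded_digest

# Example usage with a hypothetical monitored file:
# baseline = sha256_of(Path("/etc/hosts"))
# ... later ...
# assert unchanged(Path("/etc/hosts"), baseline), "file has been altered"
```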
<urn:uuid:172f5ca9-e593-4590-9b4f-9d697d815fb4>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2012/05/14/access-control-who-gets-in/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00558-ip-10-171-10-70.ec2.internal.warc.gz
en
0.923036
971
3.265625
3
Knowing the value of ROI is important when making a business investment because it clearly demonstrates the financial gains of the proposed project, compared to the relative cost. The Return on Investment (ROI) calculation is fairly straightforward, and is defined as the ratio of the net gain from a proposed project, divided by its total costs. In formula form, ROI is represented as:
ROI % = (Cumulative Net Benefits / Cumulative Total Costs) × 100%
The ROI calculation uses the cumulative investment costs over the analysis period, and compares this with the total savings and other tangible benefits over the same period. The ROI value is usually expressed as a percentage, multiplying the ratio by 100%. For example, if a project has an ROI% of 200%, the expected net benefits of the project are double those of the expected costs for implementing the project. In more basic terms, every $1 invested in the project will yield $2 in net returns. In an analysis where the costs and benefits have been properly estimated, decision makers typically look for higher percentages for ROI as an indication of risk reduction. The higher the percentage, the less the risk typically, because the benefits are much higher than the costs, and the project is more tolerant if costs overrun predictions or benefits fall short of expectations.
ROI Calculation Example
To understand further, let us examine the cash flows from a sample project and the resultant ROI calculation: The ROI in this example was calculated by taking the Cumulative Net Benefits of $425,000 divided by the Cumulative Total Costs of $175,000. Hence, the net benefits are more than double the investment, yielding an ROI% of 243%. Every $1 invested will yield $2.43 in net returns.
Limitations in Calculating ROI
As a simple percentage calculation, ROI is easy to understand and easy to apply to comparisons; however, the ROI calculation, if used as the only measure of a project's viability, has some shortcomings.
- The ROI formula shows the net return from investment but does not indicate the time associated with achieving the returns.
- The ROI calculation does not recognize that in some cases, the project's total cost and benefit value may be so small that the net benefits are not worth considering. As an example, the ROI% of a planned project might be a significant 500%, but the net benefits of $10,000 on a $2,000 investment are so small that the project is not worth comparing to the millions of dollars in benefit that most corporations are seeking. Conversely, for some projects the costs may be so high that even though the net benefit and ROI yield are high, the project exceeds a reasonable investment risk. For example, a project that costs $10 million and has a projected net benefit of $100M yields an ROI% of 1000%, but the risk of applying $10 million to a single project might be too high for a cash-strapped company. Thus the background economic scenario of each situation must be considered.
- The standard ROI calculation typically does not use net present value terms in its calculations. Net present value calculations use the “time value of money”, which takes into account the fact that the purchasing power of a dollar received in the future is less than that of a dollar possessed today. The ROI calculation does not take into account that many projects require up-front investments that then need to be offset by savings in later years, but these savings are not as valuable when compared to up-front costs because money in the future is worth less than money today.
To resolve this issue, the ROI formula sometimes includes net present value calculations for the net benefits and the costs. Overall, the ROI calculation provides a valuable comparison of the net benefit versus total cost, a ratio that can point towards a solution that delivers optimum financial benefits. But ROI is not the only indicator of performance, and it should be considered with other factors such as NPV savings, IRR, and payback period prior to making a purchase decision.
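A short sketch of the calculations discussed above, using the figures from the worked example and adding an NPV-adjusted variant. The yearly split of costs and benefits and the 8% discount rate are illustrative assumptions, not figures from the article.

```python
def roi_percent(net_benefits: float, total_costs: float) -> float:
    """ROI % = cumulative net benefits / cumulative total costs x 100."""
    return net_benefits / total_costs * 100.0

def npv(cash_flows, rate):
    """Present value of yearly cash flows, year 0 first."""
    return sum(cf / (1.0 + rate) ** year for year, cf in enumerate(cash_flows))

# Figures from the worked example above:
print(round(roi_percent(425_000, 175_000)))        # ~243, i.e. $2.43 per $1

# NPV-adjusted ROI for a hypothetical up-front investment with later savings.
costs    = [150_000, 15_000, 10_000]               # assumed split over years 0-2
benefits = [0, 300_000, 300_000]                   # assumed split over years 0-2
rate = 0.08                                        # assumed discount rate
npv_costs = npv(costs, rate)
npv_net_benefit = npv(benefits, rate) - npv_costs
print(round(roi_percent(npv_net_benefit, npv_costs)))
```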
<urn:uuid:d85b5d0c-9da9-4bc8-b6f5-ea41541f9441>
CC-MAIN-2017-04
http://blog.alinean.com/2010/08/return-on-investment-roi-defined.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00402-ip-10-171-10-70.ec2.internal.warc.gz
en
0.937293
811
3.15625
3
Communicating About Communicability
By Larry Dignan | Posted 2003-05-05
The response to SARS has been commendable, but most medical surveillance systems can't communicate electronically. Using HL7 and the Extensible Markup Language (XML), federal-, state- and city-level systems should be able to swap data, such as dates, locations, symptoms, and type of patient, even if their computer systems don't otherwise communicate. One of the linchpins of a NEDSS system is an integrated data repository, a networked set of databases able to share data, inputs, processes, outputs and interrelationships. Here's how these networked databases would operate: Each disease, symptom or illness will have a unique numeric identifier that could be tracked across the local, state and CDC systems. Such a system would be able to track a person and cross-reference symptoms and diseases. Safranek is overseeing a pilot of NEDSS that's designed to supply data from Nebraska's hospitals, labs and health agencies to the CDC. By collapsing a host of disease-oriented databases, the state will be able to perform real-time monitoring and make decisions in real time. If successful, NEDSS will be the first national disease-surveillance system. Once implemented, the emergence of a single incident of an unfamiliar virus or disease can be communicated to all participants in what businesses would call "real time." "Most clinical information systems are not standardized nationally," says Dr. John Loonsk, associate director for informatics at the CDC. "It's problematic when you are looking to get information out and the data are in different forms and different systems. They're not accessible." These incompatible legacy, or "stovepiped," systems make it nearly impossible to search for patterns across databases. If a person with SARS has HIV and tuberculosis and has moved from Los Angeles to New York, it's nearly impossible to cross-reference given the number of databases, locales and divergent health messaging standards. It would take numerous phone calls and paper trails to find a pattern. Tennessee, for example, is using a 13-year-old system running on the almost ancient Disk Operating System (DOS) to report communicable diseases. And the data isn't piped in; it's mailed in, and data entry workers input the relevant data by hand. To get information on reportable diseases to the CDC, records are uploaded in a batch. The federal government is prepared to spend $377 million on medical communications and surveillance improvements in fiscal 2003, according to President Bush's proposed budget. The CDC is linking its bio-terror preparedness funding grants to NEDSS compliance. So far, all 50 states and large cities such as New York have received funding for the surveillance system, but the national rollout is expected to take "several years," says Loonsk. Under a pilot, Tennessee is using federal funds to replace its DOS-based system with a NEDSS-based system, with Microsoft SQL 2000 and BEA Systems WebLogic software running in the background. "We're very excited to make the switch," says Dr. Allen Craig, Tennessee's state epidemiologist. "The old system was a disaster. It served us for a lot of years but not without some bubble gum and rubber bands." The stakes are high for swapping out antique systems. Health and Human Services Secretary Tommy Thompson, testifying before a congressional committee last month, noted the SARS epidemic is a dress rehearsal to see how the U.S. could respond to a bio-terror attack.
But a day before Thompson's testimony, a General Accounting Office report concluded "work to improve surveillance systems has proved challenging." The challenges to public health officials are numerous. Cities and states don't have access to data in hospitals' information systems. They also rely on laboratories, hospital staff and physicians to sound early warnings. According to the GAO, several cities are evaluating active systems where databases will comb nontraditional data sources such as pharmacy sales to find suspicious activity or patterns of illness, similar to the way MasterCard and Visa track unusual card activity in different parts of the globe and close down credit. Active systems are the ultimate goal, say health officials. Craig says the state has more up-to-date systems to track 911 calls and emergency room information to spot different trends. The data is run through an SAS business intelligence program that compares recent data to historical data in order to flag items like an influenza outbreak in August. New York City also has a system that tracks nontraditional sources such as 911 calls, emergency room data and pharmacy sales to spot health patterns and possible outbreaks. That system hasn't turned up any SARS warning signs such as a run on flu medicine, says Carubis. A portal, which will allow partners to input health data into a Web form, and a Communicable Disease Surveillance System (CDSS), which will allow electronic data to be exchanged, are planned to launch in June and in the fall, respectively, says Carubis. More importantly, those systems, funded by CDC grants, will be able to communicate with other systems. Carubis says once the systems are able to take electronic data, all it will take is for hospitals to link up their systems; some are more prepared than others. Carubis agrees that it's difficult to put a timetable on NEDSS compatibility. The health industry, however, seems determined to link together. "Nobody is proud of the current approach," says Safranek. "But we are working on it, and there'll be a payoff."
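The kind of structured record these articles describe, carrying a unique numeric disease identifier plus date, location and symptoms, can be sketched with Python's standard XML tooling. The element names and the code value below are invented for illustration and do not follow the actual HL7 or NEDSS schemas.

```python
import xml.etree.ElementTree as ET

def build_case_report(disease_code, onset_date, location, symptoms):
    """Serialize a minimal, illustrative disease case report as XML."""
    report = ET.Element("CaseReport")
    ET.SubElement(report, "DiseaseCode").text = disease_code   # unique numeric ID
    ET.SubElement(report, "OnsetDate").text = onset_date
    ET.SubElement(report, "Location").text = location
    symptom_list = ET.SubElement(report, "Symptoms")
    for s in symptoms:
        ET.SubElement(symptom_list, "Symptom").text = s
    return ET.tostring(report, encoding="unicode")

print(build_case_report("0420", "2003-04-15", "New York, NY",
                        ["fever", "dry cough"]))
```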
<urn:uuid:0bca99dc-30a0-4836-9df1-c4d8814f7962>
CC-MAIN-2017-04
http://www.baselinemag.com/c/a/Projects-Processes/Diagnosis-Disconnected/1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00522-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947009
1,146
2.5625
3
A troubling trend in Android malware is the use of botnets, which are networks of compromised smartphones used to distribute spam and malicious code. These zombie networks enable malware creators to spread their apps in far greater numbers than more typical venues, such as online app stores. Mobile botnets have been discovered before, but the latest one comes with a unique twist, according to security vendor Kaspersky Lab. Rather than send only the original malware used in creating the botnet, the newly discovered network also sends different malware. This is significant because it indicates cybercriminals are renting out botnets to other crooks. The availability of such services shows that the business of Android malware is maturing. For years, cybercriminals targeting Windows PCs have had a wide selection of development tools, exploit kits, botnets and pre-built malware. These services and tools have made it possible for criminals with little tech knowledge to set up shop. Mobile versions of these technologies and services are expected to eventually be offered on the underground for attacking Android, which ran 79% of all smartphones shipped in the second quarter, according to tech researcher Gartner. However, the transition will take time, since a lot of new code has to be written and criminals have to work out important business details, such as payment. In the meantime, progress is being made. Kaspersky says that a botnet created with a mobile Trojan the company calls SMS.AndroidOS.Opfake.a is also being used to distribute Backdoor.AndroidOS.Obad.a. The latter, discovered in June, is the most sophisticated Trojan to date. The multi-functional software can send SMS messages to premium rate numbers, download additional malware from a command and control server and spread to other phones via Bluetooth. The method used to infect Android smartphones starts with a text message that says, "MMS message has been delivered, download from www.otkroi.com." Clicking the link automatically loads Opfake.a, however, the malware cannot be installed unless the user agrees to run it. If he does, then the malware is instructed by its command and control server to send to everyone on the phone's contact list a text message that says, “You have a new MMS message, download at - http://otkroi.net/12.” This time, clicking on the link automatically loads Obad.a. Mischief attributed to Obad.a includes monitoring SMS messages for bank codes. If one is found, the malware hides the code from the phone's user and ships it to a server. The use of Opfake.a to spread indicates that the Obad.a creators are renting a part of the former malware's botnet, according to Kaspersky. In time, these types of partnerships are expected to lead to wider distribution of malware. For now, most of this activity does not affect Android users in the U.S. More than 80% of Obad.a infection attempts occur in Russia. Other countries where the Trojan has been spotted include Kazakhstan, Uzbekistan, Belarus and Ukraine. The developers of Obad.a are using more than just a botnet to spread their brainchild. They also use fake application stores that copy Google Play content and inject code in legitimate sites so visitors are redirected to malicious ones. These two methods are typically used in the Russian Federation, Eastern Europe and Asia, where people often use third-party app stores. In the U.S., the much safer Google Play dominates the market. Whether malware developers can penetrate the U.S. remains to be seen. 
They will have to develop much better tools and services. But as the Obad.a example shows, they don't lack creativity and we can expect to see more innovation in the future.
<urn:uuid:248e4b6b-40d0-45aa-a80d-e14bb80c71b7>
CC-MAIN-2017-04
http://www.computerworld.com/article/2474756/mobile-security/crooks-use-botnet-for-rent-to-target-android-users.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00522-ip-10-171-10-70.ec2.internal.warc.gz
en
0.934567
778
2.59375
3
Medical Device Flows Through Blood Vessels / February 28, 2012 At last week's International Solid-State Circuits Conference, Stanford Engineer Ada Poon demonstrated a medical device that patients can swallow. This tiny, wirelessly powered, self-propelled device is capable of controlled motion through a fluid – blood, to be exact. And it's small enough to travel through blood vessels, Stanford University news reported. Shown above is the current prototype chip resting on a hand; it is only three millimeters wide and four millimeters long. Photo by Steve Fyffe
<urn:uuid:23d1ec59-bb84-49b0-b463-11a0890e45d9>
CC-MAIN-2017-04
http://www.govtech.com/photos/Photo-of-the-Week-Medical-Device-Flows-Through-Blood-Vessels-02282012.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00338-ip-10-171-10-70.ec2.internal.warc.gz
en
0.912076
118
2.78125
3
Management has bought into the idea of documenting Standard Operating Procedures (SOPs), and now comes the job of writing them. There are four main elements to creating SOPs:
- Step-by-step. This is the simplest of the four elements, and it should be used for the simplest of processes. It consists of point-form directions that tend to be confined to one or two pages. It can also be used to outline sub-processes within a more complex SOP.
- Hierarchy. A hierarchical SOP combines larger processes and the simple step-by-step instructions. The benefit here is that it has both the detail and the larger outline of the process. When you have both new and more advanced employees working on the same task, new employees can take advantage of the detail, while more experienced employees can check the outline for reference and skim the sub-steps.
<urn:uuid:9260a9ad-9148-4e90-aa63-1d91dbf88974>
CC-MAIN-2017-04
https://www.infotech.com/research/how-to-write-an-sop
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00062-ip-10-171-10-70.ec2.internal.warc.gz
en
0.9351
181
3.421875
3
Buzz phrases of the day include consumerisation of IT and BYOD--bring your own device. The former phrase refers to the use of increasingly powerful and feature-rich devices, be they PCs, smartphones or tablet computers, by consumers. The meteoric rise of the tablet computer embodies this trend. According to comScore, the use of tablets in the US alone took just two years to reach 40 million--compared to seven years for smartphones to reach the same level of adoption. And those end users increasingly want to use their own devices to access both work and leisure applications--the second trend, BYOD--as they are often seen as superior to those issued to them by the organisation. As a result of trends such as these, the number of devices connecting to corporate networks is expanding rapidly and those devices must be managed to ensure that the organisation is not exposed to security vulnerabilities through their use. Traditionally, securing endpoints has been approached by installing software on every device needing to be protected, which works by scanning programs for signatures that have been developed by anti-virus vendors that indicate that the program is malicious. However, this method is no longer sufficient. The number of viruses and other malware has grown dramatically, with an average of 73,000 malware samples being seen daily in 2011, many of which are variants of known viruses that have been developed to avoid detection. The amount of malware that is considered to be aggressively polymorphic is also growing and this is a further problem with traditional anti-malware technologies as this type of malware is designed to modify itself on each infection. A system based on signatures alone provides no defence against threats that vary from those seen before. A further problem is that anti-malware programs are large and tend to get bigger as more signatures are added to their defences. It is well known that they tend to be a drain on computer resources, significantly slowing down computer performance, especially at startup and during scans. Even on corporate-owned devices, many users try to circumvent such controls and many would find it totally unacceptable for an organisation to demand that they deploy such controls on devices that they have purchased themselves. Clearly a new approach is needed--one that provides better protection by guarding against new threats as well as those for which countermeasures have already been made available--and one that does not hinder the user. This can be achieved by subscribing to endpoint security services based in the cloud, whereby only a small agent is placed on each device and protection is applied in the cloud, before exploits can ever reach the device. Such services are new and there are a number of elements that must be considered, including the types of controls that are provided over and above signatures, the availability of cloud-based threat intelligence networks for identifying new threats, privacy and data protection controls, protection for devices when not connected to the network, and remediation capabilities should any threat still be able to break through the barriers. Bloor Research will be participating in a webinar at 10am GMT on Wednesday 29th February 2012 that will outline what organisations should look for when choosing such an endpoint security system and the benefits that they can expect. For more information and to register for this webinar, click on the following link:The changing face of endpoint security.
<urn:uuid:2e330d75-57d2-4e71-b972-713ab576c59a>
CC-MAIN-2017-04
http://www.bloorresearch.com/blog/security-blog/offensive-endpoint-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00239-ip-10-171-10-70.ec2.internal.warc.gz
en
0.969567
659
2.578125
3
Today’s largest HPC systems are dominated (492 of the Top 500) by processors using two instruction sets (x86, Power) from three vendors (Intel, AMD, IBM). These processors have been typically designed for the highest single thread performance, but suffer from high cost (several hundred dollars to over $1500) and power demand (around 60-100W). As we build even larger and higher performance systems moving towards exascale, we might explore other avenues for delivering cost-efficient compute performance and reducing the power consumed by these systems. In particular, there are at least three good reasons to explore whether processors designed for mobile systems can play a role in HPC, which I call innovation, federation and customization. Innovation, because the future of computing innovation is not on the desktop or in servers, but in ubiquitous computing, the internet of things. Federation, because embedded processors, like ARM-architecture devices, are available from a variety of vendors, thus freeing customers from single suppliers, allowing outstanding price and feature competition and increased innovation and flexibility. Customization, because the mobile market thrives on various manifestations of customization, and we in HPC might be able to take advantage of that. Here, we use ARM-architecture processors as representative of mobile system processors, if only because ARM so dominates that space, though other possible processors include x86 (Intel Atom and AMD Geode), IBM PowerPC, MIPS, even embedded SPARC. Innovation in the Post-PC World In 2007, Steve Jobs predicted an upcoming explosion of “post-PC devices,” using the iPod as an example. He didn’t mean to suggest that the PC was dying or was doomed to eventual extinction, any more than PCs killed off workstations or mainframes. He meant that the growth seen in the PC industry was unlikely to continue at the same pace, and, as we’ve seen recently, the new growth path has been moving to phones, tablets, and other mobile, untethered, networked devices. This means that the innovators which have driven the PC world to ever greater capabilities have been moving to these new post-PC devices and ubiquitous computing. Hardware innovation is tending towards smaller, lower power devices. Why do we care? Historically, supercomputers have been built with the devices available at the time. The first Cray-1 used four types of semiconductor chips: two types of NOR gates (fast for logic, bigger but slower for memory fanout), and two types of static RAM (fast for registers, slower but bigger for memory), and lots of wire. Contemporary supercomputers were built with essentially mainframe technology, mostly by the mainframe manufacturers. A number of research parallel processors were designed, built and productively used in the 1980s using essentially workstation technology: printed circuit boards usually populated with commodity processors and connected by a high speed network. By the mid-1990s, massively parallel processors using RISC chips dominated the Top 500 supercomputer list. In 2000, Intel introduced the Pentium 4, adding the double precision vector SSE2 instructions to the x86 family. This made the x86 a viable candidate for real supercomputing. Given the cost advantages of using high volume parts, more parallel supercomputers were designed using Intel and AMD processors. Within four years, over half the Top 500 supercomputers used some flavor of x86 processors, and that number is now close to 90%. 
The cost of developing viable processors customized for general-purpose HPC is prohibitive, requiring system architects to use the best available commodity processors. Perhaps the one exception to that rule is IBM, which designed a special PowerPC chip for Blue Gene, though they adapted an existing commodity embedded processor rather than building a bespoke processor. When commodity innovation moves to the mobile world, we in the HPC industry may have to look at mobile processors as potentially the most cost effective solutions to our compute problems. The Federation vs. the Empire ARM, Ltd. doesn’t actually produce and sell chips. ARM licenses the core IP to vendors who include ARM cores in their own products. Most of these designs are Systems-on-chip (SOCs), including much of the glue logic on the same chip as the processor, as well as application-specific logic. This makes for better integration and lower part count for the eventual customer. An SOC for a cell phone might include a DSP or two for audio encode/decode, a graphics driver for the display, interface for the keyboard, and radio components in addition to the main processor. An SOC for automotive electronic stability control might have interfaces for wheel speed sensors, accelerometers, an interface to control the brakes, and perhaps even a temperature sensor. ARM processor deliveries are far ahead of x86 and PowerPC processor deliveries each year in units. The architecture is solid and viable. Moreover, there are a number of chip vendors building and supplying parts with ARM architecture cores, giving customers a broad choice of supplier. No one vendor can control availability or price, and there’s no fear of depending on a single source that may choose to change direction or that may not survive the long term. The ARM architecture may be the only viable candidate for an alternative processor to x86 and Power. In the mobile world, standardization on ARM cores as the control processor has produced the same benefits that standardization on x86 has given the desktop. There are many choices for software ranging from operating systems, tools and applications ready to use for ARM processors. There is an army of trained programmers comfortable with programming, optimizing and tuning for ARM processors. There are a plethora of hardware devices that have been designed to work with ARM processors, though most of these would be integrated on the SOC. There are two types of ARM licensees. Most vendors take the ARM core IP and integrate it directly into their own products; such an ARM core will be instruction-set compatible regardless of the vendor. Some vendors acquire an ARM architecture license, allowing them to augment their own ARM implementations. This gives them additional freedom to innovate or add extensions for particular target markets. Customization for HPC Within the ARM world, there is a high level of architectural variety. Among the more than 250 ARM microcontrollers in its catalog, STMicroelectronics, PGI’s parent company, offers one 32-bit ARM microcontroller that draws about 10 milliamps when running at its full speed (32 MHz), and can be scaled to lower clocks and voltages to draw even less current. The latest high end ARM Cortex-A15 design supports one to four cores, SIMD floating point, up to 4MB level 2 cache, and up to 1TB (40 bits) of memory address space. Note there are no Cortex A15 MCUs available yet, though several are in the works. 
This architectural variety is a real strength of ARM in the mobile market; a designer can choose a version with all the necessary features, and without any unnecessary baggage, and keep within a desired size or power envelope. As specific examples, let’s look at two current ARM processor offerings. One is the SPEAr chip from STMicroelectronics. The high end SPEAr 1340 has two ARM Cortex-A9 cores with up to 600MHz clock, 512KB level 2 cache, a Gigabit Ethernet port, a PCIe link, one SATA port, 2 USB ports, controllers for flash memory, interfaces for memory card, touch screen, small (6×6) keyboard, 7.1 channel sound, LCD controller, HD video decoders, digital video input, cryptographic accelerator, analog-digital converters, and various other IO features. The SPEAr is clearly designed for use in a multimedia device, and is optimized for low power. The second is the ARMADA XP from Marvell; Marvell acquired the XScale business from Intel in 2006. The ARMADA XP is a relatively new product aimed directly at cloud computing. This chip has up to four ARM cores, up to 1.6GHz clock, 2MB shared level 2 cache, interface to DDR2 or DDR3 memory, four Gigabit Ethernet ports, four PCI-E ports, three USB 2.0 ports, two SATA ports, LCD controller, flash memory interface, UART, and more. You could design either of those ARM chips right onto a small motherboard with memory and a disk and package a bunch of them into a 1U rack mount server. However, in the HPC space, do we really need USB ports, touch screen interfaces, and LCD controllers? Removing those from the chip might allow more room for more cores, or something more interesting. The real potential for ARM architecture in HPC, and the third important reason to explore ARM, is the possibility to generate custom parts. Perhaps we could design the InfiniBand drivers right on the chip. Maybe we could add hardware support for quad-precision, which David Bailey and his colleagues predicted we’d want ten years or more ago. There may be an ability to add operations specific to certain markets, such as bioinformatics or financial. Some of the more exciting systems over the past decade are custom designs, including Anton at D.E. Shaw Research, and the MDGRAPE-3 machine at RIKEN in Japan. In each case, custom design gives a significant performance advantage, but at high development cost, including fully custom software. Imagine if we could achieve similar performance advantages for specific applications, but retain most of the design and software development cost advantages of using standard chips. In the mobile ARM space, there are different levels of customization. A fully custom chip would have a number of ARM cores, caches, memory interfaces, perhaps Ethernet or other ports, and maybe even some custom logic. The ARM architecture supports a coprocessor interface, so custom logic could be configured and controlled directly from software, just like early floating point units were. Even the ARM cores themselves can be customized by selecting a specific ARM version, or adding extensions like the NEON SIMD instructions. The design of such a chip is easy on paper, but requires a long sequence of steps and perhaps a year or more before it comes out of fabrication and packaging. The design must be turned into RTL, laid out, verified, qualified on the technology to be used, a mask created, the chip fabricated and then tested. This takes both considerable time and money. In the mobile space, the time and money is justified by very high volumes. 
Consider that Apple sold over 20 million iPhones and 9 million iPads in a single quarter this year. A custom chip in an iPhone or iPad would be justified based on that volume. A second level is exemplified by the STMicroelectronics SPEAr chip mentioned above. ST offers these with a customizable logic block. During development, the customer would design and experiment with FPGA logic. When ready, the RTL from the FPGA is used to customize the on-chip logic. Because the chip is already designed with the custom logic block in place, validation is only required for the logic block, which takes only a few months. A third level will be supported with the advent of through-silicon vias (TSVs). One obvious use for TSVs is to stack a memory chip on a processor chip, allowing lower latency, and higher bandwidth with many chip interconnections. But another important possibility is the ability to stack an FPGA or custom logic chip between a processor and memory, to be used like a coprocessor. It’s a good time to explore alternatives to current standard processors for HPC, for at least three reasons. First, the HPC market can’t afford to develop its own processors, so it has to adopt the best of the commodity market, and the innovation in that market is moving to mobile. Second, ARM processors are by far the most popular 32-bit processors today and will soon have 64-bit versions available; moreover, there are many suppliers of ARM architecture processors, so if we’re going to look for a viable alternative, ARM is the leading (perhaps the only) candidate. Third, the potential for customization either broadly for the HPC market, or narrowly for specific applications, could give significant benefits to HPC that we simply can’t get from current commodity offerings. Add to these the potential cost and power advantages, and we’d be negligent if we don’t study this now. This doesn’t mean that it’s inevitable, or an easy decision. There are several challenges and missing pieces that need to be filled in along the way. That will be the topic of my next article. About the Author Michael Wolfe has developed compilers for over 30 years in both academia and industry, and is now a senior compiler engineer at The Portland Group, Inc. (www.pgroup.com), a wholly-owned subsidiary of STMicroelectronics, Inc. The opinions stated here are those of the author, and do not represent opinions of The Portland Group, Inc. or STMicroelectronics, Inc.
<urn:uuid:90d75a43-dd36-4bd1-b89e-3e831b7883a6>
CC-MAIN-2017-04
https://www.hpcwire.com/2011/11/09/arm_yourselves_for_exascale_part_1/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00541-ip-10-171-10-70.ec2.internal.warc.gz
en
0.932893
2,705
2.796875
3
FODDER FOR MODERN JAPANESE PORCELAIN FINE ARTS By Lim Tai Wei The Japanese have been collecting Chinese ceramics for years. Collecting reached a peak during the 1920s, when the fever for the best pieces of Chinese porcelain was at its height. Many of the Chinese ceramic pieces found in Japan are thus the result of inheritance, passed down intact through the generations. This process is known as densei-hin. The Japanese have been very careful to ascertain the sources of these porcelain pieces and would restrict displays mainly to inherited and excavated pieces. It is also important to note that Japanese displays of Chinese ceramics have been made to conform to Japanese tastes and aesthetics. The earliest evidence of Chinese influence on Japanese ceramics is the various objects made in the Han dynasty found at Yayoi sites (second to third century AD). This is the earliest traceable period in which Chinese ceramics served as a link between Japanese and Chinese cultures. By the 4th and 5th centuries, there was already a steady flow of Chinese ceramics into the country. During the Nara period, Chinese ceramics were first recorded in Japanese documents. The famous sancai, or three-colour glazed wares, were found for the first time in Japan during this period. This was also the peak of the Tang dynasty in China, considered by many to be the high point of Chinese history, culture and the arts. Many of these wares were simple and intended for daily use. Thus, they have not drawn much attention from Chinese collectors, museums or antique dealers. However, due to the rarity of sancai pieces now, the early examples of Tang dynasty wares have risen in importance and are also becoming important evidence of cultural exchanges between the two countries. Many of these pieces were excavated in Kyushu near Fukuoka. This is not surprising considering that Kyushu was the traditional gateway for Chinese and Korean cultural exchanges with Japan. It is also the closest geographically. From the study of these porcelains, and through subsequent value-adding and creative innovation, modern Bizen, Arita and many other modern art wares have arisen. They carry some of the styles, glazes and hand-throwing techniques used by ancient Chinese potters but show a distinct Japanese taste or innovation. Shozo Hamada is an example of these artists. Indeed, after the Meiji restoration of 1868, when modernization took place, many of these works began incorporating modern elements such as Art Deco or Art Nouveau features into their compositions.
<urn:uuid:4695bba4-ab6f-440b-9006-7acfbffb19a2>
CC-MAIN-2017-04
http://www.easterntea.com/research/japfineart.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00201-ip-10-171-10-70.ec2.internal.warc.gz
en
0.980808
532
3.1875
3
Consider this: Effective risk management underpins a successful project – true or false? In-depth: How to create a clear project plan. Was 'true' your first reaction? We believe that you’re right. All three of us are strong believers in the positive value of a well-managed and controlled approach to project risks. An Internet search for “images of risk management” will return many illustrations of dice being rolled. If it is done well, risk management measures the uncertainty involved when you 'roll the dice' during your project, and allows the project manager to obtain a consensus on how to best handle risks and unexpected events on the project. This article does not cover in detail the processes necessary for effective project risk management. A large amount of material and advice exists on the subject. Rather, we put forward just a few 'pointers to consider' for your project – whether it is already underway or getting ready to start. Take-away points to consider We put forward the following considerations for risk management (this list is not exhaustive or prioritised): - Risk management affects all aspects of your project – your budget, your schedule, your scope, the agreed level of quality, your communications and stakeholder engagement, the success when the project’s output is implemented, and so on. - Risks can be positive (i.e. opportunities), as well as negative (generally referred to as risks). - Risk management is about behaviours that prove that risk management is a top priority for you and the team, such as “being constantly aware of what might happen,” agreeing on strategies for all risks, and undertaking actions to prevent negative risks from becoming issues (i.e. occurred events) whilst maximising the opportunities of positive risks. - Risk management needs to be conducted from the start of the project, constantly discussed and monitored, and involve all members of the project team. - How you choose to handle risks depends on your most influential project stakeholders’ 'appetite for risk'. - Each identified risk needs to be assessed, a strategy for dealing with it agreed upon by all appropriate parties, and tracked until closure. - Project risk management is not “the project manager tracking risks in a Risks Register and sharing it occasionally when or if people ask to see it” – it is much more than that. The essentials of project risk management A project risk can be defined as an uncertain event or condition that, if it occurs, will have a positive or a negative effect on a project’s objectives. Some very comprehensive guidelines and procedures for managing risk are available from many sources. For example, the Project Management Institute describes the following summary process to managing project risks: - Plan risk management. - Identify risks. - Perform qualitative risk analysis. - Perform quantitative risk analysis. - Plan risk responses. - Monitor and control risks. You may come across other models. Your means of conducting risk management and the behaviours you and your team display in 'making it real' make all the difference. We have mentioned 'behaviours' a few times in this article. We are referring to the communication (in all its shapes and forms) that you use, the importance with which you treat risks, and the willingness and drive to see actions through to completion and closure. Here are a few questions for you to ask yourself: - At the start of a project, do you plan how you and the team will approach risks? 
By this, we do not mean jumping straight to a Risks Register, but putting some serious thought into how risks will be managed during the project. - Do you understand and monitor the appetite for risk of your customer and influential stakeholders? - Do you involve all people in the team to identify project risks – not only at the start, but throughout the project? - Do you review the risks of previous projects, and look to lessons from the past as part of your initial review and identification process? - Do you strive to ensure each risk has an owner, and that the method to tackle them is agreed upon, i.e., whether to mitigate the risk with an action, to transfer, avoid or accept it and so on? - Do you readily assess opportunities as well as negative risks, and devise strategies to maximise the likelihood of opportunities occurring in order to exploit or enhance them? - Do you assess “triggers” to each risk so that you can monitor if/when there is danger of their becoming real? - As well as qualitative assessment of risks, are you able to apply a quantitative financial or time value to each risk, both negative and positive, should it eventuate? If the impact is negative, will it turn into an issue? Can this estimated financial value help you justify an appropriate project contingency in terms of cost and/or time? - Are you pro-active in tracking the agreed strategies to handle risks? - Do you maintain a project Risks Register on a regular basis – moving priorities up and down the list, watching for low-priority risks that may escalate in importance, being attentive to risks that are likely to occur soon? - Do you discuss the “current high-priority risks” with your key Stakeholders at each project review (in whatever forum you have for such review meetings)? - Do you discuss what will happen if major and problematic “unknown unknowns” occur on your project, perhaps with action scenarios if such events happen? Remember: Risk management is your friend and ally As per the title of this article, risk management is the project manager’s friend. Done well, it helps you ensure that the 'appetite for risk' is appropriately understood at the start; that all risks are agreed upon, prioritised, assessed, communicated and understood in alignment with this 'risk appetite'; and that you have a solid platform to track agreed actions, including escalation up the management chain if necessary. The key is to demonstrate positive behaviours in a way that ensures risk management is kept at the forefront of all your project activities. There is always the potential of 'unknown unknowns' impacting your project, but the more you can assess reasonable risks from the start of the project and actively manage them throughout, the better placed you will be as a team to realise a positive outcome for your project. If you have an opinion on this article, we would really like to hear from you.. Please email us at Contactus@pmoracles.com with your point of view. 
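One of the questions above asks whether you can attach a quantitative financial or time value to each risk. A common way to do that is expected monetary value (probability multiplied by impact), summed across the register to help size a contingency. Here is a minimal sketch of that arithmetic; the risks, probabilities and impact figures below are entirely hypothetical and only illustrate the calculation:
```python
# Expected monetary value (EMV) across a small, hypothetical risk register.
# Negative impacts are threats; positive impacts are opportunities.
risks = [
    {"name": "Key supplier delivers late", "probability": 0.30, "impact": -40_000},
    {"name": "Scope change requested by sponsor", "probability": 0.20, "impact": -25_000},
    {"name": "Early delivery bonus", "probability": 0.10, "impact": 15_000},
]

emv_total = 0.0
for risk in risks:
    emv = risk["probability"] * risk["impact"]   # probability x impact for this risk
    emv_total += emv
    print(f'{risk["name"]}: EMV = {emv:,.0f}')

# A negative total is a rough indication of the contingency needed to cover net exposure.
print(f"Net exposure across the register: {emv_total:,.0f}")
```
A negative net figure is a rough argument for holding at least that much contingency; a positive one suggests that, on paper, the opportunities outweigh the threats.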
Other articles by these authors: - Project management for the small business - The project management survival toolkit - Understanding project management processes and tools to drive success - How to tailor your presentation to the audience - How to approach a project - The trouble with continuous multi-tasking - Communication risks within and around a virtual team - An objective methodology to project prioritisation - Program & project manager power – What are your most important traits to achieve success - Anatomy of an effective project manager - The unspoken additional constraint of project management - How project managers can help their companies 'go Green' - What makes an effective executive? - Minimising bias of subject matter experts through effective project management - Program and project manager power
<urn:uuid:2af3d6b7-9ba5-4f61-a203-fb4f461a25c3>
CC-MAIN-2017-04
https://www.cio.com.au/article/385084/risk_management_project_management_go_hand_hand/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00109-ip-10-171-10-70.ec2.internal.warc.gz
en
0.93587
1,549
2.546875
3
A few weeks ago, I happened to listen to a radio programme on how punctuation came to exist. This remarkably interesting half hour revealed to me how something that we now see as finished, complete, and solid is in fact the result of invention, evolution, and technical advance. While the timescale of punctuation design has been much longer than that of digital design (more than 2,000 years longer) it demonstrates how good design changes and adapts to changing people and times. Kommas, kolons, and periodos In Ancient Greece, there was no segmentation between words or sentences – not even gaps between one word and the next. This made reading out loud quite difficult as the reader had to work out from the context where the gaps and pauses should be. The need here was to help people easily identify where to leave a short pause (a komma), a longer pause (a kolon), or where the end of a thought was (a periodos). Aristophanes of Byzantium, who worked in the library at Alexandria invented a system of dots that indicated how long a pause to leave. These dots at the time did not have names, but we can recognise them in the punctuation we use today. This was punctuation not for grammar – there were no grammatical rules implied – but to aid in performance of written work. Independent ideas and printing Many hundreds of years later (in the 12th Century) and completely independently, an Italian scholar called Boncompagno da Signa invented another punctuation scheme. In it, he had only two symbols: a forward slash (/) for a short pause, and a horizontal line (–) for a long pause. The horizontal line is probably the source of the dash we use today, but the forward slash ended up as the comma we’re familiar with. The evolution of this slash happened due to technology – printing – and a man called Aldus Manutius, who printed large runs of cheap books. He typeset the forward slash towards the baseline and gave it a slight curve, providing and then popularising the style we recognise today. Punctuation for pauses, or grammar? By the 17th and 18th Century, there was a debate about how punctuation should be used. Should the variety of symbols available be used – as in Ancient Greece – to communicate pauses and indicate how text should be read? Or should it act as signposts to indicate clauses, sentences, paragraphs, and so on – as part of clear grammar? The application of punctuation in grammatical use helps to clarify the meaning of written text – indicating what goes with what, and the meaning of things – a very different purpose than that of helping someone read out loud. Although punctuation seems to be fixed now, with dedicated rules for how to use it (and lots of very angry people on the internet when it’s used “incorrectly”), it’s still evolving. People are inventing new marks to indicate more finessed concepts in written language (the interrobang, the irony mark) and others are taking on new meanings through modern use (the slash, the hashtag). As we find new problems to solve and new technological challenges, punctuation will evolve and change again. Good digital design is evolutionary Punctuation might seem to us to be complex and fully formed, with a specific purpose, but that’s not how it started out. In the same way, when we see digital products it can seem that their designs are born fully-formed and complete. Instead, the best digital design is the result of identifying the problems that need to be solved, and then evolving over time to meet the most important, changing, human needs. 
It also adapts to the changing technical landscape it exists in – taking advantage of new opportunities and emerging standards. So – like punctuation – digital design is never absolutely right or wrong; it shouldn’t be “done” and left preserved in aspic, but be “done for now” – always seeking to change, adapt, and improve. (An index of the punctuation used in this blog post)
<urn:uuid:39856c00-e6eb-4b71-af80-a1724eae1c5a>
CC-MAIN-2017-04
https://blog.avecto.com/2016/07/komma-leons-what-punctuation-tells-us-about-changing-with-the-environment/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00137-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952324
864
3.078125
3
One of the most important books for understanding modern cyber threats is one of the first books on the dynamics of this new domain. It is Clifford Stoll's The Cuckoo's Egg: Tracking a Spy Through the Maze of Computer Espionage. This book documents the steps taken by Stoll to track down what a mystery invader was doing inside the networks of Lawrence Berkeley Lab and is required reading since so many of the methods used by the hacker and the tracker remain important today. It is also important because it shows the value that foreign espionage agents place on using systems to acquire information. From the book description: Cliff Stoll was an astronomer turned systems manager at Lawrence Berkeley Lab when a 75-cent accounting error alerted him to the presence of an unauthorized user on his system. The hacker's code name was "Hunter" — a mystery invader hiding inside a twisting electronic labyrinth, breaking into U.S. computer systems and stealing sensitive military and security information. Stoll began a one-man hunt of his own, spying on the spy — and plunged into an incredible international probe that finally gained the attention of top U.S. counterintelligence agents. The Cuckoo's Egg is his wild and suspenseful true story — a year of deception, broken codes, satellites, missile bases, and the ultimate sting operation — and how one ingenious American trapped a spy ring paid in cash and cocaine, and reporting to the KGB.
<urn:uuid:b80740d6-7b4e-4c79-b86a-2171b77552f1>
CC-MAIN-2017-04
http://www.fedcyber.com/2012/07/24/soviet-cyber-espionage-in-1986-the-cuckoos-egg/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00045-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935876
302
2.609375
3
Born Abu Ali Sina Balkhi, or ibn Sina, in Persia around 980, he is known in Western history by the Latin version of his name, "Avicenna". He was a physician and philosopher and studied and taught mathematics, physics, chemistry, astronomy, logic, geometry, geology, paleontology and more. The author of over 400 works, his major effort was known as the "Book of Healing" (Kitab al-shifa') and covered a wide range of subjects including logic, mathematics, philosophy and religion. This book is credited with dividing the subject of mathematics into four subject areas: geometry, astronomy, arithmetic, and music. Ibn Sina was probably the leading writer in the field of medicine in his time, producing an encyclopedic collection of medical knowledge called "al-Qanun". Many of his works were later translated into Latin and had a wide influence on Renaissance-era thinkers.
<urn:uuid:b937f7a2-219f-4a69-9bfb-71f296927ea2>
CC-MAIN-2017-04
http://www.hackingtheuniverse.com/science/history-of-science-and-technology/era-middle-ages/0980-ibn-sina-bio
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00531-ip-10-171-10-70.ec2.internal.warc.gz
en
0.984968
200
3.140625
3
DDR4 memory is finally making its way to the mass market, three years after it was initially sampled and nine years after work began on it. Given where it will most likely be used, there's a reason they went so slow. DDR4 will take off in servers first, a change from past memory introductions. With DDR, DDR2 and DDR3, they were introduced on desktops, then moved to laptops and servers. Here it's reversed, because DDR4 will most benefit servers. As a result, the server and memory vendors were much slower and more deliberate, taking far longer than usual to qualify the memory because servers cannot tolerate down time. DDR4 has a number of improvements over the current generation of memory, but it comes down to two big changes: lower power requirements and faster bus speeds. The power draw drops by 20%, from 1.5 volts to 1.2 volts. In a server with hundreds of gigabytes of memory, up to 1TB, that will really add up as you add more memory sticks. An even bigger savings comes from the fact that DDR4 will have much higher chip densities. SK Hynix of Korea has already announced a 128GB DDR4 memory module. Anything past 8GB in capacity for DDR3 was prohibitively expensive. Not that DDR4 will be cheap. IHS (formerly iSuppli) estimates DDR4 will be 40 to 50% more expensive than an equal capacity DDR3 module and the two won't become equal in price until 2016. But if you have the money to deploy a server with a terabyte of memory, this probably isn't too much of a problem. The other big change is speed. DDR3 topped out at 2.1Ghz. DDR4 is being introduced at 2.1Ghz and some modules at Computex are above 3Ghz. In theory, DDR4 could hit 4.2Ghz. Some people have told me that above 1.8Ghz, memory performance really levels off, but that's on desktops. In a server environment, where there may be hundreds of virtual machines running at the same time and gigabytes of data I/O into and out of the server, that could make a difference. Time will tell. DDR4 won't go mainstream until at least next year. Intel announced its first DDR4 chipset, the X99, which will go with its Extreme Edition CPU. The Extreme Edition is for crazy overclockers and gamers who want every last volt of speed they can get. The Extreme Edition CPU alone is $999, plus the rest of the parts. IHS said they sell maybe 100,000 units per quarter, but it's a highly lucrative market. So, among the new products at Computex: * ADATA showed solutions that start at 2133MHz and will go all the way to up to 3200MHz. Besides the price difference, there will also be differences in memory latencies between the 2133Mhz chips and 3200Mhz chips. ADATA will offer DDR4 in 8GB and 16GB kits, with two memory sticks per kit. * Corsair showed off Dominator Platinum Kits for enthusiasts, running at 2400MHz speeds, while the Value kit was at 2133MHz, the minimum speed for DDR4 memory. * Crucial announced the Ballistix Elite brand of DDR4, with speeds of 2666MHz and 3000MHz and modules of 4GB or 8GB at first, which could eventually reach 32GB. * G.Skill showed off 4GB modules and 8GB modules with varying speeds, from 2133Mhz to 2666Mhz. Server vendors will be snapping up these modules this year, and by this time next year, we should see a mainstreaming of DDR4 in desktops and laptops. In the mean time, I'm in no hurry. These days, the slowest component of my PC is me.
<urn:uuid:32125c2c-0b81-4576-8eb5-21340a97f43b>
CC-MAIN-2017-04
http://www.itworld.com/article/2695323/hardware/the-ddr4-floodgate-opens-at-computex.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00165-ip-10-171-10-70.ec2.internal.warc.gz
en
0.956021
809
2.53125
3
With a growing number of DDoS attacks being observed across the internet, it is important to understand the risk they pose and the ways to defend against them. A denial of service (DoS) attack is a malicious attempt to render a network, computer system or application unavailable to users. Most attacks of this nature utilise a large number of computers in what is known as a distributed denial of service (DDoS) attack. To deny service, an attack consumes more resources than the target network, computer system or application has available; in doing so, resources cannot be allocated to new connections and any services provided by the target become unavailable. Using a large number of computers simultaneously improves the efficiency and reliability of an attack. Although DDoS attacks are not a direct threat to the security of sensitive information stored within an organisation, they can cripple critical systems whose availability is relied upon to conduct key business initiatives. The threat has become ever more concerning as governments and criminal organisations generate the resources and capabilities necessary to carry out sophisticated, multi-faceted denial of service attacks. This article aims to provide an overview of a number of common denial of service attack vectors. Hopefully you will gain an understanding of the way in which these attacks operate and are evolving, along with the challenges faced in defending targeted organisations. Akamai's recent State of the Internet Report [1] observes a 54% increase in denial of service attacks across their networks between the first and second quarters of 2013. The report highlights that, "There is a very real possibility this trend will continue". Akamai also identify that ports 80 and 443, typically used to host web applications, have become the most popular ports for attackers to target. Arbor Networks observed similar trends in their third quarter review [2]; in particular, they note a "very rapid growth in the average attack size in 2013". This is supported by the data graphed below, showing the average increase in the size of DDoS attacks over the last four years. An interesting aspect of this graph is the rapid growth in attack volume seen this year, highlighting the rate at which malicious actors are increasing their DDoS capabilities. What we are observing is an increase in both the size and complexity of attacks. Both of these traits must be considered if we are to develop effective defences in the modern threat landscape. On the one hand we must be able to mitigate the sheer amount of ingress traffic that will appear under a DDoS attack; on the other hand we must be able to distinguish legitimate influxes of traffic from malicious floods and apply effective filtering mechanisms. DDoS attacks have traditionally focused on the consumption of network bandwidth along with the abuse of layer 4 protocols. UDP, ICMP and SYN floods are examples of DDoS attacks that use transport layer protocols. SYN floods are among the most commonly used traditional attacks and are of particular interest as they have been utilised by activist groups using tools such as Brobot and the Low Orbit Ion Cannon (LOIC). SYN floods exploit the behaviour of computer systems in their attempt to connect to one another using the TCP three-way handshake. The TCP protocol states that if a client wishes to connect to a server, it must first send a packet known as a SYN request.
The server should then respond with a SYN/ACK packet and wait for the client to acknowledge the connection with a final ACK packet. Whilst the server is waiting for the response, the connection remains in a half-open state, typically for a period of 75 seconds. The half-open connection is maintained by the server in a finite memory space; if that space is exhausted, further connection requests are dropped. During a DDoS SYN flood attack, SYN packets are sent from a number of computers distributed across the internet to a single target server, initiating the first stage of a TCP three-way handshake. Often each packet indicates that responses should be sent to a spoofed random IP address. The server will respond by sending a SYN/ACK packet to each IP address it believes is initiating a request; however, the final acknowledgement will never be returned. The target server is left with many connections in a half-open state as it is forced to handle many unresponsive connection requests. By inundating the target with SYN requests, it is very easy to exhaust the memory used to handle the connections, causing all subsequent requests to be dropped. Whilst SYN floods are very powerful and still relevant, their use is becoming less widespread as automated defence systems have been designed and are being implemented by organisations wishing to mitigate DDoS attacks against their networks. Current anti-DDoS solutions are effective at handling transport layer attacks. Akamai, for example, did not analyse SYN floods, UDP floods or other transport layer volumetric attacks, as they were automatically mitigated and absorbed by their systems. A new class of denial of service attacks known as Distributed Reflective Denial of Service (DrDoS) is increasing in popularity as malicious actors find ways to reflect and amplify traffic off misconfigured public servers across the internet. DDoS attacks can be amplified to dramatically increase the amount of traffic they can direct towards a target. Amplification techniques have evolved from using low level protocols such as ICMP to higher level protocols such as DNS. The SMURF attack, for example, utilises ICMP to connect to misconfigured networks and broadcast ICMP echo requests to every computer connected to that network. The source IP address defined in each echo request is spoofed to that of the target server, causing each computer on the vulnerable network to send an ICMP echo response to this address. In allowing broadcast requests to be forwarded onto its network, the edge router in this scenario is amplifying a single request by a factor of the number of computers on its internal network. Attacks have since been developed that operate in a similar fashion, although this family of attacks is again being defended against. Layer 7 protocols are now being used to achieve traffic amplification. The DNS protocol is a perfect example of a layer 7 protocol being used in such a way. DNS requests operate over UDP and so do not require an underlying connection to be maintained. When a DNS resolver receives a DNS request, it is processed and the response is returned to the address given in the request. This address can be spoofed to that of a target server. As DNS requests are generally much smaller than their responses, a small amount of request traffic can generate a very large amount of response traffic. A DDoS attack can utilise this to reflect its traffic off misconfigured DNS resolvers and onto target networks, having the effect of amplification in the process.
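To make the reflection-and-amplification idea concrete, here is a minimal sketch of the arithmetic. The request and response sizes are assumptions chosen for illustration only, though they are in the range commonly cited for large DNS responses, and the helper name is made up for this sketch:
```python
# Rough illustration of reflection/amplification arithmetic.
# The byte counts below are assumptions for illustration, not measurements.

def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Ratio of traffic delivered to the victim vs. traffic sent by the attacker."""
    return response_bytes / request_bytes

# Hypothetical sizes: a ~64-byte spoofed DNS query triggering a ~3,000-byte response.
dns_factor = amplification_factor(64, 3000)

# If the attacker can source 10 Mbit/s of spoofed queries, the reflected flood
# arriving at the target is roughly the source rate times the factor.
attacker_mbps = 10
victim_mbps = attacker_mbps * dns_factor

print(f"Amplification factor: {dns_factor:.1f}x")
print(f"~{victim_mbps:.0f} Mbit/s arriving at the target from ~{attacker_mbps} Mbit/s of queries")
```
The point of the calculation is simply that the victim's inbound traffic scales with the amplification factor, not with the attacker's own bandwidth.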
In the last few months, analysts have begun to see an increase in the number of DDoS amplification attacks that utilise the CHARGEN protocol. CHARGEN is a UDP-based protocol, meaning that, as with DNS-based amplification, source addresses can be easily spoofed so that replies are reflected onto a victim. Interestingly, even though this obscure protocol is rarely used legitimately, there are estimated to be over 100,000 exploitable CHARGEN servers currently on the internet, and recent activity shows an increase in the number of CHARGEN-based DrDoS attacks. CHARGEN listens on port 19 and, upon receiving a request, will simply return a random amount of data between 0 and 512 bytes in length. This functionality can be abused by sending requests with no data at all that tell the CHARGEN server to send its response to a target server. This exemplifies the need for network administrators to ensure unused and outdated services are cleaned from their networks. In the last year alone, amplification attacks have increased by 265%. As you can see, modern denial of service attacks are relying less on exploiting transport layer protocols and more on opportunities at the application layer. A search of the CVE vulnerability database returns over 12,000 publicly disclosed denial of service vulnerabilities using application layer protocols. Prolexic's 2013 third quarter DDoS report [3] highlights a 101% increase in the number of layer 7 exploits used in DDoS attacks compared with the same time last year. Using layer 7 attacks achieves greater obscurity, as UDP and TCP connections are used legitimately. Layer 7 attacks also require fewer connections and are therefore more efficient. With many bespoke applications being deployed within organisations, it is important to identify whether or not they can be exploited to achieve a denial of service. Web servers have been targeted by layer 7 attacks exploiting mechanisms in the handling of HTTP requests. Recent versions of the Apache web server are vulnerable to attacks of this nature, in which a single computer can cause a denial of service. This kind of attack is very direct: it does not consume network bandwidth, and so other services running on the target's network will still be available. The attack uses fragmented requests to keep many connections simultaneously open, eventually holding all available connections to the server. The attacker sends only a partial HTTP request to the server; fragments of the remaining request are then sent incrementally to the server, keeping the connection alive. As long as the full request is never completed, the connection will never be closed and made available to other users. The server only has enough available memory to maintain a finite number of simultaneous connections. This fact is exploited to consume all available connections and deny service to legitimate users. The attack has been understood for many years, but only recently became popular through the distribution of tools such as SlowLoris, which were used during the Iranian revolution to deny service to a number of government websites, whilst keeping traffic to a minimum so as not to disrupt Iranian networks as a whole. Attacks such as these can also be run through anonymising networks, masking the true identity of the traffic's source.
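The defensive principle against slow-request attacks of this kind is to bound how long any client may take to deliver a complete request, so that a handful of dawdling connections cannot hold the server's connection pool open indefinitely. The sketch below is a toy illustration of that idea, not a hardened implementation, and the constants and function name are made up; in practice you would rely on the timeout and connection-limiting facilities of the web server or a fronting proxy:
```python
# Toy illustration of the defensive principle: give each connection an overall
# deadline to deliver its full request header, and drop it if the deadline passes.
import socket
import time

HEADER_DEADLINE_SECONDS = 10   # assumption: generous for legitimate clients
MAX_HEADER_BYTES = 8192        # assumption: cap on header size for the sketch

def read_request_header(conn: socket.socket):
    """Read until the blank line ending an HTTP header, or give up at the deadline."""
    deadline = time.monotonic() + HEADER_DEADLINE_SECONDS
    buf = b""
    while b"\r\n\r\n" not in buf and len(buf) < MAX_HEADER_BYTES:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return None                  # took too long overall: reclaim the slot
        conn.settimeout(remaining)       # never wait past the overall deadline
        try:
            chunk = conn.recv(1024)
        except socket.timeout:
            return None                  # slow client timed out on this read
        if not chunk:
            return None                  # client closed the connection
        buf += chunk
    return buf
```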
When assessing the threat of denial of service attacks to an organisation, it is important to be aware of the latest exploits being used. Amplification techniques are only now beginning to be used to generate record breaking volumetric attacks. These techniques must be understood as they continue to evolve. As the denial of service attack surface continues to expand, we are tasked with constantly adjusting our approaches to mitigation. Organisations who rely on technology to maintain critical aspects of their business now understand that the threat that denial of service attacks pose is ever increasing. As denial of service attacks are often used in conjunction with more targeted attacks, their presence may also serve as an indication that the business as a whole is being targeted. If the availability of services is of paramount importance to the operation of your business, then denial of service remediation should be a key consideration in improving your company’s security posture.
<urn:uuid:37cfefe1-f606-4c9e-9bcf-b5f71673867c>
CC-MAIN-2017-04
https://www.mwrinfosecurity.com/our-thinking/denial-of-service/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00073-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94894
2,162
2.828125
3
At some point in our lives, we've all gone through some online account creation process and created a password. Frequently, we're required to choose a password that includes something like at least one capital letter, one number, et cetera. The stricter the criteria, the more layers of security we think we've added to our passwords. However, that's not actually the case. The math behind password security To understand why, let's do some math. (It will be really simple, I promise.) One method of attempting to defeat a password is to simply try all possible character combinations. This tactic is referred to as a "brute force" attack. It goes without saying that a password with fewer possible combinations is easier to brute force than a password with more possibilities. In cryptography, the set of possible combinations is referred to as the 'keyspace'. Statistically, if an attacker is brute forcing a password, they must try about fifty percent of the keyspace before they have an even chance of gaining access. Now, suppose I'm getting an account for a site that only requires two-character passwords. Since there are a total of 95 printable ASCII characters, that means we have a total of 95 x 95 possible combinations. Out of those 9,025 combinations, suppose an attacker could try five passwords per second. To try half the possible combinations would take about 900 seconds. But what if the site decides to enforce a policy that one character in the password must be a number? Treating one of the two positions as fixed to a digit, this requirement reduces the possible number of passwords to only 950 (95 x 10). The same attacker would be able to try half the possible passwords in only 95 seconds. Of course, no site would allow two-character passwords (I hope). The math, however, is similar with longer passwords. Other password pitfalls That's also not to say that a password should be something as simple as 'walrus' – walrus is a common word found in the dictionary and is subject to a "dictionary attack". A dictionary attack is an attempt to gain access to a system by trying common words and passwords (such as 'letmein', 'password123', et cetera). Beyond these common pitfalls, there are other ways that a password can be less secure – or easy to guess. Things like a spouse's name, your date of birth and your mailing address are examples of what not to use in a password. The building blocks of a good password Longer passwords are better, and contrary to the common misconception, they don't have to be difficult to remember. A simple phrase can be easier to remember, but long enough to be difficult to brute force. Something like 'ILikeDrinkingWhiskey,ButNotMoreThan5Shots.' mixes upper and lower case, special characters (the comma and the full stop), and numbers but, due to one particularly eventful evening, is very easy for me to remember (although that's not my actual password). It's also a good practice to reset your passwords frequently. For example, when getting started with a new server, we recommend immediately resetting your password and following these guidelines: - Use at least 8 characters. - Use a combination of upper-case letters, lower-case letters, and numbers. - Avoid words or names, especially your name or the name of your business. - Avoid a password that shares the same characters as the previous password. For example, changing "Ccodero1" to "codero2" is not a safe practice. The moral of the story is that a password should use as many different characters from as many different character groups as possible.
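The arithmetic above generalises easily to longer passwords. Here is a small sketch, using the article's illustrative rate of five guesses per second (real attackers, particularly ones working offline against stolen hashes, can be many orders of magnitude faster); the helper names are made up for the sketch:
```python
# Keyspace size and expected brute-force time for a password.
# Assumes every position may be any of the 95 printable ASCII characters.

GUESSES_PER_SECOND = 5          # the article's illustrative rate

def keyspace(alphabet_size: int, length: int) -> int:
    return alphabet_size ** length

def seconds_to_even_odds(space: int, rate: float = GUESSES_PER_SECOND) -> float:
    """Time to try half the keyspace, i.e. a roughly even chance of a hit."""
    return (space / 2) / rate

for length in (2, 8, 20, 42):
    space = keyspace(95, length)
    secs = seconds_to_even_odds(space)
    print(f"length {length:2d}: {space:.3e} combinations, ~{secs / 86400 / 365:.3e} years")
```
Even at this unrealistically slow guess rate, the jump from 2 characters to 42 (the length of the example phrase above) is the difference between minutes and a span of years far beyond the age of the universe, which is the article's point: length dominates.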
No password will make any system perfectly secure, but by using the tips above, you can make it as hard on the attackers as you can. After all, passwords are one of your lines of defense keeping your dedicated server environment secure. Of course, sometimes even the best laid plans go awry, and the most thoughtful passwords get lost in the rush of day to day life. If you ever lose your Codero Cloud password, you can recover it by following these steps, or chatting with one of our experts.
<urn:uuid:574c1549-0009-47bb-9a84-19006d485bdd>
CC-MAIN-2017-04
http://www.codero.com/blog/password-security-best-practices-and-tips/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00193-ip-10-171-10-70.ec2.internal.warc.gz
en
0.930187
912
3.734375
4
Adaptive Learning Platform Dynamics in the learning and assessment space are changing with the proliferation of online media and devices like smartphones and tablets. Apart from the extensive use of online media for imparting education, a shift to a student-oriented way of teaching is also being witnessed. HCL's adaptive learning platform is a solution that uses digital media to deliver interactive teaching. These adaptive learning systems connect students, teachers and parents, and can be accessed on the move. Adaptive learning management systems adapt themselves based on the student's responses and progress. They assist teachers and parents in monitoring the student and ensuring improvement and completion of the curriculum. The features of this adaptive learning software that assist in the learning process include Flash, audio and video illustrations, online submission of assignments, collaborative learning with other students, announcements/alerts and a performance dashboard. HCL's adaptive learning management system provides teachers and parents the ability to get a real-time view of the progress made by students, along with an assessment report and system-generated recommendations. The system was developed by HCL's skilled team of professionals with an understanding of both the teachers' and the students' perspectives on what makes a beneficial adaptive learning platform. Benefits of Adaptive Learning Platform - Interactive and fun way of learning - Adapts itself to the user's needs - Connects students, teachers and parents on a common platform - Assesses and monitors progress and aids the student's improvement
<urn:uuid:edeb36b7-3732-4ba0-bb5c-8852a272923b>
CC-MAIN-2017-04
https://www.hcltech.com/media-entertainment/adaptive-learning-platform
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00403-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942976
297
2.734375
3
NASA is exploring its communications options with Mars. The space agency this week issued a Request For Information that looks to explore options where it would buy commercial communications services to support users at Mars, including landers and rovers and, potentially, aerobots and orbiters. From NASA: "In this model, the commercial provider would own and operate relay orbiter(s), and NASA would contract to purchase services over some period of time. In exploring this model, NASA encourages innovative ideas for cost-effective approaches that provide backward-compatible UHF relay services for existing landers, as well as significantly improved performance for future exploration activities. One example is deploying optical communications for Mars proximity operations and/or deep-space communications." NASA noted that it recently demonstrated optical communications from the Lunar Atmosphere and Dust Environment Explorer (LADEE) spacecraft at the Moon to Earth, with download rates of 622Mb/sec. It also demonstrated an error-free data upload rate of 20Mb/sec transmitted from the primary ground station in New Mexico to the spacecraft orbiting the Moon. NASA said it also "welcomes interactions with relay infrastructure that might be deployed in support of other Mars commercial objectives." NASA said its current Mars relay infrastructure is aging, and there is a potential communications gap in the 2020s. That's why NASA is interested in exploring alternative models to sustain and evolve the Mars relay infrastructure. The current strategy has been cost effective to date, because NASA has launched science orbiters to Mars on a steady cadence; the cost of the relay infrastructure has effectively been limited to the incremental cost of adding a relay payload to them. Such efficiency is expected to continue with the arrival of the Mars Atmosphere and Volatile EvolutioN (MAVEN) spacecraft at the red planet on September 21, 2014, and the European Space Agency's ExoMars/Trace Gas Orbiter in 2016. Each orbiter carries a NASA-provided Electra relay payload. According to NASA, Mars landers and rovers are highly constrained in mass, volume, and power. One consequence of these constraints is a substantial restriction in the data rates and volumes that can be communicated on the direct link between the Mars surface spacecraft and Earth. For instance, at large Earth-Mars distances, the Curiosity rover's X-band direct-to-Earth (DTE) link operates at data rates of less than 500b/sec when communicating to a Deep Space Network 34m antenna. Such data rates are not sufficient to support typical surface exploration needs, NASA says. To address this limitation in DTE bandwidth, the Mars Exploration Program (MEP) has employed a strategy of including a proximity-link telecommunication relay payload on each of its Mars science orbiters, NASA stated. Currently, operating in the UHF band (390-450 MHz), these relay payloads establish links with landers and rovers on the surface as they fly overhead, supporting very high-rate, energy-efficient links between orbiter and lander. The orbiters, with much larger high-gain antennas and higher power transmitters, can then take on the job of communicating on the long-haul link back to Earth, the space agency said. For now, NASA has two relay orbiters in operation at Mars: the Odyssey spacecraft, launched in 2001, and the Mars Reconnaissance Orbiter (MRO) spacecraft, launched in 2005.
These orbiters enable communication links from the Curiosity rover operating at rates of up to 2 Mb/sec. Similar relay support has been provided to the prior Spirit rover and Phoenix lander missions and continues to be provided to the Opportunity rover.
<urn:uuid:cc5646e0-0dc8-4cfc-9dca-f8ef6baf50cd>
CC-MAIN-2017-04
http://www.networkworld.com/article/2457675/security0/nasa-looking-for-out-of-this-world-mars-communications-services.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00219-ip-10-171-10-70.ec2.internal.warc.gz
en
0.929057
777
2.9375
3
Japan's Internet infrastructure has remained surprisingly unaffected by last week's devastating earthquake and tsunami, according to an analysis by Internet monitoring firm Renesys. Most Web sites are operational and the Internet remains available to support critical communication functions, Renesys CTO James Cowie wrote in a blog over the weekend. In the immediate aftermath of the earthquake off the Japanese coast, about 100 of Japan's 6,000 network prefixes -- or segments -- were withdrawn from service. But they started reappearing on global routing tables just a few hours later. Similarly, traffic to and from Japan dropped by about 25 gigabits per second right after the Friday quake, but returned to normal levels a few hours later. And traffic at Japan's JPNAP Layer 2 Internet exchange service appears to have slowed by just 10% since Friday, according to Renesys. "Why have we not seen more impact on international Internet traffic from this incredibly devastating quake? We don't know yet," Cowie wrote. An unknown number of people were killed and whole cities devastated by what was one of the worst earthquakes in over 100 years. The quake, which initially measured 8.9 on the Richter scale, generated a huge tsunami that inundated parts of Japan and put almost the entire Pacific coastline on a tsunami alert. The effects of the quake, in terms of human loss and economic damage, are expected to be huge. The quake also disrupted electricity supplies and knocked two nuclear power plants out of commission. One reason Internet connectivity appeared to fare better could be that undersea cables remained relatively untouched by the quake, unlike in 2006 when an earthquake in Taiwan resulted in a large number of major cable breaks, Cowie said. This time, the only noticeable breaks were in two segments of Pacnet's EAC submarine cable system, Cowie said. The system, which is owned by a consortium of six companies, is designed to provide up to 1.92 terabits per second of capacity across the Pacific. The breaks led to outages in several networks in Japan, the Philippines and Hong Kong. Sections of the Pacific Crossing undersea cabling system connecting the U.S. to Asia also appear to have been damaged. A note posted on Pacific Crossing's Web site this morning noted that two of the cables are currently out of service. Pacific Crossing's cable landing station in Ajigaura, Japan, was evacuated as a result of the tsunami. No information is available about when restoration efforts will resume, Pacific Crossing said. Renesys noted that "lingering" problems with landing station equipment could generate new problems over the next few weeks. Even so, "it's clear that Internet connectivity has survived this event better than anyone would have expected," Cowie said. He noted that Japan's attempts to build a "dense web" of domestic and international Internet connectivity appear to have paid off. "At this point, it looks like their work may have allowed the Internet to do what it does best: route around catastrophic damage and keep the packets flowing, despite terrible chaos and uncertainty." Jaikumar Vijayan covers data security and privacy issues, financial services security and e-voting for Computerworld. Follow Jaikumar on Twitter at @jaivijayan or subscribe to Jaikumar's RSS feed. His e-mail address is email@example.com. This story, "Japan's Internet Largely Intact After Earthquake, Tsunami," was originally published by Computerworld.
<urn:uuid:1e262b2c-5abc-4003-8651-80a77e38b7f6>
CC-MAIN-2017-04
http://www.cio.com/article/2410273/internet/japan-s-internet-largely-intact-after-earthquake--tsunami.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00552-ip-10-171-10-70.ec2.internal.warc.gz
en
0.963812
726
2.53125
3
In our previous discussion, we saw that dividing an OSPF autonomous system into multiple areas can improve scalability. An example of a multi-area OSPF topology is shown in Figure 1: Let's imagine that we've allocated the IP address space as follows: - Area 0: 10.0.0.0/24 through 10.0.255.0/24 (256 subnets) - Area 1: 10.1.0.0/24 through 10.1.255.0/24 (256 subnets) - Area 2: 10.2.0.0/24 through 10.2.255.0/24 (256 subnets) Our OSPF autonomous system contains a total of 768 subnets, and unless we prevent it, each subnet will be known by every router. With a little configuration, however, we can do much better. As you can see, the subnets within the areas fall into three blocks: - Area 0: 10.0.0.0/16 - Area 1: 10.1.0.0/16 - Area 2: 10.2.0.0/16 What we can do is configure the ABRs (R2 and R4) so that instead of advertising each individual prefix from one area to another, they will instead advertise summary blocks. For example, we will configure R2 to advertise the 10.1.0.0/16 block from Area 1 into Area 0, and likewise the 10.0.0.0/16 block from Area 0 to Area 1. Similarly, we'll have R4 advertise the 10.2.0.0/16 block from Area 2 into Area 0, and the 10.0.0.0/16 block from Area 0 into Area 2. What will be the result? Each internal router (R1, R3 and R5) has the LSDB for its particular area, and running SPF will place that area's prefixes in its routing table. Thus, for the prefixes within their respective areas, the internal routers see things as follows: - R1: 10.1.0.0/24 through 10.1.255.0/24 (the 256 subnets in Area 1) - R3: 10.0.0.0/24 through 10.0.255.0/24 (the 256 subnets in Area 0) - R5: 10.2.0.0/24 through 10.2.255.0/24 (the 256 subnets in Area 2) Being an ABR, R2 will have the LSDB for both Area 1 and Area 0, and therefore will see 512 prefixes total for those two areas. Likewise, R4, the ABR connecting Area 2 to Area 0, will also see 512 prefixes for those areas. In addition, since the ABRs are advertising the blocks of subnets from one area to another, each router will see one prefix for each area to which it is not directly connected. Therefore, the total numbers of prefixes known to the routers will be: - R1: 258 prefixes (256 for Area 1, summaries for Area 0 and Area 2) - R2: 513 prefixes (256 for Area 1, 256 for Area 0, summary for Area 2) - R3: 258 prefixes (256 for Area 0, summaries for Area 1 and Area 2) - R4: 513 prefixes (256 for Area 2, 256 for Area 0, summary for Area 1) - R5: 258 prefixes (256 for Area 2, summaries for Area 0 and Area 1) In the case of the internal routers (R1, R3 and R5), the routing tables have gone from 768 to 258 entries, a reduction of nearly two-thirds. In the case of the ABRs (R2 and R4), the number of routing table entries has gone from 768 to 513, a reduction of nearly one-third. (This arithmetic is sketched in a short example at the end of this article.) As you can imagine, as the total number of subnets goes up, the savings that can be realized by using multiple areas becomes even greater. Since OSPF area numbers are thirty-two bit variables, they can be represented in dotted-decimal format, which can sometimes be convenient. For example, instead of numbering our areas 0, 1 and 2, we could make them Area 0.0.0.0, Area 10.1.0.0, and Area 10.2.0.0, deriving the area numbers from the IP address ranges within them. What about Area 0, which contains the 10.0.0.0/16 block? Can we make that Area 10.0.0.0, then? No, because there are some rules we must follow when it comes to multiple areas. - First, if we have more than one area, one of them must be Area 0 (or Area 0.0.0.0, if you prefer).
- Also, unless special arrangements are made, the topology of an OSPF autonomous system must be a simple hub-and-spoke, with Area 0 (the backbone) in the middle. - One last thing … we can’t run a link directly from Area 1 to Area 2, bypassing Area 0, because OSPF won’t like that at all! There are additional rules and options, some of which we’ll discuss in the next installment. Author: Al Friebe
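To make the route-table arithmetic in this piece concrete, here is a small sketch. The topology and counts are the ones assumed in the example above (three areas of 256 /24 subnets each, with R2 and R4 acting as the ABRs); nothing here queries a real router, and the data-structure names are made up for the sketch:
```python
# Prefix counts seen by each router, with and without inter-area summarisation.
# Mirrors the three-area example above: 256 subnets per area.

SUBNETS_PER_AREA = 256
AREAS = {"Area 0": SUBNETS_PER_AREA, "Area 1": SUBNETS_PER_AREA, "Area 2": SUBNETS_PER_AREA}

# Which areas each router participates in (R2 and R4 are the ABRs).
MEMBERSHIP = {
    "R1": ["Area 1"],
    "R2": ["Area 1", "Area 0"],
    "R3": ["Area 0"],
    "R4": ["Area 0", "Area 2"],
    "R5": ["Area 2"],
}

total = sum(AREAS.values())                      # 768 prefixes with no summarisation

for router, areas in MEMBERSHIP.items():
    local = sum(AREAS[a] for a in areas)         # full detail for attached areas
    remote_summaries = len(AREAS) - len(areas)   # one summary route per non-attached area
    print(f"{router}: {total} -> {local + remote_summaries} prefixes")
```
Running it reproduces the figures above: 258 prefixes on the internal routers and 513 on the ABRs, versus 768 everywhere without summarisation.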
<urn:uuid:5bf5e2ad-a7c4-4b08-9cd8-30729dc0724e>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2009/11/30/ospf-part-5/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00486-ip-10-171-10-70.ec2.internal.warc.gz
en
0.903156
1,165
2.515625
3
Social media is changing the way government communicates with its citizens. A 2011 study by the Fels Institute of Government at the University of Pennsylvania found that 90 percent of cities and counties had established a presence on social networking channels such as Twitter and Facebook. In the intervening years, there has been a steady stream of reports about others working to close the gap. Cities and counties have used social media to broaden transparency, increase citizen engagement and feedback, and improve public perception of what government does. As important as transparency and perception are, these somewhat intangible benefits are sometimes not enough to justify the investment to pursue anything more than a placeholder in social media. The ability to save lives changes everything, including thinking about the return on investment for social media in government. There have been three headline-grabbing examples in the last year. During Hurricane Sandy, a determined social media manager effectively transformed the New York Fire Department's Twitter account -- @FDNY -- from a source of safety tips to a hub for coordinating emergency response when phone lines were down and people couldn't get through to 911 or 311. It also helped squash rumors circulating after the storm. Twitter also quickly became the most effective channel for the Boston Police Department to get the word out when terror struck the Boston Marathon last April. Fueled by national media coverage, the follower count for the @bostonpolice Twitter account surged from an already impressive 50,000 to an extraordinary 300,000, becoming a beacon of clarity, 140 characters at a time, in a noisy and error-filled cacophony. As with Sandy, @bostonpolice was used to dispel false rumors in the hours after the bombing. It was also used to help ensure officer safety as out-of-town news crews descended on Boston, and to announce important developments. In fact, the first official announcement that suspect Dzhokhar Tsarnaev had been captured was delivered as -- you guessed it -- a tweet. Every town experiences its share of emergency situations, whether it's a natural disaster or a crime that shakes the community. It is clear from the recent New York and Boston examples that in these types of situations, social media provides government with an incredibly powerful channel to squash rumors, disseminate official information and align community interests. A social networking presence, however, is only as effective as its reach. A city or police Twitter account created as a placeholder is just that: a placeholder. It is not at all equipped to create the type of immediate response we saw in Boston. It is also unlikely that a dormant presence can be kickstarted quickly enough at the time of need. Most situations simply do not receive the type of attention and media coverage that gave the @bostonpolice Twitter account such an enormous boost. The reality is, when the situation arises, we must rely on the network and following we've already established across our social networking channels. That's the urgency in rethinking government's approach to social media. There is a hard ROI to be realized when communities are in crisis. The same is true for social media's role in the early identification of public health concerns or even the mundane but important work of keeping traffic flowing after accidents and road closures.
The time and effort put into promoting your social networking presence -- and building a truly engaged audience -- is not just for fun and games. Nor does it always have to drive business metrics such as page views or building awareness, or even participation in government programs. Rather, public officials must come to think of building out a social network as a down payment on establishing the most rapid, viral and open communications link you can possibly create with your citizens. The type of communications link that can save lives -- and it has. Your network keeps you safe and makes you strong ... if you have one. To be clear, the investment required to build a social media following is non-trivial. Moreover, there needs to be oversight to ensure consistent and appropriate messaging, especially as staff members outside of your traditional communications team speak on behalf of their agencies. And, of course, attention must be paid to legal policies and requirements such as employee conduct rules and record retention. Creating an active social media presence is a significant undertaking for government but, given the changing nature of emergency management and crime response, the investment in social media can serve the larger public policy priorities of public safety and emergency preparedness. It's not simply about likes and followers. It's about being able to communicate with your citizens in the most effective way possible in a time of need. It's about the potential to save lives. Anil Chawla is an experienced technologist and entrepreneur, with a proven track record of working with businesses to address challenges related to social media. He has over a decade of experience creating software products, and has focused the last four years on developing social media technology. Mr. Chawla is the CEO of ArchiveSocial, which he founded to help government organizations navigate the important legal and regulatory challenges they face related to social media management.
<urn:uuid:78137050-b20d-4714-9756-416f64fe710f>
CC-MAIN-2017-04
http://www.govtech.com/local/Industry-Perspective-Social-Media-is-Serious-Business-for-Government.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00120-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953499
1,044
2.890625
3
Researchers examining how tornadoes form have turned into "twister chasers" rushing from site to site to get their equipment on the ground before the tornado hits. Their efforts are aimed at more accurately predicting when touchdown will occur to improve evacuation and warning, but this is, as one can imagine, quite a dangerous and troublesome process. Simply getting to the storm cell's site at the right time is a challenge for the VORTEX2 group, so other computation-focused researchers are looking to distributed computing to take on the challenge and reduce the complexity of fieldwork. The Center for Analysis and Prediction of Storms undertook a hybrid computing project called the "Linked Environments for Atmospheric Discovery II" (LEAD II) with the help of the Big Red cluster and Microsoft's Azure to better predict tornado conditions and to aid in the advancement of VORTEX2 goals. Although LEAD II is only a fraction of the larger VORTEX2 project, part of what makes it remarkable is that it signals the first use of a hybrid workflow model for the participants: the front-end workflow system was built with Microsoft's Trident Scientific Workflow Workbench, which then doled out pieces of the workflow to back-end Unix and Linux-based resources, including Indiana University's Big Red. As LEAD II researcher Beth Plale explained, "A lot of scientists use Windows tools such as Excel ... We think that utilizing a Windows workflow system on a Windows box is a step towards providing broader flexibility, because of this affinity of a lot of scientists to use Excel and because of the emergence of the cloud-based Azure platform." Overall, the team decided that the use of the hybrid workflow model for the project was a success due to the added flexibility it offered, not to mention a great exercise in experimenting with different models for use in future projects.
<urn:uuid:11125565-eaf7-4860-8663-7f91933d578e>
CC-MAIN-2017-04
https://www.hpcwire.com/2010/09/02/hybrid_workflow_model_at_heart_of_tornado_research/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00028-ip-10-171-10-70.ec2.internal.warc.gz
en
0.937941
388
2.65625
3
Cloud Governance: Data Residency and Sovereignty Many organizations face a significant set of data residency (also referred to as data sovereignty) challenges when they are contemplating a move to the cloud. Cloud data residency is defined as maintaining control over the location where regulated data and documents physically reside. Privacy and data residency requirements vary by country, and users of cloud services need to consider the rules that cover each of the jurisdictions they operate in as well as the rules that govern the treatment of data at the locations where the cloud service provider(s) provision their services (e.g., their data centers). Depending on the specific countries in which they operate, companies may need to keep certain types of information within a defined geographic jurisdiction. Countries that have various degrees of data residency or data sovereignty requirements include Canada, Germany, Switzerland, China and Australia, to name a few. In cloud environments, where datacenters are located in various parts of the world, cloud data tokenization can be used to keep sensitive data local (resident) while tokens (replacement data) are stored and processed in the cloud. Tokenization is a process by which a sensitive data field, such as a Primary Account Number (PAN) from a credit or debit card, is replaced with a surrogate value called a token. De-tokenization is the reverse process of redeeming a token for its associated original value. While various approaches to creating tokens exist, frequently they are simply randomly generated values that have no mathematical relation to the original data field. This underlies the security of the approach – it is nearly impossible to determine the original value of a sensitive data field by knowing only the surrogate token value. How is Tokenization Different From Encryption? Encryption is an obfuscation approach that uses a cipher algorithm to mathematically transform sensitive data's original value to a surrogate value. The surrogate can be transformed back to the original value via the use of a "key", which can be thought of as the means to undo the mathematical lock. So while encryption clearly can be used to obfuscate a value, a mathematical link back to its true form still exists. Tokenization is unique in that it completely removes the original data from the systems in which the tokens reside. As such, the advantages of tokenization are: - Tokens cannot be reversed back to their original values without access to the original "look-up" table that matches them up to their original values. These tables are typically kept in a "hardened" database in a secure location inside a company's firewall. - Tokens can be made to maintain the same structure and data type as their original values. While format-preserving encryption can retain the structure and data type, it's still reversible back to the original if you have the key and algorithm. Because tokens cannot be reversed back to their original values, tokenization is frequently the de facto approach to addressing market requirements related to data residency. Blue Coat Cloud Data Protection Gateway The Blue Coat Cloud Data Protection Gateway offers a flexible data tokenization platform that provides: - The ability to preserve SaaS functionality across a wide array of applications while maintaining the highest level of tokenization protection. - High availability and enterprise-level performance, with the ability to scale the solution across multiple dimensions.
- Open integration and configuration options that simplify deployment and facilitate expanded use of the platform. - A hybrid architecture that gives customers the flexibility to consider multiple deployment options, including hosted models to eliminate the need for any upfront capital expenditures.
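To make the tokenize/de-tokenize flow described above concrete, here is a minimal Python sketch. It is illustrative only: the in-memory vault, token format and function names are assumptions, not Blue Coat's implementation, and a real deployment would keep the mapping in a hardened, replicated token database as the text notes.

```python
import secrets

# Illustrative in-memory "vault"; a production system keeps this mapping in a
# hardened database inside the organization's own jurisdiction.
_vault = {}    # token -> original value
_reverse = {}  # original value -> token, so repeated values reuse one token

def tokenize(value: str) -> str:
    """Replace a sensitive value with a random surrogate of the same length."""
    if value in _reverse:
        return _reverse[value]
    while True:
        token = "".join(secrets.choice("0123456789") for _ in range(len(value)))
        if token not in _vault and token != value:
            break
    _vault[token] = value
    _reverse[value] = token
    return token

def detokenize(token: str) -> str:
    """Redeem a token for the original value (authorized callers only)."""
    return _vault[token]

t = tokenize("4362489023008650")
assert detokenize(t) == "4362489023008650"
```

Because the token is random, nothing stored or processed in the cloud can be reversed without access to the vault, which is exactly the property the bullet points above rely on.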
<urn:uuid:62cb7475-0618-4e6a-8e15-e87870922454>
CC-MAIN-2017-04
https://www.bluecoat.com/resources/cloud-governance-data-residency-sovereignty
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00240-ip-10-171-10-70.ec2.internal.warc.gz
en
0.908088
721
2.625
3
Huang C.B.,Xinjiang Institute of Ecology and Geography | Huang C.B.,Cele National Field Science Observation and Research Station for Desert | Zeng F.J.,Xinjiang Institute of Ecology and Geography | Zeng F.J.,Cele National Field Science Observation and Research Station for Desert | And 6 more authors. Shengtai Xuebao/Acta Ecologica Sinica | Year: 2014 Water and nitrogen (N) are two primary factors controlling plant growth in desert ecosystems. Most studies have focused on water stress resulting from the low rainfall and high evaporation rates in arid areas. However, irrigation has become the main strategy for vegetation recovery in the southern rim of the Tarim Basin. Many studies have shown that irrigation is most effective when nutrients are not limited, and fertilization is most effective when plants are not water-stressed. In addition, N not only affects drought tolerance through changing dry matter partitioning, but also plays an important role in ecosystem functioning and vegetation succession. Therefore, the combined effects of water and N on vegetation recovery and reconstruction in this area should be evaluated. We conducted a pot experiment to study characteristics of N allocation, use, and resorption, and growth of Calligonum caput-medusae Schrenk seedlings under different irrigation treatments (4.6, 6.1, 7.7, 9.2, 13.0 kg/plant per irrigation event). The results showed that the amounts of both N and dry matter per whole plant significantly increased with increasing amounts of irrigation. However, C. caput-medusae Schrenk seedlings were infected with powdery mildew at the high irrigation level (13.0 kg/plant). During the early growth stage, irrigation promoted dry matter accumulation in, and N allocation to, assimilating branches. On average, these assimilating branches accounted for 39.5% of whole-plant dry matter accumulation and 66.1% of whole-plant N allocation. During the late growth stage, stems and older branches became the main organs for dry matter and N accumulation, on average accounting for 54.7% and 47.8% of whole-plant dry matter accumulation and N accumulation, respectively. The dry matter accumulation in, and N allocation to, stems and older branches was positively affected by irrigation at the end of the growing season. The plants allocated more dry matter and N into assimilating branches in their early growth stage to obtain more photosynthates, and to stems and older branches at the late growth stage to accumulate more energy for plant growth the following year. The root/shoot ratio of C. caput-medusae Schrenk seedlings was markedly higher under dry conditions than under irrigated conditions. The mean value of N resorption efficiency (NRE) at the early and late growth stages was 64.4% and 58.1%, respectively. The NRE was positively affected by irrigation at the early growth stage, but negatively affected by irrigation at the late growth stage (at the end of the growing season). There was a clear seasonal variation in N use efficiency (NUE), with a low mean value (120.5 g/g) at the early growth stage and a high mean value (235.8 g/g) at the late growth stage. Irrigation significantly enhanced the NUE of assimilating branches, stems and older branches, and roots at the late growth stage. Although the whole-plant NUE increased significantly under higher irrigation levels, excessive irrigation did not increase the NUE. These findings suggested that both N and biomass could be distributed to the most appropriate organ of C.
caput-medusae Schrenk at different growth stages to adapt to the arid and barren natural environment. Since plant growth and N use could be limited by over-irrigation and water stress, medium irrigation levels (7.7-9.2 kg/plant) were appropriate for establishing C. caput-medusae Schrenk seedlings in this area. Source
<urn:uuid:b2ff1ef4-8697-4173-b267-c821a5937ff6>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/cele-national-field-science-observation-and-research-station-for-desert-1156116/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00240-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946145
834
2.6875
3
Diagnostic Tools is a collection of generic utilities for day-to-day management of the system and network. The tools can be used to troubleshoot and debug connectivity issues, packet loss and latency in a LAN environment. The following are the tools available in this group: Ping : Utility to determine whether a specific IP is accessible in the network. It helps in discovering the status of a network device, i.e., whether the device is alive or not. Before you ping a device you can configure the ping settings such as the number of packets, time to live, size, and timeout. Scan : Utility to scan a range of IPs to check if the given range of IP addresses is accessible. The tool displays the IP address, the response time, and the DNS name of the discovered device. This tool uses the basic PING function as a base to perform the scan. SNMP Ping : Utility to check if a specific IP is SNMP-enabled. It helps network engineers know the availability of a device and also provides basic information like DNS name, system name, location, system type, and system description. Following the SNMP discovery, if required, more details of the node can be retrieved using SNMP Tools like SNMP Walker, MIB Browser and SNMP Graph. SNMP Scan : Utility to scan a range of IP addresses to check if the IP addresses are SNMP-enabled or not. The tool displays the IP address, response time, DNS name, system name, and system type. Proxy Ping : Utility to remotely initiate a PING test from a router to another IP which is remotely located. The router acts as the proxy for the target device and responds to the ping request. Trace Route : Utility to record the route (calculated in terms of hops, i.e., the number of routers it crosses) through the network between the sender's IP and a specified destination IP. The user can configure settings such as the number of hops and the timeout value.
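As a rough illustration of what the Ping and Scan utilities described above do under the hood, the following Python sketch pings a range of addresses and reports which ones respond. It is a generic example rather than OpUtils code; the subnet, the one-second timeout and the Linux-style flags of the system ping command are assumptions.

```python
import ipaddress
import subprocess

def is_alive(ip: str, timeout_s: int = 1) -> bool:
    """Send a single ICMP echo request using the system ping command."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), ip],  # Linux-style flags
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def scan(network: str) -> list[str]:
    """Ping every host in the range and return the ones that answered."""
    hosts = ipaddress.ip_network(network).hosts()
    return [str(ip) for ip in hosts if is_alive(str(ip))]

if __name__ == "__main__":
    for host in scan("192.168.1.0/28"):
        print(host, "is alive")
```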
<urn:uuid:608cfa7b-f6af-4219-8691-e00f928e61e8>
CC-MAIN-2017-04
https://www.manageengine.com/products/oputils/help/diagnostics/diagnostics_tools.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00056-ip-10-171-10-70.ec2.internal.warc.gz
en
0.872707
413
2.640625
3
Google Launches Quantum Artificial Intelligence Lab Quantum computing promises to dramatically accelerate important computational tasks in ways not seen before in traditional computers or supercomputers. Traditional computers use binary arithmetic for their logic, while quantum computers replace binary arithmetic with the laws of quantum physics. Quantum laws govern all physical systems at the atomic scale. Because a quantum computer can require fewer steps to reach a result, it can deliver much faster performance than a conventional computer on certain tasks. A Google spokesperson declined to comment further about the lab when contacted by eWEEK. Google has actually been involved in other quantum computing initiatives in the past, including previous collaborations with D-Wave and NASA. The search giant is involved in many research fields in computing. In February, it announced its first-ever Google App Engine Research Awards to seven projects that will use the App Engine platform's abilities to work with large data sets for academic and scientific research. The new program, which was announced in the spring of 2012, brought in many proposals for a wide variety of scientific research, including in subject areas such as mathematics, computer vision, bioinformatics, climate and computer science. Google created the fledgling App Engine Research Awards program to bolster its support of academic research, while providing academic researchers with access to Google's infrastructure so they can explore innovative ideas in their fields, according to Google. The App Engine platform is particularly suited to managing heavy data loads and running large-scale applications. Google is active in providing resources for research and educational projects in many areas. Also in February, the company announced its ninth annual Google Summer of Code contest, which invites college students to learn about the world of open-source code development. The program has involved some 6,000 college and university students from more than 100 countries since its start in 2005.
<urn:uuid:ecbcc94b-1b79-481d-a0a8-5d44b352ec1c>
CC-MAIN-2017-04
http://www.eweek.com/cloud/google-launches-quantum-artificial-intelligence-lab-2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00450-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949153
361
3.125
3
Did you know that every time you fill in your username and password on a website and press ENTER, you are sending your password? Well, of course you know that. How else are you going to authenticate yourself to the website? But (yes, there's a small BUT here), when a website allows you to authenticate using HTTP (plain text), it is very simple to capture that traffic and later analyze it from any machine over the LAN (and even the Internet). That brings us to this website password hacking guide, which works on any site that is using the HTTP protocol for authentication. Well, to do it over the Internet, you need to be able to sit on a gateway or central hub (BGP routers would do -- if you can get access and the traffic is routed via them). But doing it from a LAN is easy, and at the same time it makes you wonder how insecure HTTP really is. You could be doing this to your roommate, your work network or even a school, college or university network, assuming the network allows broadcast traffic and your LAN card can be set to promiscuous mode. So let's try this on a simple website. I will hide part of the website name (they are nice people and I respect their privacy). For the sake of this guide, I will just show everything done on a single machine. As for you, try it between two VirtualBox/VMWare/physical machines. p.s. Note that some routers don't broadcast traffic, so it might fail for those particular ones. Step 1: Start Wireshark and capture traffic In Kali Linux you can start Wireshark by going to Application > Kali Linux > Top 10 Security Tools > Wireshark In Wireshark go to Capture > Interface and tick the interface that applies to you. In my case, I am using a Wireless USB card, so I've selected wlan0. Ideally you could just press the Start button here and Wireshark will start capturing traffic. In case you missed this, you can always capture traffic by going back to Capture > Interface > Start Step 2: Filter captured traffic for POST data At this point Wireshark is listening to all network traffic and capturing it. I opened a browser and signed in to a website using my username and password. When the authentication process was complete and I was logged in, I went back and stopped the capture in Wireshark. Usually you see a lot of data in Wireshark. However, we are only interested in POST data. Why POST only? Because when you type in your username and password and press the Login button, it generates a POST request (in short -- you're sending data to the remote server). To filter all traffic and locate POST data, type the following in the filter section: http.request.method == "POST" See screenshot below. It is showing 1 POST event.
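The same POST-spotting idea can be scripted outside the Wireshark GUI. Below is a rough Python sketch using Scapy that sniffs HTTP traffic and prints any packet whose payload starts with POST. The interface name and the assumption that the site uses plain HTTP on port 80 are illustrative, capturing packets normally requires root privileges, and you should only do this on networks and accounts you are authorized to test.

```python
from scapy.all import sniff, TCP, Raw  # requires scapy, run as root

def show_post(pkt):
    # Only look at TCP segments carrying data toward port 80 (plain HTTP).
    if pkt.haslayer(TCP) and pkt.haslayer(Raw) and pkt[TCP].dport == 80:
        payload = bytes(pkt[Raw].load)
        if payload.startswith(b"POST"):
            # Credentials typically appear as form fields in the request body.
            print(payload.decode("utf-8", errors="replace"))

# "wlan0" is just an example interface name; adjust it for your machine.
sniff(iface="wlan0", filter="tcp port 80", prn=show_post, store=False)
```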
<urn:uuid:73726c64-a9f8-43a2-bd05-0bab6c5e73f0>
CC-MAIN-2017-04
https://www.blackmoreops.com/2015/04/11/website-password-hacking-using-wireshark/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00450-ip-10-171-10-70.ec2.internal.warc.gz
en
0.922641
619
2.5625
3
NASA's Mission to Mars NASA's Curiosity rover touched down on Mars in early August. The rover's mission -- a joint operation between NASA and its commercial space partner, the United Launch Alliance -- is to answer the scientific questions NASA has been exploring for years, focusing on the Mars environment's makeup. August 5, 2014 New features and images added. June 12, 2014 Before humans can land on Mars, scientists have to wrestle with atmospheric conditions back home. June 4, 2014 But we'll have to increase NASA's budget and cooperate with China, report concludes. May 15, 2014 Elon Musk wants a ticket to Mars to cost $500,000. For those left behind, he'll have a cheap electric car for you. April 18, 2014 From space, the U.S. Curiosity rover looks scarab-like. April 8, 2014 NASA's Curiosity transmits Martian landscape image with a curious light in the distance. March 7, 2014 Today, Florida. Tomorrow, Mars. January 24, 2014 Astronomers are trying to figure out how a rock shaped like a jelly doughnut has suddenly appeared out of nowhere on the surface of Mars. August 23, 2013 If engineers can build bots that can land themselves on Mars, surely they can produce less sophisticated systems for our vehicles. August 6, 2013 From the earliest years of the space program, the exploration of other worlds has been a source of the same techno-anxieties we have today. July 25, 2013 The Mars Reconnaissance Orbiter spots Curiosity down on the surface June 28, 2013 Hint: it can involve ... theme weeks. May 17, 2013 Another victory for Opportunity, the spunky little rover driving on Mars April 4, 2013 It sits on the Red Planet, flapping hauntingly in the wind. March 20, 2013 Behold: "One of the whitest things" we've seen on the Red Planet March 12, 2013 NASA's Mars Curiosity rover drilled into a rock and found that it contained a clay-like material. February 8, 2013 ... Really, what is it? January 28, 2013 See how it stacks up with other extra-planetary exploration in one chart. December 5, 2012 Robotic science rover is set to launch in 2020. November 21, 2012 John Grotzinger, the principal investigator for the mission, said his team has found something really cool. October 31, 2012 "We have the same laws of chemistry, physics. If there are any locations where there are the basic ingredients, there should be the basic ingredients for life." October 31, 2012 NASA's Curiosity Rover is starting to get some analysis from its soil scooping mission October 24, 2012 A fleet of vehicles ready to explore lunar and Martian terrains October 11, 2012 A foreign object in Martian soil has made the rover stop in its tracks. September 11, 2012 The agency tries to keep humans from inadvertently populating other planets. September 10, 2012 Photos have not revealed any water or signs of life. September 7, 2012 The little rover is on the go, and NASA's Mars Reconnaissance Orbiter was there to document it. September 5, 2012 The red planet becomes the red and blue planet. August 28, 2012 Even better photographs of Mt. Sharp, the rover's eventual science destination. August 24, 2012 NASA has released a three-minute clip with sound. August 23, 2012 Humans, via their robotic rover, are now leaving their mark on the Red Planet. August 22, 2012 Planned for 2016, NASA's next mission to Mars will examine the planet's geophysics. August 20, 2012 N165 is the (un)luckiest bit of basalt on Mars. Which is saying something because there is a lot of basalt on Mars.
August 16, 2012 NASA's scientists and engineers provided a behind-the-scenes look at their mission. August 13, 2012 Jet Propulsion Lab previously used cloud services from Microsoft, Google, Lockheed Martin Corp. August 10, 2012 The Curiosity science team gave the first firm indication of where they might be driving next. August 9, 2012 But it's not from the rover you think. Don't forget about Opportunity, who's been exploring the planet for eight years. August 6, 2012 Thanks to NASA, the world was able to follow the Mars rover's landing Monday morning. August 3, 2012 The latest mission to Mars is set to touch down on the red planet early Monday morning. July 31, 2012 Rover will inspect Mars' environment for minerals, gases and water.
<urn:uuid:c4200b63-99c3-45f8-ab55-da1d81906915>
CC-MAIN-2017-04
http://www.nextgov.com/emerging-tech/nasas-mission-mars/57640/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00478-ip-10-171-10-70.ec2.internal.warc.gz
en
0.918507
949
3.84375
4
A malicious program that secretly integrates itself into program or data files. It spreads by integrating itself into more files each time the host program is run. If your Microsoft Exchange server gets infected, install a Gateway scanner such as F-Secure Anti-Virus for Microsoft Exchange to protect it. Microsoft has made a free tool available to clean up an infected Exchange mail database at: A virulent and widespread computer virus was found on Friday, March 26, 1999. This virus has spread all over the globe within just hours of the initial discovery, apparently spreading faster than any other virus before. Melissa works with Microsoft Word 97, Microsoft Word 2000 and the Microsoft Outlook 97 or 98 e-mail client. You don't need to have Microsoft Outlook to receive the virus in e-mail, but it will not spread itself further without it. Melissa will not work under Word 95 and will not spread further under Outlook Express. Melissa can infect Windows 95, 98, NT and Macintosh users. If the infected machine does not have Outlook or internet access at all, the virus will continue to spread locally within the user's own documents. The details below refer to the Melissa.A variant. The virus spreads by e-mailing itself automatically from one user to another. When the virus activates it modifies the user's documents by inserting comments from the TV series "The Simpsons". Even worse, it can send out confidential information from the computer without the user's notice. The virus was discovered on Friday, late evening in Europe, early morning in the US. For this reason, the virus spread in the USA on Friday. Many multinational companies reported widespread infections, including Microsoft and Intel. Microsoft closed down their whole e-mail system to prevent any further spreading of the virus. The number of infected computers is estimated to be tens of thousands so far and it is rising quickly. "We've never seen a virus spread so rapidly," comments Mikko Hypponen, F-Secure's Manager of Anti-Virus Research. "We've seen a handful of viruses that distribute themselves automatically over e-mail, but not a single one of them has been as successful as Melissa in the real world." "The virus won't spread much during this weekend. We will see the real problem on Monday morning", continues Hypponen. "When a big company gets infected, their e-mail servers are seriously slowed down and might even crash, as people start to e-mail large document attachments without realising it." For more information on Melissa, see Global Melissa Information Center at http://www.F-Secure.com/melissa/ Melissa was initially distributed in an internet discussion group called alt.sex. The virus was sent in a file called LIST.DOC, which contained passwords for X-rated websites. When users downloaded the file and opened it in Microsoft Word, a macro inside the document executed and e-mailed the LIST.DOC file to 50 people listed in the user's e-mail alias file ("address book"). The e-mail looked like this: - From: (name of infected user) - Subject: Important Message From (name of infected user) - To: (50 names from alias list) - Body: Here is that document you asked for ... don't show anyone else ;-) - Attachment: LIST.DOC Do notice that Melissa can arrive in any document, not necessarily just in this LIST.DOC where it was spread initially. Most of the recipients are likely to open a document attachment like this, as it usually comes from someone they know. After sending itself out, the virus continues to infect other Word documents.
Eventually, these files can end up being mailed to other users as well. This can be potentially disastrous, as a user might inadvertently send out confidential data to outsiders. The virus activates if it is executed when the minutes of the hour match the day of the month; for example, 18:27 on the 27th day of a month. At this time the virus will insert the following phrase into the current open document in Word: - "Twenty-two points, plus triple-word-score, plus fifty points for using all my letters. Game's over. I'm outta here". This text, as well as the alias name of the author of the virus, "Kwyjibo", are all references to the popular cartoon TV series called "The Simpsons". For more information on this connection, see this Simpsons web page: The main difference between Melissa.I and Melissa.A is that this variant uses a random number to select subject lines and message bodies of outgoing messages from eight different alternatives: 1. Subject: Question for you... It's fairly complicated so I've attached it. 2. Subject: Check this!! This is some wicked stuff! 3. Subject: Cool Web Sites Check out the Attached Document for a list of some of the best Sites on the Web 4. Subject: 80mb Free Web Space! Check out the Attached Document for details on how to obtain the free space. It's cool, I've now got heaps of room. 5. Subject: Cheap Software The attached document contains a list of web sites where you can obtain Cheap Software 6. Subject: Cheap Hardware I've attached a list of web sites where you can obtain Cheap Hardware 7. Subject: Free Music Here is a list of places where you can obtain Free Music. 8. Subject: * Free Downloads Here is a list of sites where you can obtain Free Downloads. In the last subject, the asterisk will be replaced with a random character. Unlike Melissa.A, this variant uses a different registry key (called "Empirical") to check whether mass mailing has been done. Melissa.I contains an additional payload as well. If the number of minutes equals the number of hours, the virus inserts the following text into the active document: - All empires fall, you just have to know where to push. At the same time, the virus clears the mark from the registry, causing the mass mail part to be reactivated as soon as a document is opened or closed, a new document is created or Word is restarted. This Melissa variant sends itself to 100 recipients from each Outlook address book. The E-mail looks like this: - Subject: Duhalde Presidente - Body: Programa de gobierno 1999 - 2004. W97M/Melissa.U is similar to Melissa.A. Unlike Melissa.A, this variant uses the module name "Mmmmmmm" and it has a destructive payload. This variant deletes the following system files: To do this, the virus removes hidden, system, read-only and archive attributes from these files. Unlike W97M/Melissa.A, it sends itself only to 4 recipients. The message itself is also different: - Subject: pictures (user name) - Body: what's up ? Where (user name) is replaced with Word's registered user name. The following texts will be added to every infected document: - Loading... No - >>>>Please Check Outlook Inbox Mail<<<<< This variant has been detected since October 13th, 1999. This variant is similar to Melissa.U. This variant sends itself to 40 recipients and the message is different: - Subject: My pictures (user name) The message body is empty, and (user name) is replaced with Word's registered user name.
After Melissa.V has mailed itself, it will delete all files from the root of the following drives: When this has been done, the virus shows a message box with the following text: - Hint: Get Norton 2000 not McAfee 4.02 This variant has been detected since October 13th, 1999. Melissa.W does not lower macro security settings in Word 2000. Otherwise it is functionally equivalent to Melissa.A. Melissa.AO uses Outlook to send an e-mail message with: - Subject: Extremely URGENT: To All E-Mail User - - Body: This announcement is for all E-MAIL user. Please take note that our E-Mail Server will down and we recommended you to read the document which attached with this E-Mail. - Attachment: [infected document] The payload activates at 10 am on the 10th day of each month, when the virus inserts the following text into the active document: - Worm! Let's We Enjoy. Description Created: 2010-07-23 10:10:17.0 Description Last Modified: 2010-07-23 10:13:03.0
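For illustration, the marker strings quoted in the description above (the author's alias and the text the payloads insert into documents) can serve as crude indicators of compromise. The sketch below is a hypothetical, naive byte-level scan, not F-Secure's detection logic: it does not parse the OLE/VBA structure of Word files and would miss compressed macro code, so treat it as a demonstration of the idea only.

```python
from pathlib import Path

# Marker strings taken from the description above. Word 97 files may store
# text as ASCII or UTF-16LE, so both encodings are checked.
MARKERS = [
    "Kwyjibo",
    "Twenty-two points, plus triple-word-score",
    "All empires fall, you just have to know where to push.",
]

def looks_suspicious(path: Path) -> bool:
    data = path.read_bytes()
    return any(
        m.encode("ascii") in data or m.encode("utf-16-le") in data
        for m in MARKERS
    )

for doc in Path(".").rglob("*.doc"):
    if looks_suspicious(doc):
        print(f"possible Melissa marker found in {doc}")
```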
<urn:uuid:dd25faa3-0eab-40dc-8a85-8d84e0a4a9a4>
CC-MAIN-2017-04
https://www.f-secure.com/v-descs/virus_w32_melissa.shtml
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00478-ip-10-171-10-70.ec2.internal.warc.gz
en
0.899151
1,771
2.703125
3
The South Dakota State Historical Society has recently added a new Web site to give researchers more insight into their family history. The "Biographical Index of South Dakotans" is an online index of biographies written about prominent South Dakotans from roughly 1897 to 1930. The index lists surnames in alphabetical order, giving the book number and page number in which the biography is found. "These biographies were written in 12 different books which are found in the library of the South Dakota State Archives," said Matthew T. Reitzel, manuscript archivist with the State Historical Society, located in the Cultural Heritage Center. "Various libraries throughout South Dakota may also have copies of the books listed in the index, and will find this Web site useful." Reitzel mentions that the index itself contains 7,139 names and that a few of the biographies also feature a photo of the individual. "The index itself was created several years ago," Reitzel said. "Historical Society volunteers recently transferred the typewritten index to a digital format, making it searchable online." The State Archives Web site also has a link "For Genealogists" which has online indexes for naturalization records, newspaper databases, cemetery records, South Dakota surname indexes and other genealogy resources.
<urn:uuid:7df47ef2-ef98-4417-94cb-f86c2d190cc2>
CC-MAIN-2017-04
http://www.govtech.com/health/South-Dakota-State-Historical-Society-Offers.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00258-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954382
262
2.59375
3
What Is Tokenization in the Cloud? Tokenization is the process of substituting a sensitive data element with a random, non-sensitive equivalent, referred to as a token. The token has no extrinsic or exploitable meaning or value. Applications and processes can operate with tokens the same way as they would with the original data. Unlike encryption, in tokenization there is typically no mathematical relationship between the original data and the token. For example, a 16-digit credit card number 4362 4890 2300 8650 may get a replacement token that looks like this: 4362 04F5 3A0D 8650 Tokenization is often used in high assurance environments where it is critical to limit the exposure of the original data to applications, data stores, users, and processes, thereby reducing the risk of a data compromise. Another common driver for tokenization is data residency, where regulations prevent data from leaving geographic boundaries. A common method for tokenization systems is using a database to map tokens to original data and back. As such, the security, reliability, and availability of the database are of the utmost importance. In a distributed environment, there is the additional requirement of keeping distributed token databases consistent to provide data integrity across data centers. Take a Deeper Dive - Tokenization may be used to safeguard sensitive data involving, for example, bank accounts, financial statements, medical records, criminal records, driver's licenses, loan applications, stock trades, voter registrations, and other types of personally identifiable information (PII). Tokenization is often used in credit card processing. The PCI Council defines tokenization as "a process by which the primary account number (PAN) is replaced with a surrogate value called a token. De-tokenization is the reverse process of redeeming a token for its associated PAN value. The security of an individual token relies predominantly on the infeasibility of determining the original PAN knowing only the surrogate value". The choice of tokenization as an alternative to other techniques such as encryption will depend on varying regulatory requirements, interpretation, and acceptance by respective auditing or assessment entities. This is in addition to any technical, architectural or operational constraint that tokenization imposes in practical use. - The security and risk reduction benefits of tokenization require that the tokenization system is logically isolated and segmented from data processing systems and applications that previously processed or stored sensitive data replaced by tokens. Only the tokenization system can tokenize data to create tokens, or detokenize back to redeem sensitive data under strict security controls. The token generation method must be proven to have the property that there is no feasible means through direct attack, channel analysis, token mapping table exposure or brute force techniques to reverse tokens back to live data. Tokenization, as a data protection technology, has been gaining significance in the wake of Snowden's revelation of government surveillance and several countries' move to adopt data residency regulations. Read more about how the industry is considering tokenization technologies.
- Merchant community calls for an open and universal tokenization standard to protect payment card information - Federal Reserve's Mobile Payments Industry Working Group issues statement that sees tokenization as the solution to a number of problems with respect to mobile and electronic payment adoption - Banks push for tokenization standard to secure credit card payments - The Accredited Standards Committee X9 is working on a set of standards to govern tokenization Articles on Cloud Data Tokenization Read the latest updates, insights, tips and emerging trends for tokenization in the cloud Compliance and Tokenization Tokenization technologies and regulatory compliance are closely linked together. Because tokenization completely replaces sensitive values with random values, systems that use the token instead of the real value are often exempt from audits and assessments required by regulations, thus reducing the duration and cost of deployment. In an environment where the application that processes the data resides outside the regions to which personal data are permitted to transfer (often by privacy laws), tokenization allows you to use the application without violating data residency constraints, since the data processed by the application bears no relationship to, and carries no information about, the original data. Lastly, breach notification laws often do not apply when the tokenized data is compromised, provided that the token database is not disclosed. This is the case with many US State breach notification laws as well as with the EU's data protection directive. Tokenization: Expert Series DLA Piper's privacy experts discuss meeting European Data Protection and Security Requirements with CipherCloud Solutions Visa's tokenization best practices guide is one of the earliest industry guides for secure tokenization. Smartcard Alliance Payment Council's whitepaper on encryption and tokenization discusses these technologies' impact on payment fraud prevention. Resources: Insights and Media About Cloud Data Tokenization
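Tying back to the PAN example near the top of this page (4362 4890 2300 8650 becoming 4362 04F5 3A0D 8650), the sketch below generates a token that keeps the first and last four digits and randomizes the middle, so downstream systems still see a PAN-shaped value. It is illustrative only: real products add vault storage, collision checks and access controls, and the character set for the middle segment is an assumption based on that example.

```python
import secrets
import string

def format_preserving_token(pan: str) -> str:
    """Keep the first/last four digits and replace the middle with random characters."""
    digits = pan.replace(" ", "")
    middle_len = len(digits) - 8
    alphabet = string.digits + string.ascii_uppercase  # assumed from the example token
    middle = "".join(secrets.choice(alphabet) for _ in range(middle_len))
    token = digits[:4] + middle + digits[-4:]
    # Re-insert spacing every four characters so the token keeps the PAN layout.
    return " ".join(token[i:i + 4] for i in range(0, len(token), 4))

print(format_preserving_token("4362 4890 2300 8650"))
# e.g. "4362 7QK2 91XD 8650" -- no mathematical relationship to the original
```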
<urn:uuid:7bb619f1-7f33-4f8e-995e-0820950ded5c>
CC-MAIN-2017-04
https://www.ciphercloud.com/resources/tokenization-in-the-cloud-resource-center/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00258-ip-10-171-10-70.ec2.internal.warc.gz
en
0.891983
993
2.921875
3
The Wi-Fi Alliance sent out a press release this morning sharing the results of their Wi-Fi Security Barometer Survey. The survey was conducted in August and involved interviewing 1,000 people regarding the security of the wireless networks they use. 97% of those responding thought their network was secure but when it came down to brass tacks and follow-up questions that asked if specific methods had been employed, the survey found networks were less secured than individuals thought. Most users have taken basic security steps but only 59% were using strong passwords. The Wi-Fi Alliance sees this as an increase in basic measures with definite room to grow for consumers taking more steps and becoming more educated about securing their Wi-Fi. Other survey findings included: - Two out of three Wi-Fi users recognize that responsibility for the security of their data lies with them - Eighty-five percent of survey respondents understand that their Wi-Fi devices should not be set for automatic sharing, yet only 62 percent actually have auto-sharing turned off - Only 18 percent of users report that they use a VPN (virtual private network) tool when in a hotspot - Users who have suffered the effects of a computer virus are no more likely to have better Wi-Fi security behavior than those who have never had any computer viruses - Users who ranked themselves as “tech-savvy” are no more likely to score better on measures of Wi-Fi security behavior than those who said they are less comfortable with technology To help, the Wi-Fi Alliance has security tips at http://www.wi-fi.org/security. They have also established a simple checklist to cover the basics of securing your network. Getting a passing grade on Wi-Fi security can be as simple as A-B-C: A: Enable WPA2™ security on your network and devices. Look for products with Wi-Fi Protected Setup™ for simple, easy-to-use steps to enable security. B: Passwords are in your control. Create a strong Wi-Fi network password that is at least eight characters long and includes a mixture of upper and lower case letters and symbols. It is a good practice to change passwords on a regular basis, perhaps once a year during Cyber Security Month. C: When on the go, connect to networks you know and trust and turn off automatic sharing on devices so you can control what you connect to and who/what connects to you. I’d go ahead and throw “Use HTTPS whenever possible, especially when using Wi-Fi” in there and then you’re pretty well off for consumer use of Wi-Fi.
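As a small illustration of the passphrase guidance in item B of the checklist above, the Python sketch below checks the stated policy: at least eight characters with a mix of upper case, lower case and symbols. The rules mirror the checklist; anything stricter, such as requiring digits, would be an assumption beyond it.

```python
import string

def meets_wifi_guidance(passphrase: str) -> bool:
    """Check the A-B-C guidance: length plus a mix of character classes."""
    return (
        len(passphrase) >= 8
        and any(c.isupper() for c in passphrase)
        and any(c.islower() for c in passphrase)
        and any(c in string.punctuation for c in passphrase)
    )

print(meets_wifi_guidance("correcthorse"))   # False: no upper case or symbol
print(meets_wifi_guidance("Blue!Kettle42"))  # True
```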
<urn:uuid:6bf8cb5d-b531-4bec-967e-b86aba3af138>
CC-MAIN-2017-04
https://www.404techsupport.com/2011/10/is-your-wi-fi-secure-or-do-you-just-think-it-is/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00166-ip-10-171-10-70.ec2.internal.warc.gz
en
0.967613
546
2.671875
3
The world is now a big digital multimedia market and different kinds of content are easily available in different media forms today. This has led to the need for continuous innovation in data transfer and connectivity. Though traditional Wi-Fi serves this purpose, its functionality, especially its speed, is limited. These limitations are now being addressed by WiGig technology, which uses the 60 GHz frequency band for high-speed data transfer. WiGig allows wireless communication at multi-gigabit speeds and enables high-performance wireless data. According to the OECD, external financial flows to African economies are increasing and so are their populations. In terms of employment, the growth has been good but the rate of job creation remains a bit slow. WiGig technology is based on IEEE 802.11ad, the wireless communication standard that makes use of the 60 GHz frequency. This technology supports data transfer at speeds of up to 7 Gbit/s. WiGig can be used to transfer data from smartphones, notebooks, personal computers or any other WiGig-compatible device to other devices. It can also be used to transfer videos from these very devices. To overcome signal decay, WiGig uses a process called 'adaptive beamforming'. The antennas adjust both the amplitudes and the phase shifts of their broadcast waves, and reception is then optimized by minimizing problems such as the error between the antennas' output and the expected signal. Internet penetration in the Middle East has increased from the 48.1% recorded last December, when the number of internet users stood at 113.6 million. Internet penetration in Africa, meanwhile, was about 26.5% last year, with 297.8 million internet users. Samsung recently developed "high-performance modem technologies" along with a "wide-coverage beam-forming antenna", which could resolve the line-of-sight (range) issues to an extent. This means that one of the largest consumer device makers will be embracing WiGig. The company also remarked that this tech will be "integral" to Samsung's smart home and internet of things efforts. Some of the major vendors of this technology mentioned in the report are Qualcomm, Cisco, Intel, Dell, Panasonic and Agilent Technologies. The demand for high-speed data access and connectivity is driving the need for technologies capable of meeting it. The other factors driving the growth of the WiGig market are its compatibility with different devices and its cost-effectiveness. A few limitations of WiGig technology are its short range and susceptibility to interference -- data transfer requires the devices to be close by and in line of sight of each other -- and the low number of vendors currently serving the market. WHAT THE REPORT OFFERS
<urn:uuid:aae5eb74-02ed-4a02-8984-742200bc5852>
CC-MAIN-2017-04
https://www.mordorintelligence.com/industry-reports/middle-east-and-africa-wigig-market-industry
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00496-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944744
614
2.578125
3
If attackers intent on data theft can tap into an electrical socket near a computer or if they can draw a bead on the machine with a laser, they can steal whatever is being typed into it. How to execute these attacks will be demonstrated at the Black Hat USA 2009 security conference in Las Vegas later this month by Andrea Barisani and Daniele Bianco, a pair of researchers for network security consultancy Inverse Path. "The only thing you need for successful attacks are either the electrical grid or a distant line of sight, no expensive piece of equipment is required," Barisani and Bianco say in a paper describing the hacks. The equipment to carry out the power-line attack could cost as little as $500, and the laser attack gear costs about $100 if the attacker already owns a laptop with a sound card, says Barisani. Carrying out the attacks took about a week, he says. "We think it is important to raise the awareness about these unconventional attacks and we hope to see more work on this topic in the future," Barisani and Bianco say in their paper. Others with more time and money could doubtless create better spying tools using the same concepts, they say. [Diagram: where data is stolen] In the power-line exploit, the attacker grabs the keyboard signals that are generated by hitting keys. Because the data wire within the keyboard cable is unshielded, the signals leak into the ground wire in the cable, and from there into the ground wire of the electrical system feeding the computer. Bit streams generated by the keyboards that indicate what keys have been struck create voltage fluctuations in the grounds, they say. Attackers extend the ground of a nearby power socket and attach to it two probes separated by a resistor. The voltage difference and the fluctuations in that difference – the keyboard signals – are captured from both ends of the resistor and converted to letters. To pull the signal out of the ground noise, a reference ground is needed, they say. "A "reference" ground is any piece of metal with a direct physical connection to the Earth, a sink or toilet pipe is perfect for this purpose (while albeit not very classy) and easily reachable (especially if you are performing the attack from [a] hotel room," they say in their paper. Since keyboard and mouse signals are in the 1 to 20 kHz range, a filter can isolate that range for listening, they say. Variations in individual keyboards and mice result in each keyboard signaling in a slightly different frequency range. With careful filtering, that makes it possible to zero in on a particular keyboard in an environment where many keyboards are in use, the researchers say. The attack proved successful when tapping electric sockets located up to 15 meters from where the target computer was plugged in, the researchers say. This method would not work if the computer were unplugged from the wall, such as a laptop running on its battery. The second attack can prove effective in this case, Bianco's and Barisani's paper says.
Each key has a unique vibration pattern that distinguishes it from the rest. The spacebar creates a significantly different set of vibrations, so the breaks between words are readily apparent. By analyzing the sequences of individual keys that are struck and the spacing between words, the attacker can figure out what message has been typed. Knowing what language is being typed is a big help, they say. Laptop lids, especially shiny logos and areas close to the hinges, provide the most easily read vibrations. Anyone worried about this type of attack can make sure there is no line of sight to the laptop, move position frequently while typing, and pollute the signal by striking random keys and later deleting them with the backspace key. While they admit their hacking tools are rudimentary, they believe they could be improved upon with a little time, effort and backing. "If our small research was able to accomplish acceptable results in a brief development time (approximately a week of work) and with cheap hardware," they say, "consider what a dedicated team or government agency can accomplish with more expensive equipment and effort."
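The researchers note that keyboard and mouse signalling sits in roughly the 1 to 20 kHz range, so a band-pass filter helps lift it out of the ground noise. The snippet below is a generic SciPy band-pass filter applied to a sampled voltage trace; the sample rate, filter order and capture file name are assumptions for illustration, not the researchers' actual tooling.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 96_000  # sample rate of the acquisition hardware (assumed)

def bandpass(trace: np.ndarray, low_hz: float = 1_000,
             high_hz: float = 20_000, order: int = 4) -> np.ndarray:
    """Isolate the 1-20 kHz band where keyboard signalling is expected."""
    b, a = butter(order, [low_hz, high_hz], btype="band", fs=FS)
    return filtfilt(b, a, trace)

# Voltage difference measured across the resistor between the two ground
# probes, saved earlier as a NumPy array (hypothetical capture file).
raw = np.load("ground_trace.npy")
filtered = bandpass(raw)
```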
<urn:uuid:e268e2bd-99b7-4db3-97a8-1396a9007217>
CC-MAIN-2017-04
http://www.cnmeonline.com/security-2/how-to-use-electrical-outlets-and-cheap-lasers-to-steal-data/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00093-ip-10-171-10-70.ec2.internal.warc.gz
en
0.956286
965
2.859375
3
With all the end-of-year prognosticating that’s going on, we thought it would be fun to take a look back at some of the office imaging milestones as a change of pace. You’ve heard the old saying, “You can’t tell where you’re going if you don’t know where you’ve been.” This historical perspective illustrates how far the industry has come in a relatively short time, and suggests what’s possible in the future. 1876: Thomas Edison invents the Electric Pen. The Edison Electric Pen was designed to make copies of handwritten text. It worked as a perforating device to create stencils, and was sold with all the equipment and materials needed to make copies from those stencils. 1887: AB Dick licenses Edison’s patents to develop the mimeograph. AB Dick took Edison’s stencil-based copying to the next level, first by using a waxed paper stencil and then a coated paper wrapped around a rotating drum. Mimeograph machines are still used today, especially in regions where electricity is in short supply. 1938: Chester Carlson and Otto Kornei invent xerography. Xerography is the technology on which all electrostatic printing and copying is based today. It took Carlson 18 years to refine and automate the process to the point where it could be commercialized. 1949: Haloid introduces the first commercial xerographic copier. Haloid’s XeroX Model A was the first modern copying device. It was messy to use and required a number of manual steps. As a result, it saw limited success. Still, Haloid saw enough potential in the product to eventually change its name to Haloid Xerox, then later to Xerox. 1951: First commercial inkjet printing devices emerge. Siemens is credited with developing the first printing devices using ink. The application was printing output from medical strip chart recorders. 1959: Xerox 914 becomes the first successful plain paper copier. The Xerox 914 is the product that established the copier as a standard and necessary device for the office, at least for companies that could afford one. Haloid Xerox sold 10,000 of them by 1962, twice as many as projected. 1968: 3M introduces the first color copier. The Color-in-Color copier used a dye-sublimation process rather than an electrostatic process. It was popular primarily with artists. 1973: Canon introduces the first electrostatic color copier. Although an important milestone, the Canon Color Copier had little success. 1975: The Xerox Telecopier 200 is the first laser-based electrostatic device. The Telecopier 200 was actually a fax machine, but its use of laser technology set the stage for the advent of laser copiers and printers. It used technology developed years earlier at Xerox’s Palo Alto Research Center. 1976: IBM introduces the first laser printer. The IBM 3800 was designed for use with mainframe computers for high-volume printing using continuous feed paper. IBM beat Xerox, which invented the technology, to the market by a year when the Xerox 9700 was released. 1984: HP introduces the Thinkjet, the first mass-market inkjet printer. Although inkjet output devices had been around for many years, HP popularized the technology with the Thinkjet. In 1987, HP’s Paintjet became the first full-color inkjet printer. 1985: The first MFP devices emerge (sort of). In 1985, you could turn your Commodore 64 computer and dot-matrix printer into a multifunction printer with an add-on device from Scanntronic. This do-it-yourself conversion preceded commercial MFPs, which emerged in the 1990s, by at least five years.
<urn:uuid:c6ec297b-1323-47ee-825e-7f866015ed74>
CC-MAIN-2017-04
http://www.enxmag.com/twii/the-week-in-imaging-twii/editors-blog/2016/12/a-look-back-at-document-imaging-milestones/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00515-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943005
806
2.78125
3
How smart should the smart grid be? Updating the electrical grid makes sense, but there are still a few bugs to work out - By Chris Bronk - Jun 24, 2010 Although the big energy issue during the 2008 presidential election was the debate over clean coal, the biggest development from the Obama administration's Energy Department has been its smart-grid initiative for the electricity industry. The economic stimulus law set aside some $11 billion for smart-grid technologies, nearly double the money for any other energy initiative and roughly a sixth of the stimulus package's energy component. The smart grid was a shovel-ready project, and since the passage of the act, millions of U.S. households have seen a shiny new digital power meter show up on their houses. The smart grid is supposed to make the electricity grid more intelligent by incorporating information technology. Basically, we are converting our 20th-century grid from a broadcast model of electricity delivery to a two-way communication system through which electricity is produced and distributed more efficiently. At least, that's the rhetoric. Here's how I see it. The new meters allow your electrical utility to charge prices based on demand in a more fine-grained manner than before. Usually, we sign up for month-to-month pricing of our electricity or lock in a rate if we don't want to play the market. Every now and again, the meter reader comes by and sees how much electricity each of us uses, passes along that information, and we get a bill. The smart meter allows utilities to store data on how much electricity we consume and when, down to the second. That means the once-a-month meter guy pricing model — and his job — will go right out the window, which is both good and bad. The power company could charge more at peak times, such as late afternoon and early evening when we jack up the air conditioning and flip on the TV. Also, our appliances could potentially talk to the power company and turn themselves down or off to avoid brownouts. The big positive is that the smart grid has the potential to be truly green IT. As heavily regulated producers and distributors, most power companies increase their profits on volume. The more electricity they sell, the more they take to the bank, depending on how much it costs to supply the electricity. And there's the rub. The more electricity the power producers supply, the more coal, natural gas and other fossil fuels they must burn. More fossil fuel burning equals more greenhouse gases, global warming and hungry polar bears. So, if done right, the smart grid could allow the utilities to remain profitable while supplying us with less electricity. That's smart. Of course, that end-state is some distance from here. The utilities have plugged in the meters and can bill us creatively, but we could also creatively bill ourselves. As expected, the biggest concern with the smart grid is keeping the system secure from outside tampering. Here in Houston, I'd be thrilled to have an electric bill about one-tenth of what mine is at this time of year. That provides an incentive to hack my meter and have the local utility believe that I set my air conditioning at 90 degrees and have about as many appliances as the average Amish family.
The government will have to establish the plans and policy to deal with customers smart enough to figure out how to hack their piece of the grid. Well, that would be the smart thing to do, at least. Chris Bronk is a research fellow at Rice University’s Baker Institute for Public Policy and an adjunct instructor of computer science at Rice. He previously served as a Foreign Service Officer and was assigned to the State Department’s Office of eDiplomacy.
<urn:uuid:78aafaf2-705f-489f-8f83-e418cc9d5e3e>
CC-MAIN-2017-04
https://fcw.com/articles/2010/06/28/comment-chris-bronk-smart-grid.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00515-ip-10-171-10-70.ec2.internal.warc.gz
en
0.956762
801
2.609375
3
A team of researchers from the Massachusetts Institute of Technology have devised a new framework for solving difficult network optimization problems. Mathematicians and computer scientists have long used the maximum flow problem, or “max flow,” to determine the most efficient path between points on a network, but as networks grow ever-more complex, solving the series of equations becomes prohibitively time-consuming. Recently, however, scientists at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed an almost-linear-time algorithm for computing “max flow” that could boost the efficiency of even the largest networks. The maximum-flow problem represents a network as a graph with a series of nodes and connecting lines – aka vertices and edges. Considering that each edge has a maximum capacity, the goal of “max flow” is to calculate how many units of something can be moved from one end of the network to the other. While traditional versions of “max flow” work well for smaller networks, as networks get exponentially larger, the problem becomes intractable, requiring too much time and overhead. “There has recently been an explosion in the sizes of graphs being studied,” says one of the lead authors, Jonathan Kelner, an associate professor of applied mathematics at MIT and a member of CSAIL. “For example, if you wanted to route traffic on the Internet, study all the connections on Facebook, or analyze genomic data, you could easily end up with graphs with millions, billions or even trillions of edges.” The CSAIL researchers developed a new theoretical algorithm that has the potential to dramatically reduce the number of operations needed to solve the max-flow problem. With this near-linear solution, it may be possible to optimize traffic across enormous networks like the Internet or the human genome. Where previous algorithms treated all the paths within a graph as equals, the new technique pinpoints the routes that create a bottleneck within the network. The team’s algorithm separates each graph into clusters of well-connected nodes, and the paths between them that create bottlenecks. “Our algorithm figures out which parts of the graph can easily route what they need to, and which parts are the bottlenecks. This allows you to focus on the problem areas and the high-level structure, instead of spending a lot of time making unimportant decisions, which means you can use your time a lot more efficiently,” Kelner says. The enhancements have resulted in a near-linear algorithm, i.e., the amount of time required to solve a problem being almost directly proportional to size of the network. If the number of nodes expands by a factor of 10, the time to solution would go up by a factor of 10 (or close to it) instead of a factor of 100 or 1,000, which would be experienced under previous techniques. “This means that it scales essentially as well as you could hope for with the size of the input,” says Kelner. Besides Kelner, the research team includes Lorenzo Orecchia, an applied mathematics instructor at MIT, as well as graduate students Yin Tat Lee and Aaron Sidford. The authors will present their paper at the ACM-SIAM Symposium on Discrete Algorithms, which takes place this week in Portland, Oregon. Their work, which won the best paper award at this year’s ACM-SIAM conference, also appears in the ACM-SIAM Symposium on Discrete Algorithms journal.
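To make the max-flow framing above concrete, here is a small, self-contained Python sketch. It is not the MIT team's near-linear algorithm; it is the classic Edmonds-Karp augmenting-path method, shown only to illustrate the idea of pushing units through a capacitated graph from a source to a sink. The toy graph and its capacities are invented for the example.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Classic Edmonds-Karp max flow: repeatedly find a shortest augmenting
    path with BFS and push as much flow as the bottleneck edge allows.
    `capacity` is a dict of dicts: capacity[u][v] = capacity of edge u->v."""
    # Build a residual graph that also contains reverse edges with 0 capacity.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)

    flow = 0
    while True:
        # BFS for a path from source to sink with remaining capacity.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:          # no augmenting path left
            return flow
        # Find the bottleneck capacity along the path, then update residuals.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# Toy network: how many units can move from 's' to 't'?
capacities = {
    's': {'a': 10, 'b': 5},
    'a': {'b': 15, 't': 10},
    'b': {'t': 10},
    't': {},
}
print(max_flow(capacities, 's', 't'))   # -> 15
```

On a graph this small any method is instant; the article's point is that on graphs with billions or trillions of edges, it is the number of operations, not the cost of each one, that has to shrink.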
<urn:uuid:ab3e0ede-53cf-4ee4-a375-941faae78979>
CC-MAIN-2017-04
https://www.hpcwire.com/2014/01/09/near-linear-network-optimization-technique-proposed/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00331-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953493
733
3.84375
4
Invention, Watson, Diversity Invention is simply in IBM's DNA. Herman Hollerith, who founded IBM's precursor, Tabulating Machine Company, invented a tabulating machine that was used in U.S. censuses. IBMers also invented the technology behind Excimer Laser Surgery, which became the foundation for LASIK surgery. And IBM also was the first major IT vendor to get behind Linux. IBM helped establish the open-source operating system as a mainstream software platform in business by declaring in 2005 that it would not enforce its patents against the Linux kernel.In another type of innovation, IBM led the way for equal employment opportunity, particularly in IT. Before the H-1B frenzy to draw qualified IT workers from abroad, IBM made efforts to integrate its workforce in the U.S. during a time when it was not popular. A description of the policy on IBM's site reads: One year before the 1954 Supreme Court decision Brown v. the Board of Education and 11 years before the Civil Rights Act of 1964, Thomas J. Watson, Jr. issued a policy letter to his employees stating: "It is the policy of this organization to hire people who have the personality, talent and background necessary to fill a given job, regardless of race, color or creed." IBM has historically taken an intellectual approach to its hiring process, being truly blind to human traits beyond expertise and character. Its diversity initiatives reflect this thinking and have helped redefine the workplace. But perhaps the hottest and most recent illustration of IBM's innovative prowess can be summed up in one word: Watson. An IBM summary says Big Blue's computer, code-named Watson, leverages leading-edge Question-Answering (QA) technology, allowing the computer to process and understand natural language. It incorporates massively parallel analytical capabilities to emulate the human mind's ability to understand the actual meaning behind words, distinguish between relevant and irrelevant content, and ultimately, demonstrate confidence to deliver precise final answers. Watson demolished human competitors in a highly touted series of Jeopardy! games. It is a technology with enormous upside. In discussing Watson with eWEEK, Steve Mills, IBM's senior vice president and group executive for Software & Systems, compared Watson to a search engine, specifically Google, and said Watson is a totally different type of technology. Though Mills added that "We can do what they do." But IBM decided to build Watson. "We built it to come back with THE answer or a relatively few answers and then you apply your judgment on top of that," Mills said. Mills' comment sort of reminds me of how the rock group Led Zeppelin once talked about their love for all forms of music, particularly R&B. And they said something to the effect of: "We can play what they play, but they can't play us." The group then went on to back up its claim by throwing down on a rocked out version of James Brown's "Sex Machine." The two versions are now part of an innovative mashup. IBM's research and engineering prowess gives the company that same kind of capability to be whatever it wants to be. It's part of the culture. As IBM director and American Express CEO Kenneth Chenault put it at IBM's Centennial celebration: "The greatest invention ever created by IBM is the IBMer." And he noted that IBM is marked by "Reinvention and constant values - unchanging change. It may sound like an oxymoron but it's at the heart of IBM." Anyway, let me wind this up. IBM is the most innovative company in IT hands down. 
As part of its story on Salesforce.com being the No. 1 innovator, Forbes gives you Chatter, which is Facebook for the CRM world. As part of my selection of IBM as most innovative, I give you Watson. Who you gonna call? Chatter is basically a clone of Facebook. And Salesforce will acknowledge as much. It says so in the Forbes piece - that Salesforce.com CEO Marc Benioff assigned his development team to make the company look like a social network. This piece here is not meant to be a Forbes or Salesforce hate fest. I gain tons of insight from Forbes. And I have the utmost respect for Salesforce.com and its "No Software" strategy - particularly its pioneering efforts in the cloud. Yet, my personal vote for the company's most innovative move - after the initial cloud play - is its pioneering of the whole PaaS (platform-as-a-service) phenomenon with Force.com. That was a smart move. Plus, I can't totally hate on Salesforce because some of my former colleagues and industry icons work there. They make my top 10 most innovative list. The Forbes' print edition includes a beautiful photo of Benioff that absolutely captures the man and the character of his company. He's peering around a corner with a total Cheshire cat smile that makes you wonder where the canary is. Yeah, they're innovative; just not as innovative as IBM. For its part, IBM became a leader in the supercomputer space and was the first to break the petaflop barrier - to operate at speeds faster than one quadrillion calculations per second.
<urn:uuid:95ac44e5-f4ca-4f9b-a34d-90495a639653>
CC-MAIN-2017-04
http://www.eweek.com/c/a/IT-Infrastructure/Why-IBM-is-the-Most-Innovative-Company-in-IT-439547/2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00057-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959884
1,062
3.078125
3
Big Data Modeling – Part I – Defining “Big Data” and “Data Modeling” Last month I participated in a DataVersity webinar on Big Data Modeling. There are a lot of definitions necessary in that discussion. What is meant by Big Data? What is meant by modeling? Does modeling mean entity-relationship modeling only or something broader? Defining “Big Data” The term “Big Data” implies an emphasis on high volumes of data. What constitutes big volumes for an organization seems to be dependent on the organization and its history. The Wikipedia definition of “Big Data” says that an organization’s data is “big” when it can’t be comfortably handled by on hand technology solutions. Since the current set of relational database software can comfortably handle terabytes of data and even desktop productivity software can comfortably handle gigabytes of data, “big” implies many terabytes at least. However, the consensus on the definition of “Big Data” seems to be with the Gartner Inc. definition that says that “Big Data” implies large volume, variety, and velocity of data. Therefore, “Big Data” means not just data located in relational databases but files, documents, email, web traffic, audio, video, and social media, as well. The various types of data provides the “variety”, and not just data in an organization’s own data center but in the cloud and data from external sources as well as data on mobile devices. The third aspect of “Big Data” is the velocity of data. The ubiquity of sensor and global position monitoring information means a vast amount of information available at an ever increasing rate from both internal and external sources. How quickly can this barrage of information be processed? How much of it needs to be retained and for how long? What Is “Data Modeling”? Most people seem to picture this activity as synonymous with “entity relationship modeling.” Is entity relationship modeling useful for purposes outside of relational database design? If modeling is the process of creating a simpler representation of something that does or might exist, we can use modeling for communicating information about something in a simpler way than presenting the thing itself. So modeling is used for communicating. Entity relationship modeling is useful to communicate information about the attributes of the data and the types of relationships allowed between the pieces of data. This seems like it might be useful to communicate ideas outside of just relational databases. Data modeling is also used to design data structures at various levels of abstraction from conceptual to physical. When we differentiate between modeling and design, we are mostly just differentiating between logical design and design closer to the physical implementation of a database. So data modeling is also useful for design. In the next part of this blog I’ll get back to the question of “Big Data Modeling.”
<urn:uuid:39038d8f-c775-4da6-ab32-f0ee321dc073>
CC-MAIN-2017-04
https://infocus.emc.com/april_reeve/big-data-modeling-part-i-defining-big-data-and-data-modeling/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00267-ip-10-171-10-70.ec2.internal.warc.gz
en
0.926439
612
2.6875
3
There are many interesting new issues that seem to have come with the addition of voice and video to the data network. Most of the engineers that are now working on VoIP networks come from either a pure data network background or a traditional phone system background. Each network offered certain issues that where common to those specific networks. As the world of networking converged these two networks over the past dozen years or so, there have been many new issues to overcome and learn how to solve. The growing pains associated with the shift from two networks into one seem to still be mounting based on the amount of technical support calls and forum questions on the common issues of a data/voice/video network. Many different types of issues are common with data/voice networks such as bad voice quality relating to issues such as delay, jitter, and dropped packets. These issues usually can be fixed with the proper configuration of your Quality of Service settings. One Way Audio Troubleshooting Methodology One issue that seems to materialize more frequently is the issue of “One Way Audio”. This scenario occurs when party A in a call can hear party B, but party B cannot hear party A. One-Way Audio Issues in an IP Telephony network can be varied, but the root of the problem usually involves IP routing issues. Here are a few items to check to make sure routing issues are eliminated as the root cause of the one way audio issues. 1. Always check basic IP reachability first. Because Real-Time Transport Protocol (RTP) streams are connectionless (transported over UDP), traffic may travel successfully in one direction but get lost in the opposite direction. This diagram shows a scenario in which this can happen: Subnets A and B can both reach Subnet X. Subnet X can reach Subnets A and B. This allows the establishment of TCP connections between the end stations (A and B) and the Cisco CallManager. Therefore, signaling can reach both end stations without any problems, which allows the establishment of calls between A and B. Once a call is established, an RTP stream that carries the audio must flow in both directions between the end stations. In some cases, Subnet B can reach Subnet A, but Subnet A cannot reach Subnet B. Therefore, the audio stream from A to B always gets lost. This is a basic routing issue. Use IP routing troubleshooting methods in order to successfully ping Phone A from Gateway B. Remember that ping is a bi-directional verification. This document does not cover IP routing troubleshooting, however, you should confirm these as some initial steps to follow: - Default gateways are configured at the end stations. - IP routes on those default gateways lead to the destination networks. 2. Bind the H.323 Signaling to a Specific IP Address on the Cisco IOS Gateway and Routers When the Cisco IOS gateway has multiple active IP interfaces, some of the H.323 signaling may be sourced from one IP address and other parts of it may reference a different source address. This can generate various kinds of problems. One such problem is one-way audio. In order to get around this problem, you can bind the H.323 signaling to a specific source address. The source address can belong to a physical or virtual interface (loopback). Use the h323-gateway voip bind srcaddr ip-address command in interface configuration mode. Configure this command under the interface with the IP address to which the Cisco CallManager points. 3. 
Bind the MGCP Signaling to the MGCP Media Packet Source Interface on the Cisco IOS Gateway One-way voice can occur in Media Gateway Control Protocol (MGCP) gateways if the source interface for signaling and media packets is not specified. You can bind the MGCP media to the source interface if you issue the mgcp bind media source-interface interface-id command and then the mgcp bind control source-interface interface-id command. Reset the MGCP gateway in Cisco CallManager after you issue the commands. One Way Audio on Cisco IP Communicator With regards to issues involving IP Communicator, the issues mentioned above could still apply. Investigate those possibilities first. If the above issues seem to be in check then one of the following should be investigated. 1. Version Control Software VPN clients are overlaid on top of an existing IP network, meaning that there are essentially two IP addresses on the computer when a VPN is in use: - The IP address from the underlying network - The IP address provided by the VPN client that is used by parties on the other side of the connection to communicate with applications on the computer Some VPN clients, such as Cisco Systems VPN Client 3.x, assign the VPN IP address at a very low level, which makes it difficult for Cisco IP Communicator to specify the correct address. To eliminate this problem, Cisco IP Communicator queries the Cisco VPN client directly. Other VPN Clients, such as the Microsoft PPTP client and Cisco VPN Client 4.x simply appear as alternative network interfaces. In these cases, the IP address can be selected with the same auto-detection process that is used to resolve selection when there are multiple interfaces. Other third-party VPN clients might be unsupported and result in one-way audio. Fix the problem by running the Cisco IP Communicator Administration Tool to create a getIP.asp audio IP address reflector web page. Once that is complete, specify the URL for the web page in Cisco CallManager Administration. Cisco IP Communicator will attempt to fetch this reflector page rather than using other methods of auto-detection. The reflector page returns the IP address from which it sees the request originate, which is a relatively reliable way to identify Cisco IP Communicator’s VPN IP address. See the opening paragraphs of this section (Resolving Audio IP Address Auto-Detection Problems) for more information on completing the getIP.asp tasks. By making sure your PCs are running the latest VPN client (VPN 5.x as of today), and the CIPC application (CIPC 7.0.3 as of today), you will eliminate most of the one way audio issues that were part of the earlier versions of these applications. As with most issues in technology there is not one correct answer to every problem. In that most networks are very different from each other not all answers will work on every network. The information above has been found to be the most common fixes for most one way audio issues, but most certainly will not be the fix in every network. - Troubleshooting One Way Voice Issues - Administration Guide for Cisco IP Communicator Release 7.0 (pdf file) Author: Paul Stryer
<urn:uuid:cc400dcd-17f4-4d7f-b1af-71ade2f37fdb>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2010/03/30/voip-networks-and-one-way-audio/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00561-ip-10-171-10-70.ec2.internal.warc.gz
en
0.921811
1,387
2.609375
3
National Security Agency: Oceans of Information (posted 2002-09-10). A year after Sept. 11, law enforcement far too often finds itself left alone on the front line of defense. The National Security Agency (NSA) is the biggest and most sophisticated spy organization in the world. From its "listening stations" on five continents, the agency harvests phone calls, e-mails, faxes and radio signals every second of every day, pouring the information into memory banks capable of storing 5 trillion pages of data. According to a July 2001 Washington Post report, the agency yanks enough data from the ether every three hours to fill the Library of Congress. More linguists and mathematicians work at the NSA than anywhere else in the world, and it also owns the world's largest collection of supercomputers. One Cray machine used by the agency handles 64 billion instructions a second. Just running the agency's collection of supercomputers alone requires as much electricity as the city of Annapolis, Md. To cool the computers, it keeps 8,000 tons of chilled water; one particularly powerful supercomputer is submerged in a nonconducting liquid to keep it from overheating. Despite the agency's technological savvy, however, the congressional subcommittee report makes it clear that the NSA has a lot of work to do to get its internal computer systems to operate together. For years, different divisions within the agency worked in separate worlds for security reasons. They developed their own software, bought their own hardware, and built their own networks. When Lt. Gen. Michael Hayden took over the agency in 1999, it had 68 different e-mail systems. If he wanted to send out an e-mail to all NSA employees, he'd have to send the message 68 times. Under Hayden, the agency is inching toward creating systems that talk to each other. Last year, it began awarding its first contracts under Project Groundbreaker, a $5 billion, 10-year project that is farming out, for example, the running of the NSA's office-technology infrastructure, which includes thousands of computers and a thicket of software and communications systems. The ultimate goal: that NSA workers would be able to send top-secret files to colleagues within the agency without having to navigate different systems and multiple layers of bureaucracy. The Project Groundbreaker contracts will not touch upon the NSA's core of surveillance networks. Unlike the CIA, where spies are out in the field performing hands-on snooping (referred to as HUMINT, or "human intelligence"), the NSA relies almost entirely upon technology to gather its oceans of information. The NSA does share information with the CIA, but there's nothing formal about it. While the NSA and CIA wouldn't comment on their data and information-sharing capabilities, Steven Aftergood, a senior research analyst at the Federation of American Scientists, a private policy research organization, says the panoply of different systems the two agencies use barely works within each agency, never mind between the two agencies. "Some people say the NSA and the CIA are further apart than the CIA and the FBI," says Rindskopf-Parker, the former University of Wisconsin general counsel, who also has served as counsel for both the CIA and for the National Security Agency.
Still, there's nothing technologically that is stopping the intelligence and law enforcement agencies from communicating with one another, says Matthew DeZee, who ran the CIA's computing and communications infrastructure on a global basis between 1999 and 2001. "The technology is there to do whatever is needed to get done, or at least it's available," DeZee, now CIO for the state of South Carolina, says. "There will be minor problems like interoperability, the typical stuff you run into whether it's a government agency or a corporation. The technology is there for people to communicate." The technological challenge in sharing data is the range of different classified levels and the security each level demands, at different agencies. "Multilevel security is a killer issue," he says. "It tends to produce networks (where) everyone using them has the same clearance."
<urn:uuid:1f447935-d6c0-43a2-a5f2-0a3803b36afa>
CC-MAIN-2017-04
http://www.baselinemag.com/c/a/Projects-Security/The-Disconnected-Cop/8
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00011-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949599
850
2.515625
3
The title of this post seems to be lost on some who are responsible for security architecture. One of my reflections on this past summer is that not everyone is aware of the difference between weaker and stronger forms of multifactor authentication. You have likely read about multifactor authentication, have used it with your social networking websites, or maybe you have used a form of multifactor authentication in a corporate environment. This is all very good news. The bell has tolled for single-factor username-and-password schemes and people are starting to realize that this old stalwart of authentication needs to be retired as soon as possible. Keylogging, man-in-the-middle attacks and social engineering techniques leveraged by cunning identity thieves are in the news every day. The time has come for multifactor authentication. Why? It makes the job of a malicious hacker more difficult. As with all attacks, malicious hackers are looking for ways to steal your identity. Nobody in the cybersecurity business is getting fired right now for suggesting that multifactor authentication should be used in their enterprise. It’s a good idea that has reached the executive ranks. But without understanding the offensive side of the security equation, there are some in the defensive side of cybersecurity who have forgotten that not all multifactor authentication techniques are equal. Simply, it’s smart to choose a multifactor authentication that matches the risks. This summer I spoke to security architects in large enterprises in both North America and Europe and their job was to protect one of three things: money, privacy or critical infrastructure. Some of these professionals were planning to employ SMS-based multifactor authentication. This is where users log in to a website and are challenged to enter a code that is sent to their mobile devices via text message (SMS). SMS-based multifactor authentication is better than single-factor username-and-password authentication. These professionals had every reason to be glad to be working on these projects. But I challenged some of them to explain to me why they did not choose a stronger form of multifactor authentication. “What’s wrong with SMS?” they asked. What bothered me was not that they were employing SMS, but that they did not know the weaknesses. In addition, I witnessed a demonstration at the Def Con 21 conference in Las Vegas this year where SMS messages were being intercepted — by a Femtocell device hacked by ethical researchers — and projected onto a screen. This was a friendly environment and nobody was hurt, but it laid bare the weakness of non-encrypted messages like SMS. There are other forms of multifactor authentication that are much stronger than SMS, and even easier for the end-user. An example includes innovative virtual smart credentials embedded onto mobile devices. The chain of communication is encrypted, and doesn’t require the user to type a code. It’s not often that better security can also mean a better user experience. Your money and privacy are important to you. Before you log in to a bank, conduct a transaction with your government, or turn on a pump at a critical infrastructure plant, you should consider that there are malicious individuals or groups out there who strive to obtain your identity for illegal gain. Making it more difficult for the bad guy means choosing a method of authentication that does not easily give away your identity. 
SMS multifactor authentication is a step above username-and-password solutions, but if what you are protecting is important to you, there are stronger methods.
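As a generic illustration of the SMS-style flow described above (issue a short-lived code, then check it as the second step of login), here is a minimal server-side sketch in Python. It is not any vendor's implementation; the function names, storage, and time-to-live are assumptions, and it deliberately ignores the delivery channel, because, as argued above, sending the code over unencrypted SMS is exactly where interception risk enters.

```python
import hmac
import secrets
import time

# issued_codes maps a username to (code, expiry timestamp); in a real system
# this state would live in a datastore, not a module-level dict.
issued_codes = {}

def issue_code(username, ttl_seconds=120):
    """Generate a short-lived numeric code for the second authentication factor."""
    code = f"{secrets.randbelow(10**6):06d}"      # e.g. '042917'
    issued_codes[username] = (code, time.time() + ttl_seconds)
    return code    # hand off to the delivery channel (SMS, app, hardware token...)

def verify_code(username, submitted):
    """Check the submitted code: it must match, be unexpired, and is single-use."""
    code, expires = issued_codes.pop(username, (None, 0))
    if code is None or time.time() > expires:
        return False
    # Constant-time comparison avoids leaking how many characters matched.
    return hmac.compare_digest(code, submitted)

# Typical flow: the password has already been checked, now the second factor.
otp = issue_code("alice")
print(verify_code("alice", otp))      # True (within the time window)
print(verify_code("alice", otp))      # False (codes are single-use)
```

Note that nothing in this sketch addresses the weakness discussed above: however carefully the code is generated and checked, a factor delivered over a channel an attacker can read adds little protection.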
<urn:uuid:fee3dab6-4c96-44a3-819a-dad41e6edeaa>
CC-MAIN-2017-04
https://www.entrust.com/multifactor-authentication-techniques-equal/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00497-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960862
713
2.703125
3
Over the last two years, the pace of ‘digital disruption’ has picked up and virtually every industry is feeling the impact. Digital disruption has empowered the education industry with its own set of opportunities. In a relatively short period of time, the focus has changed from teaching to learning, from teacher to learner, from blackboards to electronic whiteboards, and from on-demand learning to continuous learning. Digital technologies are opening up new methods and opportunities to improve the learning process. For example, distance learning platforms, virtual learning environments and massive online open courses (MOOCs) are using the reach of the Internet to scale the resources of scarce, subject matter experts and extend education to new groups of learners. These platforms enable teachers to interact with students and other instructors and for students to download extra materials, upload completed assignments and more. Flipped classrooms have become possible in which the pedagogical model of the lecture followed by homework is reversed. Lectures are now videos that the student has to view at home, while class time is dedicated towards more interactive activities: exercises, discussions and projects. Analytics and adaptive learning are facilitating improved learning outcomes by giving instructors insight into how well students navigate the online components of courses and where students may need additional support, or if the course itself needs to be improved to make it more accessible. Student competency can be analyzed using responses to questions or other algorithms that compare proficiency versus learning objectives, so that appropriate remediation can be offered to learners who do not yet meet the objectives. 3D printing now provides learners with newer ways to visualize and express their ideas. It is not about replicating ideas, but rather about creating new ones that convert theory in textbooks to hands-on concepts. Virtual reality (VR) and augmented reality (AR) will give both teachers and learners a new dimension to the learning experience. In the context of digital storytelling, VR and AR can make the representation and the characters more compelling: imagine students exploring the human body or learning how a virus spreads using VR games. The Internet of Things (IoT) has its own place in education technology. Researchers are looking at ways to use gesture-based controls, which send data to Internet-connected devices, to more effectively and efficiently perform many time-consuming activities that are done manually today, such as registering attendance. Wearables are providing additional channels to capture data that can be further edited, composed and shared. For example, Google Glass enables students and teachers to search for information on a particular topic, easily take a picture or record video that supports creation of a report on that topic, and even answer and translate questions in a foreign language. Or, medical students could watch different medical procedures in real time. This technology is fairly new and the challenge is selecting the right wearable that can have an impact on the curricular and learning pedagogy. Imagine, though, how fast the adoption of technology has been in education: the transition from textbooks to desktops to laptops to mobile devices and now to wearables. Machine learning, a type of artificial intelligence, is another advanced technology facilitating a more personalized education experience. 
For example, machine-learning-enabled systems can take in new teacher or student assessment data, learn from it and dynamically adjust a learner's course, presenting material where more practice is needed or even scheduling meetings with teachers. Grading systems can use the same technique to interpret student behavior from their responses and realign the learning content or assessments. Here, educational data mining (EDM) can be used to reveal system usage behavior; a clustering technique is then applied to characterize learners' behavior and group them accordingly, as in the sketch below. Forerunners who pair these technologies with good, implementable ideas will help advance educational approaches. Learners will benefit from even more compelling and distinct learning experiences, and administrators and educators will have valuable tools for continually improving learning outcomes.
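The sketch below is one hedged way that clustering step could look, grouping learners by simple usage features with k-means from scikit-learn. The feature names, the numbers, and the choice of three clusters are illustrative assumptions, not a reference EDM pipeline, and the example assumes scikit-learn and NumPy are available.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-learner usage features:
# [videos watched, quiz accuracy (0-1), forum posts, avg. session minutes]
learners = np.array([
    [12, 0.91, 8, 35],
    [ 3, 0.42, 0, 10],
    [10, 0.88, 5, 40],
    [ 2, 0.35, 1,  8],
    [ 7, 0.65, 2, 22],
    [ 8, 0.70, 3, 25],
])

# Scale features so session minutes don't dominate quiz accuracy, then cluster.
features = StandardScaler().fit_transform(learners)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

# Each cluster can then be mapped to an intervention, e.g. extra practice
# material for a low-engagement group, enrichment for high performers.
for cluster_id in range(3):
    members = np.where(labels == cluster_id)[0]
    print(f"cluster {cluster_id}: learners {members.tolist()}")
```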
<urn:uuid:aa79a952-471e-4b64-9b86-842570ffefd5>
CC-MAIN-2017-04
http://www.ness.com/of-all-things-digital-in-learning/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00249-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939394
792
3.234375
3
Tech Glossary – C A cable modem is used for connecting to the Internet and is much faster than a typical dial-up modem. While a 56K modem can receive data at about 53 Kbps, cable modems support data transfer rates of up to 30 Mbps. Instead of connecting to a serial port like an external dial-up modem, cable modems attach to a standard Ethernet port so they can transfer data at the fastest speed possible. This term is pronounced like "cash" — not "catch," and definitely not "cash-ay." There are many different types of caches but they all serve the same purpose. A cache stores recently-used information in a place where it can be accessed extremely fast. For example, a Web browser like Internet Explorer uses a cache to store the pages, images, and URLs of recently visited Web sites on your hard drive. Another common type of cache is a disk cache. This stores information you have recently read from your hard disk in the computer's RAM, or memory. Since accessing RAM is much faster than reading data off the hard disk, this can help you access common files and folders on your hard drive much faster. A chipset describes the architecture of an integrated circuit. This includes the layout of the circuitry, the components used within the circuit, and the functionality of the circuit board. For example, the chipset of a modem card is much different from the chipset of a computer's CPU. This wireless technology enables communication between Bluetooth-compatible devices. It is used for short-range connections between desktop and laptop computers, PDAs (like the Palm Pilot or Handspring Visor), digital cameras, scanners, cellular phones, and printers. Cloud computing refers to applications and services offered over the Internet. These services are offered from data centers all over the world, which collectively are referred to as the "cloud." This metaphor represents the intangible, yet universal nature of the Internet. The name "codec" is short for "coder-decoder," which is pretty much what a codec does. Audio and video files are compressed with a certain codec when they are saved and then decompressed by the codec when they are played back.
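The cache entry above can be made concrete with a short Python sketch: the function below stands in for an expensive operation (a network fetch or disk read), and functools.lru_cache keeps recent results in memory, the same trade of a little fast storage for repeated slow work that browser and disk caches make. The URL and the simulated delay are only illustrative.

```python
import time
from functools import lru_cache

@lru_cache(maxsize=128)          # keep up to 128 recent results in memory
def fetch_page(url):
    """Stand-in for a slow operation such as a network request or disk read."""
    time.sleep(0.5)              # simulate the expensive part
    return f"<html>contents of {url}</html>"

start = time.perf_counter()
fetch_page("http://example.com")             # slow: does the "real" work
first = time.perf_counter() - start

start = time.perf_counter()
fetch_page("http://example.com")             # fast: answered from the cache
second = time.perf_counter() - start

print(f"first call: {first:.3f}s, cached call: {second:.6f}s")
print(fetch_page.cache_info())   # hits, misses, and current cache size
```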
<urn:uuid:90eb098d-ad28-441d-8169-32a6cf602ccf>
CC-MAIN-2017-04
http://icomputerdenver.com/tech-glossary/tech-glossary-c/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00065-ip-10-171-10-70.ec2.internal.warc.gz
en
0.924802
463
3.6875
4
HPC can save energy companies enormous amounts of time in the development of new products and technologies compared to the traditional methods, according to hpc4energy.org, an incubator project at the Lawrence Livermore National Laboratory. According to the accompanying infographic, HPC analysis is saving companies like General Electric, Robert Bosch, and the New England Independent System Operator (ISO) thousands of hours by using supercomputers to develop new products and technologies. By parallelizing the positive sequence load flow (PSLF) software calculations and running it on a supercomputer instead of a desktop PC, the GE Energy Consulting group was able to drop the time required to run the analysis from 23.5 days to 23 minutes. That's a savings of 33,817 minutes on the calculation that's used to model how large blocks of electricity can safely and securely flow across a power grid. Bosch also saw big time savings on calculations it was running to model how a gasoline engine can switch from traditional spark ignition (SI) to homogeneous charge compression ignition (HCCI), which car makers are considering adopting as a way to deliver Diesel-engine-like efficiencies in gasoline engines. It used to take Bosch 14 days to model 10 cycles in the engine as it transitions from SI to HCCI mode. Running on Sierra, Lawrence Livermore's 165-teraflop supercomputer, the company was able to model the transition in just 4.5 days. Meanwhile, the New England ISO used the lab's HPC resources on a project that is comparing two different ways of performing unit commitment (UC) calculations to match expected demand with generation sources on an electric grid. As the New England ISO seeks more energy from renewable sources, it's grappling with how to meet the demands of its 6.5 million customers while dealing with the lower reliability rates of renewable energy sources. By parallelizing its UC modeling software and running it on a 1,600-core system, the company was able to complete modeling in 90 minutes, compared to 33.5 days using a desktop. Lawrence Livermore and its HPC Innovation Center group started the hpc4energy project in 2011 with the goal of pairing national labs with a select group of energy companies to demonstrate how HPC resources can help the energy companies with their technology and product development, and to boost American competitiveness in the energy field. The lab put out a request for proposals, and in 2012 picked six of the 30 submitted proposals to execute, at no cost to the participants.
<urn:uuid:fb007dc0-b9f0-4f9d-be09-a3673d1ca7d1>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/06/04/hpc4energy_incubator_touts_successes_with_energy_companies/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00002-ip-10-171-10-70.ec2.internal.warc.gz
en
0.915612
521
2.765625
3
Sequencing the human genome has become an increasingly fast and cheap task. While simplification of this process is welcome, it also creates some issues regarding delivery and analysis of sequencing data. One company believes it can solve these issues with the cloud. Technological advancements have greatly simplified the process of sequencing. Deepak Singh, Ph.D., principal product manager for Amazon Web Services, underscores this point: "It took more than 10 years and billions of dollars to sequence the first human genome. Recent advances in genome sequencing technology have enabled researchers to tackle studies like the 1000 Genomes Project by collecting far more data faster." The task can now be accomplished in 24 hours for $1,000, creating an exponential growth in genomic data and introducing storage and delivery challenges. Last week, Technology Review profiled the startup DNANexus. The company views itself as a manager and distributor of data produced by sequencing centers. Genetic storage and analysis are accomplished through its platform, which leans on Amazon Web Services (AWS) rather than requiring an in-house cluster. DNANexus views the cloud as the best vehicle to deliver and analyze sequencing data. The process begins at the sequencing center, where lab data is uploaded to AWS through the DNANexus website. Once transferred, the information can be accessed from the Web and analyzed using tools built into the site. Andreas Sundquist, the company's CEO and cofounder, is banking on exponential growth for services like DNANexus. While Sundquist estimates that 20,000 complete genomes have been sequenced already, he anticipates that number will grow to a million in the next few years. If that figure becomes a reality, the amount of information produced could exceed an exabyte. DNANexus is not the only organization that recognizes the benefits of cloud services. Recently, the National Institutes of Health announced that data from the 1000 Genomes Project was publicly available through Amazon Web Services. Since the project's inception in 2008, its dataset has grown to roughly 200 terabytes of genomic information. In the future, Sundquist would like to see his company aggregate multiple genetic databases, possibly leading to better research and treatment of genetic-based diseases. He also believes, given the improving technology, that every member of developed nations will have their genome sequenced. This prediction even includes newborn babies. "I think probably you'll stick your thumb in your cell phone and it will be built-in," says Sundquist. While there isn't currently an app for that, it's not impossible to imagine one down the road.
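As a small, hedged illustration of working with that public AWS copy of the 1000 Genomes data, the Python sketch below lists a few objects from the project's open S3 bucket using anonymous (unsigned) access via boto3. The bucket name and prefix are assumptions about the public release and may differ; this is not DNANexus's own upload or analysis path, which the article says runs through the company's website.

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous client: public open-data buckets can be read without AWS credentials.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# Assumed location of the 1000 Genomes public release on AWS (illustrative).
BUCKET = "1000genomes"
PREFIX = "release/"

response = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX, MaxKeys=10)
for obj in response.get("Contents", []):
    print(f'{obj["Size"]:>14}  {obj["Key"]}')

# Any listed key could then be pulled down for local analysis with:
#   s3.download_file(BUCKET, key, "local_filename")
```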
<urn:uuid:7bf575b5-528f-4169-9d56-a70120c8a92b>
CC-MAIN-2017-04
https://www.hpcwire.com/2012/05/07/genomic_data_gets_comfy_in_the_cloud/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00398-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949923
534
2.59375
3
Photomultiplier Tube Technology Historically, gamma camera technology dates back to the late 1950s when Hal Anger invented the first clinically successful scintillation camera. Earlier, Benedict Cassen’s invention of the rectilinear scanner capable of performing static studies using radioactive tracers brought about the emergence of nuclear imaging. However, it was Anger’s initial scintillation camera design based on seven photomultiplier tubes (PMTs) coupled to a Nal scintillation crystal that revolutionized nuclear imaging and established itself as a core medical imaging modality. The application of PMT technology to accurately determine the position of where scintillation occurred (and thus the photon’s origin) enabled clinicians to capture functional images of entire organs with faster acquisition times and higher resolution. Although manufacturers have achieved significant improvements in image resolution and uniformity since that time, the majority of today’s commercial gamma camera systems still utilize the same underlying detection technology used in traditional Anger cameras. Detection occurs through a two-step process; first, a NaI(TI) (thallium-doped sodium iodide) scintillation crystal captures and transforms a released γ-photon into a light photon. A photomultiplier tube mounted onto the scintillation crystal then transforms the light photon into an amplified electric signal proportional to the energy of the original scintillation light. State-of-the-art PMT technology now equips photomultiplier tubes with individual analog-to-digital converters (ADC) that can carry out individual digitalization and signal processing including sampling, integration, and event positioning. Earlier systems composed of analog electrical components offered less system stability and greater noise, resulting in lower quality images than digital systems. Manufacturers realized that the earlier the digitalization of an electrical signal was performed, the more one could minimize signal and image degradation. In addition, manufacturers now construct PMTs in various geometries (rectangular, circular, and hexagonal). Consequently, cameras now contain larger and higher-efficient fields-of-view (FOV) that minimize dead space and increase image resolution. Finally, significant improvements in attenuation correction and image reconstruction algorithms have resulted in clearer and more uniform images. However, the demand for cameras with larger FOVs by end-users (especially in nuclear cardiology) has also resulted in manufacturers increasing the number of PMTs within its detectors. End-users are attracted to cameras with larger FOVs for their ability to minimize image artifacts and improve workflow. However, to create larger FOVs, manufacturers must increase the number of PMTs, resulting in significant increases in system weight, bulkiness, and cost. These larger footprints for PMT-based systems have placed severe limitations on the mobility and installation options for gamma cameras. Solid-State Detector Technology The development of solid-state detectors in nuclear imaging provides an affordable and attractive solution for the next generation of SPECT cameras. Historically, earlier generation solid-state detectors severely underperformed PMT-based cameras which resulted in limited clinical adoption and stagnant sales. 
However, advancements in solid-state technology electronics and scintillation-detector materials have significantly improved solid-state based SPECT detector performance such that it equals or exceeds PMT-based cameras in all major performance categories, including energy resolution, intrinsic spatial footprint size, and power requirements. In addition, solid-state based electronics offer higher sensitivity and a low signal-to-noise ratio (SNR), which translates to images with higher clarity, that are produced with shorter acquisition times. Most important is the fact that SPECT cameras based on cadmium zinc telluride (CZT) and cesium iodide thallium (CsI(TI)) crystals can convert and digitalize gamma radiation in a single step, eliminating the need for bulky PMT technology. The advances made allow manufacturers to offer systems with smaller footprints and increased mobility as both weight and size dimensions are drastically reduced. According to Dr. Jack Juni of CardiArc, despite the fact that virtually all cardiologists refer patients for SPECT imaging to diagnose CAD development, less than 10 percent of them possess imaging equipment. With size being the most important limiting factor, solid-state manufacturers emphasize that their systems have the advantage of being small enough to fit into the majority of physician offices, with system footprints as small as 7 X 8 feet*, requiring minimal to no room modifications. With prices similar to traditional Anger-based cameras, preference for dedicated cardiac cameras should shift to solid-state digital detectors. Growing adoption of these cameras by freestanding imaging centers and physician offices could significantly increase procedure volumes, driving demand for additional installed base units. Historically, 15 years after Anger made his scintillation camera commercially available, a clinical survey showed that greater than half of its respondents still used the rectilinear scanner despite Anger’s camera offering superior image quality and faster acquisition times. Similarly, solid-state detector technology has been approved by the FDA since 1997, but high cost and sporadic ability of CZT crystals have limited the adoption of the technology by a majority of companies. In addition, long replacement cycles and substantial cuts in reimbursement may have had a stronger adverse impact on technology adoption than previously expected. One strong driver for solid-state technology could be its ability to be coupled with other imaging modalities, such as MRI, to create novel hybrid systems capable of producing both functional and anatomical high-resolution images. In addition, solid-state crystals have been demonstrated to contain the best low-energy resolution and are sensitive to a wider range of energies, thus potentially allowing dual radionucleotide imaging of similar energies with high accuracy. Despite the promise of this new technology, major manufacturers have been slow to adopt it. While some have created prototype SPECT cameras with CZT digital detectors, none are currently prepared to offer solid-state digital detectors on the commercial market. One restraint could be the high manufacturing costs and sporadic availability of CZT crystals. Despite the fact that CZT crystals offer the best energy resolution, some manufacturers have chosen to work with CsI crystals that offer comparable sensitivity and image resolution at a lower cost. 
Although the future for solid-state digital detectors remains uncertain, it looks very promising, as manufacturers strive to meet the growing demand for smaller and more versatile dedicated cardiac cameras for nuclear cardiology. *This applies to currently available commercial solid-state detectors. At least one manufacturer (CardiArc) is taking pre-orders for a gamma camera that would offer a 6' X 7' footprint.
<urn:uuid:d636c2aa-b4b2-4150-ae7d-9f918a5733d9>
CC-MAIN-2017-04
http://www.frost.com/sublib/display-market-insight-top.do?id=11200288
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00086-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931632
1,349
3.328125
3
The Layer 2 Tunneling Protocol (L2TP) is a standard protocol for tunneling L2 traffic over an IP network. Its ability to carry almost any L2 data format over IP or other L3 networks makes it particularly useful. But L2TP remains little-known outside of certain niches, perhaps because early versions of the specification were limited to carrying PPP -- a limitation that is now removed. It is desirable to tunnel L2 traffic over routed L3 networks because L2 networks are generally more transparent, easier to configure and easier to manage than L3 networks. These are desirable properties for a range of applications. In data centers, a flat network is essential for promoting virtual machine (VM) mobility between physical hosts. In companies with multiple premises, the sharing of infrastructure and resources between remote offices can be simplified by L2 tunneling. The L2TP protocol itself is an open standard defined by the IETF. This article concentrates on the latest Version 3 of the specification, which describes tunneling multiple L2 protocols over various types of packet-switched networks (PSN). The standard discusses tunneling over IP, UDP, Frame Relay and ATM PSNs. [ IN DEPTH: Complete guide to network virtualization ] An L2TP connection comprises two components: a tunnel and a session. The tunnel provides a reliable transport between two L2TP Control Connection Endpoints (LCCEs) and carries only control packets. The session is logically contained within the tunnel and carries user data. A single tunnel may contain multiple sessions, with user data kept separate by session identifier numbers in the L2TP data encapsulation headers. Conspicuously absent from the L2TP specification are any security or authentication mechanisms. It is typical to deploy L2TP alongside other technologies, for example IPSec, to provide these features. This gives L2TP the flexibility to interoperate with various different security mechanisms within a network. The four use cases discussed below illustrate how L2TP works in a variety of scenarios, from simple point-to-point links to large networks. Whether you're running a single-site corporate LAN or a complicated multi-site network, L2TP has the scalability to fit into your architecture. L2TP/IPSec as a VPN Today, with diverse mobile devices used throughout businesses, and pervasive availability of broadband in the home, most corporate networks must provide remote access as a basic necessity. Virtual private network (VPN) technologies are an essential part of meeting that need. Since L2TP doesn't provide any authentication or encryption mechanisms directly, both of which are key features of a VPN, L2TP is usually paired with IPSec to provide encryption of user and control packets within the L2TP tunnel. Figure 1 shows a simplified VPN configuration. Here the corporate network on the right contains an L2TP Network Server (LNS) providing access to the network. Remote workers and mobile devices may join the corporate network via IPSec-secured L2TP tunnels over any intermediate network (most likely the Internet). Clients attaching to the VPN will often run L2TP and IPSec software directly. It is normally unnecessary to install extra software in client systems to communicate with an L2TP VPN server: L2TP VPN software is provided with Windows, OS X, iOS, Android and Linux systems. L2TP to extend a LAN An L2TP-based VPN works well to allow individual clients to make single links with a remote LAN. 
Our next example takes the VPN concept and runs with it, employing L2TP to merge two or more LANs. Many businesses have the challenge of managing several remote locations, all of which must share data and network infrastructure. By using L2TP to provide tunnels between each individual LAN, we can create one unified network with easy access to resources from any location. Figure 2 shows a simple deployment using L2TP to join two LANs over the Internet. Rather than running L2TP software on each host in each office, a separate machine is used as an LCCE endpoint at each office location. The LCCE machines bridge Ethernet frames from the LAN with the L2TP interface to the remote site, thereby acting as a gateway between the LANs. Depending on the LAN configuration and the nature of the intermediate network, it may be necessary or desirable to add packet filters at the LCCE to confine certain traffic to the LAN of origin instead of passing it over the tunnel. Just as in the point-to-point VPN case, security is an important consideration for remote office connections. IPSec is usually deployed to provide traffic encryption between sites. L2TP as a part of an ISP network So far we've considered using L2TP as a means of extending a corporate network, but as we scale up outside of the office L2TP continues to prove useful. Our next example (see Figure 3) shows how L2TP is employed as a part of an Internet Service Provider (ISP) network. Here L2TP is used to tunnel data from a customer's premises to the ISP's IP network. The L2TP tunnels and sessions span an intermediate network managed by a wholesale provider, which sells access to the ISP directly. Individual customers connect to a local LCCE acting as an L2TP Access Concentrator (LAC), which is administered by the wholesale provider. The LAC will dynamically create L2TP tunnels and sessions to the customer's ISP. Information on which ISP to tunnel to might be based on static configuration stored on the LAC, or discovered using a RADIUS lookup when the customer connects. This configuration allows the ISP to manage client IP allocation and Internet access as they choose, since each client device behaves as though it were connected into their L2 network. L2TP in a public-access Wi-Fi network Our final example considers networking an urban area or large corporate campus, using L2TP as an integral part of a public access Wi-Fi network. In this configuration, shown in Figure 4, local Wi-Fi access points provide client devices with Internet access. Each access point forwards client data over an L2TP session to a centralized network. This network manages IP address allocation and routing to the Internet, typically with network address translation. Using L2TP in this network allows a single supplier to provide Internet access to a wide variety of customers without needing to manage an Internet connection at each Wi-Fi access point location. Choosing WiMax as an interconnect allows metropolitan area networks to be provided with Wi-Fi access using a single high-bandwidth Internet connection. Although L2TP has a history of being a rather obscure protocol, L2TPv3 provides immense flexibility for all kinds of uses. In any situation where you need the flat topology and "plug and play" configuration of a Layer 2 network, L2TP is a mature technology that can work well. As with any established and open protocol, L2TP is widely supported on a variety of target platforms, including mobile devices. 
Even better, with multiple projects supporting L2TP on Linux or BSD platforms, there is no need to make expensive hardware investments to support an L2TP deployment on your network. Katalix Systems is a software consultancy based in the U.K., with expertise in Linux, networking and embedded systems. Katalix develops both off-the-shelf and bespoke software solutions, and maintains the L2TP subsystem of the Linux kernel. ProL2TP, their enterprise-class L2TP software suite, provides comprehensive L2TPv3 support on generic Linux systems. Read more about lan and wan in Network World's LAN & WAN section. This story, "What Can L2TP Do for Your Network?" was originally published by Network World.
<urn:uuid:a1583c98-39c6-466d-b5db-4c642513e592>
CC-MAIN-2017-04
http://www.cio.com/article/2388503/networking/what-can-l2tp-do-for-your-network-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00480-ip-10-171-10-70.ec2.internal.warc.gz
en
0.908186
1,611
3.46875
3
The virtualization wars will heat up in 2008 as competitors vie to provide the management hub for virtualized utility infrastructures in the enterprise. Microsoft, for example, has announced its intention to become a dominant player for what they call “Dynamic IT.” However, virtualization is not one technology but several. It's time to take a step back and review what virtualization is and how it works. This note reviews four main types of virtualization: 1. Hypervisored or Virtual Machine Virtualization. 2. Operating System Virtualization. 3. Application Virtualization. 4. Presentation Virtualization. Virtualization is essentially a trick that fools software or hardware into thinking it is dealing with something real, when in fact it is interacting with an abstraction. The various forms of virtualization are differentiated by where that “abstraction layer” occurs.
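Because the abstraction layer is meant to be invisible, software usually has to hunt for small clues to tell whether the "hardware" it sees is real. As a rough illustration (not part of the original note, and only a heuristic), a Linux guest can check the hypervisor CPU flag and the firmware's reported product name:

    from pathlib import Path

    def cpu_reports_hypervisor() -> bool:
        # Most hypervisors expose a "hypervisor" flag to guests in /proc/cpuinfo.
        try:
            return "hypervisor" in Path("/proc/cpuinfo").read_text()
        except OSError:
            return False

    def dmi_product_name() -> str:
        # Firmware strings often name the platform, e.g. "VirtualBox" or "KVM".
        try:
            return Path("/sys/class/dmi/id/product_name").read_text().strip()
        except OSError:
            return "unknown"

    if __name__ == "__main__":
        print("hypervisor flag present:", cpu_reports_hypervisor())
        print("DMI product name:", dmi_product_name())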
<urn:uuid:a69ceb51-4eb3-4826-a5e9-a69db3ec7868>
CC-MAIN-2017-04
https://www.infotech.com/research/peeling-the-virtualization-onion-without-tears
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00480-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931621
183
2.875
3
It sounds like a math phobic's worst nightmare or perhaps Good Will Hunting for the ages. Those wacky folks at the Defense Advanced Research Projects Agency have put out a research request, called Mathematical Challenges, with the mighty goal of "dramatically revolutionizing mathematics and thereby strengthening DoD's scientific and technological capabilities." The challenges are in fact 23 questions that, if answered, would offer a high potential for major mathematical breakthroughs, DARPA said. So if you have ever wanted to settle the Riemann Hypothesis, which I won't begin to describe here but which experts call one of the great unanswered questions in math history, or if you've always had a theory about Dark Energy, which in a nutshell holds that the universe is ever-expanding, this may be your calling. DARPA states, perhaps predictably, that research grants will be awarded individually but doesn't say how much they'd be worth. The agency does say you'd need to submit your research plan by Sept. 29, 2009. So if you're game, take your pick of the following questions and have at it. - The Mathematics of the Brain: Develop a mathematical theory to build a functional model of the brain that is mathematically consistent and predictive rather than merely biologically inspired. - The Dynamics of Networks: Develop the high-dimensional mathematics needed to accurately model and predict behavior in large-scale distributed networks that evolve over time occurring in communication, biology and the social sciences. - Capture and Harness Stochasticity in Nature: Address Mumford's call for new mathematics for the 21st century. Develop methods that capture persistence in stochastic environments. - 21st Century Fluids: Classical fluid dynamics and the Navier-Stokes Equation were extraordinarily successful in obtaining quantitative understanding of shock waves, turbulence and solitons, but new methods are needed to tackle complex fluids such as foams, suspensions, gels and liquid crystals. - Biological Quantum Field Theory: Quantum and statistical methods have had great success modeling virus evolution. Can such techniques be used to model more complex systems such as bacteria? Can these techniques be used to control pathogen evolution? - Computational Duality: Duality in mathematics has been a profound tool for theoretical understanding. Can it be extended to develop principled computational techniques where duality and geometry are the basis for novel algorithms? - Occam's Razor in Many Dimensions: As data collection increases can we "do more with less" by finding lower bounds for sensing complexity in systems? This is related to questions about entropy maximization algorithms. - Beyond Convex Optimization: Can linear algebra be replaced by algebraic geometry in a systematic way? - What are the Physical Consequences of Perelman's Proof of Thurston's Geometrization Theorem?: Can profound theoretical advances in understanding three dimensions be applied to construct and manipulate structures across scales to fabricate novel materials? - Algorithmic Origami and Biology: Build a stronger mathematical theory for isometric and rigid embedding that can give insight into protein folding. - Optimal Nanostructures: Develop new mathematics for constructing optimal globally symmetric structures by following simple local rules via the process of nanoscale self-assembly. - The Mathematics of Quantum Computing, Algorithms, and Entanglement: In the last century we learned how quantum phenomena shape our world.
In the coming century we need to develop the mathematics required to control the quantum world. - Creating a Game Theory that Scales: What new scalable mathematics is needed to replace the traditional Partial Differential Equations (PDE) approach to differential games? - An Information Theory for Virus Evolution: Can Shannon's theory shed light on this fundamental area of biology? - The Geometry of Genome Space: What notion of distance is needed to incorporate biological utility? - What are the Symmetries and Action Principles for Biology?: Extend our understanding of symmetries and action principles in biology along the lines of classical thermodynamics, to include important biological concepts such as robustness, modularity, evolvability and variability. - Geometric Langlands and Quantum Physics: How does the Langlands program, which originated in number theory and representation theory, explain the fundamental symmetries of physics? And vice versa? - Arithmetic Langlands, Topology, and Geometry: What is the role of homotopy theory in the classical, geometric, and quantum Langlands programs? - Settle the Riemann Hypothesis: The Holy Grail of number theory (a standard statement of the conjecture appears just after this list). - Computation at Scale: How can we develop asymptotics for a world with massively many degrees of freedom? - Settle the Hodge Conjecture: This conjecture in algebraic geometry is a metaphor for transforming transcendental computations into algebraic ones. - Settle the Smooth Poincare Conjecture in Dimension 4: What are the implications for space-time and cosmology? And might the answer unlock the secret of "dark energy"? - What are the Fundamental Laws of Biology?: This question will remain front and center for the next 100 years. DARPA places this challenge last as finding these laws will undoubtedly require the mathematics developed in answering several of the questions listed above.
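For reference, since the article declines to describe it, the Riemann Hypothesis can be stated in a few lines (added here for clarity, not part of the original piece). In LaTeX:

    % The Riemann zeta function is defined for Re(s) > 1 by the series
    \zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}},
    % and extends analytically to the whole complex plane except for a simple pole at s = 1.
    % Apart from the "trivial" zeros at s = -2, -4, -6, \ldots, the hypothesis asserts that
    \text{every non-trivial zero } s \text{ of } \zeta \text{ satisfies } \operatorname{Re}(s) = \tfrac{1}{2}.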
<urn:uuid:de090b22-d300-431d-a512-8e6994196067>
CC-MAIN-2017-04
http://www.networkworld.com/article/2346451/security/the-world-s-23-toughest-math-questions.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00232-ip-10-171-10-70.ec2.internal.warc.gz
en
0.911079
1,093
3.09375
3
The Night Shift feature in iOS 9.3 lets you adjust the color temperature of the display, shifting away from blue spectrums of light, in the putative interest of improving sleep. But Apple makes no promises. On its website, Apple notes, “Many studies have shown that exposure to bright blue light in the evening can affect your circadian rhythms and make it harder to fall asleep.” In iOS, the feature is explained with “This may help you get a better night’s sleep.” In fact, this feature likely will have little or no effect on most people. Apple hasn’t misrepresented any of the science, but clinical work done to date doesn’t point a finger right at mobile devices or even larger displays. Night Shift also can’t remove enough blue to make a difference if that color is the culprit. And blue light may not be the trigger it’s been identified as. While researchers haven’t tested the new feature yet, several factors add up to at best a placebo effect and a reminder to power yourself down. Apple might have done better to create something called Night Safe, an option that would countdown the moments until you’d be locked out of your hardware till morning except for emergencies or going through a tedious override process—a Do Not Disturb on reverse steroids. Jumping to the chase, if you’re ready to crash: If you want to sleep better, the almost universal suggestion from both sleep and lighting researchers is to turn off any screen two hours before your planned bedtime. Some also recommend using warmer lighting throughout your house in sources you use in the later evening. Why do you feel blue? Our circadian rhythm, a biological cycle, regulates how our body functions and repairs itself, although it’s commonly associated with sleep and wakefulness. It’s roughly 24 hours for human beings, and our bodies use a number of cues to keep us on track. Getting out of sync can contribute to illness, obesity, diabetes, and even an increased risk of cancer. Researchers have conducted studies over decades that isolate people from external cues to see what a natural cycle looks like, and how we sleep and wake. More recently, a lot of clinical and survey work has looked into measuring the effect of lighting: cycles of light and dark, light temperature, brightness, and other factors. A discovery about 20 years ago helped make a connection, the limits of which are still being felt out. Many animals, including humans, produce the hormone melatonin across the circadian cycle, but it’s suppressed to low levels during natural waking hours. As it gets dark, that suppression abates, and melatonin production helps us become sleepy and remain asleep. (It has many other attributes, too, and other hormones have cycles that seem less tied to sleep.) Melatonin production starts ramping up about two hours before your body’s natural sleep cycle would start—often described in research as about 10 p.m. in local time. And it’s produced in the largest quantities in the wee hours, wherever in the world you are, right in the middle of what your body perceives as the darkest time of day. Since this light receptor type was discovered, scientists have connected in many, many studies not just light and melatonin suppression, but specifically light that’s heavy on blue frequencies. Blue light can ostensibly offset the cycle of hormone production by a couple of hours or more. This has led to speculation that staring at television sets, monitors, and mobile displays disrupts or delays sleep. 
If you have to get up or are woken up at a fixed time, as for most people, this both reduces sleep and throws off the body’s endocrine and other systems. Daylight has a large proportion of short-wavelength light at the blue end of the spectrum (around 460 nanometers). Indoor lighting has traditionally been “warmer,” or towards the yellow, longer-wavelength end (about 555 nm) or red at the far end (650 nm). That’s true of fire and most incandescent lighting. But lighting over recent decades has shifted towards cool, “white,” or “daylight” illumination, whether incandescent, fluorescent, or LED. While thought of as whiter, these sources actually produce bluer light, resembling more closely our perception of a sunlit day. This description of color gets labeled color temperature, and is measured in kelvins (K). On one end of the spectrum, you have red/yellow candlelight at 1,000K, considered very warm; at the other end, pure blue sky is 10,000K, considered very cool. Most LCD monitors and mobile displays can calibrate against a standard called D65, which centers at 6,500K, fairly blue and fairly cool—it’s described as outdoor daylight at noon. Many displays are tuned or default to a higher temperature, though, and are much bluer. Specific research and reasonable speculation centers around how predominantly blue light from television sets, computer monitors, laptop displays, and mobile screens might be connected with the increase of a host of ailments in nations in which a large percentage of residents use those technologies before and at bedtime. Of special interest is the simple lack of sleep. The CDC estimates 50 to 70 million U.S. adults have disorders that prevent them from sleeping enough to be alert, productive, and rested on an average day. All the discussion of blue light has led to programs and extensions for many computer platforms that attempt to reduce the production of blue light in order to avert circadian rhythm disruption. The f.lux software is a well-known example, available for OS X, Windows, Linux, and rooted Android phones. (It could be installed through a work-around in iOS, until Apple asked f.lux to stop distributing it.) iOS’s Night Shift is just the latest entrant for color-temperature shifting, albeit making it available to roughly 500 million devices via iOS 9.3. Only devices released starting in about 2013 have hardware that supports the feature, according to Apple’s feature notes. But the big problem is that there’s no solid evidence that mobile screens’ color temperature is the real culprit, nor whether devices and monitors can shift enough to matter if they were—or even if blue light on its own is the trigger. While exposure to colors of light has been well researched, it’s not entirely clear that merely seeing light heavy in the blue part of the rainbow is the trigger—or at least the sole trigger. It may be that a shift in color in the hours around twilight, which comes with a change from blue to yellow, could be a more significant marker. Blue may be a red herring. It might also be the intensity of light or the proportion of the visual field it occupies. A large, bright screen that’s far away could have as little or the same effect as a small, bright screen close up. Many of the studies until recently used full-room illumination or specifically-tuned light sources (like panels used to treat seasonal affective disorder), and have taken place in highly controlled laboratory environments that block all other light.
Because of the cost and complexity of the experiments, the most rigidly constructed ones often involve only a dozen or so individuals who spend several days under observation. Mariana G. Figueiro, a professor at Rensselaer Polytechnic Institute and the program director of its Lighting Research Center, says her group has used precise measurements of light sources and displays to calculate predicted effects and performed clinical testing to test outcomes. She notes there’s a huge variation between an iPhone, a tablet, and large-screen televisions. “People tend to have a misconception that because it looks bright, because your visual system is so sensitive, that it is affecting your melatonin,” she says. Her work and that of others has shown you “can still suppress melatonin with a warm color if it’s a high light level.” Even what’s being displayed matters. Dr. Figueiro says a Facebook page with a white background and mostly text produces more light than the same page viewed with white on black text. Although she hasn’t tested Night Shift yet, she says that in terms of size and brightness, it’s more likely that an effect on melatonin production would come from adjusting an iPad Pro than an iPhone of any size, due to light and intensity of light produced. But beyond the variation, there’s the degree of blue removal. Ray Soneira, the president of DisplayMate, a company that makes video-diagnostic hardware and software, says that Night Shift and related software doesn’t turn down blue spectra in the correct range enough, thus not providing assistance even if other physiological factors prove true. Via email, Dr. Soneira explains that he feels there’s a paucity of “understanding of displays, light spectra, or human color vision” among many researchers in the field that’s leading to a mismatch in what’s being tested and conclusions reached. As a result, those studies are influencing system design without a firm grounding. In the case of Night Shift and similar systems, he argues that the blue component would need to be entirely removed or reduced significantly more than the systems offer, which in turn would make the display too yellow for most people. He writes, “Just slightly reducing the blue, which is what most apps do, won’t accomplish much, so the improvements people experience are often mostly due to placebo and their own conscious modification of their behavior in using displays.” In any case, Dr. Figueiro says sleep research shows there’s an extremely important and often overlooked factor that requires more discipline than an automatic color-temperature adjustment. “Disruption of sleep is not just melatonin suppression; it’s what you’re doing to your brain to keep it alert,” she says. She recommends turning off all your screens two hours before going to bed. “These programs help, but they don’t completely remove the possibility of suppressing melatonin.” But she’s not disregarding color as a factor. Instead of focusing on screens, her group is working on an app that would gather information about your light exposure across a day and make recommendations about the best times to get the right light. With remote-controlled, color-variable bulbs from Hue and others, she suggests a future in which this app could change overall lighting to fit your needs, and, just maybe, have a real impact on your sleep. This story, "iOS 9.3: The new Night Shift feature probably won’t help you sleep better" was originally published by Macworld.
<urn:uuid:8666bcae-ebf3-488e-acc3-066aeabc2438>
CC-MAIN-2017-04
http://www.itnews.com/article/3047121/iphone-ipad/ios-93-the-new-night-shift-feature-probably-wont-help-you-sleep-better.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00140-ip-10-171-10-70.ec2.internal.warc.gz
en
0.938904
2,264
2.8125
3
The water coming from the tap doesn't look right, or smell right. Is it time to worry, or can the pitchers be filled and poured? When this situation happened in December of 2004, residents of Halifax, Nova Scotia, were able to log on to the city's Web site and instantly find out that the reason for the water discoloration was a broken water main. They found out where crews were working to fix the broken pipes, what streets were closed and what to do about drinking water. The Halifax Regional Municipality (HRM) Alert System is a system designed for emergency situations. Whether it is inclement weather, natural disasters or man-made situations, critical information can easily flow from municipal leaders to citizens, calming panic and coordinating efforts of emergency personnel. Serving an area of approximately 2,353 square miles -- about the size of Delaware -- with a population of approximately 360,000, the HRM Alert System was designed to work for both regional emergencies and more local problems. After the terrorist attacks of September 11th forced the rerouting of air traffic, and the devastation of Hurricane Juan in 2003 required the same, it was obvious that some kind of information relay system was necessary. By taking suggestions from residents, as well as statistical information from the Web site of the time, Halifax was able to design what Richard Herritt, acting manager of E-Commerce and Web Services for Halifax, described as a system which "provides the ability to display an attention-drawing message on key pages of the Halifax.ca Web site." Along with the Web site alerts, Halifax has a call center that gives people "a concise, consolidated source of all emergency information." The 24/7 center is fed the same information as the Web site, from the same database. What made the alert system work well when first used in the water main incident was not so much the technological aspect, but compliance with the appropriate protocols and processes set out to make it work. Herritt explained that the code for the program is simple, with less than 300 lines of code and HTML to display the database and less than 700 lines of code and HTML to administer the data. It is the "efficient/timely updating of information when the system is active" which has made the HRM Alert System a success. For example, even when there is no emergency situation, the system is updated regularly with date and time stamps, and out-of-date information is given new status, which is "just as important as new information to maintaining the trust of the citizen in the timeliness and accuracy of the messages," explains Herritt. It is important in emergency situations to have reliable information, so the HRM Alert System was designed to eliminate conflicting reports by being a single point of access. When an emergency occurs, the Alert System can be activated by anyone with the correct credentials from any computer with Internet access. This, combined with the call center, provides timely broadcasting of emergency information to the citizens. By site tracking, Halifax is able to measure citizens' confidence in the Alert System. "With each subsequent event (of similar scope) we have recorded continuous increases in the usage of the messaging system by citizens and have received feedback from a number of different audiences (citizens, businesses, media) indicating the usefulness and growing expectations that these users now have on this system," explained Herritt.
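The pattern Herritt describes, a single data store feeding both the Web banner and the call center, with every message timestamped and stale entries re-flagged rather than silently left in place, is simple enough to sketch. The snippet below is purely illustrative; the real HRM code is not public, and the field names, threshold and sample message here are invented:

    from datetime import datetime, timedelta

    STALE_AFTER = timedelta(hours=2)   # invented threshold for re-flagging old items

    # One list stands in for the shared database read by the site and the call center.
    alerts = [
        {"msg": "Water main break; crews on site, avoid affected streets",
         "updated": datetime(2004, 12, 6, 14, 30), "active": True},
    ]

    def banner_text(now=None):
        """Build the attention-drawing banner for key pages, marking stale entries."""
        now = now or datetime.now()
        lines = []
        for a in alerts:
            if not a["active"]:
                continue
            stamp = a["updated"].strftime("%Y-%m-%d %H:%M")
            suffix = " (update pending)" if now - a["updated"] > STALE_AFTER else ""
            lines.append(f"[{stamp}] {a['msg']}{suffix}")
        return "\n".join(lines) or "No active alerts."

    print(banner_text())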
With this expanded confidence and use, the HRM Alert System is planned to expand to pages on the Web site which are not "entry sites," such as transit schedule and event pages that may be bookmarked by users, so as to increase the number of users exposed to emergency alerts. So the next time a major snowstorm hits Halifax, or the water becomes contaminated, residents will have all the information they need with the click of a mouse.
<urn:uuid:29dc029c-ffcb-471d-9a02-5044638ee4ce>
CC-MAIN-2017-04
http://www.govtech.com/e-government/Halifax-Regional-Municipality-Alert-System.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00378-ip-10-171-10-70.ec2.internal.warc.gz
en
0.961648
783
2.84375
3
Cyber Security is hard to get a handle on. Ask even the most sophisticated experts. It’s constantly evolving and changing. Cyber criminals are coming up with new exploits as quickly as companies are able to patch them up. Cyber security is a constant battle between the “good guys” and the “bad guys.” In this case, the good guys are your company and its employees, and the bad guys are trying to shut you down, steal your information, or take your identity. As is always the case, it’s not as simple as finding a solution to keep the bad guys away forever. That’s just not possible. Cyber security is an everyday battle. Vigilance is needed. Reliable tools of combat are a necessity. Disaster can strike at any moment. Luckily, there are ways to stay safe. Antivirus software, third-party security firms, and well-thought-out levels of encryption and access control are only some of the ways to keep the bad guys at bay. The most important tool, however, is knowledge. If you don’t know what you’re up against, how can you hope to combat it? We’ve compiled some of the best information available to teach you the ins and outs of corporate cyber security, including: - Cloud Security - Internet of Things Security - Mobile Payment Security - Cyber Insurance After reading this article you’ll be ready to start building a digital fortress against any intruders that may wish you harm. Malware is any type of malicious software that tries to infect a computer or mobile device. Types of malware include spyware, adware, phishing, viruses, Trojan horses, worms, rootkits, ransomware and browser hijackers. Malware most often comes through the internet or email. It can come from hacked websites, game demos, music files, toolbars, software, free subscriptions, or anything else you download from the web onto a device which is not protected with anti-malware software. Spyware is a type of malware that cybercriminals use to spy on you, and it lets them gain access to personal information, banking details, or online activity. It collects information about your surfing habits, browsing history, or personal information (such as credit card numbers), and often uses the Internet to pass this information along to third parties without you knowing. Keyloggers are a type of spyware that monitors your keystrokes. Spyware is often bundled with other software or downloaded on file-sharing sites. It is secretive, so often people are unaware it’s on their computer. New or unidentifiable icons appearing in the task bar, searches redirecting you to a different search engine and random error messages could indicate you’ve been infected with spyware. Adware bombards users with ads and pop-up windows that could be dangerous to the device. It can also be a type of free software supported by advertisements that show up in pop-up windows or on a toolbar on your computer or browser. Adware comes the same way spyware does, and you can detect it by noticing ads popping up in applications where you haven’t seen them before. If your browser’s homepage has changed on its own it may be the result of adware. A computer virus is a program or piece of code that is loaded onto your computer without your knowledge or permission. Most viruses are destructive and designed to infect and gain control over vulnerable systems. A virus can spread across computers and networks by making copies of itself. Viruses come from commonly used programs or files attached to emails.
If you have a slow or non-existent internet connection, or your antivirus and firewall have been disabled, it may be a virus. A Trojan virus is a type of virus that pretends to be something useful, helpful or fun, but really steals data. Trojans often are spread via an infected email attachment or a download that hides in free games, applications, movies or greeting cards. Your computer will often slow down due to a Trojan. A Computer Worm is a program that self-replicates and spreads through networks. Worms are transmitted through attachments, file-sharing networks and links. They consume large amounts of memory or bandwidth, and network servers will often stop responding because of them. A Rootkit is a program that allows hackers to have administrative access to your computer. It can be installed through commercial security products or third-party app extensions. Detecting rootkit-like behavior can be tedious work. When searching your system memory, monitor all ingress points for invoked processes, keeping track of imported library calls (from DLLs) that may be hooked or redirected to other functions. A Browser Hijacker takes over your computer’s browser settings and redirects to websites of its choice. They come from add-on software, extensions, browser helper objects and toolbars. If your browser’s home page is overridden, and when you try to open it you’re automatically redirected to the hijacker’s website, you may have a browser hijacker. Phishing attacks occur when a cybercriminal attempts to trick someone into giving over sensitive information such as social security numbers, passwords, bank account information, credit card information, PIN numbers, addresses, social media accounts, birthdays, etc. Some statistics: - 23% of email recipients open phishing messages - 11% click on attachments - 15-20% of workers’ web sessions are initiated by clicking a link in an email - 92% of employees trust the security of the company’s email system and feel their email is safe Cybercriminals typically use phishing attacks in order to steal identities or sell the information. The information gathered can let criminals withdraw money, make purchases, open credit card accounts and more. Phishing attacks always involve deception in some way. Cybercriminals will create fake messages and websites to trick users into giving over information. They will use photos, names and company info to make the messages and websites look as legitimate as possible. Messages could come from financial institutions, government agencies, retailers, social networks, and even friends. The cybercriminals may even redirect to a legitimate site and use a fake pop-up to gather info, or gather info then send the user to a legitimate site, leaving them none the wiser. The information is gathered by getting the user to reply, follow a link, or download an attachment. There are a number of strategies for phishing attacks. Heimdal Security gives great explanations in its blog post, The ABCs of Detecting and Preventing Phishing: Spear Phishing: An email directed at specific individuals or companies. Attackers gather all information available about the target including personal history, interests, activities, details about colleagues and more. This information is usually publicly available on social media and such. The attackers then create a highly personalized email that requires urgent action. As the email seems personal and legitimate, users typically don’t double-check due to the sense of urgency.
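Because the message content in a spear phishing attempt is tailored to the victim, technical checks tend to focus on the few signals that are harder to fake, such as the actual sending domain. The sketch below is an illustration, not from the original article; the trusted-domain list and helper names are invented. It flags senders whose domain is not on an allow list but looks deceptively close to one that is:

    import difflib

    TRUSTED_DOMAINS = {"example.com", "examplebank.com"}   # placeholder allow list

    def sender_domain(address: str) -> str:
        # "Jane Doe <jane@examp1ebank.com>" -> "examp1ebank.com"
        return address.split("@")[-1].strip("> ").lower()

    def check_sender(address: str) -> str:
        domain = sender_domain(address)
        if domain in TRUSTED_DOMAINS:
            return "ok"
        # A near-miss of a trusted domain (a swapped or substituted letter)
        # is a classic lookalike used in phishing campaigns.
        close = difflib.get_close_matches(domain, TRUSTED_DOMAINS, n=1, cutoff=0.85)
        if close:
            return f"suspicious: {domain} resembles trusted domain {close[0]}"
        return "unknown sender"

    print(check_sender("Jane Doe <jane@examp1ebank.com>"))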
Spear phishing attacks are the most successful and account for 95% of attacks. Whaling: An email directed at high-profile targets within companies, typically upper management and senior executives. The emails are made to look like critical business emails and sent from legitimate authorities. They often include legal subpoenas, managerial issues, or consumer complaints. Attackers make off with a high return on investment because they are able to get personal and/or professional information about a high-level employee. Clone Phishing: An email that uses legitimate, previously delivered emails to carry out an attack. Attackers will use original emails and clone them to create an almost identical version. They are then resent as the original or an updated version of the original, with the attachment or link replaced with a malicious version. Phishing attacks distributed via email or social media as a message sent by compromised accounts of friends or on behalf of a cloud service provider will ask users to download a document that was uploaded to a cloud service. Other attacks come from communications claiming to be law enforcement agencies, such as the IRS or FBI. However, government agencies do not initiate contact with taxpayers via email, and will never request personal or financial information through email. Social Media Phishing: These attacks occur when cybercriminals create websites to look identical to social media platforms like Facebook and LinkedIn, using similar URLs and emails, in an attempt to steal login information. Users are asked to reset passwords, and taken to a fake landing page that looks identical to the social media platform to enter login information. Attackers then access the account, sell the info to third parties, or blackmail the user. You can avoid phishing attacks by checking: - Sender Details: Make sure the sender is legitimate. Often, if the domain name is different than the company, it’s fraudulent. - Message Content: An attacker’s message will often ask you to send them or verify personal information via email, will stress the urgency of delivering this info through threat or promotion, or will claim there was a problem with a recent purchase. - Message Form: Hover over the URL with your mouse before clicking – if the URL that shows is different than the one displayed, it could be a trap. Also, look out for IP address links or URL shorteners, as they can hide nefarious links. Typos and spelling mistakes aren’t normal. Poor design and missing signatures could indicate fake messages. - Attachments: Attachments can be files that contain links or hide malware. - External Links: If you already clicked on the link, make sure the website starts with “https” instead of “http”, which indicates the website uses Secure Sockets Layer (SSL) and is encrypted. Ransomware is a type of attack that utilizes malware in order to hold the user’s information ransom. It can happen on a personal level, where personal information like credit card info, bank account numbers, social security numbers and more can be held ransom. The attacker will threaten to release or sell the information unless the user pays them a sum of money. On the business level, the impact can be much more dangerous. Customer information or trade secrets can be stolen. If customer info is stolen it could cripple the company permanently. The attacker could threaten to sell trade secrets to the competition. Again, the attacker will demand a sum of money in order to return the info.
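Detection after the fact will not bring encrypted files back, but even a crude sweep for file extensions known to be appended by ransomware families can reveal how far an infection has spread and which shares need to be isolated. A minimal sketch follows (illustrative only; the extension list is a small, invented sample rather than a complete indicator set, and a real sweep would use a maintained threat feed):

    import os

    # Tiny sample of extensions appended by some ransomware families.
    SUSPECT_EXTENSIONS = {".locky", ".crypt", ".encrypted"}

    def sweep(root="."):
        hits = []
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                _, ext = os.path.splitext(name)
                if ext.lower() in SUSPECT_EXTENSIONS:
                    hits.append(os.path.join(dirpath, name))
        return hits

    if __name__ == "__main__":
        for path in sweep("/srv/shared"):   # placeholder path
            print("possible ransomware artifact:", path)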
The biggest problem with ransomware is that there is really no guarantee that the data is returned. Attackers could release the info after receiving payment, or demand more and more money. Here’s an example of how a ransomware attack could be carried out, from The Hacker News: In early 2016 hackers were believed to be carrying out social engineering hoaxes by luring victims into installing deadly ransomware through email spam. The spam contained malware disguised as a Microsoft Word file. The ransomware was dubbed “Locky.” The ransomware worked due to macros, which are a series of commands and instructions grouped together as a single command to automate tasks in Word. Hackers sent the files in the form of company invoices with Word file attachments that embed vicious macro functions. (Figure source: The Hacker News.) When a victim opens the document, a doc file is downloaded. When the file is opened, a popup appears that asks to “enable macros.” Once the macros have been enabled, an executable from a remote server is downloaded and run – this executable is the Locky ransomware, which then begins to encrypt all the files on the computer and the network. (Figure source: The Hacker News.) Nearly all file formats are encrypted and filenames are replaced with a “.locky” extension. Once encrypted, the ransomware displays a message that instructs infected victims to download TOR and visit the attacker’s website for further instructions and payments. The hackers then ask for a payment in order to receive the decryption key. The Locky ransomware even had the capability to encrypt network-based backup files. This is one of the reasons that companies are encouraged to keep sensitive and important files in third-party storage as a backup plan. As this example shows, ransomware can be extremely dangerous to companies. All of the examples of malware can be used to eventually hold a person or company ransom. A distributed denial-of-service (DDoS) attack is an attempt to make an online service unavailable by flooding it with traffic from multiple sources. BBC Tech has a good explanation in this video. There are several types of DDoS attacks, as TechLog360 explains: - TCP Connection Attacks – These attempt to use up all the available connections to infrastructure devices such as load-balancers, firewalls and application servers. Even devices capable of maintaining state on millions of connections can be taken down by these attacks. - Volumetric Attacks – These attempt to consume the bandwidth either within the target network/service, or between the target network/service and the rest of the Internet. These attacks are simply about causing congestion. - Fragmentation Attacks – These send a flood of TCP or UDP fragments to a victim, overwhelming the victim’s ability to re-assemble the streams and severely reducing performance. - Application Attacks – These attempt to overwhelm a specific aspect of an application or service and can be effective even with very few attacking machines generating a low traffic rate (making them difficult to detect and mitigate). DDoS attacks can even be committed using a botnet of Internet of Things (IoT) devices. This happened recently with the Dyn cyber attack. The DDoS attack on Dyn began at 11:10 UTC on October 21, 2016. At this point a volumetric DDoS attack was carried out on the DNS provider, sending an unreasonable amount of traffic toward the target and causing it to effectively run out of network resources.
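Real DDoS mitigation happens upstream, in scrubbing centers and anycast networks, but one small building block is easy to show: per-client rate limiting, which sheds traffic from sources that exceed a sane request rate. The sketch below is a simplified illustration, not how Dyn or any particular provider operates; the window size and threshold are placeholders.

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 1.0
    MAX_REQUESTS_PER_WINDOW = 20   # placeholder threshold

    _history = defaultdict(deque)  # source address -> timestamps of recent requests

    def allow(source_ip, now=None):
        """Return True if this request should be served, False if it should be shed."""
        now = now if now is not None else time.monotonic()
        q = _history[source_ip]
        # Drop timestamps that have fallen out of the sliding window.
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()
        if len(q) >= MAX_REQUESTS_PER_WINDOW:
            return False
        q.append(now)
        return True

    # Example: the 21st request inside one second from the same source is shed.
    for i in range(25):
        if not allow("203.0.113.7", now=100.0 + i * 0.01):
            print("request", i + 1, "shed")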
What was unique about the DDoS attack on Dyn was that it was carried out using Internet of Things devices. A relatively new attack vector, the Internet of Things presents a particularly juicy opportunity for hackers. Any device connected to the web can potentially be utilized to carry out attacks. For the Dyn attack, specifically, a Mirai malware botnet was used to carry out the attack. It was the same botnet that was used against Krebs on Security. Hackers used devices like routers, webcams, security cameras, and DVRs in order to create the botnet and launch the DDoS attack. Over 100,000 devices were used in the Dyn attack, rendering the provider unable to process requests, and effectively locking down the sites that use Dyn services. The attacks came in traffic bursts 40 to 50 times normal flows, and lasted over 9 hours. What’s so scary about Mirai is that the code is available to the general public. The owner of the botnet published the source code online and now any hacker or group of hackers can utilize it to their advantage. DDoS attacks come with a real cost to companies. How do we protect against viruses, malware, and the like? Aside from best practices and staying vigilant, antivirus software is an absolute must in the battle against cyber criminals. From How-To Geek: Antivirus software runs in the background on your computer, checking every file you open. This is generally known as on-access scanning, background scanning, resident scanning, real-time protection, or something else, depending on your antivirus program. When you double-click an EXE file, it may seem like the program launches immediately – but it doesn’t. Your antivirus software checks the program first, comparing it to known viruses, worms, and other types of malware.
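That signature comparison boils down to checking a fingerprint of the file against a database of known-bad fingerprints before letting it run. A toy version follows; it is purely illustrative, since real engines match byte patterns, heuristics and behavior rather than whole-file hashes, and the digest and file path below are placeholders.

    import hashlib
    import os

    # Placeholder "signature database" of SHA-256 digests of known malware samples.
    KNOWN_BAD_SHA256 = {
        "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder digest
    }

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def on_access_check(path):
        """Return True if the file may run, False if it matches a known signature."""
        return sha256_of(path) not in KNOWN_BAD_SHA256

    if __name__ == "__main__":
        target = "downloaded_installer.exe"   # placeholder path for illustration
        if os.path.exists(target):
            print("allowed" if on_access_check(target) else "blocked: known malware signature")
        else:
            print("no such file; replace the placeholder path to try the check")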
- Best Antivirus for Overall Protection: For the consumer and enterprise class, Symantec was the clear winner, with its Norton Security antivirus, which grabbed the best-in-class title in the consumer class while the company’s Endpoint Protection took the title in the corporate category. Cloud security has become increasingly important to companies as cloud adoption has skyrocketed over the past several years. According to TechCrunch, firewall and switching vendors will fade and companies that provide encryption and anti-malware technology will thrive due to the cloud computing environment. As more companies move to off-premise cloud solutions, hackers will begin to attack the off-premise areas in order to steal data. Anti-malware technology is needed, but the tech needs to be specifically designed to be inserted into cloud systems. More APIs and frameworks from cloud providers will allow for third-party anti-malware integration. Firewall vendors have traditionally focused on access control. Firewalls determine who can talk to what over which protocol, and are typically IP-centric. While the cloud is in need of the advanced functions that firewalls provide, core access control is often imbedded into a cloud provider’s system, meaning they don’t need the added protection firewall vendors provide. Load balancers face the same problem. They have long distributed traffic across servers to handle high volumes of users or visitors. However, cloud providers already feature auto-scaling, so customers don’t need to pay a third party to provide it. Encryption has typically only been deployed in certain scenarios by companies. With the cloud, however, everything always needs to be encrypted. In the past, agent-based encryption has been tough to deploy because it doesn’t work seamlessly with other infrastructure functions. With the cloud, vendors are incentivized to create solutions that overcome the limits of traditional encryption. While cloud providers offer built-in encryption, companies will want to bolster security with their own third-party choices. Switching products offer complex features like VLANS, but these features aren’t necessary with cloud computing. Traditionally, switching products have used elaborate protocols to determine what can talk to what. In cloud computing, network control is defined up front and deployed automatically. So there is no need to set up network access control policies, and therefore less need for software switches. Storage is a huge growth area with the cloud. Cloud computing provides readily accessible infrastructure to store data. Companies will need to leverage both public and private clouds in order to manage all of this data. So software storage vendors could see a lot of opportunity, especially when combined with the amount of data that will come with IoT devices. All in all, cloud security will need to be handled differently than on-premise security has typically been handled. Internet of Things Security The Internet of Things (IoT) brings an entirely new wrinkle to the cyber security landscape. No longer do we need to only worry about desktops, laptops and mobile devices being a gateway to our network. As IoT is further deployed, hundreds, thousands, and even millions of devices will be connected to the network for some companies. The biggest problem is that IoT devices have extremely poor security on the whole, and there is a total and utter lack of standards to ensure that the devices you purchase are safe. 
Here’s a good example from Ars Technica: Shodan, a search engine for IoT, launched a feature that let users browse vulnerable webcams that use Real Time Streaming Protocol (RTSP) to share video. These cameras have no password authentication in place. Shodan searches the internet, finds susceptible webcams, and takes a screenshot. The results included images of marijuana plantations, back rooms of banks, children, kitchens, living rooms, garages, front gardens, back gardens, ski slopes, swimming pools, colleges and schools, laboratories, and cash register cameras in retail stores. This highlights the problem with IoT devices. They are becoming available, but the security is massively flawed and unregulated. As it stands there are multiple alliances. Open Connectivity Foundation, One M2M, IoT Forum, ISO, Industrial Internet Consortium, OpenFog, LoRa Alliance, OMA, and AllSeen Alliance are some of them. There are multiple protocol standards: MQTT, CoAP, DDS, AMQP, XMPP and so on. There are even multiple networking standards. Zigbee, WiFi, Bluetooth, LTE, SIGFOX, NB-IoT, and Z-Wave are only some of them. When you have this many standards and protocols you really have none. Consumers and companies are unaware of the standard of security in these devices. Security researcher Brian Knopf is trying to change this. He offers preliminary criteria that his company, I Am The Cavalry, will use to judge IoT devices (presented in a figure in the original article; source: Ars Technica). The US Air Force has also contracted Peiter “Mudge” Zatko of the L0pht hacker group to create a “Consumer Security Report” for IoT devices. For now, there isn’t much more to say on IoT security. Be extremely careful. Make sure any device connected to the network is secure. Hire third-party security firms to test them. Don’t let IoT devices be the cause of a cyberattack on your company. Mobile Payment Security According to TechCrunch, as of early 2016, only 20% of people with an iPhone that works with Apple Pay have ever tried Apple Pay. Mobile payment adoption has been slow, but that doesn’t mean it’s something that companies don’t need to worry about. Mobile payment is expected to innovate in a number of ways in the next few years: - Peer-to-peer payments will allow users to transfer money directly to one another. - Plastic will be replaced by smartphones, where all credit cards will be consolidated. - Centralized reward points will have merchants accept loyalty rewards from one another. - Charitable contributions will evolve to allow for donations to individuals. - Virtual banks will replace brick-and-mortar institutions. If you’re a company that receives payments, especially from consumers, you’ll need to ensure that the channel through which you receive those payments is totally secure. How so? Some companies have turned to host card emulation (HCE). This is a technology that emulates a payment card on a mobile device using software. At this point, many mobile payment credentials are stored on the device hardware as a secure element. HCE removes third-party involvement by moving credentials off the device. Cryptocurrency may also be the way of the future for mobile payments. Cryptocurrency is a digital currency in which encryption techniques are used to regulate the generation of units of currency and verify the transfer of funds. Cryptocurrency relies heavily on blockchain. According to The Wall Street Journal: A blockchain is a data structure that makes it possible to create a digital ledger of transactions and share it among a distributed network of computers.
It uses cryptography to allow each participant on the network to manipulate the ledger in a secure way without the need for a central authority. This means that the ledger is incorruptible. It exists on its own, completely encrypted, and keeps a tally of every transaction made that cannot be altered after the fact. That means that, when multiple businesses are interacting on the same ledger, exchanging capital, there is no need for separate copies at each company. A single copy is kept for them all, saving time, money, and possible confusion as to transactions. Currently, companies like IBM, Intel, Cisco, JP Morgan, Wells Fargo and State Street have created their own global online ledger known as the Open Ledger Project. A 2016 survey found that 59 percent of organizations incorporate cyber insurance into their strategic plans to manage cyber risks, with the highest rate among large corporations. According to TechCrunch, “Cyber insurance is a sub-category within the general insurance industry, offering products and services designed to protect businesses from internet-based risks.” The cyber insurance market has grown from 10 insurers to 50 in the past few years. Cyber insurance policies typically include a combination of first-party coverage, which covers direct losses to the organization, and third-party coverage, which protects against claims against the organization by third parties. There is, however, no standard form of cyber insurance on the market. Insurance companies seek to understand the client’s risk profile in order to determine a premium, based on the scale of the business and the sensitivity of the data it handles and stores. A lack of history in this area means little is known about whether policies are sufficient. A recent attack on health insurer Anthem is estimated to cost the company more than a billion dollars, while its insurance policy is estimated to pay out only between 150 million and 200 million dollars. That isn’t to say you shouldn’t invest in a policy. Anthem still recovered up to 200 million dollars of its losses. That’s significant. Just understand that the market is still undefined, and ensure that you are getting sufficient coverage for what you’re paying. This Expert Guide for IT Professionals highlights what you’ll need to know about next-gen security tools, ways that IT security is changing, how network assessments can be a foot in the door, and why you need to take on the “cybersecurity educator-in-chief” role for your customers. Plus, a security services practitioner shares simple advice he gives customers to boost their information security and reduce financial risk.
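To make the hash-chained ledger described above concrete, here is a toy sketch (illustrative only; real blockchains add consensus, proof-of-work or proof-of-stake, and peer-to-peer replication, none of which appear here). Each block records some transactions plus the hash of the previous block, so altering any historical entry invalidates every hash after it:

    import hashlib
    import json
    import time

    def block_hash(contents):
        # Hash a canonical JSON form of the block's contents.
        payload = json.dumps(contents, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

    def new_block(transactions, prev_hash):
        block = {"timestamp": time.time(), "transactions": transactions, "prev_hash": prev_hash}
        block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
        return block

    def verify(chain):
        for i, block in enumerate(chain):
            body = {k: v for k, v in block.items() if k != "hash"}
            if block["hash"] != block_hash(body):
                return False                      # block contents were altered
            if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
                return False                      # the chain linkage is broken
        return True

    chain = [new_block(["genesis"], prev_hash="0" * 64)]
    chain.append(new_block([{"from": "A", "to": "B", "amount": 10}], chain[-1]["hash"]))
    print("chain valid:", verify(chain))
    chain[0]["transactions"] = ["tampered"]
    print("after tampering:", verify(chain))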
<urn:uuid:e6c4cb65-ebfb-458d-ba86-fdcf905db3bd>
CC-MAIN-2017-04
https://techdecisions.co/it-infrastructure/ultimate-guide-corporate-cyber-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00012-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931857
5,668
2.8125
3
The patent application, filed in August and published today, summarizes the invention thus: "The portable computing device includes an enclosure that surrounds and protects the internal operational components of the portable computing device. The enclosure includes a structural wall formed from a ceramic material that permits wireless communications therethrough. The wireless communications may for example correspond to RF communications, and further the ceramic material may be radio-transparent thereby allowing RF communications therethrough." With the introduction of Microsoft's wireless-capable Zune media player, analysts have anticipated Apple would add similar capabilities to its iPod. At the very least, this filing demonstrates that Apple's engineers are working on it. In some ways, this filing is more about materials science than electronics. The patent application is focused on the company's innovative use of ceramics as a housing for electronic components. "It should be noted that ceramics have been used in a wide variety of products including electronic devices such as watches, phones, and medical instruments," the filing states. "In all of these cases, however, the ceramic material (sic) have not been used as structural components. In most of these cases they have been used as cosmetic accoutrements. It is believed up till now ceramic materials have never been used as a structural element including structural frames, walls or main body of a consumer electronic device, and more particularly an enclosure of a portable electronic device such as a media player or cell phone."
<urn:uuid:75f08142-b9a6-4643-b2a9-270e4d4ca4ff>
CC-MAIN-2017-04
http://www.networkcomputing.com/wireless/apple-seeks-patent-wireless-handheld/1251957944
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00314-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953812
291
3.109375
3
Amelia Earhart search gets help from State Department's tech tools - By Alice Lipowicz - Mar 22, 2012 The State Department may have provided a high-tech key to solving the mystery of lost aviator Amelia Earhart. Department photography experts supplied image analysis and enhancement to help identify what might be a piece of her airplane in a photograph taken shortly after she disappeared over the Pacific Ocean 75 years ago. Ric Gillespie, leader of The International Group for Historic Aircraft Recovery, told NPR on March 21 that State’s photo analysis helped to kickstart a new privately-funded effort to find Earhart’s Lockheed Electra plane. Gillespie said his group had obtained the photograph taken off the coast of Nikumaroro Island, about 1,600 miles southwest of Hawaii, where Earhart’s plane was believed to have disappeared. He said there was a report from a photo specialist that a small piece of the plane was visible in the photograph, apparently lodged on a reef. “So we asked the State Department to help us with forensic imaging analysis, and the opinion of their photographic specialist was the same as ours,” Gillespie told NPR, according to a report published by KUNC Community Radio for Northern Colorado. “That was something of a breakthrough for us. If it is, indeed, a picture of one of her landing gears, it tells us where the airplane went over the edge of the reef and it's right where we thought it should be.” In the last century, the State department was directly involved in rescue and recovery efforts to locate Earhart’s plane shortly after she lost radio contact with the Coast Guard in July 1937. A British survey team took a photo in October 1937 in the area where her plane was believed to have gone down. While the photograph had been examined by research teams many times, investigators took a new look in 2010, and their suspicions were triggered, Gillespie said, according to a report by CNN. Gillespie’s group had the photo checked in a blind review at the State department, which determined that the photograph included an image of landing gear for a Lockheed Electra, the CNN report said. Secretary of State Hillary Clinton appeared with Gillespie and other volunteers this week to launch the privately-funded effort led by Gillespie’s organization to find Earhart’s remains. "Amelia Earhart may have been an unlikely heroine for a nation down on its luck," Clinton said at the event, according to a transcript, "but she embodied the spirit of an America coming of age and increasingly confident, ready to lead in a quite uncertain and dangerous world." State is one of several agencies that are using advanced imaging technologies. In 2010, State and the FBI announced the release of “age-progressed” digitally-enhanced photographs of terrorist suspects on the most wanted list. The Secret Service also uses sophisticated visual forensic technologies. A State spokesman referred all inquiries about the photo enhancement for the Earhart photo to Gillespie and his organization. Alice Lipowicz is a staff writer covering government 2.0, homeland security and other IT policies for Federal Computer Week.
<urn:uuid:eb001ea4-bb4b-4b78-979d-b5fdd1899f82>
CC-MAIN-2017-04
https://fcw.com/articles/2012/03/22/state-department-amelia-earhart.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00187-ip-10-171-10-70.ec2.internal.warc.gz
en
0.970945
651
2.65625
3
Google Chrome Browser Exhibits Risky Behavior. Even Google Chrome, touted for its security architecture, has security issues. According to one security expert, there are no secure browsers. Google's Chrome browser may have been designed with security in mind, but that hasn't immunized it from security concerns. Google is planning to fix the issue shortly. "We believe this behavior does not introduce any particular risk for the vast majority of users who do not use view-source: to browse Web pages," said a company spokesperson in an e-mailed statement. "We're working to more accurately align the view-source: page with expected behavior." In a phone interview, Hansen acknowledged that this bug isn't particularly serious because the only people who regularly view Web page source code are developers and because the Chrome installed base is still small. "It's not like an earth-shattering bug," he said. "I just find it sort of weird when people talk about Chrome as super-secure. It's built with WebKit and WebKit is not necessarily secure." WebKit is the open source browser layout engine used by Chrome and Apple's Safari Web browser. Microsoft's Internet Explorer uses a proprietary layout engine called Trident. Mozilla's Firefox uses a rendering engine called Gecko. With regard to WebKit's security, Hansen said, "I don't think WebKit really has had enough eyeballs on it." And he says Mozilla's Firefox has security problems that arise from the same lack of focused scrutiny by security professionals. "For the most part, it's just a bunch of random dudes who are contributing to it," he said. Internet Explorer, he insists, is leaps and bounds more secure than the competition because so many more people are pounding on it. Nevertheless, in the past three months Microsoft has issued three "browse-and-get-owned" security advisories regarding non-browser software components that have undermined the security of Internet Explorer. While Microsoft's secure coding practices may lead to better security from the statistical perspective of bugs per lines of code -- and that's an issue about which there's ongoing debate -- Hansen isn't recommending Internet Explorer. "I'm not saying if you use Internet Explorer you're safe," he said. "There isn't a secure browser." Though he says that friends of his in the security industry have confided that Firefox is full of bugs, he says he uses Firefox because "it's better for breaking into stuff."
<urn:uuid:5571760c-c697-4d24-ac03-74c63b7b4307>
CC-MAIN-2017-04
http://www.darkreading.com/risk-management/google-chrome-browser-exhibits-risky-behavior/d/d-id/1081275
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00095-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954296
531
2.546875
3
Recent Examples of MPP Systems and Clusters
This lecture is devoted to an examination of a number of multicomputer systems. The MPP systems covered are: 1. The Sun Microsystems E25K multiprocessor. 2. The IBM BlueGene. 3. The Cray Red Storm. 4. The Cray XT5. The clusters covered are: 1. The Google cluster. 2. Some typical blade servers. 3. The "SETI at home" distributed computing effort. There are a number of motivations for this lecture. The primary motivation is to show that recent technological improvements (mostly with VLSI designs) have invalidated the earlier pessimism about MPP systems. We show this by describing a number of powerful MPP systems. * Tanenbaum [Ref. 4, page 627] likes to call these "Collections of Workstations" or "COWs".
The E25K NUMA Multiprocessor by Sun Microsystems
Our first example is a shared-memory NUMA multiprocessor built from seventy-two processors. Each processor is an UltraSPARC IV, which itself is a pair of UltraSPARC III Cu processors. The "Cu" in the name refers to the use of copper, rather than aluminum, in the signal traces on the chip. A trace can be considered as a "wire" deposited on the surface of a chip; it carries a signal from one component to another. Though more difficult to fabricate than aluminum traces, copper traces yield a measurable improvement in signal transmission speed, and are becoming favored. Recall that NUMA stands for "Non-Uniform Memory Access" and describes those multiprocessors in which the time to access memory may depend on the module in which the addressed element is located; access to local memory is much faster than access to memory on a remote node. The basic board in the multiprocessor comprises the following: 1. A CPU and memory board with four UltraSPARC IV processors, each with an 8-GB memory. As each processor is dual core, the board has 8 processors and 32 GB of memory. 2. A snooping bus between the four processors, providing for cache coherency. 3. An I/O board with four PCI slots. 4. An expander board to connect all of these components and provide communication to the other boards in the multiprocessor. A full E25K configuration has 18 boards; thus 144 CPUs and 576 GB of memory.
The E25K Physical Configuration
Here is a figure from Tanenbaum [Ref. 4] depicting the E25K configuration. The E25K has a centerplane with three 18-by-18 crossbar switches to connect the boards. There is a crossbar for the address lines, one for the responses, and one for data transfer. The number 18 was chosen because a system with 18 boards was the largest that would fit through a standard doorway without being disassembled. Design constraints come from everywhere.
Cache Coherence in the E25K
How does one connect 144 processors (72 dual-core processors) to a distributed memory and still maintain cache coherence? There are two obvious solutions: one is too slow and the other is too expensive. Sun Microsystems opted for a multilevel approach, with cache snooping on each board and a directory structure at a higher level. The next figure shows the design. The memory address space is broken into blocks of 64 bytes each. Each block is assigned a "home board", but may be requested by a processor on another board. Efficient algorithm design will call for most memory references to be served from the processor's home board.
The IBM BlueGene
The description of this MPP system is based mostly on Tanenbaum [Ref. 4, pp. 618-622]. The system was designed in 1999 as "a massively parallel supercomputer for solving computationally-intensive problems in, among other fields, the life sciences".
It has long been known that the biological activity of any number of important proteins depends on the three-dimensional structure of the protein. An ability to model this three-dimensional configuration would allow the development of a number of powerful new drugs. BlueGene/L was the first model built; it was shipped to Lawrence Livermore Lab in June 2003. A quarter-scale model, with 16,384 processors, became operational in November 2004 and achieved a computational speed of 71 teraflops. The full model, with 65,536 processors, was scheduled for delivery in the summer of 2005. In October 2005, the full system achieved a peak speed of 280.6 teraflops on a standard benchmark called "Linpack". On real problems, it achieved a sustained speed of over 100 teraflops. The connection topology used in the BlueGene is a three-dimensional torus. Each processor chip is connected to six other processor chips. The connections are called "North", "East", "South", "West", "Up", and "Down".
The Custom Processor Chip
IBM intended the BlueGene line for general commercial and research applications. Because of this, the company elected to produce the processor chips from available commercial cores. Each processor chip has two PowerPC 440 cores operating at 700 MHz. The configuration of the chip, with its multiple caches, is shown in the figure below. Note that only one of the two cores is dedicated to computation; the other is dedicated to handling communications. In a recent upgrade (June 2007), IBM revised this chip to hold four PowerPC 450 cores operating at 850 MHz. In November 2007, the new computer, called the BlueGene/P, achieved a sustained performance of 167 teraflops. This design obviously has some "growing room".
The BlueGene/L Hierarchy
The full 65,536-processor BlueGene/L is designed in a hierarchical fashion. There are two chips per card, 16 cards per board, 32 boards per cabinet, and 64 cabinets in the system. We shall see that the MPP systems manufactured by Cray, Inc. follow the same design philosophy. It seems that this organization will become common for large MPP systems.
The AMD Opteron
Before continuing with our discussion of MPP systems, let us stop and examine the chip that has recently become the favorite choice for the processors, of which there are thousands, in such systems. This chip is the AMD Opteron, a 64-bit processor that can operate in three modes. In legacy mode, the Opteron runs standard Pentium binary programs unmodified. In compatibility mode, the operating system runs in full 64-bit mode, but applications must run in 32-bit mode. In 64-bit mode, all programs can issue 64-bit addresses; both 32-bit and 64-bit programs can run simultaneously in this mode. The Opteron has an integrated memory controller, which runs at the speed of the processor. This improves memory performance. It can manage 32 GB of memory. The Opteron comes in single-core, dual-core, or quad-core versions. The standard clock rates for these processors range from 1.7 to 2.3 GHz.
The Red Storm by Cray, Inc.
The Red Storm is an MPP system in operation at Sandia National Laboratory. This lab, operated by Lockheed Martin, does classified work for the U.S. Department of Energy. Much of this work supports the design of nuclear weapons. The simulation of nuclear weapon detonations, which is very computationally intensive, has replaced actual testing as a way to verify designs. In 2002, Sandia selected Cray, Inc. to build a replacement for its then-current MPP, called ASCI Red.
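Before turning to the Red Storm's configuration, a short sketch may make the torus interconnect concrete. The code below is an illustration only, not IBM's routing firmware; the 8 x 8 x 16 grid dimensions are an arbitrary assumption chosen for the example, not the documented BlueGene/L layout. It computes the six wrap-around neighbors of a node and a minimum hop count between two nodes.

# Illustrative sketch of 3-D torus addressing (not IBM's actual firmware).
# Grid dimensions below are arbitrary assumptions for the example.
DIMS = (8, 8, 16)   # (x, y, z) sizes of the hypothetical torus

def torus_neighbors(x, y, z, dims=DIMS):
    """Return the six wrap-around neighbors of node (x, y, z)."""
    nx, ny, nz = dims
    return {
        "East":  ((x + 1) % nx, y, z),
        "West":  ((x - 1) % nx, y, z),
        "North": (x, (y + 1) % ny, z),
        "South": (x, (y - 1) % ny, z),
        "Up":    (x, y, (z + 1) % nz),
        "Down":  (x, y, (z - 1) % nz),
    }

def torus_distance(a, b, dims=DIMS):
    """Minimum hop count between two nodes, taking the shorter way around each ring."""
    return sum(min(abs(p - q), n - abs(p - q)) for p, q, n in zip(a, b, dims))

if __name__ == "__main__":
    print(torus_neighbors(0, 0, 0))               # wrap-around puts (7, 0, 0) to the "West"
    print(torus_distance((0, 0, 0), (7, 4, 15)))  # 1 + 4 + 1 = 6 hops

The wrap-around links are what keep the worst-case hop count low without the long end-to-end wires that a simple mesh of the same size would need.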
The ASCI Red system had 1.2 terabytes of RAM and operated at a peak rate of 3 teraflops. The Red Storm was delivered in August 2004 and upgraded in 2006 [Ref. 9]. The Red Storm now uses dual-core AMD Opterons, operating at 2.4 GHz. Each Opteron has 4 GB of RAM and a dedicated custom network processor called the Seastar, manufactured by IBM. Almost all data traffic between processors moves through the Seastar network, so great care was taken in its design. This is the only chip that is custom-made for the project. The next step in the architecture hierarchy is the board, which holds four complete Opteron systems (four CPUs, 16 GB RAM, four Seastar units), a 100 megabit per second Ethernet chip, and a RAS (Reliability, Availability, and Service) processor to facilitate fault location. The next step in the hierarchy is the card cage, which comprises eight boards inserted into a backplane. Three card cages and their supporting power units are placed into a cabinet. The full Red Storm system comprises 108 cabinets, for a total of 10,836 Opterons and 10 terabytes of SDRAM. Its theoretical peak performance is 124 teraflops, with a sustained rate of 101 teraflops.
Security Implications of the Architecture
In the world of national laboratories, there are special requirements on the architecture of computers that might be used to process classified data. The Red Storm at Sandia routinely processes data from which the detailed design of current nuclear weapons could be inferred. The solution to the security problem was to partition Red Storm into classified and unclassified sections. This partitioning was done by mechanical switches, which would completely isolate one section from another. There are three sections: classified, unclassified, and a switchable section. The figure above, taken from Tanenbaum [Ref. 4], shows the configuration as it was in 2005.
The Cray XT5h
The Cray XT3 is a commercial design based on the Red Storm installed at Sandia National Labs. The Cray XT3 led to the development of the Cray XT4 and Cray XT5, the latest in the line. The XT5 follows the Red Storm approach in using a large number of AMD Opteron processors. The processor interconnect uses the same three-dimensional torus as found in the IBM BlueGene and presumably in the Cray Red Storm. The network processor has been upgraded to a system called "Seastar 2+", with each switch having six 9.6 GB/second router-to-router ports. The Cray XT5h is a modified XT5, adding vector coprocessors and FPGA (Field Programmable Gate Array) accelerators. FPGA processors might be used to handle specific calculations, such as Fast Fourier Transforms, which often run faster on these units than on general-purpose processors. In 2008, Cray, Inc. was chosen to deliver an XT4 system to another customer.
The Google Cluster
We now examine a loosely-coupled cluster design that is used at Google, the company providing the popular search engine. We begin by listing the goals and constraints of the design. There are two primary goals for the design: 1. To provide key-word search of all the pages in the World Wide Web, returning results in not more than 0.5 seconds (a figure based on human tolerance for delays). 2. To "crawl the web", constantly examining pages on the World Wide Web and indexing them for efficient search. This process must be continuous in order to keep the index current. There are two primary constraints on the design: 1. To obtain the best performance for the price. For this reason, high-end servers are eschewed in favor of cheaper mass-market computers. 2. To provide reliable service, allowing for components that will fail.
Towards that end, every component is replicated, and maintenance is constant. What makes this design of interest to our class is the choice made in creating the cluster. It could have been created from a small number of Massively Parallel Processors or a larger number of closely coupled high-end servers, but it was not. It could have used a number of RAID servers, but it did not. The goal was to use commercial technology and replicate everything. According to our textbook [Ref. 1, page 9-39], the company has not suffered a service outage since it was a few months old, possibly in late 1998 or early 1999.
The Google Process
We begin by noting that the success of the cluster idea was due to the fact that the processing of a query is a task that can easily be partitioned into independent cooperating processes. Here is a depiction of the Google process, taken from Tanenbaum [Ref. 4, page 629].
The Google Process: Sequence of Actions
The process of handling a web query always involves a number of cooperating processors. 1. When the query arrives at the data center, it is first handled by a load balancer. This load balancer will select three other computers, based on processing load, to handle the query. 2. The load balancer selects one each from the available spell checkers, query handlers, and advertisement servers. It sends the query to all three in parallel. 3. The spell checker will check for alternate spellings and attempt to correct misspellings. 4. The advertisement server will select a number of ads to be placed on the final display, based on key words in the query. 5. The query handler will break the query into "atomic units" and pass each unit to a distinct index server. For example, a query for "Google corporate history" would generate three searches, each handled by a distinct index server. 6. The query handler combines the results of the "atomic queries" into one result. In the above, a logical AND is performed; the result must have been found in all three atomic queries. 7. Based on the document identifiers resulting from the logical combination, the query handler accesses the document servers and retrieves links to the target web pages. Google uses a proprietary algorithm for ranking responses to queries. The average query involves processing about 100 megabytes of data. Recall that this is to be done in under half a second.
The Google Cluster
The typical Google cluster comprises 5,120 PCs, two 128-port Ethernet switches, 64 racks each with its own switch, and a number of other components. A depiction is shown below. Note the redundancy built into the switches and the two incoming lines from the Internet.
Blade Servers and Blade Enclosures
Blade enclosures represent a refinement of the rack mounts for computers, as found in the Google cluster. In a blade enclosure, each blade server is a standard-design computer with many components removed for space and power considerations. The common functions (such as power, cooling, networking, and processor interconnect) are provided by the blade enclosure, so that the blade server is a very efficient design. The figure at left shows an HP ProLiant blade enclosure with what appears to be sixteen blade servers, arranged in two racks of 8. Typically blade servers are "hot swappable", meaning that a unit can be removed without shutting down and rebooting all of the servers in the enclosure. This greatly facilitates maintenance. Essentially, a blade enclosure is a closely coupled multicomputer.
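To make steps 5 and 6 of the sequence above more concrete, here is a toy sketch of splitting a query into words, fanning each word out to (simulated) index servers, and AND-ing the resulting document identifiers. It is a thought experiment only; Google's real index, ranking algorithm, and server protocol are proprietary and far more elaborate, and the posting lists below are invented.

# Toy model of the fan-out / AND-combine portion of query handling.
# The "index servers" are plain dictionaries; real servers are remote machines.
INDEX_SERVERS = [
    {"google": {1, 2, 3}, "corporate": {2, 3, 9}},   # one server's posting lists
    {"history": {3, 4, 9}, "corporate": {2, 3, 9}},  # another server's posting lists
]

def lookup(word):
    """Ask every index server for the posting list of one 'atomic unit'."""
    docs = set()
    for server in INDEX_SERVERS:
        docs |= server.get(word, set())
    return docs

def handle_query(query):
    """Split the query, fan out one lookup per word, then intersect the results."""
    postings = [lookup(word) for word in query.lower().split()]
    result = set.intersection(*postings) if postings else set()
    return sorted(result)

if __name__ == "__main__":
    print(handle_query("Google corporate history"))   # -> [3]

The point of the toy is structural: each atomic lookup is independent, which is exactly what lets the real system spread the work over many cheap machines and combine the answers at the end.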
Typical uses of blade enclosures include web hosting, database servers, e-mail servers, and other forms of cluster computing. According to Wikipedia [Ref. 10], the first unit called a "blade server" was developed by RTX Technologies of Houston, TX and shipped in May 2001. It is interesting to speculate about the Google design, had blade servers been available in the late 1990s when Google was starting up. The following information is taken from the SETI@Home web page [Ref. 7]. SETI is the acronym for "Search for Extra-Terrestrial Intelligence". Radio SETI refers to the use of radio receivers to detect signals that might indicate another intelligent species in the universe. The SETI antennas regularly detect signals from a species that is reputed to be intelligent; unfortunately, that is us on this planet. A great deal of computation is required to filter the noise and human-generated signals from the signals detected, possibly leaving signals from sources that might be truly extraterrestrial. Part of this processing is to remove extraterrestrial signals that, while very interesting, are due to natural sources. Such are the astronomical objects originally named "Little Green Men", later identified as pulsars and now fully explained by modern astrophysical theory. Radio SETI was started under a modest grant and involved the use of dedicated radio antennas and supercomputers (the Cray-1?) located on the site. In 1995, David Gedye proposed a cheaper data-processing solution: create a virtual supercomputer composed of large numbers of computers connected by the global Internet. SETI@home was launched in May 1999 and remains active to this day [May 2008]. Many computer companies, such as Sun Microsystems, routinely run SETI@home on their larger systems for about 24 hours as a way of testing before shipping to the customer.
A Radio Telescope
Here is a picture of the very large radio telescope at Arecibo in Puerto Rico. This is the source of data to be processed by the SETI@home project at Berkeley. Arecibo produces about 35 gigabytes of data per day. These data are given a cursory examination and sent by U.S. Mail to the Berkeley campus in California; Arecibo lacks a high-speed Internet connection.
The Radio SETI Process
Radio SETI monitors a 2.5 MHz radio band from 1418.75 to 1421.25 MHz. This band, centered at the 1420 MHz frequency called the "Hydrogen line", is thought to be optimal for interstellar transmissions. The data are recorded in analog mode and digitized later. When the analog data arrive at Berkeley, they are broken into 250 kilobyte chunks, called "work units", by a software program called "Splitter". Each work unit represents a 9,766 Hz slice of the 2,500 kHz spectrum. This analog signal is digitized at 20,000 samples per second. Participants in the Radio SETI project are sent these work units, each representing about 107 seconds of analog data. The entire packet, along with the work unit, is about 340 kilobytes. This figure shows the processing network, including the four servers at Berkeley and the 2.4 million personal computers that form the volunteer network.
In this lecture, material from one or more of the following references has been used. 1. Computer Organization and Design, David A. Patterson & John L. Hennessy, Morgan Kaufmann (3rd Edition, Revised Printing), 2007 (the course textbook). ISBN 978-0-12-370606-5. 2. Computer Architecture: A Quantitative Approach, John L. Hennessy and David A. Patterson, Morgan Kaufmann, 1990. There is a later edition. ISBN 1-55860-069-8. 3. High-Performance Computer Architecture, Harold S. Stone, Addison-Wesley (Third Edition), 1993. ISBN 0-201-52688-3.
4. Structured Computer Organization, Andrew S. Tanenbaum, Pearson/Prentice-Hall (Fifth Edition), 2006. ISBN 0-13-148521-0. 5. Computer Architecture, Robert J. Baron and Lee Higbie, Addison-Wesley Publishing Company, 1992. ISBN 0-201-50923-7. 6. W. A. Wulf and S. P. Harbison, "Reflections in a pool of processors / An experience report on C.mmp/Hydra", Proceedings of the National Computer Conference (AFIPS), June 1978. 7. The link describing SETI at home: http://setiathome.berkeley.edu/ 8. The web site for Cray, Inc.: http://www.cray.com/ 9. The link for Red Storm at Sandia National Labs: http://www.sandia.gov/ASC/redstorm.html 10. The Wikipedia article on blade servers: http://en.wikipedia.org/wiki/Blade_server
<urn:uuid:064760ca-a0b9-4da6-b57a-bac1d36a2f1a>
CC-MAIN-2017-04
http://edwardbosworth.com/My5155_Slides/Chapter13/Multiprocessors_03.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00307-ip-10-171-10-70.ec2.internal.warc.gz
en
0.920566
4,297
2.90625
3
As I write this post, it has been less than 24 hours since the bombings at the Boston Marathon, and almost just as long since the Wikipedia page about the bombings was created. While many people, no doubt, first heard about the tragedy via social media networks like Twitter and Facebook, the news made it to Wikipedia almost as fast. This raises the question of whether Wikipedia itself can be used to identify breaking news stories. The Wikipedia Live Monitor in action That’s a question that three researchers have recently attempted to answer. Thomas Steiner, from Google Germany, Seth van Hooland from the Universite Libre de Bruxelles and Ed Summers from Library of Congress found that spikes in real-time edits to Wikipedia articles can be used to identify breaking news almost as quickly as social networks. While breaking news tends to hit social networks first, the authors found that the lag time between the news being mentioned on those networks and Wikipedia being updated was much shorter than previously speculated, in some cases only minutes. Specifically, they created the Wikipedia Live Monitor, an open source tool based partially on Wikistream, which monitors live updates to Wikipedia articles. The Wikipedia Live Monitor watches for changes via IRC to Wikipedia articles in any of 42 languages (those with over 100,000 articles). Breaking news candidates are identified as article clusters (that is, multiple articles in different languages about the same topic) that have had frequent, recent edits by multiple editors. Breaking news candidates were then validated (or rejected) as such by manually searching Twitter, Facebook and Google+ for mentions of the event in question. They found that monitoring Wikipedia edits was a reliable way to identify breaking news, with a lag time of approximately 30 minutes between the news being mentioned on social media networks and the Wikipedia being updated. However, they found that the lag time could be much shorter for “global breaking news like celebrity deaths.” For example, in the case of Pope Benedict XVI resigning, they found that their system would have identified the news based on Wikipedia edits only two minutes after Reuters broke the news on Twitter. Interesting stuff, but is there a practical application for this knowledge? Well, the authors found that looking to Wikipedia first, then validating against social networks (as opposed to looking first to social networks for breaking news) resulted in fewer false positives, but just as many true positives. So maybe the upshot is that, using a tool like the Live Monitor, we can look to Wikipedia as a more reliable source of breaking news. While I don’t think this will make me start using Wikipedia as a breaking news source, I must say that watching live updates to Wikipedia via the Live Monitor is kind of mesmerizing. If nothing else, this research underlines the power of crowdsourced information. Read more of Phil Johnson's #Tech blog and follow the latest IT news at ITworld. Follow Phil on Twitter at @itwphiljohnson. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook.
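The detection rule the researchers describe (an article cluster edited repeatedly, recently, and by several distinct editors) is simple enough to sketch. The code below is a hypothetical illustration of that heuristic, not the actual Wikipedia Live Monitor source; the thresholds (five edits by three editors within ten minutes) are invented for the example.

from collections import namedtuple
import time

# One recent-changes event: which article cluster, who edited, and when (epoch seconds).
Edit = namedtuple("Edit", "cluster editor timestamp")

# Example thresholds -- invented for illustration, not the monitor's real settings.
WINDOW_SECONDS = 600      # look at the last ten minutes
MIN_EDITS = 5             # at least this many edits in the window...
MIN_EDITORS = 3           # ...made by at least this many distinct editors

def breaking_news_candidates(edits, now=None):
    """Return clusters whose recent edit activity looks like breaking news."""
    now = time.time() if now is None else now
    recent = [e for e in edits if now - e.timestamp <= WINDOW_SECONDS]
    candidates = []
    for cluster in {e.cluster for e in recent}:
        cluster_edits = [e for e in recent if e.cluster == cluster]
        editors = {e.editor for e in cluster_edits}
        if len(cluster_edits) >= MIN_EDITS and len(editors) >= MIN_EDITORS:
            candidates.append(cluster)
    return candidates

if __name__ == "__main__":
    now = 1_000_000.0
    sample = [Edit("Boston Marathon bombings", f"user{i % 4}", now - 30 * i) for i in range(8)]
    print(breaking_news_candidates(sample, now=now))   # -> ['Boston Marathon bombings']

Requiring multiple distinct editors is what filters out a lone contributor making routine copy edits; the real monitor adds cross-language clustering on top of this before a candidate is checked against social networks.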
<urn:uuid:080f73b5-863d-4517-9ac2-33e1be428b4a>
CC-MAIN-2017-04
http://www.itworld.com/article/2709210/big-data/don-t-have-enough-sources-for-breaking-news--try-wikipedia.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00031-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945584
620
2.671875
3
IPv6: Are We There Yet?
The first big push toward implementation of IPv6 came from mobile devices. Now, one of the driving forces is the Internet of Things. As the name implies, this means everything, including machine-to-machine communication (M2M). In 1998, IPv6 was officially introduced with RFC 2460. Now the Internet, businesses, and home users are all implementing IPv6, correct? Well, maybe not totally correct, or even close to correct. The fact that the widespread implementation of IPv6 is taking so long leads to some questions: - Why is IPv6 implementation taking so long? - Who is using IPv6 now? - What are the predicted trends for IPv6? - What are the business benefits of implementing IPv6?
Why is IPv6 implementation taking so long?
The world has been hearing for years that we are running out of IPv4 address space, yet it seems that most organizations have not been rushing to implement IPv6. The cost associated with new hardware and software that supports IPv6 is certainly one of the reasons for this lack of interest, along with the associated expense of training for IPv6 implementation. Also, many organizations implemented strategies such as Network Address Translation (NAT) and Port Address Translation (PAT) as a workaround for needing more IP addresses. While this mitigates the problem of running out of IPv4 address space, it introduces new problems: multiple NAT implementations make troubleshooting more difficult and break the original end-to-end IP model. One of the original organizations pushing for IPv6 compliance in the United States was the Department of Defense (DoD). By 2008 the DoD had piloted IPv6 on its network backbone. However, a December 2014 DoD report determined the following: "Although DoD satisfied the requirement to demonstrate IPv6 on the network backbone by June 2008, DoD did not complete the necessary Federal and DoD requirements and deliverables to effectively migrate the DoD enterprise network to IPv6. This occurred because: - The DoD Chief Information Officer (CIO) and U.S. Cyber Command (USCYBERCOM) did not make IPv6 a priority - The DoD CIO, USCYBERCOM, and Defense Information Systems Agency (DISA) lacked an effectively coordinated effort and did not use available resources to further DoD-wide transition toward IPv6; and - The DoD CIO did not have a current plan of action and milestones to advance DoD IPv6 migration." Marco Hogewoning, one of the Réseaux IP Européens (RIPE) IPv6 Working Group co-chairs, cites the "lack of a clear business case to recover the cost of such a deployment" as a reason for the slow implementation. He goes on to say, "The fundamental problem here is that the majority of market players still view IPv6 as a product, rather than what it really is: a building block to a new future." This paper will explore IPv6 as a building block, look at who is implementing IPv6 now, and examine the current trends in IPv6 implementation.
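The scale of the problem that NAT has been papering over is easy to quantify. The back-of-the-envelope sketch below simply compares the 32-bit IPv4 and 128-bit IPv6 address spaces using Python's standard ipaddress module; it illustrates the arithmetic only, not how many addresses are usable in practice (reserved ranges and allocation policy reduce both figures considerably).

import ipaddress

ipv4_total = 2 ** 32          # every possible 32-bit address
ipv6_total = 2 ** 128         # every possible 128-bit address

print(f"IPv4 addresses: {ipv4_total:,}")                 # 4,294,967,296
print(f"IPv6 addresses: {ipv6_total:.3e}")               # roughly 3.403e+38
print(f"IPv6/IPv4 ratio: {ipv6_total // ipv4_total:,}")  # 2**96

# The ipaddress module makes the same point at the prefix level: a single
# /64 IPv6 subnet already holds 2**64 addresses, dwarfing all of IPv4.
subnet = ipaddress.ip_network("2001:db8::/64")
print(f"Hosts in one /64: {subnet.num_addresses:,}")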
<urn:uuid:099d323b-4f8c-47c5-888f-3d8b958c79e8>
CC-MAIN-2017-04
https://www.globalknowledge.com/ca-en/resources/resource-library/white-paper/ipv6-are-we-there-yet/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00481-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94475
654
2.546875
3
Solar Catamaran Finishes Trip Around the World / May 22, 2012
In September 2010, the world's largest solar-powered boat -- the TÛRANOR PlanetSolar -- set sail from Monaco to become the first boat to circumnavigate the globe using only the sun's power. Last month, it finished its journey where it began. According to gizmag.com, a crew of five piloted the 102-foot-long, 49-foot-wide vessel, which is covered in 5,780 square feet of solar panels. These panels power four electric motors (two in each hull) that have a maximum output of 120 kW and can propel the boat to a speed of 14 knots. The vessel is constructed mainly of a light yet durable carbon-fiber sandwich material. Photos courtesy of planetsolar.org
<urn:uuid:4da3a16e-847b-458d-9b43-0ac2dea58a08>
CC-MAIN-2017-04
http://www.govtech.com/photos/Photo-of-the-Week-Solar-Catamaran-Finishes-Trip-Around-the-World-05222012.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00205-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935538
171
2.546875
3
Figuring out how Edward Snowden breached National Security Agency computers is sort of like solving a puzzle. Take public information, such as the congressional testimony of the NSA director and Snowden's own words, and match it with an understanding of how organizations get hacked, and the pieces seem to fall into place. Security software maker Venafi says it used that approach to conclude that Snowden fabricated secure shell keys and digital certificates to gain access to documents on NSA computers he had no right to access. Secure shell, or SSH, is a cryptographic network protocol used to secure a channel linking two computers over an insecure network. If he had not used encryption, he absolutely would have been caught in his tracks right away. Jeff Hudson, Venafi's CEO, challenges the NSA and Snowden to prove him wrong. An NSA spokeswoman declined to comment on Venafi's analysis, referring comments to the Department of Justice, which is conducting the investigation into the Snowden leaks. DoJ also declined to comment. Venafi isn't the first organization to offer its take on how Snowden breached NSA computers, leaking stolen data to reveal secrets about NSA surveillance programs (see NSA E-Spying: Bad Governance). The news service Reuters reports that Snowden used login credentials and passwords provided unwittingly by colleagues at a spy base in Hawaii to access some of the classified material he leaked to the media. Reuters, citing a source, says Snowden may have persuaded about two dozen fellow workers at the NSA regional operations center in Hawaii to give him their logins and passwords by telling them they were needed for him to do his job as a computer systems administrator.
Exploiting Systems Administrator's Privileges
But Venafi went further. Employing Lockheed Martin's Kill Chain model - which identifies patterns that link individual intrusions into broader campaigns - Venafi in its analysis surmises that Snowden employed existing systems administrator's security privileges to determine what information was available and where it was stored. Then, he gained unauthorized access to other administrative SSH keys and made it look as if he could be trusted and gain access to files and systems he wasn't authorized to see. "This is relatively easy to do if the organization has not protected and secured these technologies, the capabilities," Hudson says. "The NSA hadn't, and most global 2000 companies haven't." NSA Director Gen. Keith Alexander told Congress that Snowden was able to fabricate digital keys because of the agency's failure to detect anomalies, according to Venafi's report. "Venafi's analysis of statements from Gen. Alexander in congressional testimony gives credence to the theory that Snowden generated credentials," says Richard Stiennon, a security analyst and author of the book Surviving Cyberwar. Hudson, in an interview with Information Security Media Group, says Snowden exploited security technologies to move from one computer to another. "These systems gave him greater and greater privilege, and greater and greater access," Hudson says. "What he did was use the classic attack method: he surveilled the situation, he targeted the data he wanted; he got onto those systems; he exfiltrated the data." With massive amounts of data, Snowden needed to transfer information among systems undetected, and he apparently did that by encrypting the data he pilfered, according to the analysis. The Venafi analysis quotes Snowden as saying: "Encryption works.
Properly implemented strong crypto systems are one of the few things that you can rely on." By encrypting the data, Hudson says, Snowden was able to keep the transfer of top-secret data hidden from the NSA. "If he had not used encryption," he contends, "he absolutely would have been caught in his tracks right away." Hudson says Snowden also altered systems' log files to camouflage his malicious actions. Of course, being in the business of selling software and services to secure cryptographic keys and digital certificates gives Venafi a financial incentive to warn other organizations about the insider threat posed by the likes of Snowden. Still, the cybersecurity concerns Venafi presents are worthy of consideration. Is Venafi objective in its analysis? You can be the judge of that. Let us know what you think by commenting in the space below.
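One practical lesson security teams have drawn from episodes like this is that SSH keys need the same inventory and review discipline as passwords. The sketch below is a generic, hypothetical illustration of that idea, not NSA tooling and not Venafi's product: it walks users' authorized_keys files and flags any key fingerprint that is not on an approved list. The file-path pattern and the approved-fingerprint value are assumptions for the example.

import glob
import hashlib
import base64

# Hypothetical allow-list of SHA-256 key fingerprints approved by the security team.
APPROVED_FINGERPRINTS = {
    "Qz5f4R0n2nJH0s7K1yUuXo7m3cDq9pWJp0aZ2mY6k1E",   # placeholder value only
}

def fingerprint(key_b64):
    """SHA-256 fingerprint of a base64-encoded public key blob (OpenSSH style)."""
    digest = hashlib.sha256(base64.b64decode(key_b64)).digest()
    return base64.b64encode(digest).decode().rstrip("=")

def audit_authorized_keys(pattern="/home/*/.ssh/authorized_keys"):
    """Yield (path, fingerprint) for every key that is not on the approved list."""
    for path in glob.glob(pattern):
        with open(path) as handle:
            for line in handle:
                parts = line.split()
                if len(parts) >= 2 and parts[0].startswith("ssh-"):
                    fp = fingerprint(parts[1])
                    if fp not in APPROVED_FINGERPRINTS:
                        yield path, fp

if __name__ == "__main__":
    for path, fp in audit_authorized_keys():
        print(f"Unapproved key {fp} found in {path}")

An inventory like this does not stop a determined insider, but it turns a quietly fabricated credential into something that shows up in a routine report, which is exactly the anomaly detection the article says was missing.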
<urn:uuid:811eb7d1-e5d0-426b-846b-acafcceb3668>
CC-MAIN-2017-04
http://www.bankinfosecurity.com/blogs/how-did-snowden-breach-nsas-computers-p-1578/op-1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00417-ip-10-171-10-70.ec2.internal.warc.gz
en
0.96684
846
2.640625
3
If there's one thing we learned from the acclaimed TV series "The Walking Dead," it is that a blow to the head is what it takes to really kill a zombie. Sadly, the same can't be said when it comes to Zombie Banks. First off, zombie banks are financial institutions that rely on their government to survive, in many cases in the form of corporate bailouts or other forms of taxpayer-backed credit support. These banks would not survive otherwise due to their large number of non-performing assets, which result in an overall negative net worth. Owing to the bad debts they carry, their liabilities are greater than their assets; as a result, they are unable to perform as normal banks should, they are not supporting the growth of the economy, and they are costing taxpayers a lot of money. Nevertheless, in some cases these banks are kept on life support, staggering around between the world of the living and the dead, because they are "too big to fail". Governments will either pump in investments or work with the central bank to promote schemes that encourage these banks to lend to consumers and businesses in an attempt to cover their losses. Lowering interest rates to lower the cost of funding is another means of survival. Despite all that, creditors will keep coming because they believe that the government will continue to support the bank. Until it doesn't. One such instance in recent years is the Ex-Im Bank in the US. Established in the 1930s to promote US manufacturing during the Great Depression, it has continued to have this purpose ever since, even though US exports are hitting record highs. The decision to dissolve this zombie bank in 2015 came amid similar insolvency problems at many other, smaller banks in the US. Another recent example is one of Europe's largest investment banks, which reported €256 million in profit for the first half of 2016, an 81% decline from the same period in 2015. Although it bounced back as the year closed, shares remain at an all-time low, and there are indications that if it remains weak and faces new regulatory fines, this bank may require assistance. At present, the bank's home government denies reports that it is working on a rescue plan and confirms that there are enough assets and capital to stop the bank from going bankrupt, which has helped ease worries. In an attempt to circumvent these situations, the FDIC (Federal Deposit Insurance Corporation) in the US, the organization responsible for evaluating the health of the banking market and ensuring that bank deposits are safeguarded, keeps a list of what it calls "Problem Banks". Banks on this list have serious deficiencies with their finances, operations, or management that threaten their continued solvency. Once a bank is included on that list, it is subject to closer regulatory scrutiny. It can also expect to receive instructions from regulators about what steps must be taken to rebuild its financial strength. In 2012 alone, a total of 51 banks with total assets of $12.0 billion failed, costing the FDIC Deposit Insurance Fund $2.51 billion in losses. So the question that I ask is how technology can help reverse the state of a bank in this situation. We have never seen a zombie on the silver screen recover from this ferocious virus, and that's probably for good reason. The hope now is to save others from this horrifying fate. It is for Problem Banks, then, that Digital Transformation can be deemed a viable vaccine. If you look at it, banks fail due to shortcomings in three main areas:
If you look at it banks fail due to shortcomings in mainly three areas: 1) Risk: When there is a question about the viability of a bank’s long term assets from which it earns its income, there is also a question about whether this bank is able to fulfill its commitments to its customers. Banks who employ Data Science are able to mitigate some of those risks by understanding the trends occurring in their market space. Understand the data at hand and you rise above competition and be able to devise self-adjusting strategies that can guide your salesforce through the turmoil. Data Analytics helps banks to build data models to measure the performance of the companies they invest in against the changes in their respective industries; allowing them to make the right lending decisions. 2) Fraud: Suspicious activities happening under the radar not only hurt a bank’s ledger but also its reputation. Fraud detection technologies can provide the proactiveness of identifying these trends before their full damage takes effect. By creating collections of data (structured and unstructured), integrating it with the bank’s systems for added accuracy and then analyzing it; you can have an enhanced view on any client engagement that does not follow policy. Add to that the ability to perform social mining and you get a new dimension into uncovering new data patterns that can support a bank’s anti-money laundering activities, but also identity fraud and insider trading. 3) Regulation: Recent financial regulations such as Basel III & GDPR, impose new information management practices to be put in place to ensure the quality of a bank’s data. In the case of Basel III, it scores the bank based on its data availability, completeness, quality and consistency. This data can pertain to customer transactions or wealth management activities; it can be living in core systems or legacy applications waiting to be sunset. Failure to properly aggregate this information and produce the necessary regulatory reports, undermines the bank’s capability in healthy survival in its market. At the end, Digital Transformation is ever growing with healthy banks and is being heavily invested in. The question is why is Digital still not considered as an imperative survival strategy for Problem Banks? If you want to survive the apocalypse, start identifying areas of failure in your operations and apply a proper dose of Digital. Remember these banks need your money to survive and not your
<urn:uuid:c0c8b23c-c33a-4877-be5b-4fc9b8b6b6db>
CC-MAIN-2017-04
http://sparkblog.emc.com/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00325-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960292
1,197
2.765625
3
There is a lot more to our young girls than just pink and princesses; they may well hold the keys to solving the skills gap in high-tech and engineering jobs. That is the message of a new video that has gone viral on YouTube this week promoting GoldieBlox, a start-up toy company that sells games, books and clothing to encourage girls to become engineers. The commercial features three girls bored with watching a princess television show, after which they use their baby dolls, tea sets and pink dress-up clothes to build a Rube Goldberg device that circles inside and outside of the house only to change the channel to a cartoon featuring an aspiring girl engineer playing with GoldieBlox toys. The commercial is set to the Beastie Boys’ song “Girls,” but replaces the traditional lyrics like “Girls to do the dishes...girls to do the laundry,” with “Girls build a spaceship, girls code a new app, girls that grow up knowing that they can engineer that.” “I thought back to my childhood with the princesses and the ponies and wondered why construction toys and math and science kits are for boys,” Debbie Sterling, founder and chief executive of GoldieBlox, told The New York Times. “We wanted to create a cultural shift and close the gender gap and fill some of these jobs that are growing at the speed of light.” The federal government is not immune from the challenges of finding highly-skilled workers in science, technology, engineering and math fields, particularly as it faces a potential brain drain, with Baby Boomers in those fields transitioning into retirement. A report released in May by the Partnership for Public Service and Booz Allen Hamilton found that STEM fields are more top-heavy than other federal jobs fields, with 74 percent of federal STEM workers over age 40 and just 7.6 percent under age 30. And while women currently make up nearly half of the American workforce, only 23 percent of workers in STEM-related jobs are women. The number of women enrolling in computer science degrees also is decreasing, from 37 percent in 1985 to just 22 percent in 2005. Still, new jobs statistics suggest at least some positive turnaround in the number of women entering information technology. According to a Dice.com analysis of Bureau of Labor Statistics data, nearly 40,000 jobs were created in tech consulting through September of this year, and of those, 24,100 positions have gone to women. “I introduced my nine-year-old daughter to codeacademy.org, and she got interested in learning HTML to build her first website,” Shavran Goli, president of Dice, told Wired Workplace. “As a technology-industry parent, I’m keen on ensuring my children will make an informed decision if a tech career is right for them. The challenge is to do this at scale. Organizations like Girls Who Code are trying to get the word out, but as with most things in education, it starts with the parents first. If we get to our daughters, nieces or other girls early, there’s no telling what positive changes will come to technology in the future.”
<urn:uuid:31bf17cd-57f2-42b1-92b2-19d5097f945c>
CC-MAIN-2017-04
http://www.nextgov.com/cio-briefing/wired-workplace/2013/11/viral-video-aims-inspire-girls-enter-stem-fields/74372/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00049-ip-10-171-10-70.ec2.internal.warc.gz
en
0.958722
662
2.515625
3
Pennsylvania's Governor Edward G. Rendell helped to dedicate Pennsylvania's newest wind farm on October 16th, 2008. The wind farm has transformed a former surface coal mine in Somerset County into a source of clean, renewable energy that will protect the environment and benefit Pennsylvania's consumers and economy. "We are changing the way electricity is produced and consumed in Pennsylvania by developing renewable and alternative energy sources and implementing advanced energy efficiency and conservation technologies," Rendell said as he visited the Casselman Wind Power Project, developed by Iberdrola Renewables. "The demand for wind power is growing across Pennsylvania and the nation because it's an alternative source of energy from a domestic resource that can be produced at a stable, competitive price." The 23 wind turbines at the Casselman Wind Power Project will produce enough clean energy to meet the equivalent needs of more than 10,000 homes annually, although the power will be sold to commercial and industrial consumers through FirstEnergy Solutions. The utility signed a 23-year agreement to purchase the power generated by the wind farm. Such power purchase agreements are tools to reduce project-financing costs and ultimately keep costs lower for power consumers. About one-third of the wind farm is located on a reclaimed surface coal mine. The Pennsylvania Energy Development Authority invested $500,000 in 2006 to rehabilitate the former mine site, which helped to offset the increased costs and foregone revenues associated with using this environmentally scarred land. The nature of soils at previously mined areas require higher than normal construction costs and this former mine site is at a lower elevation, making the turbines less productive. Iberdrola has taken additional steps to protect the environment with help from Department of Environmental Protection for a study of bat mortality that was conducted at the site by the Bats and Wind Energy Cooperative. The study, partially funded by DEP, tested the effect of stopping wind turbines during low-wind conditions, when bats are most active. The study, the first of its kind in the U.S., examines the value of lost electricity sales due to the temporary shutdowns. The final report is anticipated for public distribution by April 1. "By producing energy from the wind responsibly, rather than burning fossil fuels, the Casselman Wind Farm annually will also avoid emissions of almost 57,000 tons of carbon dioxide and almost one million pounds of sulfur dioxide into the air," Rendell said. "That is equivalent to removing more than 10,750 cars from the road annually, or to the amount of carbon dioxide absorbed by more than 46,000 acres of trees in a year."
<urn:uuid:b0ca81c9-770d-419f-8017-eac5a506c610>
CC-MAIN-2017-04
http://www.govtech.com/technology/Transformed-Mine-Site-to.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00261-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964891
527
2.921875
3
Jonathan Petit, Security Innovation’s principal scientist, presented “Self-driving and connected cars: Fooling sensors and tracking drivers” (pdf) at Black Hat Europe in Amsterdam. One of the key takeaways was that “fooling camera-based systems is easy and cheap.” For instance, it takes less than $60 worth of off-the-shelf hardware to successfully defeat a LiDAR ibeo LUX 3 system that costs thousands of dollars and is responsible for sensing obstacles. Without a human to make driving decisions, “autonomous automated vehicles unconditionally rely on their on-board sensors to detect surroundings objects and understand their environment.” To do this, “valid and accurate sensor data are required to make appropriate driving decisions such as emergency brake, changing trajectory or rerouting.” Yet Petit and his fellow researchers performed black-box attacks, successfully blinding, jamming, replaying and spoofing in various laboratory conditions. Lidar, a laser ranging system, is somewhat like radar but works by shooting laser pulse pings at objects ahead of it and interpreting the echoed readings from the reflection. Lidar is commonly used in collision avoidance systems and adaptive cruise control. The ibeo LUX 3 can track up to 65 objects at a maximum distance of 200 meters, but is vulnerable to relay and spoofing attacks. In one attack which works at distances up to 100 meters, Petit created illusions of fake cars, pedestrians and walls in front of, beside and behind the lidar unit. “I can spoof thousands of objects and basically carry out a denial of service attack on the tracking system so it’s not able to track real objects,” Petit told IEEE Spectrum. “I can take echoes of a fake car and put them at any location I want and I can do the same with a pedestrian or a wall.” It doesn’t cost a fortune either as the attack can “easily” be done “with a Raspberry Pi or an Arduino.” The researchers were able to blind and confuse auto controls when attacking a MobilEye C2-270 camera, which is responsible for things like rear collision alerts, lane departure and pedestrian alerts. Using a laser, LED light sources and a screen, the researchers were able to carry out “jamming, blinding and scenery attacks.” In “Remote Attacks on Automated Vehicles Sensors: Experiments on Camera and LiDAR (pdf),” the researchers concluded: We showed blinding and confusing auto controls attacks on the camera, and relaying and spoofing attacks on the LiDAR. For the MobilEye C2-270, a simple laser pointer was sufficient to blind the camera and prevent detection of a vehicle ahead. A cheap transceiver was able to inject fake objects that are successfully detected and tracked by the ibeo LUX 3. These attacks prove that additional techniques are needed to make the sensor more robust to ensure appropriate sensor data quality. Privacy problems with connected cars While the first part of Petit’s Black Hat presentation focused on the security of autonomous automated vehicles, the second half focused on driver privacy and connected cars. Key takeaways from the connected vehicle privacy portion included the facts: - Everyone can deploy a surveillance system to track connected vehicles. - It is cheap and easy. There are a plethora of potential privacy violations when it comes to connected cars. Petit explained, “Connected Vehicle is an upcoming technology that allow vehicles and road-side infrastructure to communicate to increase traffic efficiency and safety. 
To enable cooperative awareness, vehicles continually broadcast messages containing their location. These messages can be received by anyone, jeopardizing location privacy.” The research paper “Connected Vehicles: Surveillance Threat and Mitigation” (pdf) presented “the first real world experiment focused on tracking capability of a mid-sized observer and pseudonym change frequencies.” After deciding that an attacker would most likely target road intersections for eavesdropping, Petit and a team of researchers deployed Intelligent Transportation Systems (ITS) hardware on a small scale at the University of Twente. “The equipment was deployed for 16 days, during which the vehicle transmitted 2,734,691 messages and we eavesdropped on 68,542 messages.” Experiment results demonstrate that location tracking is easy to perform, and that two sniffing stations are sufficient to offer 40% road-level tracking, while eight sniffing stations offer 90%. Connected cars are here and their connectivity will only increase as they talk to road-side infrastructure; fully autonomous vehicles may start to be a common sight by 2020. Like the security issues, there are ways to solve the privacy issues; Petit believes the time is now to get started.
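Returning to the LiDAR relay attack described earlier in the piece: the reason a cheap transceiver can plant a phantom obstacle is that a pulsed lidar infers range purely from echo delay (range = speed of light x round-trip time / 2). The sketch below only works through that timing arithmetic for a hypothetical scenario; it is not code from Petit's experiments, and the distances are invented for illustration.

C = 299_792_458.0   # speed of light in m/s

def echo_delay_for_range(range_m):
    """Round-trip delay a lidar expects for an object at range_m metres."""
    return 2.0 * range_m / C

def replay_delay(true_range_m, fake_range_m):
    """Extra delay an attacker's transceiver must add to a captured pulse so the
    echo appears to come from fake_range_m instead of true_range_m."""
    return echo_delay_for_range(fake_range_m) - echo_delay_for_range(true_range_m)

if __name__ == "__main__":
    # Hypothetical numbers: attacker's device sits 40 m away, wants to fake a wall at 15 m.
    extra = replay_delay(40.0, 15.0)
    print(f"Delay to add: {extra * 1e9:.1f} ns")                       # about -166.8 ns
    print(f"Expected echo for 15 m: {echo_delay_for_range(15.0) * 1e9:.1f} ns")

A negative result means the spoofed return would have to arrive before the attacker's relay could physically respond, which is why injecting objects closer than the attacker's own position generally requires anticipating the lidar's pulse timing rather than simply replaying a captured pulse.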
<urn:uuid:774d959a-ea37-4dca-bde1-8e42b1deaf60>
CC-MAIN-2017-04
http://www.computerworld.com/article/3005436/cybercrime-hacking/black-hat-europe-it-s-easy-and-costs-only-60-to-hack-self-driving-car-sensors.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00343-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941938
994
2.59375
3
Penetration Testing and Computer Network Security
What is Penetration Testing?
A penetration test, or pen-test, is the process of actively testing your organization's security measures by attempting to penetrate network security using a variety of techniques. It is, in essence, hacking your organization's network in order to evaluate and harden the security measures already in place.
Why GDF's Pen-Testing is Different
If your network has never been pen-tested, or if your security measures are haphazard – perhaps your company is smaller and has grown quickly – chances are a typical pen-test team could breach your system in under an hour. Many companies pen-test first and then tell you what is wrong after. This is not the way we handle a penetration test at GDF. Instead, we follow a cost-saving and very effective pen-test procedure that we've developed after evaluating the thousands of tests we've performed.
Harden the Network First, THEN Pen-Test It
- We start by conducting a thorough interview with your security and IT personnel to find out about your system and its current security posture. Before we even begin penetration testing, our experts will immediately make suggestions regarding ineffective equipment and technologies, such as poor choices in security software, easily breachable firewalls, weak security procedures, etc.
- We work with you to develop a vulnerability assessment. This is basically a full evaluation of the current state of your security posture and is typically performed using commercial software packages that scan and search your system for both internal (within your local network) and external (from the Internet) security issues, problems in your network set-up, etc. In many cases, companies can perform their own vulnerability assessment using software tools we gladly recommend, which results in cost savings for you and in no way compromises the viability of the pen-test.
- Pre-test preparation. After we review the vulnerability assessment, our team makes specific recommendations regarding all aspects of your security – we want your network as secure as possible BEFORE we do the actual penetration test.
- We perform a full penetration test using whatever types of attacks or breach techniques are needed to defeat your now upgraded security and gain access to your system(s).
Post-Test Deliverables
After the completion of penetration testing, we provide a detailed analysis of the methods and techniques used during the test, the results of the various attempts at compromise, as well as detailed documentation on remediation of any security flaws found. GDF simply provides the most thorough and cost-effective penetration test you can get.
What is Tested?
Penetration testing involves the systematic analysis of all the security measures in place. A full test should include some or all of the following areas, with the exact requirements usually being agreed upon in a formal scoping document prior to commencement (this list is provided courtesy of the OSSTMM).
<urn:uuid:df220cc1-42a5-4e0c-9637-d07fcdfe9e5d>
CC-MAIN-2017-04
https://evestigate.com/network-security-penetration-testing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00464-ip-10-171-10-70.ec2.internal.warc.gz
en
0.934338
596
2.515625
3
Eight in ten execs say security landscape is more dangerous Innovations that make data more accessible and devices more mobile also create new challenges for information technology (IT) professionals responsible for cybersecurity, according to new research released recently by IT industry association CompTIA. The vast majority of business and IT executives (83 percent) surveyed for CompTIA's Ninth Annual Information Security Trends Study believe the security threat level is on the rise. "Our data suggests that there is no single overriding factor behind this sentiment," said Tim Herbert, vice president, research, CompTIA. "Rather, it's a combination of elements that each chip away at information safety and security defenses in some way." One of the biggest factors driving cybersecurity concerns today is the greater interconnectivity of devices, systems and users, the survey reveals. Among the disruptive IT trends contributing to greater interconnectivity – and raising new security concerns – are the proliferation of big data, social technologies, cloud computing and mobility. "Billions of devices connect to the Internet daily, and with each touch point there's a potential for new security vulnerabilities," Herbert said. "With more data being produced and touched by more people, the potential for data loss or leakage grows accordingly." One in five organizations in the CompTIA study reported definitely experiencing the loss of sensitive data in the past 12 months, while 32 percent reported likely data loss. Among companies that experienced such data loss in the past 12 months: The research indicates organizations are most likely to struggle with data loss prevention (DLP) efforts when data is in motion, such as transmitting sensitive information in an unencrypted format. Security Breaches on the Rise, with Malware and Hacking at Top of List Beyond data loss, three in four organizations reported first-hand experience with a security incident in 2011, a slight increase over the 2010 rate. On average, organizations reported seven incidents for the year, about half classified as serious. Topping the list of security concerns is the all-encompassing threat known as malware. Yet while malware represents the most pervasive threat, in some ways malware attacks are less feared than highly targeted distributed denial of service attacks, advanced persistent threats and other types of malicious hacking attacks. In the CompTIA study, 58 percent of respondents believe hacking is a more critical threat today compared to two years ago. Human error continues to be a significant factor in security breakdowns. A net of 53 percent of IT and business executives say human error is more of a factor today than it was two years ago. Seven in ten organizations rate security as a higher or upper level priority this year, compared to 49 percent in 2010. Four out of five companies expect to increase information security budgets. The intensified focus on information security has created a job market where the demand for skilled workers exceeds the current supply. In the CompTIA study, 40 percent of organizations say they face challenges in hiring IT security specialists. Organizations view certified staff as an integral part of their security apparatus. More than eight in ten organizations formally or informally use security certifications as a means to validate expertise; and 94 percent believe security certifications deliver a positive return on investment. 
CompTIA's Ninth Annual Information Security Trends Study is based on an online survey of 500 U.S. IT and business executives directly involved in setting or executing information security policies and processes within their organization. Data was collected in November and December 2011. The complete report is available at no cost to CompTIA members who can access the file at CompTIA.org or by contacting firstname.lastname@example.org. In addition to comprehensive market research, CompTIA offers the IT industry a broad selection of other resources related to cybersecurity, including: CompTIA is the voice of the world's information technology (IT) industry. Its members are the companies at the forefront of innovation; and the professionals responsible for maximizing the benefits organizations receive from their investments in technology. CompTIA is dedicated to advancing industry growth through its educational programs, market research, networking events, professional certifications, and public policy advocacy. For more information, visit www.comptia.org.
<urn:uuid:94942cee-17ab-4e55-b66e-fcd254eee7eb>
CC-MAIN-2017-04
http://www.bsminfo.com/doc/innovations-that-bring-new-benefits-also-0001
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00518-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942866
849
2.578125
3
Attackers use a method called scanning before they attack a network. Scanning can be considered a logical extension (and overlap) of active reconnaissance, since the attacker uses details gathered during reconnaissance to identify specific vulnerabilities. Often attackers use automated tools such as network/host scanners and war dialers to locate systems and attempt to discover vulnerabilities. An attacker follows a particular sequence of steps to scan a network, and the scanning methods may differ based on the attack objectives set before the scan begins. Attackers can gather critical network information, such as the mapping of systems, routers, and firewalls, with simple tools like Traceroute. They can also use tools like Cheops to add sweeping functionality on top of what Traceroute renders. Port scanners can be used to detect listening ports and learn about the nature of the services running on the target machine. The primary defense against port scanners is to shut down unnecessary services. Appropriate filtering may also be adopted as a defense mechanism, but attackers can still use tools to determine the filtering rules. The most commonly used tools are vulnerability scanners, which can search a target network for several known vulnerabilities and potentially detect thousands of them. This gives attackers the advantage of time, because they only have to find a single means of entry while the systems professional has to secure many vulnerable areas by applying patches. Organizations that deploy intrusion detection systems still have reason to worry, because attackers can use evasion techniques at both the application and network levels. Certified Ethical Hacker v7
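To make the port-scanning step concrete, here is a minimal sketch of what a basic TCP connect scan does under the hood. It is written in Python purely for illustration and is not one of the tools named above; the target address, port list, and timeout are placeholder values, and anything like this should only be run against systems you own or are explicitly authorized to test.

```python
import socket

def connect_scan(host, ports, timeout=0.5):
    """Attempt a full TCP connection to each port; ports that accept are reported open."""
    open_ports = []
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            if s.connect_ex((host, port)) == 0:  # 0 means the TCP handshake completed
                open_ports.append(port)
        finally:
            s.close()
    return open_ports

if __name__ == "__main__":
    # Placeholder target: probe a handful of well-known ports on the local machine.
    print(connect_scan("127.0.0.1", [22, 80, 135, 139, 443, 445, 3389]))
```

Full-featured scanners such as Nmap layer SYN scanning, timing controls, and service fingerprinting on top of this basic idea, which is why the defenses mentioned above (shutting down unneeded services and filtering) matter.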
<urn:uuid:c33b0a43-e3ca-4730-bc76-06e347d889f1>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2011/08/11/the-5-phases-of-hacking-scanning/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00426-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944603
308
2.765625
3
The world will be introduced to the new version of the Internet Protocol (IP) this summer, as web giants such as Google, Microsoft Bing, Yahoo, Facebook and over 1,000 more sites around the world have agreed to trial IPv6. World IPv6 Day will take place on June 8th this year, and all participating websites will enable the new Internet Protocol version 6 (IPv6) addressing standard on their services. Confusion still exists over what IPv6 is and how it will affect people. Each time you go online you are assigned an IP address to identify your device, whether that be a PC, mobile or tablet. This allows you to connect with websites, which currently use the old IPv4 standard. That standard allows only a fixed number of addresses, and these are about to run out, so the adoption of IPv6 is a race against time. Internet users do not need to worry, as they can test their internet connection's compatibility thanks to Google at http://ipv6test.google.com/. IPv4 is still, for now, being catered for, so even if broadband providers in the UK lack IPv6 support, users shouldn't have any trouble using their connection as normal. Google admits that it could still take "years for the Internet to transition fully to IPv6", in large part due to the slow pace of progress by large ISPs and hardware manufacturers (such as broadband router makers). For more information, visit the ISPreview website.
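For readers who would rather check from a script than a browser, the hedged sketch below asks the operating system for an IPv6 address of a host and then tries to open a connection over it. The host name is only an example of a site that publishes an IPv6 address; whether the connection succeeds depends on your ISP and router actually supporting IPv6, exactly as described above.

```python
import socket

def ipv6_reachable(host="ipv6.google.com", port=80, timeout=3.0):
    """Return True if `host` resolves to an IPv6 address that we can connect to."""
    try:
        infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
    except socket.gaierror:
        return False  # no IPv6 (AAAA) address could be resolved
    for family, socktype, proto, _canonname, sockaddr in infos:
        try:
            with socket.socket(family, socktype, proto) as s:
                s.settimeout(timeout)
                s.connect(sockaddr)
                return True
        except OSError:
            continue  # try the next resolved address, if any
    return False

if __name__ == "__main__":
    print("IPv6 connectivity:", "yes" if ipv6_reachable() else "no")
```

A result of "no" does not necessarily mean anything is broken; as noted above, IPv4 connections will keep working as normal during the transition.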
<urn:uuid:d62bd362-3caa-4404-ae2e-5ecab6380594>
CC-MAIN-2017-04
https://www.gradwell.com/2012/01/19/world-ipv6-launch-day-has-been-announced/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00334-ip-10-171-10-70.ec2.internal.warc.gz
en
0.930703
303
2.546875
3
If you are a system administrator, IT professional, or a power user it is common to find yourself using the command prompt to perform administrative tasks in Windows. Whether it be copying files, accessing the Registry, searching for files, or modifying disk partitions, command-line tools can be faster and more powerful than their graphical alternatives. This tutorial will walk you through creating a command-line toolkit that contains useful programs and utilities that can make administering and using your computer easier and more efficient. The tutorial will also walk you through configuring your PATH environment variable so that these tools are available whenever you need them without having to specify the complete path to your toolkit folder. At the end of the tutorial we have listed a variety of command-line programs that are included with Windows or are by 3rd party developers that you can use as part of your command-line toolkit. The first step is to create a folder that you will use to store your command-line programs. This folder can be located anywhere, but should have a name that describes what it is being used for. Some example folder names that you can use are bin, cl, or command-line. For the purpose of this tutorial, we will use the folder C:\command-line to store the command-line tools that we would like to use. Once the folder has been created, we now want to add it to the Windows PATH so that we do not have to type the full path to the command-line tool every time we wish to use one. To do this, click on the Start button and type System. If you are using Windows 8, you can just type System from the Start Screen. When the search results appear, click on the System control panel in the search results to open the control panel as shown below. Now click on the Advanced system settings option as indicated by the red arrow in the image above. This will open the Advanced tab for the System Properties screen. Now click on the Environment Variables button to open a screen that lists the various environment variables that are configured in Windows. Under the System variables box scroll down till you see the Path variable. Once you see that variable, double-click on it to open a screen where you can edit it. The Path variable is a list of folders separated by a semi-colon (;) that Windows will use to search for programs to execute when you type them in. When you try to launch a program from the command-line, Windows will search through all the folders in its path and execute the program if it is found. As we do not want to have to type the full path to a command-line program (C:\command-line\program.exe) every time we use it, we can add the C:\Command-line folder to our path so we only have to type the program name (program.exe) to launch it. As our command-line tools in this tutorial are located in C:\command-line we want to add this folder to the end of the list of folders that are already present in the Variable value field. To do this, go to the very end of the text in the Variable value field and type ;C:\command-line. When you do this you will need to substitute C:\command-line with the path to your folder. When you are done, you should now see the field that looks similar to the image above. To save your changes, click on the OK button and then close the System Control Panel. Now whenever you type in a program name that is stored in your command-line program folder, Windows will be able to find it and execute it. 
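As a quick sanity check on the steps above, the short Python sketch below imitates what Windows does when you type a program name: it walks each folder listed in the PATH variable and reports the first one that contains the executable. The folder and file names are just the examples used in this tutorial, so substitute your own.

```python
import os

def find_on_path(program, path_value=None):
    """Return the full path of `program` from the first PATH folder containing it, or None."""
    path_value = path_value if path_value is not None else os.environ.get("PATH", "")
    for folder in path_value.split(os.pathsep):   # the separator is ';' on Windows
        candidate = os.path.join(folder, program)
        if os.path.isfile(candidate):
            return candidate
    return None

if __name__ == "__main__":
    # After adding C:\command-line to PATH, a tool saved there should be found by name alone.
    print(find_on_path("program.exe"))     # replace with a tool you actually downloaded
    print(find_on_path("notepad.exe"))     # a program that ships with Windows
```

Keep in mind that the PATH change made in the System control panel only applies to command prompts opened after the change, so open a new prompt before concluding that something is wrong.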
This section will list a variety of command-line programs that can you use to start your toolkit. When using the list below, if the program is not bundled with Windows, then the name of the program will also be a link to the site that you can use to download the program and save it to your command-line folder. If the program name does not contain a link, then it is bundled with Windows and can already be used from your command prompt. If there are any other tools that you recommend we add to this list, please let us know. Administration and Troubleshooting Programs |AccessChk||AccessChk lists the kind of permissions specific users or groups have to resources including files, directories, Registry keys, global objects and Windows services| |at||The AT command schedules commands and programs to run on a computer at a specified time and date. The Schedule service must be running to use the AT command.| |CoreInfo||Coreinfo is a command-line utility that shows you the mapping between logical processors and the physical processor, NUMA node, and socket on which they reside, as well as the cache’s assigned to each logical processor.| |driverquery||Displays a list of installed device drivers.| |MpCmdRun.exe||A command-line interface for Windows Defender. To execute this program you must use the full path: %ProgramFiles%\Windows Defender\MpCmdRun.exe| |net||Various Windows management commands. More information can be found here.| |netsh||Netsh is a command-line scripting utility that allows you to, either locally or remotely, display or modify the network configuration of a computer that is currently running. More information can be found here.| |powershell||Windows PowerShell is a task-based command-line shell and scripting language designed especially for system administration. More information can be found here.| |PsLogList||Allows you to list the contents of local or remote computer's Windows Event Log.| |PsPasswd||PsPasswd is a tool that lets you change an account password on the local or remote systems.| |PsService||Allows you to list and configure Windows services.| |runas||Run a program as another user.| |rundll32||Execute functions exported in a DLL file.| |sc||Manage Windows Services.| |shutdown||Shutdown a local or remote computer.| |SigCheck||Verify that images are digitally signed and dumps version information contained within the file.| |UnixUtils||A collection of Unix utilities that have been ported to Windows. These utilities are very useful and include programs like grep, split, tar, dir, etc.| |wmic||A program that allows command-line and batch file access to Windows Management Instrumentation. More information can be found here.| |WUInstall||A command-line Windows Update installer and management program.| Boot and Windows Startup Programs |bcdboot||The bcdboot.exe command-line tool is used to copy critical boot files to the system partition and to create a new system BCD store. More information can be found here.| |bcdedit||The Bcdedit.exe command-line tool modifies the boot configuration data store. The boot configuration data store contains boot configuration parameters and controls how the operating system is booted. This tool is for Windows Vista and later. More information can be found here.| |bootcfg||More information can be found here.| |repair-bde||The bootcfg command is a Microsoft Windows Server 2003 utility that modifies the Boot.ini file. 
This command has a function that can scan your computer's hard disks for Microsoft Windows NT, Microsoft Windows 2000, Microsoft Windows XP, and Windows Server 2003 installations, and then add them to an existing Boot.ini file or rebuild a new Boot.ini file if one does not exist. You can use the bootcfg command to add additional Boot.ini file parameters to existing or new entries. More information can be found here.| File Comparison, Search, and Viewing Programs |comp||Compares the contents of two files or sets of files.| |findstr||Searches for strings in files. This is a powerful tool, but contains a limited Regular Expression functionality. If you want a string searching tool with greater RegExp functionality, you may want to use grep that is part of the UnixUtils package.| |fc||Compares two files or sets of files and displays the differences between them.| |more||Displays a file one page at a time.| |sort||Reads input, sorts data, and writes the results to the screen, to a file, or to another device. More information about sort can be found here.| |type||Displays the entire file to the screen.| File Permission and Management Programs |7Zip||Full featured archive program that can work with almost any archive type. When adding this to your command-line folder, be sure to copy both 7z.exe & 7z.dll for it to work properly.| |attrib||Displays, sets, or removes the read-only, archive, system, and hidden attributes assigned to files or directories. Used without parameters, attrib displays attributes of all files in the current directory. More information can be found here.| |cd||Changes the current working directory.| |copy||Copy a file to another name or to a different folder.| |dir||List the files in a folder.| |File Checksum Integrity Verifier||The File Checksum Integrity Verifier (FCIV) utility can generate MD5 or SHA-1 hash values for files to compare the values against a known good value. FCIV can compare hash values to make sure that the files have not been changed.| |forfiles||Selects a file (or set of files) and executes a command on that file.| |Handle||Handle is a utility that displays information about open handles for any process in the system. You can use it to see the programs that have a file open, or to see the object types and names of all the handles of a program.| |icacls||Displays or modifies discretionary access control lists (DACLs) on specified files, and applies stored DACLs to files in specified directories. More information about icacls can be found here.| |Junction||Allows you to create, list, or delete Junctions in Windows.| |LADS||LADS will display a list of all alternate data streams found in a particular folder.| |md5sum||Lists the md5 has for a particular file or numerous files in a folder.| |move||Move a file or folder to another location.| |ren||Rename a file or folder.| |Sdelete||You can use SDelete both to securely delete existing files, as well as to securely erase any file data that exists in the unallocated portions of a disk (including files that you have already deleted or encrypted). 
SDelete implements the Department of Defense clearing and sanitizing standard DOD 5220.22-M, to give you confidence that once deleted with SDelete, your file data is gone forever.| |sfc||Scans the integrity of all protected system files and replaces incorrect versions with correct Microsoft versions.| |Strings||Displays strings found within a file.| |xcopy||Copies files and directories, including subdirectories.| Filesystem Management Programs |chkdsk||Checks a disk and displays a status report.| |defrag||Locates and consolidates fragmented files on local volumes to improve system performance.| |diskpart||Diskpart allows you to manage and modify disk partitions. More information about diskpart can be found here.| |FixMBR||Repairs the master boot record of the boot disk. The fixmbr command is only available when you are using the Recovery Console.| |recover||Recovers readable information from a bad or defective disk.| |takeown||This tool allows an administrator to recover access to a file that was denied by re-assigning file ownership.| Network Diagnostics & Administration Programs |arp||Displays and modifies the IP-to-Physical address translation tables used by address resolution protocol (ARP). Useful for finding mac addresses of other networked devices on your network.| |cURL||cURL is a command line tool for downloading web pages, entire sites, ftp files, etc.| |ipconfig||Displays all current TCP/IP network configuration values and refreshes Dynamic Host Configuration Protocol (DHCP) and Domain Name System (DNS) settings. Used without parameters, ipconfig displays the IP address, subnet mask, and default gateway for all adapters. More information can be found here.| |Netcat||Netcat is a featured networking utility which reads and writes data across network connections, using the TCP/IP protocol. This is a very useful tool for diagnosing network connections, open firewall ports, or for sending the output of a local command to a remote computer.| |netstat||Displays protocol statistics and current TCP/IP network connections.| |Nmap||Nmap ("Network Mapper") is a utility for network discovery and security auditing. This program can quickly perform a TCP/IP audit of your network.| |nslookup||Nslookup allows you to perform DNS (Domain Name Service) resolution.| |pathping||The PathPing tool is a route tracing tool that combines features of Ping and Tracert with additional information that neither of those tools provides. PathPing sends packets to each router on the way to a final destination over a period of time, and then computes results based on the packets returned from each hop. Since PathPing shows the degree of packet loss at any given router or link, you can pinpoint which routers or links might be causing network problems. More information can be found here.| |ping||Ping is a computer network administration utility used to test if you can reach a host on an Internet Protocol (IP) network and to measure the round-trip time for messages sent from the originating host to a destination computer.| |PsFile||PsFile is a command-line utility that shows a list of files on a system that are opened remotely, and it also allows you to close opened files either by name or by a file identifier.| |PsExec||PsExec is a program that lets you execute processes on other systems, complete with full interactive use for console applications, without having to manually install client software. 
Please note that some anti-virus vendors may detect this as "Remote Admin", but it is a legitimate tool from Microsoft.| |PsLoggedOn||PsLoggedOn is a program that displays both the locally logged on users and users logged on via resources for either the local computer, or a remote one. If you specify a user name instead of a computer, PsLoggedOn searches the computers in the network neighborhood and tells you if the user is currently logged on.| |route||Displays and modifies the entries in the local IP routing table. Used without parameters, route displays help. More information can be found here.| |tracert||Displays the path taken by TCP/IP packets as they traverse from your local computer to a remote target. More information can be found here.| |Wget||GNU Wget is a program for retrieving files using HTTP, HTTPS and FTP, the most widely-used Internet protocols.| Process Management Programs |ListDlls||ListDLLs is a utility that reports the DLLs loaded into processes. You can use it to list all DLLs loaded into all processes, into a specific process, or to list the processes that have a particular DLL loaded.| |PsKill||Allows you to terminate processes.| |PsList||Lists all running processes.| |tasklist||Lists all running processes and services. This program can also be used to list what services are running under a particular svchost process. See here for more information regarding how to do that.| |taskkill||This tool is used to terminate tasks by process id (PID) or image name.| If there are any other command-line tools that you think we missed, please let us know about them.
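Because every tool listed above is a console program, the toolkit can also be driven from scripts rather than typed by hand. The sketch below is only an illustration of that idea: it runs the built-in tasklist command from Python and filters its output for a given name. The process name used here is a placeholder; any of the bundled or downloaded tools in your command-line folder could be invoked the same way once the folder is on your PATH.

```python
import subprocess

def grep_tasklist(keyword):
    """Run the built-in `tasklist` command and return output lines containing `keyword`."""
    result = subprocess.run(["tasklist"], capture_output=True, text=True, check=True)
    return [line for line in result.stdout.splitlines()
            if keyword.lower() in line.lower()]

if __name__ == "__main__":
    # Placeholder example: show any running svchost processes.
    for line in grep_tasklist("svchost"):
        print(line)
```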
<urn:uuid:f4be3d41-8c1b-4501-b9e0-10d36c669a35>
CC-MAIN-2017-04
https://www.bleepingcomputer.com/tutorials/command-line-toolkit-for-windows/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00454-ip-10-171-10-70.ec2.internal.warc.gz
en
0.879584
3,789
3.15625
3
The forthcoming unmanned, almost entirely secretive X-37B spacecraft will include a test version of an ion engine that could keep spaceships in orbit longer while making them more maneuverable. The Air Force Research Laboratory made the unprecedented announcement that the X-37B will soon blast into space with what's known as a Hall thruster experiment onboard the flight vehicle. From the Air Force lab release: "The Hall thruster that will fly on the X-37B experiment is a modified version of the units that have propelled Space and Missile Systems Center's first three Advanced Extremely High Frequency military communications spacecraft. A Hall thruster is a type of electric propulsion device that produces thrust by ionizing and accelerating a noble gas, usually xenon. While producing comparatively low thrust relative to conventional rocket engines, Hall thrusters provide significantly greater specific impulse, or fuel economy. This results in increased payload carrying capacity and a greater number of on-orbit maneuvers for a spacecraft using Hall thrusters rather than traditional rocket engines. The experiment will include collection of telemetry from the Hall thruster operating in the space environment as well as measurement of the thrust imparted on the vehicle. The resulting data will be used to validate and improve Hall thruster and environmental modeling capabilities, which enhance the ability to extrapolate ground test results to actual on-orbit performance." This will be the spacecraft's fourth mission -- the first three flights have accumulated a total of 1,367 days of on-orbit experimentation prior to successful landings and recoveries at Vandenberg Air Force Base, CA, the Air Force stated. According to the Air Force, the spacecraft is based on NASA's X-37 design (NASA's X-37 system was never built) and is designed for vertical launch to low Earth orbit altitudes where it can perform long duration space technology experimentation and testing. Upon command from the ground, the orbital test vehicle autonomously re-enters the atmosphere, descends and lands horizontally on a runway. The Air Force discloses this much information on the X-37B:
- Prime Contractor: Boeing
- Height: 9 feet, 6 inches (2.9 meters)
- Length: 29 feet, 3 inches (8.9 meters)
- Wingspan: 14 feet, 11 inches (4.5 meters)
- Launch Weight: 11,000 pounds (4,990 kilograms)
- Power: Gallium arsenide solar cells with lithium-ion batteries
Mission: The X-37B Orbital Test Vehicle is an experimental test program to demonstrate technologies for a reliable, reusable, unmanned space test platform for the U.S. Air Force. The primary objectives of the X-37B are twofold: reusable spacecraft technologies for America's future in space and operating experiments that can be returned to, and examined, on Earth.
Features: The X-37B Orbital Test Vehicle is the newest and most advanced re-entry spacecraft. Based on NASA's X-37 design, the unmanned OTV is designed for vertical launch to low Earth orbit altitudes where it can perform long duration space technology experimentation and testing. Upon command from the ground, the OTV autonomously re-enters the atmosphere, descends and lands horizontally on a runway. The X-37B is the first vehicle since NASA's Shuttle Orbiter with the ability to return experiments to Earth.
Technologies being tested in the program include advanced guidance, navigation and control, thermal protection systems, avionics, high temperature structures and seals, conformal reusable insulation, lightweight electromechanical flight systems, and autonomous orbital flight, reentry and landing.
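To see why the "fuel economy" advantage quoted above matters, here is a back-of-the-envelope sketch that applies the standard Tsiolkovsky rocket equation to the same velocity change using two illustrative specific-impulse values: roughly 300 seconds for a conventional chemical engine and roughly 1,500 seconds for a Hall thruster. The spacecraft mass and delta-v budget are assumptions chosen only to show the scale of the difference; they are not published X-37B mission figures.

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_needed(dry_mass_kg, delta_v_ms, isp_s):
    """Tsiolkovsky rocket equation: propellant mass needed to give a dry mass the delta-v."""
    mass_ratio = math.exp(delta_v_ms / (isp_s * G0))
    return dry_mass_kg * (mass_ratio - 1.0)

if __name__ == "__main__":
    dry_mass = 5000.0   # assumed dry mass, kg (of the same order as the X-37B's launch weight)
    delta_v = 500.0     # assumed total on-orbit manoeuvring budget, m/s
    for label, isp in [("chemical engine, Isp ~300 s", 300.0),
                       ("Hall thruster, Isp ~1500 s", 1500.0)]:
        print(f"{label}: about {propellant_needed(dry_mass, delta_v, isp):.0f} kg of propellant")
```

Under these assumptions the chemical engine needs several times more propellant for the same manoeuvres; the trade-off, as the release notes, is that the Hall thruster delivers its thrust far more slowly.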
<urn:uuid:f5010347-9e42-4ece-a064-9852f4f14a2c>
CC-MAIN-2017-04
http://www.networkworld.com/article/2915845/security0/air-force-reveals-tiny-but-amazing-bit-of-super-secret-unmanned-spacecraft-flight-plan.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00270-ip-10-171-10-70.ec2.internal.warc.gz
en
0.893786
758
3.4375
3
It should be abundantly clear to anyone who follows technology that Linux is a worldwide success story and has been for a long time. The open source operating system kernel originally developed by Linus Torvalds while he studied at the University of Helsinki, turned 25 earlier this fall. As of now, Linux is used as a basis for countless new products coming to market every day. Before we take a look at some of those, though, let’s go through some of the background to Linux’s success. 90s computers came with terrible operating systems When Torvalds started hacking away at his pet project, the world was starved for a cheap option to get modern, stable operating systems onto PC computers. Operating systems are the software that make a computer’s hardware work together and interact with the user and the software they need. The mainstream commercial operating systems for personal computers in the early nineties were straight out of clown college: MS-DOS was absurdly bereft of features and Windows, gaining popularity by then, was only a plastered-on windowing system. Apple’s Mac computers of the day ran a similarly rudimentary operating system that may have appeared more elegant on the surface. But classic MacOS too, lacked many of the features we expect computers to do every day now: like the ability to reliably pretend like they’re doing many things at once, preemptive multitasking. Alternatives to PC and Mac operating systems were commercial Unix variants for PCs that were prohibitively expensive. In the early nineties, IBM and Microsoft had just teamed up to build OS/2, in what through its own drama spun off to be Microsoft’s NT. This more modern operating system, is the foundation for all Microsoft systems today. But as Linus started playing around with a 386, all that was far off in the future. So it wasn’t surprising that the world of computer enthusiasts in the early nineties released a lot of bottled-up enthusiasm about an operating system with all the promise to make their 386-class PCs more useful. The holy grail for enthusiasts was to mimic the capabilities of powerful, networked computers seen at universities and businesses and Linux delivered. The early days of Linux’s success story were overshadowed by the public’s quite correct perception that Torvalds’ creation couldn’t take on Microsoft’s near-monopoly on the desktop OS market. But Linux’s real success story is one built up over time. Linux is now the defacto, go-to way in which new systems are being built. Wondering what we’re on about? Well, let’s take a look at where you can find Linux. 1. Almost every smartphone except iPhone Android, used on billions of smartphones, is built around the Linux kernel. Linux for server use and similar, is normally is perceived as distributions like Debian, Ubuntu or Red Hat/Fedora that bundles a similar set of tools beginning from GNU command line utilities/compilers and central pieces like glibc and the Xorg graphical system. Android uses different components to be suited for smartphone use, helping to make the touch screen the most used computer type ever. Among other things, Android apps are mostly Java code run in a special virtual machine. But this sort of modularity is indeed where Linux differs from most operating systems: it really is to be thought of as just a kernel. How you build the rest of a Linux product is just like, your opinion, man. 2. 
Wireless routers and other network equipment In the early 00s, components like router boards started getting powerful enough to run a general-purpose operating system, rather than tiny, very specialized Real Time Operating Systems (RTOS). Linux's convenience and free availability made it a good choice for inexpensive network gear. In 2005 Linksys, the manufacturer of the iconic WRT54GL routers, inadvertently made the Linux router a commodity by initially failing to comply with the Linux GNU General Public License. The GPL requires modified versions of Linux to be made available to the general public, and after being told to hand over the source in a bit of a media scandal, Linksys did. That Linksys code is now the precursor to a lot of open source router firmware projects, like OpenWRT. Despite its age and relative slowness, Linksys still sells the WRT54GL, which still enjoys significant demand thanks to its tinker friendliness. By now, a lot of the heavy-duty routers and switches powering big and small networks in businesses are starting to be based on Linux, too. 3. The first generation of successful, mass-scale internet companies Companies like Google, Amazon and Yahoo are well known for their use of Linux and other open source operating systems, like FreeBSD, to get started building server infrastructure easily. The operating system is now something you just download. But as we hinted at before, proper ones used to cost you an arm and a leg, not counting application software. It bears repeating: it wasn't clear back then, but Linux played a big role in making the operating system a commodity. Linux isn't alone in this category, even today. WhatsApp's messaging system is built on FreeBSD, and so are Netflix's ingenious appliances shipped to ISPs all over the world to serve close to 100 gigabits worth of video per second! 4. A vast majority of websites, large and small Companies of all sizes on the internet realized the value in using free operating systems to conduct business. Many realized that individuals and companies need websites and need to run them without paying lots of money for hardware, expensive high-end internet connections, and operational costs. So the web hosting industry and 'shared hosting' were born, around Linux and other open source software, like the web server Apache. Ever since, Linux has become sort of a default platform on top of which tools are developed for making web sites of many kinds, including this one. In other words, when you surf the "information superhighway", whether it be to buy tickets to a concert or to waste time on Facebook, you're using Linux most of the time. 5. The most expensive machines Linux's world domination didn't start out small: servers running websites are usually relatively beefy computers with a lot of resources that need to be shared efficiently. Remember reading how really small computers, like routers, are using Linux too? Well, that makes Linux scalable. From tiny router boards, Linux scales up to the world's very fastest computers: HPC (high-performance computing) supercomputers with tons and tons of CPUs and RAM. Ever since the mid-00s, Linux has been pretty much the default on big computers used for scientific calculation and modelling in academia and other research fields. Linux is even sometimes used on mainframe computers, big commercial machines that are built to run critical applications like financial systems very reliably without hiccups. Do you use these machines?
Well, indirectly: for example, the weather is being prognosticated on supercomputers and your financial information is being run through heavy-duty machines. 6. "Cloud computing", the idea that makes mobile apps run smoothly and cheaply In the mid 00s, building complicated or big online services mostly required that everyone buy or rent their own physical servers, often with more capacity than needed, for hundreds or thousands of dollars a month. Granted, this was cheaper than computer operations ever before, but still, infrastructure could become a great capital expenditure. Amazon.com, the bookstore, which by the mid-00s was transforming into an everything store, realized that it could sell unused compute resources in its data centers in the form of virtual servers, isolated copies of several operating systems on one machine. Furthermore, Amazon included tools for developers to buy these virtual servers on demand. Suddenly, the equipment that makes large-scale apps and services possible became pay-as-you-go infrastructure. Further useful services like mass storage and CDNs made Amazon Web Services instrumental in making mobile apps and web services boom, creating only operational expenses, like they never did during the first dotcom bubble. Unsurprisingly, by this point, Amazon used the open source Xen virtualization hypervisor, or "virtualization engine", on top of Linux to slice out the virtual hardware it rented. 7. Integrated systems, the "invisible" computers all around us As hinted at with the earlier router example, Linux early on became a great operating system for devices you don't see or think about as computers, or to be more precise, the field of "integrated systems". You can find Linux on everything from computer kiosks, ATMs, signage, in-flight entertainment systems, smart TVs and computer monitors, you name it. Many cars run Linux on some of their many different subsystems, Tesla being a famous example. The same goes for the new generation of "internet of things" industrial and home devices that are gifted smarts by being connected and containing tiny computers. There are still potential use cases where Linux isn't the right choice, and smaller Real Time Operating Systems like VxWorks and QNX are used. But largely, Linux is where one would start to look and poke around for building a large number of products. 8. Most home computers that aren't on Windows run Linux, or share some Linux DNA Home PCs are the holdouts for mass adoption of Linux, and that kind of makes them an exception by now. Still, as we said, Android, "the new Windows", is Linux based. So is ChromeOS, the limited and well secured Google operating system inside typically cheap Chromebook laptops. Likewise, fun, small hobbyist and prototyping computer boards, like the Raspberry Pi, are designed to run Linux by default. Furthermore, Unix, the family of operating systems Linux belongs to, originating from academic, big iron computers, is well represented elsewhere where there aren't copies of Windows. Apple's main operating systems, iOS and MacOS, are designed and built as Unix-like systems from the start. Apple uses a kernel known as XNU, surrounding it with tools largely adopted from FreeBSD. Sony's PlayStation consoles, generations three and four, are also based on FreeBSD. So, there you have it: a summary of how Linux (and open source systems very much like it) are taking the world by storm, or rather fortifying the empire that is free, open source software.
All these use cases of Linux and open source operating systems at large have the further benefit of building a foothold for all kinds of other open source applications in businesses, from web-oriented servers and programming languages to ready-made web apps, databases and much more. This means that the components for building the next great thing for taking over the world are standardized and available to everyone. When you think about it, it's a kind of magic.
<urn:uuid:8c0d635f-9e6d-4f5c-8eb4-7a4b5726cfdb>
CC-MAIN-2017-04
https://www.miradore.com/you-use-linux-every-day/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00206-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943933
2,388
2.640625
3
IBM and a number of European academic and corporate scientists today announced a project known as Steeper that aims to reduce the energy used by everything from mobile phones to laptops and televisions to supercomputers by 10-fold. The problem? The enormous amount of electricity sucked up by computer equipment in standby mode. In the European Union it is estimated that the vampire effect of standby power already accounts for about 10 percent of the electricity used in homes and offices of the member states. By 2020 it is expected that electricity consumption in standby/off mode will rise to 49 terawatt-hours per year - nearly equivalent to the annual electricity consumption of Austria, the Czech Republic and Portugal combined, according to a press release from IBM. Moreover, electronic devices currently account for 15% of household electricity consumption, and energy consumed by information and communications technologies as well as consumer electronics will double by 2022 and triple by 2030 to 1,700 terawatt-hours -- this is equal to the total residential electricity consumption of the United States and Japan in 2009, according to the International Energy Agency (IEA). "Our vision is to share this research to enable manufacturers to build the Holy Grail in electronics, a computer that utilizes negligible energy when it's in sleep mode, which we call the zero-watt PC," said Prof. Adrian Ionescu of the Nanolab at Ecole Polytechnique Federale de Lausanne, who is coordinating the project. With the support of the European Commission's 7th Framework Program (FP7), project Steeper scientists will explore novel nanoscale building blocks for computer chips that aim to reduce the operating voltage to less than 0.5 volt, thus reducing their power consumption by one order of magnitude. The main focus of the project is to develop what are known as tunnel field-effect transistors and semiconducting nanowires to reduce electricity that "leaks" from electronics. Coordinated by Ecole Polytechnique Federale de Lausanne (EPFL), Project Steeper includes IBM Research - Zurich, Infineon and GLOBALFOUNDRIES, the University of Bologna, the University of Dortmund, the University of Udine and the University of Pisa, among others. The project is underway and is expected to last 36 months.
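To put the standby numbers above in perspective, here is a toy back-of-the-envelope calculation showing how small per-device standby draws add up across a large installed base. The device count, per-device wattage, and idle hours are illustrative assumptions made up for this sketch; they are not figures from IBM, the IEA, or the Steeper project.

```python
def standby_energy_twh(devices, standby_watts, idle_hours_per_day=20):
    """Annual standby energy, in terawatt-hours, for `devices` each drawing `standby_watts`."""
    watt_hours = devices * standby_watts * idle_hours_per_day * 365
    return watt_hours / 1e12  # watt-hours -> terawatt-hours

if __name__ == "__main__":
    # Assumption: one billion devices idling at 3 W for 20 hours a day.
    print(f"{standby_energy_twh(1_000_000_000, 3.0):.1f} TWh per year")
```

With those made-up numbers the total comes to roughly 22 TWh a year, the same order of magnitude as the EU's projected 49 TWh standby figure, which is why cutting the transistor operating voltage (and with it the always-on baseline draw) is the project's target.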
<urn:uuid:295e2c80-b942-4c48-a5b3-c63aafefdd55>
CC-MAIN-2017-04
http://www.networkworld.com/article/2227599/security/ibm--european-union-team-to-swat-electronic-vampires.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00326-ip-10-171-10-70.ec2.internal.warc.gz
en
0.930244
486
3.21875
3
Discover the ways in which cybercrime occurs in three realms: individual, business, and governmental. Learn what you can do to protect yourself and your organization. During the 1920s and 1930s in the United States, there was a rather famous bank robber named Willie Sutton. He was called "The Gentleman Bank Robber" because of his demeanor and natty dress style. Ultimately, he was arrested. Recycling an anecdote from an earlier blog, the authorities reportedly asked him why he robbed banks. As the legend goes, he responded, "Because that's where the money is." As far back as 2009, an episode of "60 Minutes" on cybercrime and cyberwarfare interviewed Shawn Henry of the FBI. Now at CrowdStrike, Mr. Henry talked about a coordinated raid on the banking system in 29 countries through simultaneous withdrawals at ATM kiosks. This crime, which cost ten million dollars, was performed using stolen credit card numbers. To paraphrase Mr. Henry it would be "front-page news" if that was carried out with guns blazing. Hackers, then, are committing cybercrime across the Internet with techniques ranging from identity theft to stealing credit cards to stealing intellectual property in order to profit from their crimes, commit espionage, or for geopolitical and social causes. Considering the credit card black market and the theft of information from major retailers, hotel chains, and restaurants, the value of the cybercrime grows dramatically. For the victims-individual or corporate-the consequences are personal. When a criminal accesses someone's personally identifiable information (PII), financial information, identity, or personal health information (PHI) and uses it to carry out fraud, the effects have been likened to the sense of violation and mourning that matches being told they have a serious health problem. After a breach, businesses need to expend resources to close the vulnerabilities that the criminals exploited and (perhaps) compensate customers financially or with services such as Identity Theft Protection. They also suffer the intangible costs (we call this qualitative risk) of loss of customer trust and loyalty. Even if a company isn't charging for services (such as an information website,) the lingering "bad taste" of the cyber-attack stays with the consumers. Victims of Cybercrime Broadly, as in life, we can look at the victims of cybercrime in three realms: individual, business, and governmental. Carried out against individuals, the purpose of the attack may be to gather PHI or financial information to carry out an electronic robbery. Alternately, it may be to commandeer the victim's system into a so-called Botnet and then use the victim computer for sending SPAM or for a Denial-of-Service (DoS) attack. Here, as well, the bad actors may be cyber-gangs, individuals, or nation-states. Cybercrime against consumers takes on two forms, but the results are generally the same. An individual may have their financial information misused or their "identity" stolen. For example, criminals have stolen my credit card number to rent hotel rooms in Accra (the capital of Ghana) and someone once tried to bail a friend out of jail with my information. Obviously, the latter did not work out well for any of the criminals, either in custody or soon-to-be. A much tougher problem for individuals occurs when "identity theft" takes place and the criminals use someone else's PII to obtain a loan or perform some other action that appears on the victim's credit report. 
Individuals can also be victims of personally directed cybercrime. Stories of cyber-stalking, cyber-bullying, and online harassment regularly appear in newspapers and on news websites. With the growth in use of social media, this has taken on a new importance. Businesses must be concerned about the theft of their customers' information, whether that is account information, residential and email address, or payment data such as credit card information. Hacks that disclose PII and financial information have been in the news continually (it seems) since December 2013. Facing customers and the Internet, website defacement can prove an embarrassment (at the least) to a company, as can having their Internet presences brought down by DoS attacks. Responses to these attacks cost money and resources to fix. They also engender lack of trust amongst their customers.
<urn:uuid:0e8084b4-39db-4630-85a8-c7baef93867e>
CC-MAIN-2017-04
https://www.globalknowledge.com/ca-en/resources/resource-library/white-paper/cybercrime-101/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00408-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951868
887
3.328125
3
Corporate governance is a set of instructions and best practices that enable a company to achieve its goals and communicate its success to the respective stakeholders. However, there are certain drawbacks of corporate governance that may enable the company officials to use this to their own advantage. A few key drawbacks are mentioned below. Many managers strive to achieve goals that are not aligned with the organizational goals. This causes ‘agency problem’ and can harm the company in the long run. Even investors and other stakeholders having an interest in the company will face the negative impact of the agency problem. This is largely witnessed in organizations that publicly trade in stocks. The board of directors and officers responsible for the corporation’s assets face a conflict of interest when they try to gain personal benefits from the company’s success rather than working towards maximizing shareholder wealth. ‘Corporate insiders’ are company officials who have access to highly confidential and non-public company information. Some confidential information may have an impact on the value of the firm’s shares in the market. If company officials, abreast with such information, use it for their own benefit and sell their shares to a person unaware of that information, this is called insider trading. Shareholders, who are not directly related to the company such as a government regulator, an external auditor or a relative of a corporate official, can also commit the illegal act of insider trading. There are many ways in which corporate officials can misrepresent financial information to avoid paying heavy taxes or to affect the value of company shares on the market. They can do this by forming a complicated network of cross-shareholdings and subsidiaries or by trading properties between the parent company and its subsidiaries to increase or decrease the amount of revenues or assets. This is also a drawback of corporate governance as misleading information can let companies get away with their corrupt acts. Due to extensive abuse of the power delegated to company officials under corporate governance, laws have been formulated to prevent such misuse and abuse of power. However, complying with these laws can be costly and stressful for many companies. For instance, the 1933 Securities and Exchange Act requires corporations to get listed on a stock exchange and then make detailed disclosures to interested investors. Complying with this rule can cost a company hundreds or even thousands of dollars. Moreover, the recent 2002 Sarbanes-Oxley Act requires companies to setup appropriate internal control systems to ensure that their financial statements are not misleading and are factually correct.
<urn:uuid:995dfab4-64af-4d25-852a-a1bffd15d8d9>
CC-MAIN-2017-04
http://www.best-practice.com/compliance-best-practices/corporate-compliance/the-drawbacks-of-corporate-governance/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00160-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959084
501
2.953125
3
The Defense Department in recent weeks has invested $5.7 million in research to detect bombs concealed beneath dense goo, such as meat, sludge and animal remains. Homemade bombs -- improvised explosive devices in Pentagon terms -- took on new meaning for many following the Boston marathon bombing last week. No longer just an acronym for weapons that mangle and kill U.S. troops on the battlefield, IEDs now conjure visions of pressure cookers, nails and other common implements exerting deadly force at home. Since February, the Defense Advanced Research Projects Agency has been awarding year-long grants to scientists at several universities and technology companies to develop acute, contactless bomb sensors. On Tuesday, the University of Arizona became the latest funding recipient, potentially collecting $900,912 if a roughly six-month extension is exercised. The aim is to be able to flag explosives concealed in “opaque media” containing a lot of liquid, “such as mud, meat, animal carcasses, etc.,” states an Oct. 12 work description. The project requirements provide the example of a cancerous tumor deep inside a breast. For bomb detection, however, the technology must not, in any way, touch the item of interest. "DARPA is interested in the recognition of abnormalities in complex high water content media that can be performed at standoff (i.e., no physical contact with the surface of the host medium)," the requirements state. X-rays that use carcinogenic, ionizing radiation are off-limits because of health concerns for military personnel and nearby civilians, according to DARPA. "All proposed techniques will be evaluated on the expected trade-off between image fidelity and radiation output," the October paper states. Tools that combine electromagnetic indicators and acoustic vibration have shown promise, DARPA officials said. As of now, the other project participants include Virginia-based Quasar Federal Systems, University of Arizona, BAE Systems -- also in Virginia, and Boston-based Northeastern University. Each study is anticipated to last no more than a year and a half. Between 2006 and 2011, the military's Joint IED Defeat Organization spent $18 billion on countering homemade bombs, according to federal auditors. And, in 2011 alone, funding from the organization combined with appropriations for other counter IED efforts totaled $4.8 billion. It is unclear whether the technology under development at DARPA might be deployed stateside. The scope of the project is limited to demonstrations in laboratory settings, not real-life operations. Even before the April 15 detonations that claimed lives and limbs, the U.S. government had been cultivating tools to thwart the next Unabomber or Timothy McVeigh, who blew up the Alfred P. Murrah Federal Building in Oklahoma City in 1995, killing 168 people and wounding more than 800. In September 2012, the Homeland Security Department set aside $46 million for counter-IED intelligence systems. This year, the Bureau of Alcohol, Tobacco Firearms and Explosives plans to pay a dog instructor $32,500 to train bomb-sniffing Labrador retrievers to follow the commands of remote-controlled e-collars.
<urn:uuid:80f1c103-bf7e-4025-981b-3f8bf13b78c9>
CC-MAIN-2017-04
http://www.nextgov.com/technology-news/2013/04/there-bomb-burger-pentagon-wants-know/62773/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00281-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933022
650
2.921875
3
The data housed at Data.gov/climate will be contributed by NASA, the National Oceanic and Atmospheric Administration, the U.S. Geological Survey, the Defense Department and other agencies, according to a White House blog post. The first batch of data focuses on coastal flooding and sea level rises. NOAA and NASA also announced an “innovation challenge” on Wednesday for people using public data to build tools that educate the public about the dangers of flooding and rising water levels in their communities. “Every citizen will be affected by climate change -- and all of us must work together to make our communities stronger and more resilient to its impacts,” the White House said in a blog post authored by Counselor to the President John Podesta and Office of Science and Technology Policy Director John Holdren. “By taking the enormous data sets regularly collected by NASA, NOAA, and other agencies and applying the ingenuity, creativity, and expertise of technologists and entrepreneurs, the Climate Data Initiative will help create easy-to-use tools for regional planners, farmers, hospitals and businesses across the country,” the authors said. Wednesday’s announcement also included commitments from technology companies, including Esri, which will partner with 12 cities to create free “maps and apps” to help local governments plan for climate change impacts. Esri is a popular digital mapping vendor for governments and businesses. Google also agreed to donate one petabyte of cloud storage for climate data and 50 million hours of high-performance computing with the Google Earth Engine platform. The White House published an Open Data Policy in 2013 focused on giving the public, nonprofits and private-sector companies access to raw government-gathered data that they can use to build applications and tools that will aid public information, turn a profit or both. The consulting firm McKinsey and Co. has estimated open data from the U.S. government and elsewhere could add more than $3 trillion annually to the global economy if it was fully exploited. “We’ve already seen the powerful relationship government data and private sector innovation can produce -- whether it’s tracking a hurricane’s path or mapping the reach of rising sea levels,” Senate Homeland Security Chairman Tom Carper said in a statement. “Making sure this data is accessible, while adhering to security standards, will make our government more inclusive, provide a valuable return on the taxpayer dollars invested in these programs and mitigation efforts, and can even save lives and property.” In addition to climate data, the government information trove Data.gov hosts 20 other data communities focused on health, education, energy, finance and other topics.
<urn:uuid:0df3e17a-3bd7-4396-afb9-21f45be9c533>
CC-MAIN-2017-04
http://www.nextgov.com/emerging-tech/2014/03/white-house-unveils-major-climate-related-data-set-and-challenges-developers-use-it/80860/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00455-ip-10-171-10-70.ec2.internal.warc.gz
en
0.92423
546
2.6875
3
New research into strokes may one day limit the damage of this troubling disease that affects millions of Americans. Despite the progress that's been made over the last decades, strokes are still the most common cause of long-term disability and the third most common cause of death in the United States. Every year, more than 795,000 Americans experience a stroke, which occurs when a clot forms in an artery or blood vessel, restricting the flow of blood to the brain and depriving it of oxygen. Breaking up these clots quickly and safely is the key to stopping a stroke and reducing the potential damage. Ground-breaking research on this deadly disease is using the power of supercomputing to explore a new technique that could be safer and more effective than either surgery or drugs. A group of researchers from the Universities of California at Berkeley (UC Berkeley) and San Diego (UCSD) used the supercomputing resources of the National Energy Research Scientific Computing Center (NERSC) to model the efficacy of microbubbles and high intensity focused ultrasound (HIFU) for breaking up stroke-causing clots. The results of their study have been published in the Journal of the Acoustical Society of America. "One day, HIFU could be a useful medical treatment for people who are stroke victims. But before this can happen, we need to establish some fundamental background work, which includes understanding how HIFU accelerates damage to a clot when bubbles are present," says one of the lead researchers, Andrew Szeri, Professor of Mechanical Engineering at UC Berkeley. The supercomputing allocation is enabling the researchers to generate enough data to make a case for further study. "Without some kind of preliminary data, it's a non-starter; there's just no way for us to find funding," says Szeri. The research has an interesting backstory. Currently, the only federally approved treatment for stroke-causing clots is a drug called tissue plasminogen activator (tPA), but there is only a short window of use, affecting its utility. In the early 1990s, researchers noticed ultrasound scans seemed to potentiate the effectiveness of the anti-stroke drug. A 2004 trial confirmed the connection. Since then, researchers have been experimenting with ways to co-administer ultrasound to maximize the effectiveness of tPA. Researchers believed that the microbubble-HIFU technique would be a good replacement for the drug, and artificial microbubbles were already used as ultrasound contrast agents. Subsequent laboratory experiments with cadavers confirmed their hypothesis, but they still needed to determine dosage and safety. That's where the supercomputing resources of NERSC came in. "The supercomputer simulations verified that it is feasible to bust blood clots with relatively low energy, which is important because when the energy is not appropriate bubbles will form." Generally the artificial microbubbles are harmless, but bubbles released under low pressure can rupture small blood vessels. Upon securing further funding, the research team will attempt to strengthen the connection between the computational results and actual clot damage from bubbles.
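As a rough illustration of the kind of quantity such simulations start from, the sketch below estimates the natural (Minnaert) resonance frequency of a gas microbubble in water and compares it with a typical ultrasound driving frequency. The bubble radius, driving frequency, and material constants are textbook-style assumptions chosen for illustration only; they are not parameters from the published study.

```python
import math

def minnaert_frequency(radius_m, p0=101_325.0, gamma=1.4, rho=1000.0):
    """Approximate resonance frequency (Hz) of a free gas bubble in water."""
    return (1.0 / (2.0 * math.pi * radius_m)) * math.sqrt(3.0 * gamma * p0 / rho)

if __name__ == "__main__":
    radius = 2e-6          # assumed microbubble radius: 2 micrometres
    drive = 1.0e6          # assumed ultrasound driving frequency: 1 MHz
    f0 = minnaert_frequency(radius)
    print(f"Resonance ~ {f0 / 1e6:.2f} MHz; driving at {drive / 1e6:.1f} MHz "
          f"({drive / f0:.2f} x resonance)")
```

Micrometre-scale bubbles resonate in the low-megahertz range, which is one reason megahertz ultrasound couples to them so strongly; the published simulations go far beyond this kind of estimate to work out which acoustic settings damage the clot without harming the surrounding vessels.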
<urn:uuid:b78c193f-3a97-457a-beba-4aa67cc0a90c>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/09/25/supercomputer_bolsters_innovative_stroke_research/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00391-ip-10-171-10-70.ec2.internal.warc.gz
en
0.929497
651
3.765625
4