Solar Panels
By Mel Duvall | Posted 2008-01-30

Data center operators Phil and Sherry Nail were tired of skyrocketing electricity bills. They converted to solar power—and now the green strategy is generating business of its own.

At a cost of about $100,000, Nail installed 120 solar panels—60 on each side of the data center. The panels face south and generate about 12 kilowatts of electricity. Power from the panels is converted from DC to AC and then stored in a battery bank. The batteries, in turn, power the data center, AISO's offices and air conditioning, as well as the Nails' home. In case of emergency, AISO can fire up natural gas-powered generators or tap into the power grid.

But that's only the start of Nail's green ideas. Lighting for the center during the day is provided by solar tubes installed on the roof. Nail estimates each tube, which requires less space than a conventional skylight, replaces the equivalent of about 300 watts of light. Initially, he installed compact fluorescent lightbulbs throughout the building, but has since gone to even more efficient LED lights. The center's walls are more than 12 inches thick and filled with insulation consisting of recycled materials. As a result, the building stays naturally cool.

And he has more bright ideas in mind. At press time, work was set to begin on the installation of a green roof on the data center. Essentially, a four-inch layer of dirt will be laid down and drought-resistant greenery will be planted. The green roof should further cut down on the need for air conditioning, but when the air conditioning units do run, water generated by the units will be recycled to feed the plants on the roof. "Every day we come up with new ideas of things we'd like to do," says Nail. "It's been a really exciting and rewarding time for us."

AISO's energy conservation efforts haven't been restricted to physical elements. A year ago, the company implemented virtualization technology from VMware. The bottom line, adds Nail, is that the data center has been able to grow its customer base at a rate of about 20 percent annually, without adding more solar panels. And AISO no longer pays a $3,000 monthly power bill.

AISO's customers include Endangered Species Chocolate and MacGillivray Freeman Films. Climate Savers Computing, an industry-sponsored initiative that encourages companies to adopt more energy-efficient computers and power-management tools, recently decided to move its Web site to AISO. Barbara Grimes, a spokesperson for the organization, says it made sense to use AISO, given the initiative's goals. "We saw this as a simple change that we could make to reduce our carbon footprint," she says. "As far as cost is concerned, we're paying the exact same amount as [we were paying] our previous provider."

Despite AISO's humble beginnings, Nail is becoming a celebrity in the data center business. Success is creating a challenge for AISO, however. Even with virtualization technology, the data center will soon require more power than its 120 solar panels can provide. As a result, Nail has begun laying the groundwork to install another 400 solar panels, which will allow the company to quadruple its customer base. The new arrays will also have the ability to track the sun, increasing their daily output. Nail says he's preparing for the future—one he's certain will be bright. "Every day I'm getting calls from potential customers wanting to know more about our service," he says.
“It just confirms that we’re on the right course.”
A civilization advances through intrinsic economic activity: materials are transferred from one party to another and from one place to another, creating a cycle that runs from need to fulfilment. Even Napoleon remarked that "an army marches on its stomach." Material demands require supplies to be replenished at the consumer's end, and this gives rise to a broad business ecosystem of managing the supply chain from producers of raw materials to manufacturers or assemblers, then to distributors, and finally to end users. The chain enables the movement of goods and the flow of underlying information from upstream to downstream, with finance and feedback streaming in the opposite direction.

Supply Chain Management (SCM) pertains to managing, exchanging, tracking, and analyzing data from suppliers of raw materials to manufacturers and, finally, to customers through all intermediary channels of distribution, transport, and retailing, so as to deliver products and services in the right quantities, at the right location, and on time, while maintaining the requisite service level at optimal cost. Its scope transcends organizational boundaries and pervades the entire ecosystem, making SCM responsible for the ecosystem's effective functioning. At the same time, it poses challenges because operational processes and nomenclatures are heterogeneous across enterprises, even along a single chain.

A typical supply chain involves various enterprises working together to deliver goods and services successfully. It involves chalking out a well-defined collaboration process between enterprises despite the usual scenario of disparate operational processes, cultures, and levels of technology adoption. The actual data and process management within an enterprise, and the interaction between any two enterprises, involve people, called agents or actors. These actors ensure the successful transition of goods and services, manage finances, transfer knowledge, and make decisions in pursuit of good customer experiences and profits.

Managing a supply chain may not necessitate the implementation of an Information Technology (IT) system. But managing it effectively, and remaining competitive in today's business arena, does. The adoption of IT solutions for SCM is not a recent invention of managers and consultants; it began a few decades ago, though neither the scope nor the goals of those solutions were as pervasive in those days. The term gained acceptance in the eighties, and its scope and goals received comprehensive treatment in the nineties with the establishment of various corporate information systems.

The evolution of SCM solutions can be understood in terms of this expanding scope and these expanding goals. In the early stage, the scope was restricted to Material Requirement Planning (MRP), and the goal was simply data management, helping managers and decision-makers control costs and the wasteful effects on inventories and their movement across the chain. Later the scope was widened to incorporate non-inventory resources, and Manufacturing Resource Planning (MRP II) came into existence. The goal of controlling costs and ensuring the right service level became more ambitious as a multitude of parameters and processes came into its purview.
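To make the early MRP idea concrete, the following minimal sketch (not taken from the lecture) shows a net-requirements calculation over a small bill of materials; the product structure, quantities, and item names are invented for illustration, and real MRP systems also handle lead times, lot sizing, and time-phased planning buckets.

```python
# Minimal sketch of an MRP-style calculation:
# net requirements = gross requirements - on-hand inventory,
# exploded through a simple bill of materials. Illustrative only.

bill_of_materials = {          # hypothetical product structure
    "bicycle": {"frame": 1, "wheel": 2},
    "wheel": {"rim": 1, "spoke": 32},
}
on_hand = {"frame": 40, "wheel": 150, "rim": 20, "spoke": 1000}

def net_requirements(item: str, gross_qty: int, plan: dict) -> None:
    """Recursively compute how many units of each component must be procured."""
    available = on_hand.get(item, 0)
    net = max(gross_qty - available, 0)
    plan[item] = plan.get(item, 0) + net
    for component, per_unit in bill_of_materials.get(item, {}).items():
        net_requirements(component, net * per_unit, plan)

plan: dict = {}
net_requirements("bicycle", 100, plan)   # demand for 100 bicycles
print(plan)  # {'bicycle': 100, 'frame': 60, 'wheel': 50, 'rim': 30, 'spoke': 600}
```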
Modern SCM solutions arrived in the nineties with Enterprise Resource Planning (ERP) systems, and they are still evolving with the advent of new computing paradigms: cloud computing, the business use of RFID, advances in mobile computing, ubiquitous internet connectivity at reduced cost, the proliferation of the social web, and an emphasis on analytics over large quantities of business data. Big Data and Business Analytics have become buzzwords in discussions of business management today, and their impact has been felt in SCM as much as in any other corporate information system.

But there is a caveat. Despite significant transformations in the technology landscape, legacy systems still exist in large numbers. They use old technologies, sit on older software and hardware platforms, and manage things that are limited in scope and goals, even if they remain adequate from the standpoint of their stakeholders. The cost of a revamp is simply high, and it may not be affordable for companies that were early adopters of SCM IT. While overhauling an existing information system may not be the right proposition, there is great scope for augmenting the business process with newer systems to reap the benefits.

Taken from a guest lecture on Supply Chain Management delivered by Mr. Ashwini Rath, Founder Director & CEO, Batoi, at XIMB (Bhubaneswar) on March 05, 2015. Source: The Friday Quantum
Symptoms and Diagnosis Project Releases Whiteboard Videos for Patients

March 20, 2012 -- Omaha, NE (PRWEB)

Nabin Sapkota, MD has been working on the book project Symptoms and Diagnosis for the last three months. His website has been attracting readers looking for answers about common health problems. As the website crosses the landmark of ten thousand views, the doctor has decided to create a series of whiteboard videos to explain complex problems in a simple way. His target audience is patients and members of the general public looking for in-depth information about diseases and symptoms.

The Nebraska doctor says, "There is not much in-depth, authentic medical information available on the Internet for the general public. Most popular health websites have over-simplified and over-generalized information that is not very useful for people looking for more information about their diseases or symptoms. Most websites and articles with detailed medical information explaining complex situations only target fellow health care providers. They somehow assume that patients with no medical background are simply unable to understand complex medical concepts. I believe that this kind of thinking is flawed and fails to acknowledge the patient empowerment movement that has been gaining ground in the last few years."

Dr. Sapkota further explains, "Let's look at an example. When you search the internet for 'symptoms of pneumonia,' you will get very reputable and patient-oriented health sites listing the common symptoms of pneumonia. They all tell you what pneumonia is, give you examples of different types of pneumonia, and list the common symptoms of each type. What they fail to do is tell you how pneumonia symptoms differ in individual patients depending on the unique situation of that patient. They also do not tell you how pneumonia symptoms develop and what actually happens inside your body when you have pneumonia. It is possible to explain the actual mechanisms of diseases to patients with no medical background. You just need to avoid technical terms and use the right instructional material. I believe using a whiteboard with simple drawings and tables is the right way to do it."

The doctor hopes to raise patient awareness with these videos and narrow the knowledge gap between patients and health care providers. He adds, "Traditionally, patient instruction materials have focused on typical symptoms and textbook descriptions of diseases with willful avoidance of controversial topics and complex real-life scenarios. This has created a very unrealistic and simplistic view of medicine in the eyes of the general public. This problem has been amplified recently with the emergence of websites that claim to make a diagnosis based on symptoms you type into the computer. This approach completely ignores the fact that medical symptoms are subjective and depend on the unique circumstances of the patient. I believe patients deserve to learn more about how symptoms of diseases develop in an individual patient. I do not believe that using a computer algorithm to match their symptoms with possible diagnoses is patient empowerment. I believe real patient empowerment comes when they know as much or more about their disease as their health care provider."

Read the full story at http://www.prweb.com/releases/patient/videos/prweb9301855.htm.
A step up in artificial intelligence

Researchers from the University of Washington and the Allen Institute for Artificial Intelligence in Seattle have created the first fully automated computer program, called Learning Everything about Anything (LEVAN), that learns everything about any visual concept. LEVAN's creators call it "webly supervised," as it uses the web to learn everything it needs to know.

The program essentially searches millions of books and images on the web to learn all possible variations of a concept and then displays the results in the form of a comprehensive, browsable list of images. It scours the web and various libraries to learn common phrases associated with a particular concept, then searches for those phrases in web image repositories such as Google Images, Bing and Flickr. The relevance of each phrase is checked against the content of the images found on the web, by identifying characteristic patterns across them using object-recognition algorithms.

For instance, the algorithm understands that "heavyweight boxing," "boxing ring" and "ali boxing" are all part of the larger concept of "boxing," and will display results for them when a user queries that concept. The program keeps only visual phrases. For example, with the concept "horse," the algorithm would keep phrases such as "jumping horse," "eating horse" and "barrel horse," but would exclude non-visual phrases such as "my horse" and "last horse."

UW assistant professor of computer science and engineering Ali Farhadi said, "It is all about discovering associations between textual and visual data. The program learns to tightly couple rich sets of phrases with pixels in images. This means that it can recognize instances of specific concepts when it sees them."

So far, LEVAN has modeled 175 different concepts. Users can also submit a concept, and the program will automatically begin generating an exhaustive list of subcategory images that relate to it. The program was launched in March. Researchers are still working on increasing the processing speed and capabilities. This research was funded by the U.S. Office of Naval Research, the National Science Foundation and the University of Washington.
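The phrase-filtering step lends itself to a toy illustration. The sketch below is not the actual LEVAN code; it simply shows the idea of keeping a phrase only when the images retrieved for it look consistent with one another, with hand-made "feature vectors" standing in for the learned image features and object-recognition models a real system would use.

```python
# Toy illustration of a "visualness" filter: a phrase survives only if the images
# retrieved for it are similar to one another. Features are stubbed by hand.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def visual_consistency(image_features):
    """Mean pairwise cosine similarity of the images retrieved for one phrase."""
    pairs = [(i, j) for i in range(len(image_features))
             for j in range(i + 1, len(image_features))]
    return sum(cosine(image_features[i], image_features[j]) for i, j in pairs) / len(pairs)

# Stubbed "image search" results: visually coherent phrases have similar vectors.
retrieved = {
    "jumping horse": [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1], [0.85, 0.15, 0.05]],
    "my horse":      [[0.9, 0.1, 0.0], [0.1, 0.8, 0.3], [0.2, 0.1, 0.9]],
}

THRESHOLD = 0.8
visual_phrases = [p for p, feats in retrieved.items() if visual_consistency(feats) >= THRESHOLD]
print(visual_phrases)  # expected: ['jumping horse']
```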
Definition: Rearrange elements in an array into three groups: bottom, middle, and top. One algorithm is to have the top group grow down from the top of the array, the bottom group grow up from the bottom, and keep the middle group just above the bottom. The algorithm stores the locations just below the top group, just above the bottom, and just above the middle in three indexes. At each step, examine the element just above the middle. If it belongs to the top group, swap it with the element just below the top. If it belongs in the bottom, swap it with the element just above the bottom. If it is in the middle, leave it. Update the appropriate index. Complexity is Θ(n) moves and examinations.

See also American flag sort.

Note: Using this algorithm in quicksort to partition elements, with the middle group being elements equal to the pivot, lets quicksort avoid "resorting" elements that equal the pivot.

The algorithm is named for the problem of ordering red, white, and blue marbles into the order of the Dutch national flag. Lloyd Allison reports that Dijkstra used this problem as an exercise in program derivation and program proof. Allison first heard about the problem at the Institute of Computer Science (ICS), London University, about 1973.

References:
- James R. Bitner, An Asymptotically Optimal Algorithm for the Dutch National Flag Problem, SIAM Journal on Computing, 11(2):243-262, May 1982.
- Colin L. McMaster, An Analysis of Algorithms for the Dutch National Flag Problem, CACM, 21(10):842-846, October 1978.
- E. W. Dijkstra, A Discipline of Programming, Prentice-Hall, 1976.

Cite this as: Paul E. Black, "Dutch national flag", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 28 December 2006. Available from: http://www.nist.gov/dads/HTML/DutchNationalFlag.html
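As a concrete rendering of the three-index scheme in the definition above, here is one possible Python version; the variable names and the sample input are illustrative.

```python
def dutch_flag_partition(a, low_value, high_value):
    """Three-way partition in place, mirroring the definition: the bottom group
    grows up, the top group grows down, and the element just above the middle
    is examined at each step. Theta(n) moves and examinations."""
    bottom = 0          # everything before this index is in the bottom group
    mid = 0             # next element to examine (just above the middle group)
    top = len(a)        # everything at or after this index is in the top group
    while mid < top:
        if a[mid] == low_value:          # belongs in the bottom group
            a[bottom], a[mid] = a[mid], a[bottom]
            bottom += 1
            mid += 1
        elif a[mid] == high_value:       # belongs in the top group
            top -= 1
            a[mid], a[top] = a[top], a[mid]
        else:                            # middle group: leave it and move on
            mid += 1
    return a

print(dutch_flag_partition(list("wbrrwbrwb"), "r", "b"))
# ['r', 'r', 'r', 'w', 'w', 'w', 'b', 'b', 'b']
```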
One useful definition for the unstructured data that underlies most existing and theoretical big data projects is that it was often collected for some purpose other than what the researchers are using it for. That definition was provided by Chris Barrett, executive director of the Virginia Bioinformatics Institute, during a series of presentations before the President's Council of Advisors on Science and Technology on Thursday focused on the value of data mining for public policy. Data that was initially collected to measure educational achievement, for instance, could be used to analyze how educational achievement relates to obesity or incarceration rates in a particular community.

This definition points to the potential of big data analysis as more and more information is gathered online and elsewhere, but it also points to some challenges as outlined by Duncan Watts, a principal researcher at Microsoft's research division.

First off, a large portion of the data that might be valuable to social scientists, policymakers, urban planners and others is held by private companies that release only portions of it to researchers. Facebook, Amazon, Google, email providers and ratings companies all know certain things about you and about society, in other words, but there's no way to aggregate that data to draw global insights. "Many of the questions that are of interest to social science really require us being able to join these different modes of data and to see who are your friends what are they thinking and what does that mean about what you end up doing," Watts said. "You cannot answer these questions in any but the most limited way with the data that's currently assembled."

Second, even if social scientists were able to draw on that aggregated data, it would raise significant privacy concerns among the public. "This is a very sensitive point because, to some extent, this is what the NSA has been reputedly doing, joining together different sorts of data," Watts said. "And you can understand how sensitive people are about that. Precisely the reason why this is scientifically interesting is also the reason why it's so sensitive from a privacy perspective."

Finally, because much of the data that's useful to social scientists was gathered for other purposes, there's often some bias in the data itself, Watts said. "When you go to Facebook, you're not seeing some kind of unfiltered representation of what your friends are interested in," he said. "What you're seeing is what Facebook's news ranking algorithm thinks that you'll find interesting. So when you click on something and the social scientist sees you do that and makes some inference about what you're sharing and why, it's hopelessly confounded."
Experts believe traffic, bus and parking applications will be the public sector's key wireless contributions in helping reduce future environmental impacts, according to a report released Tuesday, Oct. 11.

A new report sponsored by CTIA—The Wireless Association estimates that transportation apps can contribute to the reduction of 1.2 million metric tons of carbon dioxide emitted into the atmosphere each year. The report, Wireless and the Environment: A Review of Opportunities and Challenges, examines the environmental impact of wireless technology in four areas: agriculture, energy, the public sector and transportation. In the government arena, the report finds the usage of smart traffic apps could result — if available on a wide scale — in a 20 percent savings on fuel consumption on urban roadways.

"Wireless technology can help improve the delivery of a wide variety of public services in a more sustainable and useful manner," said Steve Largent, president and CEO of CTIA, in an e-mail to Government Technology. "For example, mobile communication in waste management is using real-time data to better optimize routing, fleet management and customer service, which translates into savings with fuel, time and money."

"As local, state and federal governments continue to explore new ways to more efficiently serve their constituents, wireless products and services offer them a unique opportunity to do that while being more environmentally and fiscally responsible," he added.

Authored by BSR, a business strategies consulting firm, the 71-page report uncovered a variety of other environmental savings as well. "Clearly wireless technology is having a profound and positive effect on the environment today and will become even more prominent in the future," Largent said in a statement.
Put the world at the fingertips of every student, and liberate teachers and students from the bounds of classroom walls. That's what Wisconsin Gov. Tommy Thompson believes technology can do. And that's why, back in February, he unveiled a proposed $500 million investment in upgrading the state's schools. "TEACH Wisconsin" is the latest step in Wisconsin's effort to keep their schools' quality of education at the forefront.

The Technology for Educational ACHievement (TEACH) initiative is a project of Gov. Thompson's administration. Its purpose is to accelerate the use of technology in schools by increasing network connectivity and equalizing connection costs. Gov. Thompson believes training Wisconsin's young people to use today's technology effectively will prepare them for the competitive job environment of the 21st century.

One of TEACH Wisconsin's primary goals is connecting schools to each other and the Internet. The initiative aims to have direct Internet access in every public library and K-12 school by the year 2000. Another goal would give every state resident dial-up Internet access toll-free by the turn of the century. A survey taken in September 1996 revealed more than 80 percent of Wisconsin's high schools already have limited, single-computer dial-up Internet access. But nearly 13 percent of them pay long-distance toll charges for the connection. Less than 25 percent of all high schools use established circuits for direct Internet access, and less than half of these schools have that connectivity available throughout their classrooms.

TEACH Wisconsin seeks to equalize the cost of telecommunications services between users. This will give schools across the state a minimum of T1 line speed for no more than $250 per month. The line will provide direct Internet access and an option for a two-way video link. Wisconsin schools must modernize their internal connection capabilities before they can increase their technology curriculum. TEACH Wisconsin will loan $50 million annually to K-12 schools for upgrading electrical and networking facilities. Block grants totaling $25 million for the 1997-98 school year will be made for investments in educational technology. This amount increases to $40 million for the 1998-99 school year.

THE FOUNDATION: BADGERNET

BadgerNet is the statewide telecommunications infrastructure that connects state government agencies and provides the foundation for TEACH Wisconsin. BadgerNet's mission is to reengineer, manage, and integrate the next generation of networks that comprise Wisconsin's statewide network. Since its inception, the network has evolved and consolidated under the Department of Administration (DOA). The telecommunications bureau is a program revenue operation that performs as a networking business inside state government. DOA puts together contracts, provides communication services to their customers and keeps a sharp eye on the costs of doing business. "We have to sell to our customers at a reasonable price to stay in business," said Jody McCann, director of the Bureau of Telecommunications Management.

When the network was first created in 1968, it provided voice-only services for state agencies. During the early years, it consisted of point-to-point (PTP) dedicated circuits carrying voice traffic, and later, Systems Network Architecture data. By 1987, all data traffic was consolidated on the WAN. "We steadily consolidated the network and brought more users into it," said McCann. "If you can aggregate traffic, your unit costs are going to come down."

In 1994, the decision was made to replace the private PTP circuits with a publicly available communication service provided by AmeriTech. Frame relay packet technology was introduced to the WAN, accompanied by the growing pains inherent in a technology transition. Although frame relay had been available for three years, it had not yet begun its explosive surge in popularity. "We were early on that curve," said McCann.

Migrating from PTP to frame relay created transport problems. The original private network was a star configuration emanating from Madison to destinations throughout the state. Because frame relay is public, AmeriTech wasn't permitted to transport it across Local Access Transport Area (LATA) boundaries. Backhauling the data using an inter-exchange carrier was dismissed as too expensive. AmeriTech decided to place a frame relay switch in each LATA and make the entire state part of the frame relay cloud. The amount of business AmeriTech received from the state was judged sufficient to cover the cost of the switches. This benefit also allows smaller businesses within the state to use the statewide frame relay cloud.

In January 1997, a contract was signed with Norlight Telecommunications to install an OC-48 SONET (2.488Gbps) ring around the state. An OC-12 (622 Mbps) portion is currently reserved for state use. "We can buy SONET services in OC-3 increments, or we can buy DS-3s between any of the points on the ring," said McCann. Sufficient fiber-optic cable exists for future growth. Expanding the ring from OC-48 to OC-192 is an electronics-only change.

The SONET ring has 11 points of presence in the major Wisconsin cities. The network design includes all state agencies, university campuses, technical schools, 430 school districts and other public entities. The SONET ring bandwidth is anticipated to initially support 1,100 DS-1, 20 DS-3, 300 DS-3 video and 10 OC-3 primary connections. The network will have more than 450 Cisco routers in all sizes and configurations. Every router has a minimum of one permanent virtual circuit, and many have more. Within the state agencies, this data is distributed to Windows and Windows NT desktop computers attached to Novell and Windows NT servers.

Videoconferencing for training, remote hearings and telemedicine will benefit from the increased bandwidth provided by BadgerNet's SONET ring. Eventually, the 30 compressed video sites connected throughout the state by ISDN will migrate to the high speed backbone. Several state agencies provide quarterly training by remote video to reduce costs. "Avoiding the drive from Superior to Madison saves a lot of windshield time, hotel expenses and per diem costs," McCann said. Pilot programs are in place for juvenile hearings by video, where the court is several hundred miles away from the institution. Telemedicine is now being used between university hospitals and correctional institutions.

BadgerNet is well prepared for the future. DOA is using the latest technology and the economies of large-scale purchasing to solidify their infrastructure. This guarantees the BadgerNet foundation will carry the heaviest of loads for years to come.
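For readers who want to sanity-check the SONET figures quoted above, here is a small back-of-the-envelope calculation; the OC-n line rate formula (n × 51.84 Mbps) and the DS-1/DS-3 rates are standard values, and the final DS-1 count is a raw-rate comparison only, not how SONET multiplexing actually maps tributaries.

```python
# Quick check of the SONET/DS rates mentioned in the article. Figures are
# nominal line rates, not usable payload.
OC_BASE_MBPS = 51.84
DS_RATES_MBPS = {"DS-1": 1.544, "DS-3": 44.736}

def oc_rate_mbps(n: int) -> float:
    return n * OC_BASE_MBPS

for n in (3, 12, 48, 192):
    print(f"OC-{n}: {oc_rate_mbps(n) / 1000:.3f} Gbps")
# OC-3: 0.156 Gbps, OC-12: 0.622 Gbps, OC-48: 2.488 Gbps, OC-192: 9.953 Gbps

# How many DS-1 circuits fit, by raw rate alone, into the OC-12 slice reserved
# for state use? (Real SONET carries whole DS-3s/STS-1s, 28 DS-1s per DS-3.)
print(int(oc_rate_mbps(12) // DS_RATES_MBPS["DS-1"]))  # 402
```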
Definition: A collection of items in which only the most recently added item may be removed. The latest added item is at the top. Basic operations are push and pop. Often top and isEmpty are available, too. Also known as "last-in, first-out" or LIFO.

Formal Definition: The operations new(), push(v, S), top(S), and popoff(S) may be defined with axiomatic semantics, for instance top(push(v, S)) = v and popoff(push(v, S)) = S. The predicate isEmpty(S) may be defined with the additional axioms isEmpty(new()) = true and isEmpty(push(v, S)) = false.

Generalization (I am a kind of ...): abstract data type.
Specialization (... is a kind of me.): bounded stack, cactus stack.
See also queue.

Note: Other operations may include index(i, S) (return the ith item in the stack), isFull(S), and rotate(i, S) (move i items from top to bottom or vice versa).

Origin is attributed to A. W. Burks, D. W. Warren, and J. B. Wright, An analysis of a logical machine using parenthesis-free notation, Mathematical Tables and Other Aids to Computation, 8(46):53-57, April 1954, and A. Newell and J. C. Shaw, Programming the logic theory machine, Proceedings of the 1957 Western Joint Computer Conference, pages 230-240, Institute of Radio Engineers, New York, February 1957.

Cite this as: Paul E. Black, "stack", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 9 February 2015. Available from: http://www.nist.gov/dads/HTML/stack.html
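As a concrete companion to the entry above, here is a minimal Python rendering of the operations it names (push, pop/popoff, top, isEmpty); the class and its list-based backing store are illustrative, not part of the dictionary entry.

```python
# A minimal stack with last-in, first-out behaviour, backed by a Python list,
# so push and pop are amortized O(1).
class Stack:
    def __init__(self):
        self._items = []

    def push(self, v):
        self._items.append(v)          # new item becomes the top

    def pop(self):                     # the entry's "popoff"
        if self.is_empty():
            raise IndexError("pop from empty stack")
        return self._items.pop()       # removes the most recently added item

    def top(self):
        if self.is_empty():
            raise IndexError("top of empty stack")
        return self._items[-1]

    def is_empty(self):
        return not self._items

s = Stack()
for v in (1, 2, 3):
    s.push(v)
print(s.top())                          # 3  (last in)
print(s.pop())                          # 3  (first out)
print(s.pop(), s.pop(), s.is_empty())   # 2 1 True
```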
This article describes OpenStack Compute (Nova), which represents the core of any workload. Even though there are cloud services that work without computation, at best they represent static storage — all dynamic activity involves some element of computation. The name OpenStack Compute refers to a specific project, also called Nova, but there are really two projects that relate to computation and the software that runs it — Image and Compute:

- OpenStack Image manages static disk images, which contain the executable code as well as the operating environment.
- OpenStack Compute (Nova) manages the running instances.

Nova controls the cloud computing fabric and as such forms the core of an infrastructure service. Nova is also the most complex component of the OpenStack family, primarily because of its highly distributed nature and numerous processes. Nova interfaces with several other OpenStack services: It uses Keystone to perform its authentication, Horizon as its administrative interface, and Glance to supply its images. The tightest interaction is with Glance, which Nova requires to download images for use in launching instances.

Before going into more detail on Nova, let's take a closer look at the Image service, which, chronologically, represents the beginning of the Compute workload. Glance is the project name for the OpenStack Image Service, which registers, lists, and retrieves virtual machine (VM) images. Glance manages the images in an OpenStack cluster but is not responsible for the actual storage. It provides an abstraction to multiple storage technologies ranging from simple file systems to object storage systems, such as the OpenStack Swift project. Along with the actual disk images, it holds metadata and status information describing each image.

The OpenStack Image Store is a central repository for virtual images. Users and other projects can store both public and private images which they can access to launch instances. They can request a list of available images, retrieve their configuration information, and then use them as a basis for starting Nova instances. It is also possible to take snapshots from running instances as a means of backing up the VMs and their states.

Nova comes into action after the image is created. It typically uses an image to launch an instance, or VM. Although it does not include any virtualization software itself, it can integrate with many common hypervisors through drivers that interface with the virtualization technologies. From a practical perspective, launching an instance involves identifying and specifying the virtual hardware templates (called flavors in OpenStack). The templates describe the compute (virtual CPUs), memory (RAM), and storage configuration (hard disks) to be assigned to the VM instances. The default installation provides five flavors which are configurable by administrators.

Nova then schedules the requested instance by assigning execution to a specific compute node (called a host in OpenStack). Each host must regularly report its status and capabilities, and the scheduler uses this data to optimize its allocations. The whole assignment process consists of two phases. The Filtering phase applies a set of filters to generate a list of the most suitable hosts. Every OpenStack service publishes its capabilities, which is one of the most important considerations. The scheduler narrows the selection of hosts to meet the parameters of the request. A Weighting phase then uses a special function to calculate the cost of each host and sorts the results; a minimal sketch of this filter-and-weigh flow follows.
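The sketch below is illustrative only and is not the actual Nova scheduler code; the host capability fields, the requested flavor, and the cost function are all invented to show the filter-then-weigh flow.

```python
# Illustrative filter-then-weigh host selection. Filters drop hosts that cannot
# satisfy the request; a weighting function ranks the survivors so the cheapest
# (lowest-cost) host comes first.
hosts = [
    {"name": "compute1", "free_ram_mb": 16384, "free_vcpus": 8,  "free_disk_gb": 200},
    {"name": "compute2", "free_ram_mb": 4096,  "free_vcpus": 2,  "free_disk_gb": 500},
    {"name": "compute3", "free_ram_mb": 32768, "free_vcpus": 16, "free_disk_gb": 100},
]
request = {"ram_mb": 8192, "vcpus": 4, "disk_gb": 80}   # the requested "flavor"

def ram_filter(host):  return host["free_ram_mb"] >= request["ram_mb"]
def cpu_filter(host):  return host["free_vcpus"] >= request["vcpus"]
def disk_filter(host): return host["free_disk_gb"] >= request["disk_gb"]

def weigh(host):
    # Lower cost = better. Here: prefer the host left with the most free RAM.
    return -(host["free_ram_mb"] - request["ram_mb"])

# Filtering phase
candidates = [h for h in hosts if all(f(h) for f in (ram_filter, cpu_filter, disk_filter))]
# Weighting phase
ranked = sorted(candidates, key=weigh)
print([h["name"] for h in ranked])   # ['compute3', 'compute1']
```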
The output of this phase is a list of hosts that can satisfy the user's request for a given number of instances with the least cost. Nova also carries out several additional functions, many of which interact closely with other OpenStack projects covering networking, security, and administration. But Nova generally handles the instance-specific aspects of these, such as attaching and detaching storage, assigning IP addresses, or taking snapshots of running instances.

Nova uses a shared-nothing architecture (Figure 1), so that all major components can be run on separate servers. The distributed design relies on a message queue to handle the asynchronous component-to-component communications.

Figure 1. Nova architecture

Nova stores states of VMs in a central Structured Query Language (SQL)-based database that all OpenStack components use. This database holds details of available instance types, networks (if nova-network is in use), and projects. Any database that SQLAlchemy supports can be used.

The primary user interface to OpenStack Compute is the Web dashboard (OpenStack Horizon). This central portal for all OpenStack modules presents a graphic interface of all the projects and makes application programming interface (API) calls to invoke any requested services. The API is based on Representational State Transfer. It's a Web Server Gateway Interface application that routes uniform resource indicators to action methods on controller classes. The API receives HTTP requests, processes the commands, and then delegates the task to other components via the message queue or HTTP (in the case of the ObjectStore). The Nova API supports the OpenStack Compute API, the Amazon Elastic Compute Cloud (Amazon EC2) API, and an Admin API for privileged users. It initiates most orchestration activities and policies (like Quota).

Each HTTP request requires specific authentication credentials using one of the authentication schemes the provider has configured for the Compute node. The Authorization Manager is not a separate binary; rather, it is a Python class that any OpenStack component can use for authentication. It exposes authorized API usage for users, projects, and roles and communicates with OpenStack Keystone for details. The actual user store can be a database or Lightweight Directory Access Protocol (LDAP) back end. ObjectStore is a simple HTTP-based object storage (like Amazon Simple Storage Service) for images. It can be and usually is replaced with OpenStack Glance.

The message queue provides a means for all of the components in OpenStack Nova to communicate and coordinate with each other. It's like a central task list that all Nova components share and update. All of these components run in a nonblocking message-based architecture and can be run from the same or different hosts as long as they use the same message queue service. They interact in a callback-oriented manner using the Advanced Message Queuing Protocol. By default, most distributions implement RabbitMQ accessed via the Kombu library, but plug-ins are also available for Apache Qpid and ZeroMQ. Nova components use remote procedure calls to communicate with each other via the message broker using pub/sub. More technically, Nova implements rpc.call (request/response; the API acts as consumer) and rpc.cast (one way; the API acts as publisher). Nova API and the scheduler use the message queue as the Invoker, whereas Network and Compute act as workers. An Invoker sends messages via rpc.cast; a minimal publish/consume sketch follows.
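Here is a minimal publish/consume sketch using the Kombu library mentioned above; it is not Nova's actual RPC layer, and it assumes a RabbitMQ broker running locally with the default guest account, with the exchange, queue, and payload names invented for the example.

```python
# Invoker/worker messaging over AMQP with Kombu. Assumes a local RabbitMQ broker.
from kombu import Connection, Exchange, Queue

exchange = Exchange("demo_nova", type="direct")
queue = Queue("compute_tasks", exchange, routing_key="compute")
BROKER = "amqp://guest:guest@localhost//"

def invoker():
    """rpc.cast-style, one-way: publish a task and return without waiting."""
    with Connection(BROKER) as conn:
        producer = conn.Producer(serializer="json")
        producer.publish(
            {"method": "run_instance", "args": {"instance_id": "demo-1"}},
            exchange=exchange, routing_key="compute", declare=[queue],
        )

def worker():
    """Worker side: pull a message off the queue and handle it."""
    def on_task(body, message):
        print("handling", body["method"], body["args"])
        message.ack()
    with Connection(BROKER) as conn:
        with conn.Consumer(queue, callbacks=[on_task]):
            conn.drain_events(timeout=5)

if __name__ == "__main__":
    invoker()
    worker()   # prints: handling run_instance {'instance_id': 'demo-1'}
```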
The Worker pattern receives messages from the queue and answers each with the appropriate response. Nova compute is a worker daemon that manages communication with the hypervisors and VMs. It retrieves its orders from the message queue and performs VM create and delete tasks using the hypervisor's API. It also updates the status of its tasks in the central database.

For the sake of completeness, some daemons cover functionality originally assigned to Nova that is slowly moving to other projects. The Network Manager administers IP forwarding, network bridges, and virtual LANs. It is a worker daemon picking network-related tasks from the message queue. These functions are now also covered by OpenStack Neutron, which can be selected in its place. The Volume Manager handles attach and detach of persistent block storage volumes to VMs (similar to Amazon Elastic Block Store). This functionality has been extracted to OpenStack Cinder. It's an iSCSI solution that uses Logical Volume Manager.

Setting it up

The actual installation instructions vary greatly between distributions and OpenStack releases. Generally, they are available as part of the distribution. Nonetheless, you must complete the same basic tasks. This section gives you an idea of what's involved.

OpenStack relies on a 64-bit x86 architecture; otherwise, it is designed for commodity hardware, so the minimal system requirements are modest. It is possible to run the entire suite of OpenStack projects on a single system with 8GB of RAM, but for any serious work, the official recommendation is for the cloud controller node that runs the network, volume, API, scheduler, and image services to have at least 12GB of RAM, two 2TB disks, and a network adapter. Compute nodes (running the virtual instances) will vary much more in terms of their load, but a good starting point for a simple system is a quad-core CPU, 32GB of RAM, and 2Gbit network adapters.

The installation instructions depend on the distribution and, more specifically, on the package-management utility you select. In many cases, it's necessary to first declare the repository to the package manager (with Zypper, for example). You then install the required Nova packages on both the controller and compute nodes. The package-management utility should automatically install any dependencies. For the purpose of illustration, I have provided the primary commands for Ubuntu, Red Hat (Red Hat Enterprise Linux®, CentOS, Fedora), and openSUSE:

- Ubuntu: On the controller node, run sudo apt-get install glance; on the compute node, run sudo apt-get install nova-compute nova-network
- Red Hat: Run sudo yum install openstack-nova and sudo yum install openstack-glance
- openSUSE: Run sudo zypper install openstack-nova openstack-glance

Nova configuration involves several files, but the most important is nova.conf, which is installed in /etc/nova. A default set of options works fine for a standard installation, but you will need to edit the configuration for any special requirements. You can examine the nova.conf file format and the list of nova.conf configuration options in the OpenStack documentation.

To get an idea of how OpenStack Compute might be used in practice, imagine that you have a base image that you would like to launch in OpenStack. After configuring the system and making some personalized customizations, you may want to take a snapshot of the running instance so that you can accelerate the provisioning process to execute the same task again.
After you have completed the project, you might want to stop the instance. You may even want to delete the image.

- Log in to the OpenStack Dashboard as a user with a Member role.
- In the navigation pane, beneath Manage Compute, click Images & Snapshots, then click Create Image. The Create An Image window opens (Figure 2), in which you can configure the settings that define your instance.
- Enter a name and location for the image that you have previously created or downloaded. You need to specify the format of the image file, but there is no need to indicate the minimum disk size or RAM unless you want to supply them. (Figure 2. Create an image)
- After creating the image, beneath Manage Compute, click Instances, and then click Launch. (Figure 3. Launch an instance)
- A window confirms your configuration and allows you to specify the required Flavor, or basic hardware configuration. Click Launch; the instance should be up and running.
- Consider taking a snapshot. Again, beneath Manage Compute, click Instances, and then click Create Snapshot in the row associated with the instance. (Figure 4. Instances) Other tasks you can execute from this window include editing, pausing, suspending, and rebooting the instance. It is also the place to go to terminate the instance after you have finished using it.
- To delete the image and any snapshots, go back to the Images & Snapshots menu, where you have the option to delete the objects you no longer need. (Figure 5. Images and snapshots)

That's all it takes to get started with the OpenStack Compute functionality. The main point to remember as you plan and deploy your compute workloads using OpenStack is that it is not a virtualization platform but a management abstraction that allows you to orchestrate workflows across multiple hypervisors using a variety of pluggable technologies. OpenStack merely simplifies the management and integration of these components.
Conventional cryptography cannot provide a bulletproof solution that fully addresses all of the diverse attack scenarios we experience in the twenty-first century. Traditionally, cryptography has offered a means of communicating sensitive (secret, confidential or private) information while making it unintelligible to everyone except the message recipient. Cryptography, as used in ancient biblical times, offered a technique in which text was manually substituted within a message as a means of hiding its original content. Many years later, during the Second World War, cryptography was extensively used in electro-mechanical machines (such as the infamous Enigma machine). Nowadays, cryptography is ever more pervasive, heavily relying on computers and supported by a solid mathematical basis.

Cryptography, as the name implies, attempts to hide portions of text from malicious eyes using a variety of methods. In theory, the concept sounds ideal, but real-life experience has proven that a multitude of factors and environmental aspects come into play which have a negative impact on a cryptographic key's strength. Conventional means are unable to provide a bulletproof solution that fully addresses the diverse attack scenarios attempting to exploit cryptography's inherent vulnerabilities. This paper discusses traditional techniques while focusing on the white box cryptography implementation.
Knowledge Gap Series: The Myths Of Analytics

It may not be rocket science, but it is data science. Do you have your eye on machine learning or a nice neural network to help your security team make decisions faster? Be aware that there are quite a few myths circulating about how these work; even the language used can be confusing. Many new terms -- and some familiar words -- have different meanings in the world of statistical analytics. For example, “variable” means something significantly different to a programmer than to a statistician. And the capabilities of a statistician are different from those of a data scientist.

Let's start with building an analytical model. This does not happen quickly, because you need to capture enough data from your environment to give you a representative distribution. Roughly put, the distribution is the shape of the data (much like the classic bell curve from college), including the upper and lower limits, symmetry, presence of outliers, and other characteristics. There are dozens of statistical distributions, and the choice is critical because they form the foundation of the behavioral model. Another issue is cleaning the data prior to exploring potential models. How do you want to deal with outliers? What weights will you assign to the various components? Which ones are fully or partially dependent?

Some machine-learning technologies will gather and analyze the data to try and determine an appropriate distribution for you, but you still need to be able to understand the decision. For example, many data sets do not fit the symmetry of a bell curve (formally called a normal distribution), and the distribution that fits probably has an unfamiliar name. Some of these tools only work with certain types of data sets, and all of them have underlying assumptions that you need to understand. You also need to understand some of the math, at least at a cursory level. Different tools may use different equations for a similar application -- such as correlation coefficients that show the degree of dependence between two sets of data, especially if the relationships are nonlinear.

Say you have been through this exercise, some statisticians and data scientists have advised you, and you now have an analytical model for identifying data exfiltration, phishing attacks, or some other security event. What is the appropriate level of confidence in the results? No model is always right, and you need to know how well the model fits, what your statistical level of confidence is, and what to look for when an automated decision gets punted for human judgment. These models are ultimately built by humans, so you also need to make sure that you have an appropriate level of trust in the quality and ethics of your modeller.

Statistics, analytics, and machine learning are powerful tools that will help resolve security problems faster, with fewer resources. They will empower the next wave of automated and even predictive defenses. However, this will take time, and we have to work our way up from reactive models, through proactive ones, before we get to predictive. This journey is going to require some learning on your part, whether it is a review of your college stats classes or building an understanding of the terms and concepts, so that you can communicate clearly and effectively with the statisticians and data scientists that will be joining your team.
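To illustrate the point about correlation coefficients, the short example below (synthetic data, and it assumes NumPy and SciPy are available) shows how two common formulas can disagree when a dependence is strong but nonlinear.

```python
# Pearson measures linear association; Spearman measures monotonic association.
# With a strongly nonlinear (but monotonic) relationship they give very different answers.
import numpy as np
from scipy.stats import pearsonr, spearmanr

x = np.linspace(1, 10, 50)
y = np.exp(x)            # perfectly dependent on x, but very non-linear

pearson_r, _ = pearsonr(x, y)
spearman_rho, _ = spearmanr(x, y)
print(f"Pearson r:    {pearson_r:.3f}")    # well below 1 despite perfect dependence
print(f"Spearman rho: {spearman_rho:.3f}") # 1.000, because the relation is monotonic
```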
You need to ensure that your data scientists have a strong working knowledge of statistics, as this title is loosely defined and may be overused. Finally, you will need to be able to translate these concepts and plans to members of the C-suite, who may be skeptical about the uses and abuses of statistics. My intent is not to scare you off with the amount of work involved. When properly implemented, the security benefits of big data analytics are substantial.

The Intel Security Knowledge Gap series brings forward unique educational content to bridge the gap between what cybersecurity professionals know and what they need to know to be successful against the threat landscape of today and tomorrow.

Dr. Fralick is responsible for Intel Security's technical strategy related to analytics that integrates into the Intel Security corporate products. Dr. Fralick brings over 35 years of industry experience to the Analytics CTO position, 20 of those with Intel.
The iBeacon specification is Apple's implementation of the Bluetooth 4.0 LE protocol to provide proximity-based services and notifications. There are some interesting IT management applications for this technology, most of which we've only just begun to explore. During this JAMF Nation User Conference (JNUC) session, Paul Cowan—IT Manager at the University of Waikato—explained the underlying architecture of iBeacons and walked the JNUC crowd through several interesting examples designed to explore how iBeacon technology can be used.

iBeacon is designed to push information to a mobile device based on proximity to the beacon. It was originally conceived as a way to enhance the retail shopping experience (beacons are used by Apple retail stores, among others). Another example Cowan showed is a museum using beacons to share information with visitors about the objects around them.

iBeacon technology is built into iOS devices, starting with the iPhone 4s and running iOS 7 or later. A Mac can detect beacons as well, if it has a modern Bluetooth chipset and an additional software daemon (like Proximityd from Two Canoes Software) running in the background. And any iOS device can broadcast as a beacon using an app like Dartle iBeacon. iBeacons can launch an app or trigger tasks—even if the app isn't running in the background. As a fun example, Cowan can share his business card through the Wallet app on iOS, then trigger an alert using a beacon app.

The Casper Suite can be used along with beacons to trigger events and gather proximity data as part of the inventory record. To configure the JSS to work with iBeacons, enable the JSS to monitor for beacons as part of inventory collection and define an iBeacon region in the network settings. Policies can use the custom trigger "beaconStateChange", which is triggered when a device enters or exits a beacon region.

As an example, Cowan set up a beacon to install a calendar and web clip for attendees arriving at a conference venue. He also created a simple iOS app with a few buttons that transmit a specific beacon code to trigger an event on a Mac or through the JSS. He uses this to escalate privileges for IT admins to manage devices within range of their beacon. But be careful, he warned, as iBeacon is not designed as a secure protocol and spoofing a beacon ID is trivial for a determined attacker.
Hai is a network worm that spreads in Win32 local networks. The worm is a PE EXE file 65536 bytes long, packed with the PELOCK file compressor. The worm was not widespread at the time this description was created.

Disinfection instructions for the Hai worm in a network environment:

1. Disable all network sharing or temporarily take the network down.
2. Scan infected systems with F-Secure Anti-Virus and the latest updates, identify the worm's file, and try to delete or rename it.
3. If FSAV is not able to remove the worm (locked file problem), its file has to be deleted from pure DOS (Win9x workstations) or renamed with a non-executable extension followed by an immediate system restart (for NT/2000 workstations). After restart, the previously renamed worm file should be deleted.
4. Remove the worm's autostarting entry after the 'RUN=' variable in the WIN.INI file on infected workstations to get rid of the annoying 'missing file' message generated by Windows on every startup.
5. Re-enable sharing or reconnect the network only after all infected workstations are disinfected. A single infected workstation can re-infect all the others.

After being launched, the worm creates a thread that starts to scan for valid IP addresses, starting from the IP address of the infected computer. The worm scans a full range of IP addresses, incrementing and decrementing from the starting address. When the worm finds a valid IP address (the connection succeeds), it creates another thread that enumerates shared network resources and drives on the remote computer it found.

If there's a share with a \Windows\ folder on a remote system, the worm attempts to find and open the WIN.INI file there. If WIN.INI is found, the worm creates a WIN.HAI file and starts looking for the 'RUN=' variable in the WIN.INI file while copying its contents to the WIN.HAI file. If the 'RUN=' variable is found, the worm puts a randomly generated file name after it (the worm will later copy itself with this name to the remote system). If the 'RUN=' variable is not found, the worm creates it and then adds a randomly generated file name after it. Finally, the worm copies itself into the \Windows\ folder on the remote system with the random name that it used to register itself in WIN.INI (see above). Then the worm deletes the WIN.INI file and renames WIN.HAI as WIN.INI.

When the remote system is restarted, the worm gets activated by the 'RUN=' command. This, however, only happens on Win9x systems, as NT-based systems do not use the WIN.INI file to start programs on bootup. After infecting a remote system, the infection thread terminates and the IP scanning thread keeps scanning for valid IP addresses.

Description Details: Alexey Podrezov; F-Secure Corp.; August 28, 2001
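As a rough illustration of disinfection step 4 above, the sketch below strips the autostart entry from the 'RUN=' line of a WIN.INI file. It is not an F-Secure tool, the file path is an assumption, and it blindly clears the whole entry, so any legitimate run= programs would need to be re-added by hand.

```python
# Clear the value after "run=" in WIN.INI (run only after the worm file itself
# has been removed, and preferably against a backup copy first).
import re

def clean_run_line(win_ini_path=r"C:\Windows\WIN.INI"):   # path is an assumption
    with open(win_ini_path, "r", encoding="latin-1") as f:
        lines = f.readlines()

    cleaned = []
    for line in lines:
        if re.match(r"(?i)^\s*run\s*=", line):
            cleaned.append("run=\n")      # keep the variable, drop the autostart entry
        else:
            cleaned.append(line)

    with open(win_ini_path, "w", encoding="latin-1") as f:
        f.writelines(cleaned)

# clean_run_line()  # uncomment to run against a real (backed-up) WIN.INI
```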
Today, Speaker of the House Nancy Pelosi (D-CA) and Representatives Stephanie Tubbs Jones (D-OH) and Louise Slaughter (D-NY) introduced the Coordinated Environmental Public Health Network Act of 2007. Senators Hillary Clinton (D-NY), Harry Reid (D-NV) and Orrin Hatch (R-UT) introduced identical legislation in the Senate last week.

This legislation would establish a national public health tracking network to allow for the detection and identification of possible connections between adverse health effects and environmental hazards, and increase funding for locally based pilot projects to address environmental health concerns. It would also increase funding for biomonitoring work at the Centers for Disease Control and Prevention (CDC), which tracks exposure levels to common chemicals.

"Approximately seven out of 10 deaths in the United States are linked to chronic disease. Exposure to air pollution and harmful chemicals has been linked to many of these illnesses, including asthma, cancer and neurological disorders," said Pelosi. "In California, for example, more than 33 million people live in areas where high levels of air pollution pose health risks, and breast cancer rates in San Francisco are among the highest in the country. This legislation will give public health officials the tools they need to determine the impact of environmental pollutants, and to intervene where appropriate."

"This is really an issue of environmental justice," said Rep. Tubbs Jones. "Minority and low-income communities are particularly vulnerable to environmental health hazards. The factories and dumping sites that emit pollutants are often located near communities with less political and economic power, and therefore less ability to protest. The result is an elevated risk of exposure to harmful substances."

"Many chronic diseases are on the rise. Asthma, for example, increased 76 percent nationwide between 1984 and 2003," said Rep. Slaughter. "Identifying pollutants that cause diseases and reducing harmful exposures will save lives and save our health care system billions of dollars each year. What's more, it is our responsibility to do all we can to provide our children and future generations with the knowledge and tools they need to protect them from these ailments."

Over the past six years, Congress has allocated nearly $150 million for pilot programs to begin developing the capacity for a Coordinated Environmental Public Health Network. The CDC has used these funds to implement three sets of pilot grants focused on building state and local capacity to track environmental exposures and adverse health outcomes. These projects have included efforts to identify environmental health problems and to link, through standardization of electronic data elements, disparate sets of existing health data with data on environmental hazards. Funds have also gone toward research on the impact of environmental exposures on human health, as well as dissemination of best practices to additional jurisdictions interested in environmental health tracking. These pilot projects are giving the Centers for Disease Control and Prevention and the Environmental Protection Agency the information they need to put in place the comprehensive, coordinated network created by this legislation.
Once fully operational, the network would coordinate national, state and local efforts to inform communities, public health officials, researchers and policymakers of potential environmental health risks, and integrate this information with other parts of the public health system. This legislation is supported by over 40 health and environmental groups, including Trust for America's Health, the Breast Cancer Fund, American Lung Association, American Public Health Association, and the Association of Public Health Laboratories.
Supercomputers at the Oak Ridge National Laboratory (ORNL) computing complex produce some of the world’s largest scientific datasets. Many are from studies using high-resolution models to evaluate climate change consequences and mitigation strategies. The Department of Energy (DOE) Office of Science’s Jaguar (the pride of the Oak Ridge Leadership Computing Facility, or OLCF), the National Science Foundation–University of Tennessee’s Kraken (NSF’s first petascale supercomputer), and the National Oceanic and Atmospheric Administration’s Gaea (dedicated solely for climate modeling) all run climate simulations at ORNL to meet the science missions of their respective agencies.

Such simulations reveal Earth’s climate past, for example as described in a 2012 Nature article that was the first to show the role carbon dioxide played in helping end the last ice age. They also hint at our climate’s future, as evidenced by the major computational support that ORNL and Lawrence Berkeley National Laboratory continue to provide to U.S. global modeling groups participating in the upcoming Fifth Assessment Report of the United Nations Intergovernmental Panel on Climate Change.

Remote sensing platforms such as DOE’s Atmospheric Radiation Measurement facilities, which support global climate research with a program studying cloud formation processes and their influence on heat transfer, and other climate observation facilities, such as DOE’s Carbon Dioxide Information Analysis Center at ORNL and the ORNL Distributed Active Archive Center, which archives data from the National Aeronautics and Space Administration’s Earth science missions, generate a wide variety of climate observations. Researchers at the Oak Ridge Climate Change Science Institute (ORCCSI) use coupled Earth system models and observations to explore connections among atmosphere, oceans, land, and ice and to better understand the Earth system. These simulations and climate observations produce a lot of data that must be transported, analyzed, visualized, and stored.

In this interview, Galen Shipman, data-systems architect for ORNL’s Computing and Computational Sciences Directorate and the person who oversees data management at the OLCF, discusses strategies for coping with the “3 Vs” — variety, velocity, and volume — of the big data that climate science generates.

HPCwire: Why do climate simulations generate so much data?

Galen Shipman: The I/O workloads in many climate simulations are based on saving the state of the simulation, the Earth system, for post analysis. Essentially, they’re writing out time series information at predefined intervals—everything from temperature to pressure to carbon concentration, basically an entire set of concurrent variables that represent the state of the Earth system within a particular spatial region. If you think of, say, the atmosphere, it can be gridded around the globe as well as vertically, and for each subgrid we’re saving information about the particular state of that spatial area of the simulation.

In terms of data output, this generally means large numbers of processors concurrently writing out system state from a simulation platform such as Jaguar. Many climate simulations output to a large number of individual files over the entire simulation run. For a single run you can have many files created, which, when taken in aggregate, can exceed several terabytes. Over the past few years, we have seen these dataset sizes increase dramatically.
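To make the output pattern Shipman describes concrete, here is a minimal sketch of periodic gridded snapshots being appended to a self-describing file. It assumes the netCDF4 and numpy Python packages and a toy latitude-longitude grid; it illustrates the I/O pattern only and is not code from any ORNL model.

```python
import numpy as np
from netCDF4 import Dataset

with Dataset("climate_history.nc", "w", format="NETCDF4") as ds:
    ds.createDimension("time", None)      # unlimited: grows at every output interval
    ds.createDimension("lat", 180)
    ds.createDimension("lon", 360)
    temperature = ds.createVariable("temperature", "f4", ("time", "lat", "lon"))
    for step in range(4):                 # four predefined output intervals
        snapshot = 288.0 + np.random.randn(180, 360)   # stand-in for model state
        temperature[step, :, :] = snapshot
```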
Climate scientists, led by ORNL’s Jim Hack, who heads ORCCSI and directs the National Center for Computational Sciences, have made significant progress in increasing the resolution of climate models both spatially and temporally along with increases in physical and biogeochemical complexity, resulting in significant increases in the amount of data generated by the climate model. Efforts such as increasing the frequency of sampling in simulated time are aimed at better understanding aspects of climate such as the daily cycle of the Earth’s climate.

Increased spatial resolution is of particular importance when you’re looking at localized impacts of climate change. If we’re trying to understand the impact of climate change on extreme weather phenomena, we might be interested in monitoring low-pressure areas, which can be done at a fairly coarse spatial resolution. But if you want to identify a smaller-scale low-pressure anomaly like a hurricane, we need to go to even higher resolution, which means even more data are generated with more analysis required of that data following the simulation.

In addition to higher-resolution climate simulations, a drive to better understand the uncertainty of a simulation result, what can naively be thought of as putting “error bars” around a simulation result, is causing a dramatic uptick in the volume and velocity of data generation. Climate scientist Peter Thornton is leading efforts at ORNL to better quantify uncertainty in climate models as part of the DOE Office of Biological and Environmental Research (BER)–funded Climate Science for a Sustainable Energy Future project. In many of his team’s studies, a climate simulation may be run hundreds, or even thousands, of times, each with slightly different model configurations in an attempt to understand the sensitivity of the climate model to configuration changes. This large number of runs is required even when statistical methods are used to reduce the total parameter space explored. Once simulation results are created, the daunting challenge of effectively analyzing them must be addressed.

HPCwire: What is daunting about analysis of climate data?

Shipman: The sheer volume and variety of data that must be analyzed and understood are the biggest challenges. Today it is not uncommon for climate scientists to analyze multiple terabytes of data spanning thousands of files across a number of different climate models and model configurations in order to generate a scientific result. Another challenge that climate scientists are now facing is the need to analyze an increasing variety of datasets — not simply simulation results, but also climate observations often collected from fixed and mobile monitoring. The fusion of climate simulation and observation data is being driven to develop increasingly accurate climate models and to validate this accuracy using historical measurements of the Earth’s climate. Conducting this analysis is a tremendous challenge, often requiring weeks or even months using traditional analysis tools. Many of the traditional analysis tools used by climate scientists were designed and developed over two decades ago when the volume and variety of data that scientists must now contend with simply did not exist.
To address this challenge, DOE BER began funding a number of projects to develop advanced tools and techniques for climate data analysis, such as the Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT) project, a collaboration including Oak Ridge National Laboratory, Lawrence Livermore National Laboratory, the University of Utah, Los Alamos National Laboratory, New York University, and Kitware, a company that develops a variety of visualization and analytic software. Through this project we have developed a number of parallel analysis and visualization tools specifically to address these challenges. Similarly, we’re looking at ways of integrating this visualization and analysis toolkit within the Earth System Grid Federation, or ESGF, a federated system for managing geographically distributed climate data, to which ORNL is a primary contributor. The tools developed as a result of this research and development are used to support the entire climate science community. While we have made good progress in addressing many of the challenges in data analysis, the geographically distributed nature of climate data, with archives of data spanning the globe, presents other challenges to this community of researchers.

HPCwire: Does the infrastructure exist to support sharing and analysis of this geographically distributed data?

Shipman: Much has been done to provide the required infrastructure to support this geographically distributed data, particularly between major DOE supercomputing facilities like the one at Lawrence Livermore National Laboratory that stores and distributes climate datasets through the Program for Climate Model Diagnosis and Intercomparison. To support the growing demands of data movement and remote analysis and visualization between major facilities at Oak Ridge, Argonne, and Lawrence Berkeley National Laboratories, for example, in 2009 the DOE Office of Advanced Scientific Computing Research began the Advanced Networking Initiative with the goal of demonstrating and hardening the technologies required to deliver 100-gigabit connectivity between these facilities, which span the United States. This project has now delivered the capabilities required to transition the high-speed Energy Sciences Network (ESnet) to 100-gigabit communication between these facilities. ESnet serves thousands of DOE scientists and users of DOE facilities and provides connectivity to more than 100 other networks. This base infrastructure will provide a tenfold increase in performance for data movement, remote analysis, and visualization.

Moreover, DOE BER, along with other mission partners, is continuing to make investments in the software technologies required to maintain a distributed data archive with multiple petabytes of climate data stored worldwide through the Earth System Grid Federation project. The ESGF system provides climate scientists and other stakeholders with the tools and technologies to efficiently locate and gain access to climate data of interest from any ESGF portal regardless of where the data reside. While primarily used for sharing climate data today, recent work in integrating UV-CDAT and ESGF allows users to conduct analysis on data anywhere in the ESGF distributed system directly within UV-CDAT as if the data were locally accessible. Further advances such as integrated remote analysis within the distributed archive are still required, however, as even with dramatic improvements in the underlying networking infrastructure, the cost of moving data is often prohibitive.
It is often more efficient to simply move the analysis to where the data reside rather than moving the data to a local system and conducting the analysis.

HPCwire: What challenges loom for data analysis, especially data visualization?

Shipman: The major challenge for most visualization workloads today is data movement. Unfortunately, this challenge will become even more acute in the future. As has been discussed broadly in the HPC community, performance improvements in data movement will continue to significantly lag performance improvements in floating-point performance. That is to say, future HPC systems are likely to continue a trend of significant improvements in total floating-point performance, most notably measured via the TOP500 benchmark, while the ability to move data both within the machine and to storage will see much more modest increases. This disparity will necessitate advances in how data analysis and visualization workloads address data movement. One promising approach is in situ analysis, in which visualization and analysis are embedded within the simulation, eliminating the need to move data from the compute platform to storage for subsequent post-processing. Unfortunately, in situ analysis is not a silver bullet, and post-processing of data from simulations is often required for exploratory visualization and analysis. We are tackling this data-movement problem through advances in analysis and visualization algorithms, parallel file systems such as Lustre, and development of advanced software technologies such as ADIOS [Adaptable Input/Output System, open-source middleware for I/O].

HPCwire: What’s the storage architecture evolving to in a parallel I/O environment?

Shipman: From a system-level architecture perspective, most parallel I/O environments have evolved to incorporate a shared parallel file system, similar to the Spider file system that serves all major compute platforms at the OLCF. I expect this trend will continue in most HPC environments as it provides improved usability, availability of all datasets on all platforms, and significantly reduced total cost of ownership over dedicated storage platforms. At the component level, the industry is clearly trending toward the incorporation of solid-state storage technologies, as increases in hard-disk-drive performance significantly lag increases in capacity and continued increases in computational performance. There is some debate as to what this storage technology will be, but in the near term, probably through 2017, NAND flash will likely dominate.

HPCwire: What hybrid approaches to storage are possible?

Shipman: Introducing a new layer in the storage hierarchy, something between memory and traditional rotating media, seems to be the consensus. Likely technologies include flash and, in the future, other NVRAM technologies. As improved manufacturing processes are realized for NVRAM technologies, costs will fall significantly. These storage technologies are more tolerant of varied workloads. For analysis workloads, which are often read-dominant, NVRAM will likely be used as a higher-performance, large-capacity read cache, effectively expanding the application’s total memory space while providing performance characteristics similar to that of a remote memory operation. Unlike most storage systems today, however, future storage platforms may provide more explicit control of the storage hierarchy, allowing applications or middleware to explicitly manage data movement between levels of the hierarchy.
HPCwire: How does big data for climate relate to other challenges for big data at ORNL and beyond?

Shipman: Many of the challenges we face in supporting climate science at ORNL are similar to the three main challenges of big data — the velocity, variety, and volume of data. The velocity at which high-resolution climate simulations are capable of generating data rivals that of most computational environments of which I am aware and necessitates a scalable high-performance I/O system. The variety of data generated from climate science ranges from simulation datasets from a variety of global, regional, and local modeling simulation packages to remote sensing information from both ground-based assets and Earth-observing satellites. These datasets come in a variety of data formats and span a variety of metadata standards. We’re seeing similar volumes, and in some cases larger growth, in other areas of simulation, including fusion science in support of ITER.

The President, in a recent release from the Office of Science and Technology Policy, highlighted many of the challenges in big data faced not only across DOE, but also the National Science Foundation and the Department of Defense. A number of the solutions to these big-data challenges that were highlighted in this report have been developed in part here at Oak Ridge National Laboratory, including the ADIOS system, the Earth System Grid Federation, the High Performance Storage System, and our work in streaming data capture and analysis through the ADARA [Accelerating Data Acquisition, Reduction, and Analysis] project, which aims to develop a streaming data infrastructure allowing scientists to go from experiment to insight and result in record time at the world’s highest-energy neutron source, the Spallation Neutron Source at Oak Ridge National Laboratory.
Do You Need a VPN, Firewall or Both?

VPNs and firewalls are highly recommended security solutions that can be used to protect your IT assets from threats, and they are essential elements of both business networks and personal device connections. Learn more about the different kinds of firewalls, the benefits of VPN use and general deployment recommendations.

Protecting your IT assets from threats is an essential part of business and personal digital activities. VPNs and firewalls are two commonly used security tools that help reduce risk while maintaining usability. When used in concert, IT communications are filtered and encrypted. This white paper defines what these tools are, describes when you would want to use them, and offers suggestions for deployment.

Overview of the State of Internet Security

The online world is no longer a safe place to play or do business without being properly prepared. Gone are the days of being anonymous by default and an unlikely target for hackers and attackers. Today, every communication, every website visit, every file transfer, every email, and every e-commerce transaction puts you at risk of interception, spoofing, impersonation, hijacking, man-in-the-middle attacks, account takeover, malicious code infection, and much more.

Our daily personal activities and work tasks often mandate the use of the Internet. Whether from a smartphone or a personal computer, many of us are online for most of the day. We perform personal tasks, like shopping and banking; social tasks, such as planning dinner or a rendezvous; and work tasks, such as communicating with customers or participating in video conferences and document collaboration over the Internet. It is these very tasks that put our information, our businesses, and us at risk of attack. Fortunately, there are options for large organizations, small office/home office (SOHO) environments, and individuals that can reduce online risks considerably. Those options are to consider deploying a VPN and/or a firewall.

What Is a VPN?

A virtual private network (VPN) is a secure remote network or Internet connection that encrypts your communications between your local device and a remote trusted device or service. A VPN is a digital or electronic re-creation of a physical-world concept: specifically, the idea of a dedicated, isolated physical network cable that only you can use and access. A VPN creates a virtual or electronic version of a physical cable by wrapping normal, insecure network communications in a tunneling protocol that encrypts the content being transported. Communications protected by a VPN still traverse the same shared network pathways as normal traffic, but because the payload is encrypted, the result is the equivalent of a dedicated, isolated physical cable.

The Different Types of VPNs

There are three main types of VPNs:

Transport mode host-to-host – A transport mode host-to-host VPN creates a secure connection between two individual systems. In such a VPN, only the payload is encrypted. The headers of the protocol packets, which guide the communication across the intermediary network, remain in their original plain-text form. Thus, the contents of a communication are protected, but the identity of those communicating is exposed.
This type of VPN is commonly used inside private network environments where there is a general level of modest trust of the network, but when additional protection is needed for specific host-to-host communications, such as database replication or periodic backups.

Tunnel mode site-to-site – A tunnel-mode site-to-site VPN creates a secure connection between two different networks or physical locations. In such a VPN, both the payload and the original packet headers are encrypted. An additional tunnel header is added to the encrypted content to direct the communication from one endpoint of the VPN to the other. Communications between two systems are only encrypted while in the tunnel itself. Thus, if a client in Network A sends data to a server in Network B, the initial communication would cross Network A in plain text; then become encrypted as it entered the VPN on the border of Network A; remain encrypted across the Internet until it reached the border of Network B; and then the communication would be decrypted and sent across Network B to the server in plain text. This type of VPN is commonly used to connect remote networks.

Tunnel mode host-to-site – A tunnel-mode host-to-site VPN creates a secure connection between a single computer and a remote network. In such a VPN, both the payload and the original packet headers are encrypted. An additional tunnel header is added to the encrypted content to direct the communication from one endpoint of the VPN to the other.
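The difference between transport and tunnel mode can be summed up in a toy sketch. This is purely conceptual Python, not a working VPN: the packet model, the placeholder encrypt function and the endpoint names are all assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    header: str   # addressing information (who is talking to whom)
    payload: str  # the application data

def encrypt(data: str) -> str:
    return f"<encrypted:{data}>"   # placeholder for a real cipher

def transport_mode(pkt: Packet) -> Packet:
    # Only the payload is protected; the original header stays in plain text,
    # so the identity of the communicating hosts remains visible.
    return Packet(pkt.header, encrypt(pkt.payload))

def tunnel_mode(pkt: Packet, vpn_src: str, vpn_dst: str) -> Packet:
    # The whole original packet (header and payload) is protected, and a new
    # tunnel header routes the traffic between the two VPN endpoints.
    return Packet(f"{vpn_src}->{vpn_dst}", encrypt(f"{pkt.header}|{pkt.payload}"))

original = Packet("hostA->hostB", "database replication data")
print(transport_mode(original))
print(tunnel_mode(original, "gatewayA", "gatewayB"))
```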
“In information theory, entropy is a measure of the uncertainty associated with a random variable. In this context, the term usually refers to the Shannon entropy, which quantifies the expected value of the information contained in a message, usually in units such as bits. In this context, a 'message' means a specific realization of the random variable.”

I find myself analyzing password and token entropy quite frequently, and I’ve come to rely upon Wolfram Alpha and Burp Suite Pro to get my estimates for these values. It’s understandable why we’d want to check a password’s entropy: it gives us an indication of how long it would take an attacker to brute force it, whether in a login form or a stolen database of hashes. However, an overlooked concern is the entropy contained in tokens for session and object identifiers. These values can also be brute forced to steal active sessions and gain access to objects to which we do not have permission. Not only are these tokens sometimes too short, they sometimes also contain much less entropy than appears.

Estimating Password Entropy

Wolfram Alpha has a keyword specifically for analyzing passwords.

Estimating Token Entropy

Estimating the solution for [ characters ^ length = 2 ^ x ] will convert an arbitrary string value to bits of entropy. This formula is not really solvable, so I use Wolfram Alpha to estimate the solution. For example, 1tdrtahp4y8201att8i414a7km has this formula; click “Approximate Form” under the “Real solution” to get the value in bits. The password strength calculator also works okay on tokens, and we’ll see a similar result.

BUT! Analysis of a single token is not enough to measure /effective/ entropy. Burp Suite Sequencer will run the proper entropy analysis tests on batches of session identifiers to estimate this value. Send your application login request (or whatever request generates a new token value) to the Sequencer and configure the Sequencer to collect the target token value. Start collecting and set the “Auto-Analyze” box to watch as Burp runs its tests.

A sample token “1tdrtahp4y8201att8i414a7km” from this application has an estimated entropy of 134.4 bits, but FIPS analysis of a batch of 2,000 of these identifiers shows an effective entropy of less than 45 bits! Not only that, but the tokens range in length from 21 to 26 characters; some are much shorter than we originally thought. Burp will show you many charts, but the bit-level analysis charts will give you an idea of where the tokens are failing to meet expected entropy. You can spot a highly non-random value near the middle of the token (higher is better), and the varying length of the tokens drags down entropy near the end. The ASCII-based character set used in the token has one or more unused or underused bits, as seen in the interspersed areas of very low entropy.

In the case illustrated above, I would ask the client to change the way randomness is supplied to the token and/or increase the token complexity with a hashing function, which should increase attack resistance. Remember, for session or object identifiers, you want to get close to 128 bits of /effective/ entropy to prevent brute forcing. This is a guideline set by OWASP and is in line with most modern web application frameworks. If objects persist for long periods or are very numerous (in the millions), you’ll want more entropy to maintain the same level of safety as a session identifier, which is more ephemeral.

An example of persistent objects (on the order of years) which rely on high-entropy tokens would be Facebook photo URLs. Photos marked private are still publicly accessible, but Facebook counts on the fact that their photo URLs have high entropy. The following URL has at least 160 bits of entropy: https://fbcdn-sphotos-a.akamaihd.net/hphotos-ak-ash4/398297_10140657048323225_750784224_11609676_1712639207_n.jpg

For passwords, the analysis is a little more subjective, but Wolfram Alpha gives you a good estimate. You can use this password analysis for encryption keys or passphrases as well, e.g. if they are provided as part of a source code audit.
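If you'd rather not rely on Wolfram Alpha, the naive estimate from the formula above is a one-liner. The sketch below assumes a lowercase-alphanumeric alphabet of 36 symbols for the sample token; like the Wolfram Alpha answer, it is an upper bound on entropy, not the effective entropy that Burp Sequencer measures.

```python
import math

def naive_entropy_bits(length: int, charset_size: int) -> float:
    # Solves charset_size ** length == 2 ** x for x.
    return length * math.log2(charset_size)

# The 26-character sample token, assuming 36 possible symbols per character:
print(round(naive_entropy_bits(26, 36), 1))   # ~134.4 bits, the naive upper bound
```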
At present, low-loss fiber optic systems offer almost unlimited bandwidth and unique advantages over all previously developed transmission media. Basic optical transmitters convert electrical signals into modulated light for transmission over an optical fiber. The most common devices used as the light source in optical transmitters are light-emitting diodes (LEDs). Fiber optic light sources make good use of them, as LEDs have relatively large emitting areas and are suited to moderate distances. They are also economical.

A fiber optic light source is mounted in a package that enables the optical fiber to collect as much of the emitted light as possible. In some cases a tiny spherical lens is also fitted to collect and focus as much light as possible onto the fiber. Fiber optic light sources are reliable, and the most common wavelengths used today are 850 to 1300 nanometers, or in some cases even 1500 nanometers. There are two methods through which light from the source can be coupled into the fiber: one is pig-tailing, and the other is placing the fiber's tip in very close proximity to an LED or laser diode (LD). Since the only carrier in these systems is light, there is no danger of electrical shock to personnel repairing broken fibers.

Fiber optic light sources are a necessity for fiber optic network testing, where they are used to measure optical loss for both single-mode and multimode fiber cables. They are designed to cover a variety of wavelength ranges to suit all optical testing needs, and usually an optical light source is used together with a fiber optic power meter to test the fiber system loss. Light sources are offered in a variety of types, including LED, halogen and laser. Paired with a fiber optic power meter, they are an economical and efficient solution for fiber optic network work.
A fiber optic attenuator is a device that reduces optical power by a predetermined factor. The intensity of the signal is described in decibels (dB) over the specific distance the signal travels. An attenuator provides a certain amount of isolation between instruments, thus reducing measurement interaction; this can be done by attenuating the unwanted reflected signal caused by imperfect matching.

Fiber optic attenuators are used in applications where the optical signal is too strong and needs to be reduced. They are mainly used in fiber optic measurement systems, for signal attenuation in short-distance communication systems, and for system testing. For example, in a multi-wavelength fiber optic system, you need to equalize the optical channel strength so that all the channels have similar power levels. This means reducing stronger channels' power to match the lower-power channels.

The basic types of optical attenuators are fixed and variable attenuators. The most commonly used type is the female-to-male plug-type fiber optic attenuator, which has a fiber connector on one side and a female fiber optic adapter on the other. A female-to-male mechanical attenuator is assembled with a fixed type of connector, so it can only be connected with one kind of patch cord, such as an LC, SC, FC or ST attenuator.

Fixed value attenuators have fixed values that are specified in decibels. Just as the name implies, a fixed value attenuator's attenuation cannot be varied. The attenuation is expressed in dB. The operating wavelength should be specified for the rated attenuation, because the optical attenuation of a material varies with wavelength. Their applications include telecommunication networks, optical fiber test facilities, local area networks (LANs) and CATV systems. Fixed value attenuators fall into two groups: in-line type and connector type. The in-line type looks like a plain fiber patch cable; it has a fiber cable terminated with two connectors whose types you can specify. The in-line fiber optic attenuator is designed for use with optical patch cables. To use these in-line fiber optic attenuators, just select the connector type you need (ST, SC, LC and FC are available), the polish (PC, UPC or APC angled polish) and the decibel (dB) rating.

Variable attenuators come in a variety of designs. They are normally used for testing and measurement, but they also have wide usage in EDFAs for equalizing the light power among separate channels. One type of variable attenuator is built on a D-shaped fiber as a type of evanescent-field device. If a bulk external material, whose refractive index is greater than the mode effective index, replaces part of the cladding reachable by the evanescent field, the mode can become leaky and some of the optical power can be radiated. If the index of the external material can be changed by a controllable means, through effects such as the thermo-optic, electro-optic or acousto-optic effect, a device with controllable attenuation is achievable. Other types of variable attenuators include air gap, clip-on, 3-step and more.

When it comes to buying a fiber optic attenuator, you have the options listed above, so before you buy one, be sure at what level you want to attenuate your signal and then choose the type that will work best for you. Taking the time to choose the right one can save you big time.
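Since attenuation is specified in decibels, a quick worked example helps when sanity-checking a part against a power meter reading. This is just the standard dB formula with illustrative power values, not data from any particular attenuator.

```python
import math

def attenuation_db(power_in_mw: float, power_out_mw: float) -> float:
    # Standard definition of optical loss in decibels.
    return 10 * math.log10(power_in_mw / power_out_mw)

# A nominal 5 dB fixed attenuator should cut 1.0 mW down to roughly 0.316 mW:
print(round(attenuation_db(1.0, 0.316), 2))   # ~5.0 dB
```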
Researchers at the University of Arizona now have a world-class supercomputing system to help them unlock the secrets of the universe. “El Gato” – short for the Extremely LarGe Advanced TechnOlogy system – was acquired via a $1.3 million grant from the National Science Foundation. The 145-teraflops system ranked 336th on the TOP500 list and placed seventh on the Green500 list, which reshuffles the TOP500 deck based on FLOPS per watt.

With 13 times more processing capability than the previous-generation HPC system in the UA’s Research Data Center, “El Gato” will enable UA faculty to reach new heights when it comes to understanding complex scientific phenomena, such as the distribution of dark matter in the universe. The machine has already calculated the location of more than one billion dark matter particles in a simulation of the universe that is 280 million light years from side to side.

University of Arizona assistant professor of astronomy Brant Robertson led the committee to bring this next-generation system to the UA campus. The collaborative effort involved astronomers, computer scientists and engineers. The $1.3 million grant came from the National Science Foundation (NSF) Division of Astronomical Sciences through the Major Research Instrumentation program. El Gato, constructed of Intel, IBM and NVIDIA parts, was installed in December 2013 in the UA Research Data Center, a facility that supports all UA researchers.

Like many other HPC systems being built today, El Gato draws its power from a combination of CPUs and GPUs. The hybrid architecture, which uses GPUs to accelerate workloads, is one of the biggest trends to hit HPC in the last decade. “For the price, this computer is very, very fast, and it’s very green,” Robertson said. “The graphic processing units enable you to speed up your calculations up to 300 times faster compared with central processing units.”

It’s a degree of power that’s not to be taken lightly. “This computer is very powerful for a university of this size,” observes Robertson’s graduate student Evan Schneider. “For the University of Arizona to have a computer that is ranked in the top 500 fastest computers in the world is pretty impressive.” As Robertson points out, the top one hundred fastest machines in the world (those on the TOP500 list and those that are not disclosed to the public) are primarily the purview of national research facilities, government labs and similar agencies, as well as very large corporations with budgets that can accommodate the hefty price tag.

The new supercomputer will support faster compute times and much more detailed results. “El Gato allows us to perform bigger calculations, to look at finer details, and to include more features in our models,” Robertson said. “All of this enables us to do research we couldn’t do otherwise.” While the main area of focus will be theoretical astrophysics, any UA researchers – including faculty and students – can access the system via the “windfall” usage policy that redirects “idle” time to worthy projects. “Anyone who is eligible for a high-performance computing account can use the system,” Robertson said. “We have reserved 30 percent of the total usage time for the UA researchers and their collaborators. Most of the time it is used by people not associated with the grant, which is exactly what we wanted.”

The hybrid CPU-GPU architecture is new to many of the system’s users, who were accustomed to the school’s more traditional CPU-based HPC resources.
To address this, the team of project co-PIs, along with their grad students and postdocs, has been retooling codes to make optimal use of the GPU accelerators. Current projects include the direct imaging of black holes, simulating the dynamics of clusters and reproducing the formation of galaxies. Robertson hopes that El Gato will encourage users to develop GPU-optimized code for other fields too.

The computer’s suppliers – IBM, NVIDIA and Intel – will also have a hand in educating El Gato’s broad user community. This September, the companies will help lead an El Gato training symposium.
The social engineer is a highly skilled, highly motivated adversary, and for the information security professional who knows that the human factor is the biggest weakness in any multi-layered defense strategy, social engineering represents one of the biggest challenges. Some of the most significant recent data breaches, from the high-profile attack on Target to the recent JP Morgan breach, are suspected to be the result of social engineering.

So how can information security professionals protect their organization from the risk of social engineering – what are the policies, procedures and technologies that need to be in place to address the threat? During this session, the panel will provide insight into how social engineers manipulate individuals and exploit security weaknesses, and share best practice on how to manage the risk.

- Analyse how social engineers target specific information and collect, sort and utilise that data
- Identify the factors that make an organization vulnerable to a social engineering attack
- Determine how to develop systems, policies and procedures to protect your organization from social engineering
- Learn how to test your organization’s susceptibility to social engineering to identify weaknesses
- Discover how to detect unintended disclosure of information on social networking sites
- Access best-practice strategies to educate employees to protect against social engineering
The Universe might be expanding, but at least it’s getting easier to see. On Monday, at the annual Microsoft Research Faculty Summit, the software maker unveiled the largest and clearest image of the night sky ever assembled. This so-called “TeraPixel” sky map was generated with the help of some of Microsoft’s latest HPC and parallel software assets.

The TeraPixel project from the folks at Microsoft Research was essentially a recomputation of the image data collected by the Digitized Sky Survey over the last 50 years. The input data was made up of 1,791 pairs of red-light and blue-light plates produced by two high-powered ground telescopes: the Palomar telescope in California (US) and the UK Schmidt telescope in New South Wales (Australia). Between them, the two installations covered the night sky of the Northern and Southern hemispheres.

As one might suspect of photographs collected over a long period of time with different equipment and under different conditions, the quality of the images varied considerably. Different color saturation, contrast, noise, and brightness, as well as the presence of vignetting (darkening toward the image corners), meant that the data would require a lot of post-processing to produce what the researchers were going for: a seamless photograph of the entire sky.

Compared to the old sky image, the TeraPixel version is much more refined. With all the artifacts, seams and inconsistencies processed away, it looks like a true unified image of the sky above. It’s like going from Super Mario Brothers on 1985-era Nintendo consoles to Halo 2 on Xbox 360s.

According to Dan Fay, the director of Earth, Energy and Environment at Microsoft Research, to get this level of refinement, all the images had to go through four stages of processing to correct for the irregularities. The first stage attacked the vignetting artifact to brighten up the dark corners. The next step was more complex: since each plate had a red and blue version for each area, these had to be processed separately and then realigned into one image. They even had to account for multiple overlapping plates; in some cases, Fay says, they chose the best pixels on the various plates to come up with the highest-quality image. The third step involved stitching the individual images together and smoothing out the seams. Lastly, the multi-resolution images were generated so that users could zoom in for greater detail. The final result was a spherical panorama of the night sky in 24-bit RGB format.

Much of the work relied on Microsoft software as well as Microsoft programmers. The project used the global image optimization program developed by Hugues Hoppe and Dinoj Surendran of Microsoft Research and Michael Kazhdan of Johns Hopkins. DryadLINQ and the .NET parallel extensions framework were employed to construct and manage the applications. DryadLINQ is a programming environment for running parallel applications across a cluster, using LINQ (Language Integrated Query) as a query engine on top of the Dryad runtime; the latter takes the queries and distributes them across the nodes. Windows HPC Server was used to schedule the more tightly coupled jobs, and the Project Trident Workbench was employed to manage the entire workflow.

By HPC standards, the hardware platform was relatively modest. A 16-node Intel Xeon cluster was used to process the TeraPixel image, but the final runs were done on a 64-node system. The image was built iteratively since the algorithms were continuously tweaked to get better refinement.
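As a rough illustration of the first stage, vignetting can be treated as a flat-field problem: estimate how illumination falls off toward the corners and divide it out. The radial falloff model and the numbers in the numpy sketch below are assumptions for a toy example; the actual TeraPixel correction is more sophisticated.

```python
import numpy as np

def correct_vignetting(plate: np.ndarray) -> np.ndarray:
    """Divide out an assumed radial illumination falloff to brighten dark corners."""
    h, w = plate.shape
    y, x = np.mgrid[0:h, 0:w]
    # Normalized distance from the image center: 0 at the center, 1 at the corners.
    r = np.hypot(x - w / 2, y - h / 2) / np.hypot(w / 2, h / 2)
    falloff = 1.0 - 0.4 * r**2            # assumed vignetting model
    return plate / falloff

plate = np.random.rand(512, 512)          # stand-in for a digitized plate
corrected = correct_vignetting(plate)
```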
A full run on the 16-node machine took three days, while on the larger machine it took just over half a day. One of the costliest operations, time-wise, was shuffling the images around the cluster. “Some of the biggest issues were data movement,” notes Fay. “When you start getting to that many nodes and parallel jobs, moving the data ends up taking a lot of the time.” Just transferring the final 1,025 files (802 GB total) off the cluster took 2.5 hours using a 1 Gbps link.

The TeraPixel image can be viewed by researchers and the general public on Microsoft Research’s WorldWide Telescope Web site. It can also be accessed via Bing Maps, via a plug-in, where you can see a street-level view of the sky above. Because of the high resolution of the imagery, viewers are able to zoom into any area of the sky and see greater detail of specific star systems.

The sky image they’ve produced has been verified by astronomers, who made sure that nothing is rotated incorrectly or is otherwise erroneous. According to Fay, the feedback from the community has been gratifying. “No one has ever seen an image of the sky like this,” he says.
The trend no doubt hasn't been lost on the country's best and brightest scientists, and NASA now says it has evidence that, using one of its aircraft-deployed radar systems, it can foresee sinkholes before they happen, decreasing the danger to people and property.

Researchers from NASA's Jet Propulsion Laboratory (JPL) said they analyzed airborne radar data collected in 2012 that showed the system, known as interferometric synthetic aperture radar (InSAR), had detected indications of a huge sinkhole before it collapsed and forced evacuations near Bayou Corne, La.

JPL researchers Cathleen Jones and Ron Blom said InSAR detects and measures very subtle deformations in Earth's surface, basically by bouncing signals off the ground and measuring the differences in the phase of the waves returning to the aircraft or satellite, depending on which system is being used.

The JPL researchers said their analyses of images taken around the Bayou Corne site showed the ground surface layer deformed significantly at least a month before the collapse, moving mostly horizontally up to 10.2 inches (260 millimeters) toward where the sinkhole would later form. These surface movements covered a much larger area -- about 1,640 by 1,640 feet (500 by 500 meters) -- than that of the initial sinkhole, which measured about 2 acres (1 hectare).

"While horizontal surface deformations had not previously been considered a signature of sinkholes, the new study shows they can precede sinkhole formation well in advance," said Jones in a statement. "This kind of movement may be more common than previously thought, particularly in areas with loose soil near the surface."

The Bayou Corne sinkhole formed unexpectedly Aug. 3, 2012, after weeks of minor earthquakes and bubbling natural gas that provoked community concern. It was caused by the collapse of a sidewall of an underground storage cavity connected to a nearby well operated by Texas Brine Company and owned by Occidental Petroleum. On-site investigation revealed the storage cavity, located more than 3,000 feet (914 meters) underground, had been mined closer to the edge of the subterranean Napoleonville salt dome than thought. The sinkhole, which filled with slurry -- a fluid mixture of water and pulverized solids -- has gradually expanded and now measures about 25 acres (10.1 hectares) and is at least 750 feet (229 meters) deep. It is still growing, threatening nearby communities and Highway 70, so there is a pressing need for reliable estimates of how fast it may expand and how big it may eventually get.

"Our work shows radar remote sensing could offer a monitoring technique for identifying at least some sinkholes before their surface collapse, and could be of particular use to the petroleum industry for monitoring operations in salt domes," said Blom in a statement. "Salt domes are dome-shaped structures in sedimentary rocks that form where large masses of salt are forced upward. By measuring strain on Earth's surface, this capability can reduce risks and provide quantitative information that can be used to predict a sinkhole's size and growth rate."

While the Bayou Corne sinkhole was likely caused by human activities, it occurred in an area not prone to sinkholes, the NASA researchers stated. The Gulf Coast of Louisiana and eastern Texas sits on an ancient ocean floor with salt layers that form domes as the lower-density salt rises.
The Napoleonville salt dome underneath Bayou Corne extends to within 690 feet (210 meters) of the surface. Various companies mine caverns in the dome by dissolving the salt to obtain brine and subsequently store fuels and salt water in the caverns.

Blom says there are no immediate plans to fly InSAR, which is part of NASA's Uninhabited Airborne Vehicle Synthetic Aperture Radar (UAVSAR) package, over sinkhole-prone areas. UAVSAR is deployed on a pod attached to a NASA C-20A jet. "You could spend a lot of time flying and processing data without capturing a sinkhole," he said. "Our discovery at Bayou Corne was really serendipitous. But it does demonstrate one of the expected benefits of an InSAR satellite that would image wide areas frequently."

So are sinkholes in fact increasing? An interesting item on Accuweather.com seems to take a conservative approach: "Increased reporting of sinkholes (as the public becomes more aware of them), combined with urban sprawl, may lead to the impression that they are occurring more frequently now than in the past," according to Rick Green, geologist for the Florida Geological Survey. Human influence has increased, however, said Randall Orndorff, director of the Eastern Geology and Paleoclimate Science Center for the U.S. Geological Survey. "We have no hard evidence to say for sure that sinkholes are occurring more than they have in the past; however, since human influences such as paving and building in sinkhole-prone areas has increased, it probably follows that we are seeing them more often," Orndorff said.
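For readers curious how the phase measurements described earlier become deformation numbers, the standard repeat-pass InSAR relation is simple: one full 2π cycle of phase corresponds to half a radar wavelength of motion along the line of sight. The sketch below uses an approximate L-band wavelength of about 24 cm for UAVSAR; both the wavelength and the example phase value are illustrative assumptions, not figures from the JPL study.

```python
import math

def los_displacement_mm(delta_phase_rad: float, wavelength_mm: float) -> float:
    # Repeat-pass InSAR: displacement along the radar line of sight.
    return wavelength_mm * delta_phase_rad / (4 * math.pi)

# One full fringe (2*pi of phase) at an assumed ~238 mm L-band wavelength:
print(round(los_displacement_mm(2 * math.pi, 238.0), 1))   # ~119 mm of motion
```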
In data security the weakest link is all too often a human being. Despite security measures such as firewalls, passwords and even security staff trained to keep intruders out, it is sometimes childishly simple to get access to confidential information by using manipulation techniques to influence people. Human behavior and the protection of confidential information are closely linked. Without intending to do so, someone who is inattentive can open the digital doors to hackers and give them access to the crown jewels of the organization.

People are continuously being manipulated, influenced and misled. Not only by advertisers, call center agents, online shopping sites and car dealers, but also by colleagues, friends and cybercriminals who want something from them. More and more frequently, we see cybercriminals taking advantage of human weaknesses in a variety of ways.

The psychology behind social engineering

Extensive research into human behavior is nothing new. So how can human behavior be explained? How does it happen that people fall massively for phishing mails and that so many of us are seemingly happy to give social engineers access to our confidential information? Well-known behavior researcher Robert Cialdini states that there are six universal principles of influence that determine human behavior. Social engineers make use of these principles to manipulate their potential victim and prompt certain behavior. The six principles of influence are:

- Reciprocity. The urge to give something in return for what other people have given us. Someone who feels they owe something to someone else will give in much more easily when asked to return the favor.
- Consistency. The urge to act in keeping with what we have done or said before. If we've previously expressed an opinion or made a choice, we are inclined in future situations to make a similar choice.
- Social proof. Our judgement of correct or incorrect behavior is related to other people's behavior. When people do something in the same way as ourselves, or it's something we've seen before, we're more likely to label it as correct.
- Liking. It feels unfriendly not to comply with an urgent request, and we're more likely to say yes if it comes from someone we like, feel attracted to, who flatters us, or who shares (purportedly) similar interests. In these cases, when pressure is applied, refusing becomes difficult.
- Authority. Since birth we've been taught that it is right to obey proper authorities and respect the power they have. Would you mistrust a fireman coming to check smoke detectors?
- Scarcity. We attach more value to certain things if we believe there is a shortage of them. When there is only one article left in stock, we are inclined to grab the opportunity before it is too late. If there is a risk of losing something, we tend to resist strongly (acquisitiveness).

Besides these six principles, the time that someone is given to make the right behavioral decision also plays an important role in influencing the outcome. In a social engineering attack, the victim is often persuaded to act very quickly. Phishing mails, for example, often threaten to block bank cards and accounts if the recipient does not react within 24 hours. The victim is prompted to quickly log on to a seemingly legitimate online portal that has, in fact, been simulated by the hacker to look like the bank's site.
The victim will base their choice on the limited information that is given at that moment, and if the above-mentioned principles are successfully applied, the time pressure will cause the victim to trust the content. As an example, the amount of trust is increased when the phishing mail contains personal information on the victim, such as their full name and address, or bank account number. In addition to trust and acquisitiveness, it appears that people are above all curious. People are eager to get what they don't have already and especially want to avoid losing what they do have. In other words, when someone really wants something, they will put negative or mistrustful feelings aside and just forget about them for a moment. The number of cases increases considerably when random people are called from “faraway” countries and the call is terminated the moment they answer. Curiosity will prompt the victim to call back and get through to an expensive pay service (a kind of surcharged number) resulting in a high telephone bill. Whereas social engineering in the past was only used to enter physical premises, over the years different technical resources have been developed that enable cybercriminals to launch their attacks on a large scale through digital channels: - Phishing by email and telephone: According to the Cyber Security Assessment Netherlands 2015 (an annual report published by the National Cyber Security Center) phishing was one of the most powerful and frequently used cyber attack methods in 2015. Phishing is the collective term for digital activities that aim to pilfer personal information (like logins) from people. Principles of influence used: authority, scarcity, trust. - USB drop: Cybercriminals increasingly make use of USB flash drives containing harmful software and code scripts. These might be “dropped” into the victim’s suitcase after a train journey, left lying somewhere, or handed out as a free gift. If the person who finds the flash drive puts it in their computer, it is already too late: the hacker has entered. Weaknesses exploited: curiosity and acquisitiveness. - Rogue Wi-Fi access point: By imitating Wi-Fi hotspots that the user knows and trusts (most public hotspots), smartphones easily make contact with Wi-Fi access points that are controlled by the hacker. The hacker can then eavesdrop and manipulate the victim’s internet behavior when it's not encrypted. Weaknesses exploited: trust and consistency. - Combination of attack techniques: The method can further be refined: a phishing mail is often announced by a caller who asks a number of targeted questions on topics that interest someone in a professional or personal capacity. Following the conversation, the caller will send an email containing a link on which the person can click, with all the consequences thereof. Ransomware and malware are also often attached to phishing mails, which means that the recipient does not even need to click on a link: merely opening the attachment is enough. Don't be a victim, arm yourself! The question then arises of whether people can sufficiently protect themselves against cybercriminals who use social engineering. While we are inclined to say that cybercriminals will always find a way if they really want to, it is still very important to take a number of measures and reduce the risk of being harmed by social engineers. - Use two-factor authentication or login confirmation to access things like Gmail, Hotmail and other social media accounts. 
- Use two-factor authentication or login confirmation to access things like Gmail, Hotmail and other social media accounts. If cybercriminals have your password, they still can't log in without the second factor (e.g. an SMS code, fingerprint, token or one-time code) or your direct authorization (login confirmation).
- Turn off the "automatic connection" option for public Wi-Fi networks so that your phone cannot connect to a rogue Wi-Fi access point.
- Social engineering assessment: in a social engineering assessment, a trained social engineer will attempt to enter an organization and try to access the "crown jewels". It is a good way to assess the vulnerability of physical protection measures and at the same time raise awareness.
- Phishing audit: the most powerful way to make an organization resilient to phishing is "learning through experience". Many organizations do periodic phishing audits, which are a way to measure and immediately increase employees' security awareness by carrying out a controlled phishing attack and analyzing the response. When organizing and carrying out a phishing audit, various legal, ethical and technical aspects must be taken into account, so it is advisable to ask a cybersecurity expert to support you.
- An advanced cyber drill as the ultimate test: a combination of phishing, hacking, USB drop and rogue Wi-Fi access. During a simulated attack the employees concerned are tested and trained on their vulnerability.
- Technical measures: email authentication with the use of SPF, DKIM and DMARC, and making authentication emails recognizable before sending them to a wide audience. It is also advisable, where possible, to use two-factor authentication to keep the risk of unauthorized access to a minimum. This is extremely important where systems are accessible via the 'open' internet.

For the authorities:

- Authorities can make people aware of the threat as a preventative measure to ward off damage. In the Netherlands, for example, we have seen campaigns run by an organization called 'Veilig Internet' (Safe Internet), advising people to 'hang up, click "close" and call your bank!'. Keep investing in awareness campaigns that continue to draw people's attention in different ways.

It remains childishly simple to access confidential information by means of human manipulation. Cybercriminals increasingly make use of this, while technological developments now enable them to apply social engineering on a large and international scale. To avoid becoming a victim of social engineering, citizens and employees, as well as authorities and businesses, must take appropriate measures to protect their confidential data. Training and increasing awareness is a first step in the right direction.

About the author

Guido Voorendt is one of Capgemini's cybersecurity experts. He is active in the field of public security, and specializes in Social Engineering, Privacy, and Identity and Access Management. In 2013, when Guido joined Capgemini, he created the Capgemini Phishing Audit: a method to test the resilience and vulnerability of employees against phishing by simulating a phishing attack and measuring the response to the phishing emails and malicious websites.
Are you as safe online as you think you are? A recent survey of users reveals a troubling disconnect.

By William Jackson - Nov 15, 2010

There's a security disconnect in the online public, according to a recent survey of Internet users. A preponderance of users said individuals are responsible for their own security when online, and more than half reported that they had a complete security suite on their computers (at least on their desktop PCs). But a follow-up scan of configurations showed that only 37 percent were running a full suite of tools.

"It's an emerging world," said Michael Kaiser, executive director of the National Cyber Security Alliance, which commissioned the survey. "People feel a strong responsibility, but it takes time and education."

Kaiser takes an optimistic view of the results: people are becoming more aware of security problems, they are acting more responsibly, and the tools are becoming better. Unfortunately, the sophistication of online criminals appears to be keeping pace with that of Internet users, and the functionality of increasingly mobile online devices could be outpacing our ability to secure them.

"People aren't as aware of the risks that exist on mobile devices yet," Kaiser said. But they seem to have an instinctive understanding of those risks. Although 85 percent of respondents felt that their desktops were very or somewhat safe, fewer than half had the same level of confidence in their mobile devices.

The bottom line: The Internet remains a scary place. Only 5 percent of respondents said the Internet is safer today than it was a year ago, and 21 percent think it is less safe. More than two-thirds say it is about the same.

The study of 3,498 Americans was commissioned by NCSA and conducted by Zogby International. Norton by Symantec conducted follow-up scans of 400 computers.

Part of the apparent disconnect could be confusion over what constitutes full security for a networked computer. Most people are running a handful of core tools, such as a personal firewall, antivirus and anti-spyware tools, Kaiser said. But basic protection now should also include tools to combat phishing as well as e-mail message filtering and management and identity protection.

The list of tools needed for protection online is growing, but the tools also are becoming more automatic, with features such as default updating, and are being built into applications such as browsers. However, users still must assume responsibility for the use and maintenance of these tools, and exercise caution in their online behavior.

We cannot expect consumers to assume full responsibility for their own security, however. Most have too little understanding of the technology and little patience for using it. Security tools must become even more automatic and better integrated into products, like the air bags and anti-lock brakes on cars. And, also like cars, people must operate them safely.

The survey showed little appetite for government regulation of Internet security. This does not mean that government regulation has no place on the Internet. The government establishes certain requirements for the safety of automobiles on the road and for operation of them. There is no reason why baseline security requirements cannot be established for operators of the information superhighway.

William Jackson is a Maryland-based freelance writer.
I don't know about you, but if my mouse vibrated and my office chair massaged my back I wouldn't get much work done. But those technologies and more are part of newfangled ergonomic products being developed by Alan Hedge, an international ergonomics authority, and his Cornell Human Factors and Ergonomics Research Group. That means the estimated 100 million people who now use computers in the United States may soon be using these products:

- Vibrating mouse: One study tested whether a vibrating mouse could prevent upper-extremity musculoskeletal disorders in computer users by signaling people to take their hand off the mouse to avoid overuse. Researchers said that although subjects do remove their hands more often with a vibrating mouse than with a conventional mouse, they tended to hold their hand just above the mouse. "This position is potentially more detrimental because of a potential increase in static muscle activity required to hover the hand," Hedge said in a statement. The conclusion is that users should rest their hands on a flat surface when they feel the vibration.
- Undulating chairs: Another study examined whether a seat that made a continuous massaging, wavelike movement at an adjustable rate would alleviate back pain in people whose pain increases when they are seated. Although their findings were mixed, researchers concluded that the movable seat was a concept with promise, particularly for individuals with back problems.
- Movable arms for monitors: A third study looked at how suspending a flat-panel computer monitor on a movable arm affects people's comfort, posture and preference. Researchers found that people unanimously liked the monitor arm because they could adjust their LCD screen, and it gave them more room on their desktop for documents. "We saw fewer complaints about neck problems and about the workstation because people had more space," says Hedge. Users liked the versatility of the movable arm to show others what was on their screen, Hedge said.

What has ergonomics researchers on edge is the younger onset of computer use and the current rate of compensatory damage claims, Hedge stated. There is typically a 10- to 15-year latency before injuries start to develop, Hedge said in a release. In the early 1990s he showed that the average age of workers reporting carpal tunnel syndrome was late 30s to early 40s; last year, he found the average age of onset had dropped to the mid-20s and even younger for some people.

Still, office ergonomics remain a bit of black magic for most office staffs. Linda Musthaler, a Network World columnist, recently wrote: This lack of attention to proper office ergonomics is taking its toll. At the end of the day, 40% to 50% of computer users report having some sort of pain, such as in their neck or shoulders or their wrists or hands. Experts who study these reported instances of pain are still debating the exact causes, but many point to poor posture or positioning relative to the computer as contributing factors. Most large companies have an office safety department that studies such issues and recommends solutions to employees to eliminate or reduce the stress on muscles and other body parts.

The fact of the matter is, office ergonomics are a mysterious area. Although many research studies have been conducted over the years, there are few conclusions about what leads to workers' pains.
Some people have alleged that the design of devices such as keyboards and monitor screens causes problems, yet there is little to no scientific link between keyboard design and carpal tunnel syndrome or monitor design and eyestrain, Musthaler stated.
Measuring Green's Benefits to the Business

Green initiatives offer returns and cost savings in both the short term and the long term. Specific metrics and trackable benefits get to the heart of the problem. In a down economy, the successful execution of sustainability initiatives may require that banks develop new measurement capabilities and valuation techniques. Banks that are constrained by their bottom lines may only be able to fund the projects with predicted short-term paybacks, such as:

- Simple reductions in cost. Newer hardware and systems-management techniques such as server virtualization allow data centers to process the same volume of work with fewer physical machines, thus lowering the electric bill.
- Trade-offs among cost categories. Many banks are encouraging customers to adopt online banking in the name of "green-ness." The banks save far more on postage, paper and printing than they spend on the supporting technology.

Candidly, if the bottom line is the sole measure of return and hurdle for investment, banks' supposedly green motivations become suspect. Genuine sustainability initiatives may require different assumptions in order to "measure up"; managers must assign some real worth to the environmental benefits, whether far in the future or elsewhere in the value chain. Similarly, sustainability initiatives often require longer horizons for calculation of returns; energy-efficient building construction, for example, may require higher investment at start-up for a stream of returns extending many years.
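To make the longer-horizon point concrete, here is a minimal sketch of the kind of net-present-value comparison a bank might run between a quick-payback project and an efficiency retrofit whose savings arrive over decades. All figures are hypothetical and purely illustrative; they do not come from the article.

# Hypothetical NPV comparison of two "green" projects (illustrative figures only).
def npv(rate, cashflows):
    """Discount a list of yearly cash flows (year 0 first) back to today."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cashflows))

discount_rate = 0.08

# Server virtualization: modest up-front cost, quick electricity savings, 3-year view.
virtualization = [-50_000] + [30_000] * 3

# Energy-efficient building retrofit: large up-front cost, 20 years of savings.
retrofit = [-1_000_000] + [120_000] * 20

print(f"Virtualization NPV: {npv(discount_rate, virtualization):>12,.0f}")
print(f"Retrofit NPV:       {npv(discount_rate, retrofit):>12,.0f}")
# The retrofit only "measures up" when the valuation horizon is long enough,
# and improves further if some monetary worth is assigned to environmental benefits.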
Jury Still Out on LTE-Unlicensed

In less than 20 years of widespread availability, broadband radio technologies in the unlicensed spectrum, such as Bluetooth, WiFi, and ZigBee, have achieved a number of milestones. (See Playing by the Rules: The Success of Unlicensed Spectrum.)

The current successes of unlicensed technologies have been achieved primarily in the ISM band at 2.4 GHz -- a band spanning only 80 MHz. However, over the coming years, we will increasingly see these technologies expand their operation to new spectrum bands, whose characteristics will enable entirely new types of application.

- The TV White Spaces are the unused sub-1 GHz bands in the broadcast TV bands, providing anywhere from 30 to 200 MHz on average per person across the US. Their use permits long-distance non-line-of-sight communication and is already being tested in rural and urban broadband projects around the world, as well as new generations of long-range sensor networks.
- The 60 GHz band provides an extremely capacious 7 GHz of unlicensed spectrum. Due to its extremely high frequency, it will be used for short-range, ultra-fast communication to mobile devices and somewhat longer-range fixed uses such as line-of-sight backhaul from small cells and point-to-point uses.
- The 5 GHz band provides propagation possibilities most similar to the 2.4 GHz band. However, it also contains an order of magnitude more spectrum -- 775 MHz in the US. The 5 GHz technologies will be used to build superfast multi-gigabit LANs and longer-range, point-to-point gigabit links, which will be particularly important for rural WISPs in the developed world and for building the high-capacity distribution systems that will extend the Internet across developing nations.

These developments will likely increase the already impressive economic value delivered by unlicensed technologies. However, WiFi will not be the sole technology seeking widespread deployment in the 5 GHz band.

LTE Advanced in unlicensed spectrum (LTE-U) is an innovative proposal from Qualcomm Inc. (Nasdaq: QCOM) to deliver LTE with small cells using unlicensed 5 GHz spectrum. Previously, Qualcomm has argued that only licensed spectrum can support the quality of communications that consumers expect. "Quality of service predictability is linked to the exclusivity and the binary access to a given spectrum resource, at a given location and a given time," the company said in November. LTE-U is an abrupt change of direction. However, it does not mean that LTE networks can be deployed completely in unlicensed spectrum. The specification will require that a control channel is implemented in a licensed band. Whether or not this is necessary to ensure that, as Qualcomm says, "the crucial signaling information is always communicated properly," a side effect is that the only deployers of LTE-U will be licensed mobile operators.

The eventual success of LTE-U will depend on mobile operators and their willingness to take on large-scale deployments of the technology alongside or instead of carrier WiFi. This remains an open question. In its favor, LTE-U could allow mobile operators to offer very high speeds by tapping into the immense spectrum resources of the 5 GHz band -- especially attractive to weaker operators and new entrants with limited spectrum resources. Due to its deep integration with operators' current networks, fast handover between LTE-U cells and traditional macrocells should be possible. However, LTE-U adoption will also face substantial obstacles.
It is unlikely that LTE-U can be as cost effective as WiFi. Base stations are likely to cost substantially more, and the operators will always need to bear the backhaul costs. The premium features provided by LTE-U (seamless voice and data roaming) may not prove sufficiently more valuable than those offered by WiFi technologies such as Hotspot 2.0. Perhaps the greatest drawback of LTE-U is that it will, of course, work only with LTE-capable devices. The future will contain many more devices that feature WiFi connectivity than LTE, including the majority of tablets, laptop computers, cameras, and nearly every other connected device. Will operators choose a standard (LTE-U) that locks them out of addressing the greatest possible market of connected devices? Should LTE-U prove a success, then the technology will be a prominent addition to the rule-based spectrum bands. As the history of Bluetooth and WiFi shows, conflicts between technologies can arise in unlicensed spectrum. Nonetheless, these conflicts have been resolved, largely through the creation of mutually non-exclusionary rules of operation embedded in each technology -- the protocol standards themselves. If LTE-U takes this conciliatory approach, then it can become an important part of the immense patchwork of unlicensed technology and usage. However, should the operation of LTE-U seriously degrade the possibility for other users to take advantage of the 5 GHz band, then this could precipitate the first true crisis of governance of the spectrum commons. This would pitch Qualcomm and some licensed stakeholders against an immensely broad coalition of unlicensed wireless users, including private citizens, retailers, governments at every level, healthcare providers, universities, and nearly every single business that uses networking technology. By allowing rule-based access, the unlicensed spectrum bands have enabled innovation and business deployment by entities ranging from the world's largest companies to the smallest of startups. The rules of this commons have been sufficient to generate the most varied, the most efficient, and potentially the most economically valuable bands in wireless communication. If LTE-U operation substantially disrupts the ability of other technologies to operate, the regulatory authorities must leave no doubt that they will intervene to help strengthen the rules governing the great spectrum commons. — Richard Thanki, telecommunications economist and PhD candidate, University of Southampton
Sending an Email to Several Addresses at the Same Time

The purpose of this guide is to teach you how to send an email to multiple addresses simultaneously. Examples are given using Mozilla Thunderbird and Microsoft Outlook Express. For example, let's say you want to send the same email to your friends Ted, Sue, Bob and Alice. There are a few ways you could do this:

- Put their names in the "To:" box, separated by commas or semi-colons. Fairly self-explanatory. Addresses can be typed in or entered from the contact list. After the email is typed and the 'Send' button clicked, this will send the email to everyone listed. (The original guide shows screenshots of this in Thunderbird and in Outlook Express.) This is not always an appropriate way to send email. If, for example, the email contained a request like "Can you get me two tickets?", there could be confusion over who was supposed to get the tickets and you could end up with eight, or none. Also note that everyone you sent to would see everyone else's address (see below). There may be a limit to the number of addresses you can enter; for example, Hotmail limits multiple emails to a maximum of 50 email addresses at the same time.

- Put one person's name in the "To:" box and put everyone else's name in the "Cc:" box. Cc: stands for "carbon copy," a throwback to near-prehistoric times when mail was typed by hand and a copy could be made by inserting a sheet of carbon paper between two sheets of typing paper. When you typed on the top sheet, the impact of the typewriter transferred "carbon" ink to the second sheet. The top sheet was therefore the 'Original' and the lower sheet was the 'Carbon copy'. In Thunderbird many address boxes are available and can be defined as whatever type you choose by using the pull-down menu on the left-hand side (blue arrow). In Outlook Express there is one "To:" and one "Cc:" box shown by default, and multiple names are separated by commas or semi-colons. If you send email this way, everyone will know who the email was for and who it was copied to. This means everyone sees everyone else's email address, which sometimes is not appropriate. So how do you send a copy of an email to Alice, for example, without revealing Alice's email address to everyone else? The answer is the "Bcc:" box.

- Use the "To:" box, the "Cc:" box and the "Bcc:" box. In Thunderbird each address box can be defined as whichever type you choose, as described above. To make Alice's email address hidden from the others, change the box where her name appears to a "Bcc:" box using the button shown (blue arrow). In Outlook Express there is a "Bcc:" box which you can make visible by going to the 'View' menu and selecting 'All Headers'. This way you can send your email to Ted and copy to Sue, Bob and Alice without revealing Alice's email address, which is confidential. If Sue, Bob and Alice were sales staff of competing companies and you didn't want them to be aware of each other, you could put all their names in the "Bcc:" box separated by commas in Outlook Express, or in Thunderbird change all their address boxes to "Bcc:". Entering names individually can quickly become tiresome as the number of people involved increases. The solution to that is to use 'Groups' or 'Mailing Lists'.

- Create a 'Group' or 'Mailing List'. Most email programs allow you to create groups of email addresses held under a group name. You can then email everyone in the group by simply entering the group name in the "To:" box.
Group or Mailing List members will see the email addresses of other group members unless the group is put in the "Bcc:" box. Creating and using 'Mailing Lists' or 'Groups' in different email programs will be covered in separate guides.

Edited by Grinler, 17 April 2012 - 09:56 AM.
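For readers who send mail from scripts rather than a mail client, the same To/Cc/Bcc behaviour can be reproduced with Python's standard library. This is a minimal sketch that assumes an SMTP server on localhost and uses placeholder addresses; the key point is that Bcc recipients are passed to the server but deliberately never added as a header, which is exactly why the other recipients cannot see them.

# Minimal sketch: one message to Ted (To), Sue and Bob (Cc), Alice (Bcc).
# Assumes an SMTP server on localhost:25; all addresses are placeholders.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "me@example.com"
msg["To"] = "ted@example.com"
msg["Cc"] = "sue@example.com, bob@example.com"
msg["Subject"] = "Tickets for Saturday"
msg.set_content("Can you get me two tickets?")

bcc = ["alice@example.com"]   # never added as a header, so it stays hidden

with smtplib.SMTP("localhost", 25) as server:
    # The envelope recipient list controls who actually receives the mail;
    # only the headers set above are visible to those recipients.
    recipients = [msg["To"]] + msg["Cc"].split(", ") + bcc
    server.send_message(msg, to_addrs=recipients)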
<urn:uuid:6c8a2b47-d61b-4fde-b3b0-da8c99a79625>
CC-MAIN-2017-04
https://www.bleepingcomputer.com/forums/t/43181/how-to-email-to-multiple-addresses-at-the-same-time/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00557-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94868
869
3.40625
3
After seeing how the ElGamal system works, today we are going to take a look at the RSA public key cryptosystem. The RSA algorithm was first published by Rivest, Shamir and Adleman in 1978 and is probably the most used crypto algorithm today. Despite this fact, the algorithm seems to have been invented by Clifford Cocks, a British mathematician who worked for a UK intelligence agency. Since this work was never published due to its top-secret classification, the algorithm received its name from Rivest, Shamir and Adleman, who were the first to discuss it publicly. A document declassified in 1997 revealed the fact that Clifford Cocks had actually described an equivalent system in 1973.

Let me remind you once again that these posts are not intended to be 100% accurate in a mathematical sense, but an introduction for people who don't know much about cryptography. If you want more accurate and complete descriptions, take a crypto book such as the Handbook of Applied Cryptography I've linked in most of my posts :).

Setting up the RSA algorithm

The RSA algorithm is based on the assumption that integer factorization is a difficult problem. This means that given a large value n, it is difficult to find the prime factors that make up n. Based on this assumption, when Alice and Bob want to use RSA for their communications, each of them generates a big number n which is the product of two primes p, q with approximately the same length. Next, they choose their public exponent e, modulo n. Typical values for e include 3 (which is not recommended!) and 65537 (that is, $2^{16}+1$). From e, they compute their private exponent d so that:

$e \cdot d \equiv 1 \pmod{\varphi(n)}$

Where $\varphi(n)$ is Euler's totient of n. This is a mathematical function which is equal to the number of numbers smaller than n which are coprime with n, i.e. numbers that do not have any common factor with n. If n is a prime p, then its totient is p-1, since all numbers below p are coprime with p. In the case of the RSA setup, n is the product of two primes. In that case, the resulting value is $\varphi(n) = (p-1)(q-1)$, because only the multiples of p and q are not coprime with n. (In practice the smaller value $\mathrm{lcm}(p-1, q-1)$ can be used instead of $\varphi(n)$, and that is what the Sage example below does.)

Once our two parties have their respective public and private exponents, they can share the public exponents and the modulus they computed.

Encryption with RSA

Once the public key (i.e. e and n) of the receiving end of the communication is known, the sending party can encrypt messages like this:

$c = m^e \bmod n$

When this message is received, it can be decrypted using the private key and a modular exponentiation as well:

$m = c^d \bmod n$

sage: p=random_prime(10000)
sage: q=random_prime(10000)
sage: n=p*q
sage: p,q,n
(883, 2749, 2427367)
sage: e=17
sage: G=IntegerModRing(lcm(p-1,q-1))
sage: d = G(e)^-1
sage: G(d)*G(e)
1
sage: m=1337
sage: G2=IntegerModRing(n)
sage: c=G2(m)^e
sage: c
1035365
sage: m_prime=G2(c)^d
sage: m_prime
1337

In the commands above, I first create two random primes below 10000 and compute n. Then I create an IntegerModRing object to compute things modulo lcm(p-1, q-1) and perform the computation of the private exponent as the inverse of the public exponent in that ring. Next, I create a new ring modulo n. Then I can use the public exponent to encrypt a message m and use the private exponent to decipher the cryptotext c... and it works!

Correctness of RSA encryption/decryption

We have seen it works with our previous example, but that doesn't prove that it always works. I could have chosen the numbers carefully for my example and made them work.
Euler's theorem tells us that given a number n and another number a which is coprime to n, the following is true:

$a^{\varphi(n)} \equiv 1 \pmod{n}$

Therefore, since $e \cdot d \equiv 1 \pmod{\varphi(n)}$ means that $ed = 1 + k\varphi(n)$ for some integer k, for any message m that is coprime to n the encryption and decryption process will work fine:

$m^{ed} = m^{1+k\varphi(n)} = m \cdot \left(m^{\varphi(n)}\right)^k \equiv m \pmod{n}$

However, for values of m that share a factor with n we need to use more advanced maths to prove the correctness. Another way to prove it is to use Fermat's little theorem and the Chinese Remainder Theorem. I will explain these theorems in my next post and then I will provide a complete proof based on them.

RSA for signing

In the case of RSA, digital signatures can be easily computed by just using d instead of e. So, for an RSA signature one would take message m and compute its hash H(m). Then, one would compute the signature s as:

$s = H(m)^d \bmod n$

For verifying the signature, the receiving end would have to compute the message hash H(m) and compare it to the hash recovered from the signature:

$H(m) \stackrel{?}{=} s^e \bmod n$

Therefore, if the hash computed over the received message matches the one recovered from the signature, the message has not been altered and comes from the claimed sender.

Security of RSA

In order to completely break RSA, one would have to factor n into its two prime factors, p and q. Otherwise, computing d from e would be hard because (p-1) and (q-1) are not known and n is a large number (which means that computing its totient is also difficult). In a few posts I will show an algorithm to solve the factorization problem. However, another way to break RSA-encrypted messages would be to solve a discrete logarithm. Indeed, since $c = m^e \bmod n$, if one solves the discrete logarithm of c modulo n, the message would be recovered. Luckily, we already know that discrete logs are not easy to compute. And in this case, solving one does not break the whole system but just one message.
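Since the Sage transcript above only covers encryption, here is the sign/verify flow from the signing section written as a plain Python sketch. It reuses the same toy primes, and the "hash" is reduced modulo the tiny n so it fits; this is purely didactic and not a secure implementation.

# Toy illustration of RSA signing/verification (didactic only, not secure).
import hashlib
from math import gcd

p, q = 883, 2749                  # same toy primes as the Sage example
n = p * q
e = 17
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1), as in the setup
d = pow(e, -1, lam)                             # modular inverse (Python 3.8+)

def toy_hash(message: bytes) -> int:
    # Real signatures use a full-size hash plus padding; here we just reduce mod n.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

m = b"pay Bob 10 euros"
s = pow(toy_hash(m), d, n)        # sign: s = H(m)^d mod n

# Verification by the receiver, using only the public key (e, n):
print("signature verified:", pow(s, e, n) == toy_hash(m))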
Weather changes, but speed signs usually don't. Oregon has something called "the basic rule" that says you can be fined for going 50 mph in a 50 mph zone if conditions are bad and the "prudent" speed is only 35. But Utah is taking a road less traveled. It's joining with a few other states, like Wyoming, to install electronic variable-speed traffic signs on certain troublesome stretches of highway, and switched on a new $750,000 electronic sign system last week. The system includes 15 variable speed-limit signs, according to News 4 in Utah, plus cameras, sensors to monitor road conditions -- temperature, road icing, traffic speed and visibility -- and electronics that allow Department of Transportation staff to adjust speed limit signs to match conditions. The highway patrol is notified and the new lower posted speed becomes enforceable. According to an article in The Salt Lake Tribune, the system is located in Parleys Canyon, about 10 minutes east of Salt Lake City, because it has bad weather, lots of traffic and is in close proximity to electricity, computer networks and road sensors.
Cloud Computing has emerged as the biggest buzzword, with industry experts debating its enterprise-readiness and dependence on underlying technologies. The present picture of cloud computing is only the tip of an iceberg, and there is a lot more to come in terms of innovation and methodology. There is a lot of debate around cloud computing from big guns like VMware, Amazon and Google. One of the biggest debates is about the best practice for creating cloud solutions using fundamental technologies and bringing them to enterprise class.

As we all know, cloud computing appeared long before people realized that it was going to change the way we compute and use the client-server model, e.g. grid computing or any paid service such as Gmail. But there are a number of other changes in technology that make cloud computing as big as it is today, e.g. technology which addressed issues like scalability, availability and network bandwidth. In recent years, some of these issues have been resolved due to the technology sprawl in the computing industry.

One of the biggest innovations during this era is Virtualization. There are many faces to this technology, but it mainly emerged as a winner in Data Centre Consolidation and Optimization. Virtualization also helps in solving the scalability issue to an extent, due to its capability for multitenancy and its distributed nature. It also helps the data centre administrator to easily manage servers using some easy-to-use APIs. So just invoke the API, and you are done with creating servers. Virtualisation helped in realising that VMs are "FILE" concepts, which can be booted, shut down and migrated just like a file in a distributed environment. One can realize its importance from the fact that a company like VMware is investing billions of dollars in it, and the Virtualisation market share in the computing industry is increasing by big margins.

Recent cloud solutions are heavily based on core Virtualisation technologies, e.g. VM provisioning, VM migration etc. Even Amazon says that with one click you will get an instance, which is possible due to the Virtualization solution underneath. However, there is another side of the Virtualization solution, which many cloud vendors do not find of enterprise class. E.g., Google argues that Virtualization adds just an overhead, both in the amount of code/data and in performance, and that Virtualization software heavily depends on hardware reliability and always assumes that hardware does not fail. According to Google, hardware can fail, so it uses other low-level customized servers in the data center and has built reliability into software. Google generally believes that the traditional CLIENT-SERVER model still persists and works well in today's scenario. They are arguing about the kind of performance we will see in this model as compared to the virtualisation platform.

There is a big debate going on between Google and VMware regarding the future of the data center and cloud computing. Here are some snippets:

According to Google, "It will be very sad if we need to use virtualization. It is hard to claim we will never use it, but we don't really use it. In the virtualization approach of private data-centers, a company takes a server and subdivides it into many servers to increase efficiency. We do the opposite by taking a large set of low-cost commodity systems and tying them together into one large supercomputer. We strip down our servers to the bare essentials, so that we're not paying for components that we don't need.
For example, we produce servers without video graphics chips that aren't needed in this environment. Virtualisation vendors spend a lot on high-end and reliable hardware. But we think hardware can fail, and we build the reliability into software."

Full article here: http://googleenterprise.blogspot.com/2009/04/what-we-talk-about-when-we-talk-about.html

VMware, at VMWORLD 2009, sent a message to all the other vendors out there trying to suggest a different cloud computing model -- Virtualization is the only viable way to go for a cloud solution.

My personal view is that Virtualization has made a big impact on the computer industry and is a very fundamental innovation since the 80's. In many contexts it really makes sense to use virtualization for cloud computing, but not in all. Cloud computing still has to leverage the full potential of Virtualization. To achieve these goals, three things need to be in place:

- Standardisation in managing Virtual Infrastructure, e.g. lib-cim standards
- Uniform ways to represent a VM as a FILE, e.g. OVF (it still has a long way to go)
- A big leap in hardware technology where the scope of hardware failure tends to zero (this can avoid the software overhead used to create such reliability)

There are other solutions around virtualization which still have not made an impact on cloud computing, e.g. Virtual Appliances (VA), OVF etc. When we think of bringing all cloud solutions under one umbrella, these concepts will play a major role in designing future data-centres, which will have a place for both Public and Private Cloud.
Cambridge, UK, October 30, 2000 - Kaspersky Lab Int., an international anti-virus software-development company, warns users of the discovery of Sonic, a new Internet worm. The worm was discovered in France and Germany on the morning of 30 October.

A distinctive feature of this malicious program is its ability to update itself (this means, to automatically download additional functional components) via the Internet.

The worm consists of two parts: the loader and the main module. Copies of the loader are spread across the Internet by e-mail. Once this virus enters a computer, it penetrates the PC's operating system and initiates a connection to the hacker's site at "Geocities," a popular resource for free home pages. From there, Sonic tries to illegally download the main module in order to install it on the infected PC. The procedure for downloading the main module has been built in a way that lets the worm's author define its content. This procedure is performed in the following way:

- The worm connects to the hacker's site and downloads the file LASTVERSION.TXT, containing the version number of the worm's main module available on the site.
- If the infected computer has no main module installed, or the version on the site is higher, the loader downloads two files from the site: nn.ZIP (where 'nn' is the number of the current main module's version) and GATEWAY.ZIP (the latest loader version).

The main purpose of the main module is unauthorised data capture, tracking all of a user's activities and remotely controlling the infected computer (backdoor). Kaspersky Lab notes that the worm's author can easily change the main module's payload, including payloads that carry content which is even more dangerous.

After the main module has been installed, the worm secretly gains access to the Windows Address Book (WAB), extracts the e-mail addresses available there, and sends out infected messages, containing copies of the worm's loader, to all of the encountered recipients. In the known versions of the worm, the infected messages have the following details:

Subject: Choose your poison

"This is not the first time we have discovered malicious code capable of self-updating via the Internet. Before 'Sonic', the Babylonia virus had the same abilities, as well as the Resume worm and others," said Denis Zenkin, Head of Corporate Communications for Kaspersky Lab. "However, this is not something that catches our attention at the moment. The more disturbing thing is that this feature seems to have become a new standard for malicious programs, since more and more of them can self-update via the Internet. This is a very dangerous trend, as it allows hackers to extend their malware capabilities in real time with a direct connection to the infected computers."

Further details about the 'Sonic' worm are available at the Kaspersky

Protection against this worm has already been added to the daily update of AntiViral Toolkit Pro (AVP). AntiViral Toolkit Pro can be purchased at the Kaspersky Lab online store.
Definition: An algorithm to code surnames phonetically by reducing them to the first letter and up to three digits, where each digit is one of six consonant sounds. This reduces matching problems from different spellings. Generalization (I am a kind of ...) phonetic coding algorithm. See also double metaphone, Jaro-Winkler, Caverphone, NYSIIS, Levenshtein distance. Note: The algorithm was devised to code names recorded in US census records. The standard algorithm works best on European names. Variants have been devised for names from other cultures. Overview of Soundex. If you have suggestions, corrections, or comments, please get in touch with Paul Black. Entry modified 13 December 2010. HTML page formatted Mon Feb 2 13:10:40 2015. Cite this as: Paul E. Black, "soundex", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 13 December 2010. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/soundex.html
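To make the definition concrete, here is a compact sketch of the standard (US census) Soundex coding it describes. Refinements differ between implementations -- in particular, this version treats H and W like vowels rather than applying the usual H/W separator rule -- so treat it as an illustrative variant, not a canonical reference implementation.

# Sketch of Soundex: first letter plus up to three digits from six consonant groups.
CODES = {**dict.fromkeys("BFPV", "1"),
         **dict.fromkeys("CGJKQSXZ", "2"),
         **dict.fromkeys("DT", "3"),
         "L": "4",
         **dict.fromkeys("MN", "5"),
         "R": "6"}

def soundex(name: str) -> str:
    name = "".join(ch for ch in name.upper() if ch.isalpha())
    if not name:
        return ""
    first = name[0]
    digits = [CODES.get(ch, "") for ch in name]      # vowels, H, W, Y -> ""
    collapsed = []
    for dgt in digits:
        if dgt and (not collapsed or dgt != collapsed[-1]):
            collapsed.append(dgt)                    # keep a new digit
        elif not dgt:
            collapsed.append("")                     # vowels break runs of a digit
    result = [c for c in collapsed if c]
    if digits[0]:                                    # the first letter's own code
        result = result[1:]                          # is represented by the letter
    return (first + "".join(result) + "000")[:4]     # pad/truncate to 4 characters

print(soundex("Robert"), soundex("Rupert"))   # both R163 -- spellings match
print(soundex("Pfister"))                     # P236 (leading Pf collapses)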
In network security today, a firewall may be software or hardware that creates a barrier between our internal network and an untrusted external network. You can look at the firewall as a set of related programs that enforce an access control policy between two or more networks.

The name "firewall" is a strange one: it was originally used to describe the partition that separated the engine compartment from the interior of an automobile. In the networking world, the firewall is the first line of defense and the technology that allows us to segment the network into physically separate subnetworks. In this way it helps us limit the risk of compromising the entire network in case of a security attack, much like how original firewalls worked to limit the spread of a fire.

A firewall relies on two basic mechanisms:

- One mechanism blocks traffic.
- The second mechanism permits traffic.

A firewall is a set of programs located at a network gateway that protects the resources of a private network from users on other networks. These are the basic firewall services:

- Static packet filtering
- Circuit-level firewalls
- Proxy servers
- Application servers

A firewall works like a guard: it either blocks traffic or permits it, based on the Layer 4 port number. Modern firewall designs are much more complex and are developing the ability to block or permit traffic by reading the Application layer data. If you are hosting a service for use over the network, firewalls can be used to manage public access to private network resources, they can log all attempts to enter the private network, and some can trigger alarms. Firewalls filter packets based on a variety of parameters, such as their source or destination address and port number. Network traffic can also be filtered based on the protocol used (HTTP, FTP, or Telnet). The result is that the traffic is either forwarded or rejected. Firewalls can also use packet attributes or state to filter traffic.
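To illustrate the filtering decision just described, the sketch below evaluates packets against an ordered rule list by protocol, source address and destination port, with an implicit deny at the end. It is a toy model whose rule format and addresses are invented for the example; it does not mirror any particular firewall's configuration syntax.

# Toy static packet filter: first matching rule wins, default is deny.
from dataclasses import dataclass

@dataclass
class Packet:
    proto: str    # "tcp", "udp", ...
    src: str      # source address
    dst: str      # destination address
    dport: int    # destination port

RULES = [
    # (action,  proto, source prefix, destination port)
    ("permit", "tcp", "10.1.0.",     80),    # internal clients -> web
    ("permit", "tcp", "10.1.0.",     443),
    ("deny",   "tcp", "",            23),    # block Telnet from anywhere
    ("permit", "udp", "10.1.0.",     53),    # DNS lookups
]

def filter_packet(pkt: Packet) -> str:
    for action, proto, src_prefix, dport in RULES:
        if pkt.proto == proto and pkt.src.startswith(src_prefix) and pkt.dport == dport:
            return action
    return "deny"    # implicit deny, as on most real firewalls

print(filter_packet(Packet("tcp", "10.1.0.7", "203.0.113.5", 80)))    # permit
print(filter_packet(Packet("tcp", "192.168.9.9", "10.1.0.2", 23)))    # deny
print(filter_packet(Packet("udp", "10.1.0.7", "8.8.8.8", 5060)))      # deny (no rule)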
(Part II) Virtual Routing and Forwarding

This is the second part in the series of posts dedicated to network virtualization and path isolation. Ever needed one extra router? It's possible to split the router into more logical routers by using VRF. How? Here's how!

Virtual Routing and Forwarding, or VRF, allows a router to run more than one routing table simultaneously. When running more routing tables at the same time, they are completely independent. For example, you could use overlapping IP addresses inside more VRFs on the same router and they will function independently without conflict (you can see this kind of overlap in the example below). It is possible to use the same VRF instance on more routers and connect every instance separately, using a VRF-dedicated router port or only a sub-interface.

You can find VRFs in use on the ISP side. Provider Edge (PE) routers usually run one VRF per customer VPN, so that one router can act as a PE router for multiple Customer Edge (CE) routers, even with more customers exchanging the same subnets across the VPN. By running a VRF per customer, those subnets will never mix in-between them.

VRFs are used to create multiple virtual routers from one physical router. Every VRF creates its own routing table and CEF table, basically a separate RIB and FIB. A VRF is simply created by entering this command into a Cisco router supporting VRFs:

ip vrf MYTESTVRF

When created, the VRF needs a route distinguisher in order to become functional. Route distinguishers are described a bit later. The route distinguisher (RD) for this VRF MYTESTVRF is configured with:

rd 111:1

When created and configured with an RD, the VRF needs some interfaces, which will then be dedicated to this VRF and can bring some traffic into this VRF. A router interface (or most probably a sub-interface) will be assigned to a VRF like this:

int gi1/0/1
 ip vrf forwarding MYTESTVRF

On an L3 switch, which is also a clever router, when we want a VLAN to become part of the VRF, we need to add the VLAN interface to the VRF and all members of the VLAN will then be part of that special VRF:

int VLAN 20
 ip vrf forwarding MYTESTVRF

You need to take into account that the addition of an interface to a VRF will remove all existing IP addresses configured on the interface. It is done this way because it helps avoid address duplication in the new routing table if some incautious engineer enters an interface with an IP address into a VRF that already has an interface with this same IP. When configured, traffic received on an interface which is a member of a VRF is routed and forwarded with that VRF's table.

When thinking of VRFs, the best analogy is VLAN trunking between two switches. A packet with a VLAN tag entering the trunk interconnection between two switches can only enter the same VLAN when arriving on the other switch side. With VRFs it is the same, but done at L3 rather than at L2 as with VLANs, and there are no trunk ports but L3 sub-interfaces (or physical interfaces). Packets that enter a specific VRF will be forwarded with routes from that VRF's routing table. The analogy goes even further. Like VLANs that span across multiple switches through trunk ports, VRFs can be extended across multiple devices as well, through sub-interfaces of a two-router interconnection or with separate interconnections. The connections are L3 sub-interfaces, usually Ethernet VLAN interfaces with dot1q encapsulation -- the most common Layer 2 virtualisation technique used these days.
Configuration for both examples

First Example (two interconnections)

R1:
ip vrf MYTESTVRF
 rd 111:1
interface Gi 1/0/1
 description Global Routing Table Interconnect
 ip address 10.10.10.1 255.255.255.252
interface Gi 1/0/2
 description VRF MYTESTVRF Interconnect
 ip vrf forwarding MYTESTVRF
 ip address 10.10.10.1 255.255.255.252

R2:
ip vrf MYTESTVRF
 rd 111:1
interface Gi 1/0/1
 description Global Routing Table Interconnect
 ip address 10.10.10.2 255.255.255.252
interface Gi 1/0/2
 description VRF MYTESTVRF Interconnect
 ip vrf forwarding MYTESTVRF
 ip address 10.10.10.2 255.255.255.252

Second Example (dot1q tagged subinterfaces)

R1:
ip vrf MYTESTVRF
 rd 111:1
interface Gi 1/0/1.10
 description Global Routing Table Interconnect
 encapsulation dot1q 10
 ip address 10.10.10.1 255.255.255.252
interface Gi 1/0/1.20
 description VRF MYTESTVRF Interconnect
 encapsulation dot1q 20
 ip vrf forwarding MYTESTVRF
 ip address 10.10.10.1 255.255.255.252

R2:
ip vrf MYTESTVRF
 rd 111:1
interface Gi 1/0/1.10
 description Global Routing Table Interconnect
 encapsulation dot1q 10
 ip address 10.10.10.2 255.255.255.252
interface Gi 1/0/1.20
 description VRF MYTESTVRF Interconnect
 encapsulation dot1q 20
 ip vrf forwarding MYTESTVRF
 ip address 10.10.10.2 255.255.255.252

ICMP Test Example

Pinging from Gi 1/0/1 to Gi 1/0/1 on the other side within the Global Routing Table is a straightforward ping:

ping 10.10.10.2

If you want to ping the same (but other) IP address, the one that is inside VRF MYTESTVRF, you need to initiate the ping within that VRF on R1:

ping vrf MYTESTVRF 10.10.10.2
The example above shows both solutions, although the sub-interface example is the one used in the real world most of the time. We extend VRF MYTESTVRF to the other router (R2) by configuring the interfaces of the interconnection with the VRF mapping configuration (ip vrf forwarding inside the interface configuration). In this way each interconnection will forward the traffic for the mapped VRF.

The Global Routing Table is basically VRF 0, the first RIB and FIB, with no need for mapping, as they exist by default and all L3 interfaces on the router are by default part of the Global Routing Table. When expanding VRF MYTESTVRF we use one interconnection, but we need to use another interconnection for the Global Routing Table. We can look at the Global Routing Table as the first (native) VRF on a router with more VRFs configured. It is also known as the Global VRF, existing on all routers, with all interfaces assigned to it by default.

The method of expanding several VRFs across multiple devices by using separate sub-interfaces or separate interconnection links is known as VRF Lite. This is basically the most lightweight way of running VPNs. Being the simplest way of creating non-overlapping VPNs in a network, it has some downsides too. This way of doing VRF expansion has poor scalability: you need a dedicated link between two routers for every VPN (or a dedicated sub-interface of one link). If you need many VRFs, you will need many provisioned connections between routers.

Remember from above, this is the basic VRF config:

ip vrf MYTESTVRF
 rd 111:1

111 and 1 are 32-bit integers. The Route Distinguisher is used to label every route from a VRF routing table with a 64-bit prefix. It is done so that the router can distinguish which prefixes are members of which VRF (different routing tables), avoiding prefixes from different VRFs getting mixed up. The format for the RD should be ASN:NN, with ASN meaning the autonomous system number and NN the VRF number inside the router. The other way to configure it is IP-Address:NN, the IP being the router IP address and NN the VRF number.

The examples above, configured in a GNS3 lab, are available here for download so that you can run them and try them out by yourself. Everything should work fine with the latest GNS3 installed.

Read the whole series about Path Isolation techniques:
Under the narrower definition, a cyber threat is defined as an effort to “gain unauthorized access to a system or network.” This change is designed to assuage opponents who criticized the previous version for being so broad it could be used against websites for posting copyright infringing material, which would make it similar to the controversial Stop Online Piracy Act (SOPA) legislation that was eventually dropped in response to protests. The purpose of the legislation, according to proponents, is to enable participating businesses to share cyber threat information with others in the private sector and enable the private sector to share information with the government on a voluntary basis in order to combat cyber espionage and intellectual property theft. Critics, however, argue that the bill would expand the government’s ability to monitor and censor the internet. Privacy groups, including the Center for Democracy and Technology (CDT), the Electronic Frontier Foundation, the American Civil Liberties Union, and Free Press have launched an online campaign against provisions of the legislation. "CDT's main concerns with CISPA are that it has an almost unlimited description of the information that can be shared with the government; it allows for a large flow of private communications directly to the NSA [National Security Agency], an agency with little accountability; and it lacks meaningful use restrictions – it should be made clear that information shared for cybersecurity should be used for cybersecurity purposes, not unrelated national security purposes or criminal investigations", said CDT senior counsel Greg Nojeim. Facebook, an opponent of SOPA, has taken heat for supporting CISPA. Joel Kaplan, vice president for US public policy at Facebook, defended his company’s support in a blog post. “A number of bills being considered by Congress, including the Cyber Intelligence Sharing and Protection Act (HR 3523), would make it easier for Facebook and other companies to receive critical threat data from the U.S. government. Importantly, HR 3523 would impose no new obligations on us to share data with anyone – and ensures that if we do share data about specific cyber threats, we are able to continue to safeguard our users’ private information, just as we do today”, he wrote.
Intel researchers have built a 48-core microprocessor that the chip giant is pitching as a "single-chip cloud computer," Intel's chief technology officer said Tuesday in San Francisco. "With a chip like this, you could imagine a cloud data center of the future which will be an order of magnitude more energy efficient than what exists today," said Justin Rattner, Intel's CTO and head of Intel Labs. The prototype single-chip cloud computer, which Santa Clara, Calif.-based Intel has dubbed an SCC, is the second generation of Polaris, a many-core computer chip that Intel introduced at the International Solid State Circuit Conference (ISSCC) two years ago. The experimental 48-core chip shares some attributes of Intel's future-generation GPU microarchitecture, code named Larrabee, Rattner said. As with Larrabee and unlike the first Polaris chip, the cores that make up Intel's new SCC are compatible with the x86 instruction set, or as Intel prefers it to be known, the Intel Architecture (IA). The experimental 48-core computer chip "rethinks many of the approaches used in today's designs for laptops, PCs and servers," according Intel. One key such "rethinking" is the utilization of software to manage page-level memory coherency, rather than baking that functionality into the silicon as with previous architectures, Rattner said. Removing such hardware functionality is a silicon space-saver, allowing room for a new, high-speed on-chip information sharing network built onto the processor die. The SCC team also developed new power management techniques that Rattner said allow all 48 cores to operate while drawing as little as 25 watts in power. At peak performance, the prototype chip draws 125 watts, putting it within the power band of Intel's Core 2 and Nehalem-based processors currently on the market. Intel will be sharing about 100 of the experimental chips with industry and academic partners in 2010, Rattner said. Research teams at Microsoft, ETH Zurich, University of California at Berkeley and the University of Illinois already have such chips to play with, he added. "This is not a product. It never will be a product. But it provides a very good platform for conducting research," Rattner said. Intel sees a strong play for future many-core chips in cloud computing installations, where energy efficiency and the ability to build extremely dense computing are at a premium. The "many-core era" will also mark a shift to computing that is more "immersive, social and perceptive," Rattner said. "Computers will see and hear and they will probably speak, and do a number of other things that resemble what humans do," he said. The experimental SCC was produced by 40 Intel Labs researchers in the U.S., Europe and India, Rattner said. The 1.3-billion transistor chip features Intel's current-generation 45-nanometer, high-k metal gate process technology. Bringing more cores, better power management and x86 compatibility to the first Polaris design was a largely glitch-free exercise, Rattner added. "There was only one significant bug" during the design process, he said. The chip's 48 x86-compatible cores are "the most ever built on a single chip," according to Intel. Those cores are laid out on the processor in a two-dimensional grid which further maps 24 tiles that have two cores apiece. Rattner described the power management capabilities as "fine-grain," though not so fine-grain as to allow for power to be throttled up or down at the core level. 
Instead, it's possible to run each two-core tile at a different frequency, while the chip's regions -- six banks of four tiles -- can each be run at different voltages, Rattner said. The second-generation 2D mesh network on the SCC features 24 routers, one per tile, and consumes just a third of the power of the previous Polaris network, Rattner said. Each core has its own dedicated L2 cache, with 256 Gbps bisection bandwidth and 64 Gbps duplex link bandwidth. The chip has four integrated, 64GB-addressable DDR3 memory controllers.
As today's cyber attacks continue to increase in frequency and complexity, organizations must respond with new tactics to protect against them. Not all techniques are created equal, however, and today's cyber security experts have to understand what tools can and can't help in their security efforts. This paper explains how sandboxing works, the failings of most sandbox-based approaches, and what organizations should look for in VM-based analysis of cyber threats to improve their security approach. From the paper: As shocking as the report may have seemed to the public, it only confirmed what Australia's security experts have long known. Cyber attacks are growing more frequent. They are growing more effective. And they are growing more serious... Many of these incidents involve advanced attacks. Sponsored by foreign governments and well-organized cybercriminals, these attacks are easily slipping past standard security tools. Anti-virus (AV) software, traditional and next-generation firewalls, intrusion-prevention systems (IPS), and other tools are useless against them. Download the White Paper
According to arstechnica.com, the US government spends $11 billion and dedicates 35,000 people each year to a program "dedicated to encryption," which includes cracking encryption. Most people would view this as a bad thing. There are issues like the right to privacy and whatnot. But the way I see it, this is actually good news for users of laptop encryption software like AlertBoot.

Eleven billion dollars is not small potatoes. Let's assume that 0.1% of it is dedicated toward cracking encryption. That means $11 million is being spent each year on cracking encryption alone (personally, I find that number too low; it's probably much, much higher). What this means to the average computer user is that, if they are looking to protect their data, tools like disk encryption (say, the AES-256 encryption from AlertBoot) are more than enough to secure it. If you're trying to hide something from the government, maybe you won't be successful, as in this case. But if you're working in a hospital and are looking to ensure that patient files remain confidential, or you're a lawyer and you want to ensure that your client data remains under wraps, encryption is an easy, effective, and cheap way to do it.

This is why most US states with data breach laws on their books will offer safe harbor if encryption is used to protect sensitive data. The reasoning extends to US federal laws as well as EU legislation, and basically to any country in the world that has data security and data privacy laws.

Bad news always follows good news, however. According to the same arstechnica.com report, some cryptographers are growing increasingly concerned that breakthroughs in discrete mathematics could soon spawn a so-called cryptopocalypse that could undermine the security of core encryption algorithms... since there's no mathematical proof that the theory isn't possible, there's no way to dismiss the possibility.

If this scenario does play out... well, it would be the end of the world as we know it. For one thing, encryption is what allows banking to occur. Not just online banking, but the flow of money from one bank to another, from a commercial bank to the federal reserve, international transfers, etc. Furthermore, the use of credit and debit cards requires encryption at some level. In fact, encryption tends to affect our lives in some of the most unexpected ways. In a sense, it's kind of like those "Made in China" tags: look close enough and there it is.

Of course, most experts doubt that a cryptopocalypse is imminent. As a non-expert, here are my two cents on the issue: if "there's no mathematical proof that the theory isn't possible" is the main reason for pushing the argument, you're on the short side of the stick; it amounts to trying to prove a negative, which is not impossible, but decidedly hard to do. Negative proofs that are not logical fallacies tend to be small in number, as I understand it.
Inkblots and gestures: Getting creative about security

Most of us carry cell phones and other mobile devices that can, and often do, contain more data than many desktop computers. What's more, mobile devices are a lot easier to steal or misplace. As a result, improving security for mobile devices is becoming a priority.

Researchers at the Georgia Institute of Technology – with funding from the National Science Foundation – are developing a security system that would make user identification an unintrusive, passive operation. The system – dubbed LatentGesture – is a software application that identifies users through the way they swipe and tap mobile devices. If the system detects that the user's gestures don't match the owner's, the device is locked. According to Polo Chau, leader of the study and an assistant professor at the university's College of Computing, the software taps into sensors in the touchscreen to measure the speed of a user's swipes, as well as the pressure and location of taps, to produce a "touch signature."

"Everyone has small differences in the way they use touchscreens," Chau said. "The speed of swipes, how hard a person taps a checkbox – it's all different."

Testing in a laboratory setting showed that the system could accurately identify device owners 98 percent of the time. And the system can also be used to store multiple profiles, potentially giving different permissions to different authorized users. Since people rarely use their mobile devices in a laboratory setting, however, there is still some work to do. Would a user's on-screen behavior be different enough to cause misidentification if he is walking with the device instead of sitting at a table? While Chau says the team hasn't tested how people's behavior under real-world conditions will affect the system, they do plan to do so. "We also plan to integrate other sensor data into the system," said Chau, "such as ambient light and accelerometer data."

While Chau and his team refine LatentGesture, researchers at Carnegie Mellon University are developing a new password utility called GOTCHA (Generating panOptic Turing Tests to Tell Computers and Humans Apart). OK, so the name isn't exactly self-explanatory. But, like LatentGesture, GOTCHA uses subliminal cues from users to identify them. In the case of GOTCHA, the data is users' responses to inkblots. With the GOTCHA system, a user creates a password and the computer then generates several random inkblots. The user is prompted to describe each inkblot with a short phrase. When the user next logs on, he or she is required to match a given inkblot with the appropriate phrase.

While the system has proven reliable in preventing brute-force password cracking by computers, it still needs a little work. Apparently, humans aren't sufficiently reliable at remembering the correct matches between inkblots and phrases. In a test 10 days after users created phrases for inkblots, only one-third of the 58 participants correctly matched all the inkblots. Two-thirds of the participants, however, made more than half of the matches correctly.

The Carnegie Mellon team has challenged security researchers to apply artificial intelligence techniques to try to attack the GOTCHA password scheme. Those wanting to take a crack at it will find the challenge at: http://www.cs.cmu.edu/~jblocki/GOTCHA-Challenge.html.

Posted by Patrick Marshall on May 27, 2014 at 10:53 AM
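The "touch signature" idea, building a per-user profile from swipe speed and tap pressure and flagging sessions that drift too far from it, can be sketched in a few lines. The Python snippet below is a hypothetical illustration rather than LatentGesture's actual algorithm (which the article does not detail): it averages a few gesture features into a profile and compares new sessions against it with a simple distance threshold; all numbers are made up.

import math

# Each gesture sample: (swipe_speed, tap_pressure, tap_x, tap_y); units are arbitrary here.
def build_profile(samples):
    """Average each feature over the enrollment samples."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def matches_owner(profile, session_samples, threshold=0.25):
    """Return True if the session's average features sit close to the stored profile."""
    session = build_profile(session_samples)
    # Normalized Euclidean distance between the stored and observed feature averages.
    dist = math.sqrt(sum(((p - s) / (abs(p) + 1e-9)) ** 2
                         for p, s in zip(profile, session)))
    return dist < threshold

# Enrollment: the owner's typical gestures (fabricated values).
owner = build_profile([(900, 0.42, 120, 640), (870, 0.45, 130, 655), (910, 0.40, 118, 648)])
print(matches_owner(owner, [(880, 0.43, 125, 650)]))   # True: looks like the owner
print(matches_owner(owner, [(300, 0.90, 400, 200)]))   # False: would trigger a lock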
While you may be familiar with multiple replication products and vendors, don't confuse the technology of data or server replication with disaster recovery. Replication is not a disaster recovery solution, nor does it provide business continuity.

So what exactly is replication? According to TechTarget, replication is the process of copying data from one location to another over a SAN, LAN or WAN. This provides you with multiple up-to-date copies of your data. Look at replication as one aspect of DR/BC: it is a key technology for implementing a complete DR/BC plan, but it needs to be combined with data deduplication, virtual servers or even the cloud.

But let's take a step back to really understand business continuity. According to ESG Sr. Analyst Jason Buffington, "business continuity is ensuring that your IT and business processes continue, involving availability technologies as well as mitigation methods, etc." Ultimately, your entire IT infrastructure needs to be up and running in order to ensure that your employees can continue working during any disaster or IT outage. While you need to protect your data, just as importantly, you need to protect and keep your applications up. Having survivable data does not equate to disaster recovery. "Business continuity and disaster recovery is more about people and process than it is about the data," according to Buffington. By combining appropriate planning, IT orchestration and instrumentation with a surviving copy of your data, you then have a real BC/DR plan.

As Buffington points out, it's typical for a vendor to be able to back up and replicate virtual machines to another data center. Many of these vendors are even willing to turn the servers back on in case something goes wrong. However, unless all of the servers are powered back up, and in the right order, it is still considered downtime. Not only is the order in which servers are spun up important; making sure all the interconnected elements of your network (e.g., Active Directory) are also up and working is vital. What you need to look for is not just the ability to bring up individual or multiple servers, but rather the ability to virtualize your entire LAN.

The other important consideration, one that may be forgotten amid the focus on technology, is that a proper business continuity program requires people and processes as well. All these elements combined form the basis for your company resiliency plan. So ask yourself: how resilient is your company? Do you have the right technology, the proper documentation and executive sponsorship? You can easily find out with our guide to evaluating a company's resiliency in the white paper, "The Disaster Recovery Maturity Framework," or by watching, "The Truth About Your Disaster Recovery Maturity Level."
A numeric range search is a search for any numbers that fall within a range. To add a numeric range component to a search request, enter the upper and lower bounds of the search separated by ~~ like this:

apple w/5 12~~17

This request would find any document containing apple within 5 words of a number between 12 and 17.

1. A numeric range search includes the upper and lower bounds (so 12 and 17 would be retrieved in the above example).
2. Numeric range searches only work with integers greater than or equal to zero, and less than 2,147,483,648.
3. For purposes of numeric range searching, decimal points and commas are treated as spaces and minus signs are ignored. For example, -123,456.78 would be interpreted as: 123 456 78 (three numbers). Using alphabet customization, the interpretation of punctuation characters can be changed. For example, if you change the comma and period from space to ignore, then 123,456.78 would be interpreted as 12345678.
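The tokenization rules above (punctuation treated as spaces, minus signs ignored, inclusive integer bounds) are easy to mimic outside the search engine. The short Python sketch below is an illustration of those default rules, not dtSearch code: it splits a string into numbers the way the notes describe and then applies an inclusive range test.

import re

def extract_numbers(text):
    """Mimic the defaults: minus signs ignored, '.' and ',' act as spaces."""
    cleaned = text.replace("-", "")            # minus signs are ignored
    cleaned = re.sub(r"[.,]", " ", cleaned)    # decimal points and commas become spaces
    return [int(tok) for tok in cleaned.split() if tok.isdigit()]

def in_range(numbers, low, high):
    """Inclusive range check, mirroring how 12~~17 retrieves both 12 and 17."""
    return [n for n in numbers if low <= n <= high]

print(extract_numbers("-123,456.78"))                                     # [123, 456, 78]
print(in_range(extract_numbers("apple costs 12 or 17 dollars"), 12, 17))  # [12, 17]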
Human babies learn words through repetition, obviously, but also by associating words with shapes (in the case of objects). Curious whether dogs learned the names of objects in the same way as people, some researchers at the University of Lincoln in the United Kingdom put a five-year-old Border Collie -- reputedly the world's smartest breed of dog -- through its language-recognition paces. What they discovered was that dogs (or at least this dog) used a different technique than the "shape bias" employed by humans to learn the words associated with objects. From the online scientific research journal Plos One: Two experiments showed that when briefly familiarized with word-object mappings the dog did not generalize object names to object shape but to object size. [Another] experiment showed that when familiarized with a word-object mapping for a longer period of time the dog tended to generalize the word to objects with the same texture. These results show that the dog tested did not display human-like word comprehension, but word generalization and word reference development of a qualitatively different nature compared to humans. As to why that is, the researchers speculated that "the evolutionary history of our sensory systems – with vision taking priority over other sensory systems – seems to have primed humans to take into account visual object shape in object naming tasks." Whereas dogs (and many other animals) rely much more strongly on their senses of smell and hearing to make sense of the world. Of course, it's incumbent upon humans as partners and guardians of dogs to also understand how our canine friends communicate with us and each other. Here's one article about understanding dog "talk." Now read this:
Gov. Terry Branstad

Elected governor of Iowa in 1982, Terry Branstad is the senior governor in the nation. Realizing that education and a highly educated workforce are key to future success, he has given education a consistently growing share of Iowa's state budget. He engineered America's first statewide fiber-optic telecommunications network, bringing distance learning to Iowa schools. Recently, his $150 million School Improvement and Technology Program was adopted to help prepare Iowa schools for the 21st century. He was chair of the National Governors' Association in 1989, during the historic Education Summit with President Bush. In 1997, he chaired the Republican Governors' Association, the Education Commission of the States and the Governors' Ethanol Coalition.

Gov. John Engler

In 1990, when John Engler was elected Michigan's 46th governor, taxes and unemployment were high, and the state faced a deficit of $1.8 billion in a general fund of $8 billion. Today, taxes are down, employment is up and the state's budget is balanced. In education, he has fought for higher standards, better assessments, local control, interdistrict school choice and the nation's landmark charter-schools law. Michigan is the only state to increase education funding, balance its budget five years in a row and cut taxes 24 times. Engler led the 1994 campaign to win citizen approval for Proposal A, which led to cutting property taxes by $3 billion.

Gov. Tom Ridge

Understanding that economic opportunity for children depends on an education, Pennsylvania Gov. Tom Ridge makes education reform a top priority. Last spring, the Legislature accepted his challenge to create charter public schools. In 1996, Gov. Ridge won the first comprehensive reform of public-school sabbaticals in decades and a new tenure-reform measure. His Advisory Commission on Academic Standards created a new, rigorous set of academic standards to ensure that a Pennsylvania diploma means its bearer can compete in the 21st-century economy. Now, Pennsylvania's 21st-century teachers must do better in school and on teacher tests, and take more intensive instruction in the subject they want to teach.

Gov. Pete Wilson

Governor of California since 1991, Pete Wilson was the first governor in the nation to sign a "Three Strikes and You're Out" bill into law, is the national leader in the drive to stop illegal immigration and led the effort to end racial quotas and special preferences. He signed legislation reducing class sizes in kindergarten through third grade, increased computer resources for California students and continues to build on his previous reform to put the basics back into the school curriculum. He signed a four-year funding compact with higher education to ensure California's two university systems continue to meet the challenge of providing a growing population with a high-quality education.
Groundbreaking scientific research is becoming more reliant on computationally intensive HPC resources, and mid-level research organizations without the resources to build an extensive HPC cluster are looking for cost-effective ways to contribute to these initiatives. In an effort to evaluate creative methods of participating in those large scientific projects, research out of Brigham Young University done by Spencer Taylor examined the open source software HTCondor, which makes use of computing power from idle computers to perform jobs on a local network. In this case, it was specifically applied to a water resource model called Gridded Surface Subsurface Hydrologic Analyst (GSSHA), a model that requires computationally intensive stochastic functions not uncommon to many scientific disciplines. The resulting tests showed that HTCondor can be a workable alternative to acquiring additional HPC resources for mid-level research institutions. “We found that performing stochastic simulations with GSSHA using HTCondor system significantly reduces overall computational time for simulations involving multiple model runs and improves modeling efficiency,” Taylor argued. The idea behind employing HTCondor, using idle computing resources to help process large amounts of data and perform intensive computations, has notably been used by researchers at Berkeley in the SETI@home project, where home computers are volunteered when idle to form a grid that analyzes extra-terrestrial radio signals. HTCondor hopes to accomplish something similar such that mid-sized research institutions can integrate their computing base with existing HPC resources both on-site and in the cloud. As noted in the research, “the goal of this project is to demonstrate an alternative model of HPC for water resource stakeholders who would benefit from an autonomous pool of free and accessible computing resources.” The architecture diagram below shows how the HTCondor software accesses and implements the various resources, including on-site ‘worker computers,’ local HPC implementations, and the existing HTCondor network, built similarly to the SETI@home network via volunteer computers across the country. The specific instance set up by the BYU research utilized a model that ran six precipitation events, using hydrometeorological data over a two-week period. The simulation required 14 minutes on a single desktop computer, and the test was set up to run 150 of those simulations, which would take 35 hours on average on a single processor. “Because of the nature of HTCondor,” as Taylor explained in the research, “each stochastic simulation ran on a different number of processors ranging from about 80 to 140. As expected, with about 100 times the computational power of normal circumstances I was able to essentially reduce the runtime by factor of 100.” In essence, by running these formerly idle processors in parallel, the BYU implementation was able to achieve performances consistent with other localized HPC instances. As seen in the figure above and noted in the research, “it is also possible to include commercial cloud resources as part of an HTCondor pool.” The software makes it possible to optimize what jobs are sent to the cloud based on the price points. 
“For example,” the research noted, “if you were using Amazon’s Elastic Compute Cloud (EC2) you could set the ‘ec2_spot_price’ variable to ‘0.011’ so that HTCondor would send jobs to the cloud only if the cost per CPU hour was $0.011 or less.” Many research institutions utilize cloud services for excess data storage and computation at peak times, so being able to incorporate those into the HTCondor system is an important consideration. Stochastic simulations, ones where the results are dependent on several randomized probabilistic variables, are commonplace across various scientific disciplines. As such, Taylor is hopeful this application can be utilized across those disciplines. “Using the scripts developed in this project as a pattern, HTCondor could be used for many other applications besides GSSHA jobs.”
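The speedup arithmetic in the example is worth making explicit. The small Python sketch below simply reproduces the article's numbers (150 independent runs of a 14-minute model spread across a pool of roughly 80 to 140 slots); it illustrates the scaling argument and is not HTCondor submission code.

import math

RUNS = 150
MINUTES_PER_RUN = 14

serial_hours = RUNS * MINUTES_PER_RUN / 60
print(f"Serial: {serial_hours:.0f} hours on one processor")   # 35 hours

for pool_size in (80, 100, 140):
    # Runs are independent, so wall-clock time shrinks roughly with pool size;
    # ceil() accounts for the last, partially filled batch of jobs.
    wall_minutes = MINUTES_PER_RUN * math.ceil(RUNS / pool_size)
    print(f"{pool_size} slots: about {wall_minutes} minutes wall-clock")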
Stateless autoconfiguration or SLAAC

SLAAC is a method in which the host or router interface is assigned a 64-bit prefix, and the last 64 bits of the address are then derived by the host or router with the help of the EUI-64 process described below. SLAAC relies on the Neighbor Discovery Protocol (NDP) to work. Because the EUI-64 format shows up so frequently, it is worth covering in detail.

EUI-64 Address Format

EUI-64 is the most important piece of IPv6 automatic address configuration, because an IPv6 host needs to be sure that its autoconfigured addresses are unique on a global level. There are two parts to the process.

The first part is setting the prefix for the local network. All hosts on a network need to share the same network part of the address; if they don't, they will not be able to communicate. Defining the network prefix is the network administrator's job; once the prefix is defined, the second part takes care of the rest.

That second part is the autoconfiguration of the interface ID, which raises another question: what format should the host use for these addresses to ensure that two or more hosts never accidentally autoconfigure the same address? EUI-64 is that format. With EUI-64, the interface ID is generated locally by the host in a way that makes it globally unique. The host needs an already known, globally unique piece of information. That information must not exceed 64 bits, because by definition EUI-64 produces a 64-bit interface ID to pair with a 64-bit prefix, yet it must come from a known source and be long enough to be globally unique.

Ethernet hosts and intermediary devices with Ethernet interfaces use their 48-bit MAC addresses as the source for EUI-64 addressing. Because the MAC address is only 48 bits long and the EUI-64 process must fill the last 64 bits of an IPv6 address, the host has to derive the remaining 16 bits from another source. Per the IEEE EUI-64 standard, the hex value FFFE is inserted into the middle of the MAC address. EUI-64 then sets the 7th bit of the interface ID, the universal/local (U/L) bit, which indicates whether the underlying identifier is globally unique; setting it marks the address as having global scope.

EUI-64 at work

The diagram below, adapted from RFC 2373 (page 19), where EUI-64 is defined, shows the process at work. The 7th bit written as "1" is the universal/local bit. The run of 11111111 11111110 in the middle of the address is the hex value FFFE squeezed between the two halves of the interface's physical MAC address. The c's are the first part of the MAC address, the Organizationally Unique Identifier (OUI), and the m's are the second part, assigned to the Network Interface Controller (NIC) by its manufacturer. This is where the process gets its name: a 48-bit MAC address is stretched to a 64-bit Extended Unique Identifier by inserting FFFE after the OUI.

|0              1|1              3|3              4|4              6|
|0              5|6              1|2              7|8              3|
+----------------+----------------+----------------+----------------+
|cccccc1gcccccccc|cccccccc11111111|11111110mmmmmmmm|mmmmmmmmmmmmmmmm|
+----------------+----------------+----------------+----------------+
Let's walk through a real example. Suppose the prefix, usually learned from the Router Advertisements (RA) that are part of NDP, is 2001:1234:AD:5555::/64 and the MAC address is 00-1C-C4-CF-4E-D0. The resulting EUI-64 address is 2001:1234:AD:5555:21C:C4FF:FECF:4ED0, in which the portion after the prefix (21C:C4FF:FECF:4ED0) is the complete interface ID. Configuring this address on a router's Fast Ethernet interface is shown in the example below.

Example: Configuring an EUI-64 IPv6 Address

R3(config)# int fa0/1
R3(config-if)# ipv6 address 2001:1234:AD:5555::/64 eui-64

The show command is the easiest way to view the result. The next example shows output from the show ipv6 interface brief command, which lists both the global unicast address and the link-local address assigned to an interface. You can see an aggregatable global unicast address in EUI-64 form along with the link-local unicast address the router created automatically.

Example: Checking an IPv6 Interface's Configured Addresses

R3# show ipv6 interface brief
FastEthernet0/0        [up/up]
    FE80::21C:C4FF:FECF:4ED0
    2001:1234:AD:5555:21C:C4FF:FECF:4ED0

The last 64 bits of the unicast address (21C:C4FF:FECF:4ED0) are the EUI-64-derived portion. To see the full output, omit the brief keyword and specify the interface; in that fuller output, the router indicates that the address was derived by EUI-64 with "[EUI]" at the end of the global unicast address.
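The derivation is mechanical enough to script. The Python sketch below is an illustrative implementation of the modified EUI-64 steps described above (split the MAC, insert FFFE, flip the 7th bit); the prefix handling is deliberately simplified and assumes a /64 prefix written without the trailing "::".

def eui64_interface_id(mac: str) -> str:
    """Build a modified EUI-64 interface ID from a 48-bit MAC address."""
    octets = [int(x, 16) for x in mac.replace(":", "-").split("-")]
    octets[0] ^= 0x02                                   # flip the universal/local (U/L) bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]        # insert FFFE between OUI and NIC halves
    groups = [f"{eui[i] << 8 | eui[i + 1]:x}" for i in range(0, 8, 2)]
    return ":".join(groups)

def slaac_address(prefix: str, mac: str) -> str:
    """Join a /64 prefix (e.g. '2001:1234:AD:5555') with the derived interface ID."""
    return f"{prefix}:{eui64_interface_id(mac)}"

print(slaac_address("2001:1234:AD:5555", "00-1C-C4-CF-4E-D0"))
# -> 2001:1234:AD:5555:21c:c4ff:fecf:4ed0, matching the router output above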
Data loss is a serious issue that most businesses and individuals will encounter at some point in their digital lives. It can cripple businesses, ruin careers and generally cause a great deal of unnecessary stress. This article outlines a general strategy for dealing with such scenarios.

Understanding Data Loss

Data loss can generally result from any of the following: hardware and system failure, human error, software corruption, power disruptions, computer viruses, natural disasters and intentional, malicious damage. Almost all of these situations can be avoided, or at least the risk of them occurring reduced. The good news is that in most cases some, if not all, of the data is recoverable. Think of it this way: when data loss occurs, the information merely appears to be "lost" to the computer. It takes guidance from either data recovery software or a data recovery specialist to help "find" the data again.

Diagnosing Data Loss

It is essential to write down the steps that led to the data loss event. This is the simplest and most crucial action anyone can take when experiencing data loss. It is often the immediate actions taken after the data loss event that determine whether the information can be recovered. So no matter how embarrassing the circumstances are, note everything down! But how will you know when data loss is about to strike? Common symptoms are a sluggishly running system, constant freezing or hanging, unusual noises (clicking or grinding sounds) coming from the hard drive or system, and unusual error messages relating to the software and/or drives being used.

Recovering Lost Data

Essentially there are two methods of recovering data: data recovery software or a data recovery specialist. In general it is best to first consult a data recovery specialist, though this depends on the value of the data that has been threatened. Data recovery software analyzes, repairs and recovers corrupted and/or lost data, re-linking the information so the "lost" data can be found again. Free evaluation versions are widely available on the web and are often a good way to test the capability of a product; if you are satisfied with the job it performs, you can purchase it and recover your data. A professional data recovery specialist will analyze and repair your system in a Class 100 clean room equipped with all the essential technologies to ensure a stringent and controlled environment, giving the greatest chance of full data recovery.

Preventing Data Loss

Computer viruses are often the immediate cause of a data loss event. It is essential to have a trusted virus scanner and firewall operational on your system. A common misconception is that if one does not visit inappropriate sites, one will not come across any problems. This is false; if your computer has a connection to the Internet, there is a threat.

Back up your work! This is the simplest, tried-and-tested measure to ensure you never lose your data. Generally it is best to have a dedicated backup hard drive on which an exact copy of your system and data can be stored. Further to this, it is also recommended that highly valued data be backed up to CD or DVD. The reason is that hard drives can fail, and it is rare to have problems with a physical medium such as CD or DVD. Finally, it is important to have some form of off-site backup in the event of a natural disaster.
An easy way is to simply keep another backup hard drive or DVDs stored at home. Data loss is a frightening concept and a real threat to businesses and individuals who rely on their computer systems for success. The risks, however, can be managed by developing a detailed strategy to deal with each possibility and mishap along the way.
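For the "back up your work" advice, even a tiny script beats no backup at all. The Python sketch below is a minimal illustration rather than a substitute for real backup software; the source and destination paths are placeholders you would change, and it simply copies one folder to a dated directory on a second drive.

import shutil
from datetime import date
from pathlib import Path

def simple_backup(source: str, backup_root: str) -> Path:
    """Copy `source` into a dated folder under `backup_root`."""
    src = Path(source)
    dest = Path(backup_root) / f"{src.name}-{date.today().isoformat()}"
    shutil.copytree(src, dest)   # raises an error rather than overwrite an existing backup
    return dest

# Hypothetical paths; point these at a real folder and a second drive.
print(simple_backup("C:/Users/me/Documents", "E:/backups"))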
On March 1, 2007, a tornado ripped through Enterprise, Ala., killing eight students and severely damaging Enterprise High School. The area received a historically quick federal disaster declaration just two days later because before-and-after imagery was available thanks to Virtual Alabama, an implementation of Google Earth that contains government-owned data. In March 2009, Virtual Alabama was used to track a shooting spree in Geneva County that killed 10 people and also resulted in the perpetrator's death. Investigators within the governor's crisis command center used Virtual Alabama to follow the shootings as they occurred, including elements such as the time it took the shooter to travel from one location to another, the distance covered and the fatalities' identities. With that information, the investigators could draw comparisons as they investigated the crime. Simultaneously they shared that information with the mobile command center that deployed to the county. These are just two examples of Virtual Alabama's utility. The system improves disaster response through better data sharing and allows city, county and state agencies to collaborate in innovative ways. Before Virtual Alabama, it took the state days, if not weeks, to prepare disaster declarations -- and they weren't always the most accurate. With Virtual Alabama, the state can look at irrefutable evidence of damage and quickly determine its extent. The impetus for the application came after rains from 2005's Hurricane Katrina drenched Alabama. Having seen more than 450 tornadoes strike the state during his time in office, Gov. Bob Riley turned to state Homeland Security Director Jim Walker with two simple but important questions: How was he going to assess the damage and apply for federal aid if he didn't know what the communities looked like before the storm? And shouldn't all that imagery be stored in one place? Walker's answer to the governor's challenge was to build Virtual Alabama using locally owned imagery on a secure, permission-based Google Enterprise platform. Getting started was relatively inexpensive: The state spent less than $150,000 for the software licenses and hardware. The system contains location data for sewer, water and power lines; radio towers; police cruisers; fire hydrants; building schematics; sex offenders' addresses; approved landing zones for medical helicopters; inventories of hospitals and cached medical supplies, such as respirators; evacuation routes; shelters; land-ownership records; and assessed property values. Some of the data stitched into Virtual Alabama is sensitive, like floor plans for public buildings. For that reason, even though the data is potentially available to anyone at any level of government, access control is retained by the custodial owner of that information and protected by that agency's security protocols. As needed, first responders -- such as SWAT teams, bomb squads and firefighters -- can request access to the information. "If the custodial owner stays in full control of the data, then [he or she has] no fear of it being breached because it's inside their firewall," said Chris Johnson, Virtual Alabama program manager and vice president of geospatial technologies for the U.S. Space and Rocket Center in Huntsville, Ala. Virtual Alabama's platform provides access to the same technology that's behind Google Earth, except it's accessible only to government employees with the proper permissions. 
"We do this on our own servers behind our firewalls, and we serve it to whoever we need to serve it to, and it has no interaction ... with [Google's] globe," Johnson said. If a situation changes quickly, then access can widen or constrict depending on the circumstances. "If at 3:00 in the morning, the school administrator needs to widen that loop to include the sheriff, police chief, the bomb squad and whomever else, then she has full control through her IT staff to do that," Johnson said. Once permission is granted, the connection is established in real time and data is streamed to partners, but not necessarily stored by them. Photo: In March 2007, tornadoes blew debris through the windows of Enterprise High School in Enterprise, Ala., causing severe damage./Photo Courtesy of Mark Wolfe/FEMA Virtual Alabama has given officials unique insights on a variety of fronts, such as who's likely to evacuate during a disaster and how to help them. For example, the state found that low-income residents are less likely to leave their homes as a disaster approaches. By using socioeconomic data plotted on Virtual Alabama, the state's Department of Children's Affairs can predict who's likely to evacuate and develop strategies to remove the holdouts from harm's way. Another reason people may not evacuate during an emergency is concern for their animals' welfare. Recognizing this, the state's commissioner of agriculture, Ron Sparks, made a map of pet-friendly hotels and their costs. The idea is that citizens who have access to this data will be willing to get out of the way of a natural disaster knowing their animals also will be safe. Photo: Virtual Alabama can help first responders quickly assess damage during a disaster 1/Screenshot courtesy of Virtual Alabama The North Shelby County Fire Department uses the system for basic functions, like hydrant identification and making map books, but it's also useful in assessing tornado damage. "Sometimes you may think you're seeing the picture, but our eyesight is limited. We're blocked by trees, we're blocked by buildings, we can't see what's on the other side of stuff. The system can go and just look," North Shelby County Fire Chief Michael O'Connor said. "There is just a myriad of uses that is only limited by the individual using it," he said. "It's a wonderful process, and it's going to be part of our new incident command vehicle we're getting shortly." Photo: Virtual Alabama can help first responders quickly assess damage during a disaster 2/Screenshot courtesy of Virtual Alabama A major initiative Walker is working on is getting the state's 1,500 schools to import their data into Virtual Alabama. Currently schools can access the system, but they must capture data images they need for inclusion as part of their disaster plans. Feedback about how schools have utilized the program so far has been very positive, according to Sue Adams, director of prevention and support services for the Alabama Department of Education. A pilot with two schools was to be completed in May. Walker planned to present the pilot's results to school superintendents at their annual meeting in June, and the full implementation of Virtual Alabama in the state's schools will be completed within 18 months. 
Photo: Virtual Alabama's platform provides access to the same technology that's behind Google Earth but only government employees have access/Screenshot courtesy of Virtual Alabama Maintaining proper control while sharing information across jurisdictions and between levels has been a perennial challenge for governments. "In the past, if I shared my data with you it meant I had to give it to you," Johnson said. "And I had to trust that you weren't going to share it with anybody else or redistribute or use it in a way that I did not intend. Visualization has blown all of that away because now we're no longer data sharing." With all the data housed within Virtual Alabama -- whether it's land-ownership records owned by a revenue commissioner or sensitive data owned by an environmental agency -- they're all just connections, she explained. This means it would be difficult for someone to get a complete picture of Virtual Alabama if he or she breached the system. "If we have a breach in our system, it's not a single point of failure," Johnson said. "So you've breached into Virtual Alabama, but you're only seeing the benign layers. We have hundreds of systems that would have to be breached to have an aggregate of the whole." The security and partitioning of Virtual Alabama is robust enough that even the FBI and Secret Service agents who are operating in Alabama can use the system securely. They use all the system's assets inside their security protocols; but other agencies don't necessarily have access to it, she said. One key to the system's rapid and wide adoption is that its vector data cannot be stored, exported or removed from the globe. A user can take a screen capture of the data, but the native data cannot be extracted. "However, you can put links in there to the people that hold the native data and their contact information so you can connect to them directly and say, 'Hey, may I have this data for this specific purpose?' For the discovery of data it's been phenomenal," Johnson said. With Virtual Alabama, the state is trying to increase information sharing. "I'm giving you a feed of information that I think is useful to you. And you think is useful to you, and if I have a change in my situation, as the custodial owner of that data and I need to either send you more data or send you less data or send you no data, then that's in my control," Johnson explained. The state is working to establish standard operating procedures for the emergency sharing of data housed in Virtual Alabama. The list of people who have access to a particular piece of information changes depending on an alert's level. The IT personnel know who should have access to what data, she said. For example, after a tornado strikes, the state's Civil Air Patrol photographs the debris trail. The state has a protocol in place so the air patrol knows exactly where to load the data once it lands. "That can change so that if a tornado skips through five counties but it misses two in the middle, you know who needs to have access to that data," Johnson said.
<urn:uuid:63417e87-420b-42a7-88d7-ec2e9a4847d5>
CC-MAIN-2017-04
http://www.govtech.com/featured/Virtual-Alabama-Facilitates-Data-Sharing-Among.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00172-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964665
1,944
2.671875
3
The portable electronics revolution is riding on the miniaturization of electronics (chips and other components) but it also requires batteries to power these devices. Batteries have been falling behind while other components are innovating and optimizing to become smaller and increase performance. News out of the University of Illinois at Urbana-Champaign this week announced microbatteries. “The new microbatteries out-power even the best supercapacitors and could drive new applications in radio communications and compact electronics.” The graphic illustrates a high power battery technology from the University of Illinois. Ions flow between three-dimensional micro-electrodes in a lithium ion battery. The big development with this research is that the battery can offer a lot of power in a quick burst or a low trickle of energy. The batteries owe their high performance to their internal three-dimensional microstructure. Batteries have two key components: the anode (minus side) and cathode (plus side). Building on a novel fast-charging cathode design by materials science and engineering professor Paul Braun’s group, King and Pikul developed a matching anode and then developed a new way to integrate the two components at the microscale to make a complete battery with superior performance. With so much power, the batteries could enable sensors or radio signals that broadcast 30 times farther, or devices 30 times smaller. The batteries are rechargeable and can charge 1,000 times faster than competing technologies – imagine juicing up a credit-card-thin phone in less than a second. In addition to consumer electronics, medical devices, lasers, sensors and other applications could see leaps forward in technology with such power sources available. Read the full article to learn more about the microbatteries.
<urn:uuid:c7d6e2f4-60e2-4af4-ab1f-b77ab766b1a0>
CC-MAIN-2017-04
https://www.404techsupport.com/2013/04/microbatteries-presented-from-the-university-of-illinois/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00569-ip-10-171-10-70.ec2.internal.warc.gz
en
0.921827
365
3.625
4
Google Hits Renewable Energy Goal in Quest To Pare Pollution By Michael Liedtke. Updated December 06, 2016. Google is crossing a milestone in its quest to reduce pollution caused by its digital services that devour massive amounts of electricity. The internet company believes that beginning next year, it will have amassed enough renewable energy to meet all of its electricity needs throughout the world. That's significant, given Google's ravenous appetite for electricity to power its offices and the huge data centers that process requests on its dominant search engine, store Gmail, YouTube video clips and photos for more than a billion people. Google says its 13 data centers and offices consume about 5.7 terawatt hours of electricity annually -- nearly the same amount as San Francisco, where more than 800,000 people live and tens of thousands of others come to work and visit. The accomplishment announced Tuesday doesn't mean Google will be able to power its operations solely on wind and solar power. That's not possible because of the complicated way that power grids and regulations are set up around the U.S. and the rest of the world. Google instead believes it is now in a position to offset every megawatt hour of electricity supplied by a power plant running on fossil fuels with renewable energy that the Mountain View, California, company has purchased through a variety of contracts. About 95 percent of Google's renewable energy deals come from wind power farms, with the remainder from solar power. Nearly 20 other technology companies also have pledged to secure enough renewable energy to power their worldwide operations, said Gary Cook, senior energy campaigner for the environmental group Greenpeace. Google made its commitment four years ago and appears to be the first big company to have fulfilled the promise. Apple is getting close to matching its rival. The iPhone maker says it has secured enough renewable energy to power about 93 percent of its worldwide operations. Apple is also trying to convert more of the overseas suppliers that manufacture the iPhone and other devices to renewable energy sources, but that goal is expected take years to reach. Cook said the symbolic message sent by Google's achievement is important to environmental experts who believe electricity generated with coal and natural gas is causing damage that is contributing to extreme swings in the climate. U.S. President-elect Donald Trump dismissed the need for climate control during his campaign for office, and he has pledged to undo a number of regulations to protect the environment. "More than ever, companies must show this sort of leadership on renewable energy," Cook said Tuesday. "Now is not the time to be silent." Google still hopes to work with power utilities and regulators around the world to make it possible for all of its renewable energy to be directly piped into its offices and data centers around the clock. For now, Google sells its supply of renewable energy to other electricity grids whenever it isn't possible for its own operations to use the power. Google Inc. declined to disclose how much it has spent on its stockpile of renewable energy or the size of its annual electricity bill.
<urn:uuid:5ff5d3ca-5e3f-4624-85ae-b16a4b259418>
CC-MAIN-2017-04
http://www.cio-today.com/article/index.php?story_id=023001I1WWVB
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00477-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964554
617
2.703125
3
OpenSSL is an open source implementation of the Secure Sockets Layer/Transport Layer Security (SSL/TLS) protocols that encrypt data sent over a network connection (including the Internet), and it is the standard for protecting sensitive information such as credit card and bank account numbers and personal data in web-based transactions. The HTTPS protocol uses TLS to secure traffic between a web client (web browser) and a web server hosting a web site.

In 2014, the big news broke about the Heartbleed exploit, which uses a buffer over-read vulnerability in OpenSSL's cryptographic library to steal private keys from servers and access users' passwords. At least half a million web servers were estimated to be using the affected version of OpenSSL and thus were vulnerable to Heartbleed, including some of the most popular sites on the web. These included Amazon Web Services (AWS), GitHub, Pinterest, WordPress, Gmail, GoDaddy, Netflix, YouTube, Dropbox, Tumblr, Wikipedia, Yahoo, Instagram and many more. To the credit of the open source community, the security flaw was patched the same day the public disclosure occurred, but there were reports of exploits either prior to that time or during the hours between the disclosure and the application of the patch. Months later, in July, Business Insider was reporting that 300,000 servers were still vulnerable. The Heartbleed fiasco had IT professionals scrambling to get their web servers updated and Internet users scrambling to change their passwords on the various affected sites and services.

But Heartbleed is by no means the only exploit that has targeted SSL and/or TLS. BEAST (Browser Exploit Against SSL/TLS) made headlines back in 2011, BREACH (Browser Reconnaissance and Exfiltration via Adaptive Compression of Hypertext) was revealed by security researchers at Black Hat in 2013, and FREAK (Factoring RSA Export Keys) is another SSL/TLS exploit that was discovered in 2015. Heartbleed just got the most press.

Now, not quite two years later, yet another OpenSSL exploit is rearing its ugly head. This one is called DROWN (Decrypting RSA with Obsolete and Weakened eNcryption) and it affects web servers that rely on SSLv2 to secure HTTP communications. Version 2 of SSL was released way back in 1995 and was superseded by SSLv3 the next year. In fact, the reason a new version came out so quickly was – you guessed it – security flaws in 2.0 that led to a complete redesign in the third version. TLSv1 came out in 1999 and was based on SSLv3. The current version of TLS is 1.2 (with TLS 1.3 a working draft at the time of this writing).

With four generations of SSL/TLS between it and SSLv2, you would think that SSLv2 would be too rare in the field to worry about. You would be wrong. Experts have warned that several million web sites and email services are vulnerable to DROWN. That's because almost six million web servers and nearly a million email servers directly support SSLv2. But there's more. Even servers that don't have SSLv2 enabled can still be vulnerable if another server that does support it uses the same RSA key pair. That brings the estimated number of affected web servers up to eleven and a half million. Some of the popular web sites that are believed to have been vulnerable at the time DROWN was made public, on March 1, include Yahoo.com, weather.com, speedtest.net, Samsung.com, stumbleupon.com, apache.org, usc.edu and many more.
With so many servers affected and such commonly-visited web sites vulnerable, you might be wondering: What is the impact of the DROWN exploit and how does an attacker take advantage of it? To put it simply, an attacker can use the vulnerability to decrypt TLS-protected communications. Here’s how it works: The attacker has to have some patience, because he’ll need to collect hundreds of connections between the client and server machines that use an RSA key exchange. The attacker will make changes to the RSA ciphertext and will send multiple specially crafted handshake messages to the SSLv2 server that he’s targeting. The attacker can send probe connections and eventually obtain the key for one of the TLS connections and run computations. The attacker can perform a man-in-the-middle attack and impersonate the server to the client computer. The thing that makes DROWN important is that even if web clients don’t use SSLv2 but instead make their connections over TLS, an attacker can still intercept and decrypt those messages if SSLv2 is a supported protocol. Merely allowing SSLv2 connections, even if they are never used, makes the server vulnerable. And even if SSLv2 is not enabled, using the same private key as another server that does allow it also makes your server vulnerable. Ouch! The solution is to disable SSLv2 on all servers and devices that use your server’s private key. Of course, this doesn’t help those connecting to the affected servers because there is nothing you can do on the client end to protect against the exploit. The good news (yes, there is some good news) is that the attacker is only able to decrypt one TLS connection at a time; this exploit doesn’t enable him to obtain the server’s private key. That means the server’s certificates aren’t compromised. The even better news is that there have been no reports of the DROWN vulnerability being exploited in the wild before its public disclosure and publication of countermeasures for preventing attack. Of course, now that the information is “out there,” it will be easier for attackers to exploit it. What can you do? As noted above, the solution is to disable SSLv2, but this isn’t always as simple as it sounds. In order to be protected against DROWN, you have to be sure that you’ve disabled SSLv2 on all servers and devices that use the same private key as your server(s). How to do that depends on what software and what SSLv2 implementation is running on the server(s). If you’re running OpenSSL, upgrade to the latest version (1.0.1s or 1.0.2g). Check out this OpenSSL User’s Guide to DROWN for more information. If you’re running IIS 7.0 or above or NSS 3.13 or above, SSLv2 should be disabled by default, but you should check to ensure that it hasn’t been manually enabled. If you’re running Apache httpd 2.4.x, SSLv2 is disabled but if you’re using httpd 2.2.x, it is enabled by default so you’ll need to disable it. For a much more comprehensive and technical discussion of the DROWN vulnerability and exploit, see the paper titled DROWN: Breaking TLS using SSLv2.
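While auditing servers for SSLv2 and shared RSA keys, it also helps to confirm what a modern client actually negotiates. The Python sketch below only reports the local OpenSSL build and the TLS version negotiated with a server; it cannot probe for SSLv2 itself (current OpenSSL and Python builds no longer speak it), so treat it as a complement to checking server configurations, not a DROWN test. The host name is a placeholder.

import socket
import ssl

print("Local OpenSSL build:", ssl.OPENSSL_VERSION)

def negotiated_protocol(host: str, port: int = 443) -> str:
    """Connect with default (secure) client settings and report the negotiated TLS version."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()   # e.g. 'TLSv1.2'

print(negotiated_protocol("example.com"))   # placeholder host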
Definition: An optimization problem induced by a collection of geometric objects. See also prune and search. Note: Since the variables and constraints come from physical situations, faster algorithms can often be developed. Adapted from [AS98, page 413]. If you have suggestions, corrections, or comments, please get in touch with Paul Black. Entry modified 17 December 2004. HTML page formatted Mon Feb 2 13:10:39 2015. Cite this as: Paul E. Black, "geometric optimization problem", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/geometricopt.html
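A concrete instance may help: the closest-pair problem, minimizing distance over all pairs of points in the plane, is a classic geometric optimization problem. The brute-force Python sketch below is purely illustrative; the point of the note above is that the geometry usually admits much faster algorithms (divide and conquer solves closest pair in O(n log n)) than this O(n^2) scan.

from itertools import combinations
from math import dist

def closest_pair(points):
    """Brute-force O(n^2) scan: return the pair of points at minimum distance."""
    return min(combinations(points, 2), key=lambda pair: dist(*pair))

pts = [(0, 0), (5, 4), (1, 1), (9, 9), (1.2, 0.8)]
a, b = closest_pair(pts)
print(a, b, dist(a, b))   # (1, 1) (1.2, 0.8) 0.2828...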
Can't Code? Squash a Few Bugs The strength of Free and Open Source Software (FOSS) is its openness and transparency, and community support. Anyone can contribute, not just elite coders with lush geekbeards and ratty sandals. So what can a non-coder do? One of the reasons Free Software is so high-quality is anyone can report bugs and submit patches. Even if you can't fix bugs, reporting them is valuable. Every Linux distribution has a bug tracker, and so do most individual programs. Submitting a bug report usually means following a particular protocol, and using the appropriate bug-tracking tool. There are wrong ways to report a bug: - "ur proggie sux. crashed on me this morning while i was doin my homewrk." - "it doesn't work. fix it." - Whine in all the wrong places, like IRC and unrelated forums - Report your own error as a bug The right way is to first find the approved mechanism for reporting a bug. Bugzilla is a popular bug-reporting and tracking tool, used by major FOSS projects like Mozilla, Red Hat Linux, KDE, Gnome, and the Linux kernel. Ubuntu uses Launchpad, which is their own proprietary bug-tracking/project-hosting/meeting planner tool. The bug-reporting component is called Malone. Whatever bug-tracking system is used, you'll probably have to register a user account and log in. A Linux distribution is comprised of thousands of programs developed independently, so it's not always obvious who is responsible for a problem. All distributions modify programs to a degree, some more than others. So the first place to report a bug is to your distribution's bug-reporting system. If you build your programs from sources, then use the programs' own bug-trackers. Before you actually file a bug report, do some homework first. First search mail lists and forums to see if anyone else is having the same problem. Chances are you'll learn it's not a bug, and you'll learn how to fix the problem. Then search the bug database to see if it's already been reported. If it has, it doesn't hurt to add a "metoo" addendum. Include all the information you would put in any bug report. A nice, though not necessary, thing to do is include any useful workarounds in your bug report. The most important step is to make sure it's a bug, and not some daft thing you're doing. You should be able to replicate the bug. If you can't, neither can the developers. This is a great opportunity to exercise those troubleshooting muscles. The majority of bug reports are not bugs, but user error. What to Put in a Bug Report See Resources for some Horrid Examples of how not to write bug reports. Gnome's Bugzilla Helper includes forms to help you include the correct information. You should include: - Operating system and version - Program name and version - Whatever behavior makes you think it's a bug, and the steps you take to trigger it - Any pertinent error messages Sometimes that is all you need. If the developer needs more information they'll ask. Or you may get a request to move the bug report somewhere else. You might be told that it's not a bug, but the way the program was designed to operate. If that's the case you can always visit the developer's list for the program and nicely discuss your issue directly with the developers. What if the Bug Doesn't Get Fixed? If your bug report is ignored, there may not be a lot you can do. The devs may be overworked, or don't see it as serious enough to address, or are simply mean people who hate you. 
(This type of developer is a very tiny minority, but they do exist, and why they choose to support a product that is going to be used by other people is one of life's little mysteries.) Give it a couple of weeks, then try a polite reminder. Posting a duplicate report upstream (directly on the program's own bug-tracking system) might get quicker action. Always be polite and factual. Most developers are polite and professional, but they are human, and you will encounter the occasional dork. You'll score big credibility points by taking the high road.

Many FOSS projects have wish lists. These are the places to post your suggestions for features and improvements.

It takes a bit of work and care to write good bug reports. If you start feeling abused, just remind yourself how much work and resources went into all that great software that you are enjoying for free. And while you're at it, remember how unresponsive most commercial closed-source vendors are to bug reports, and how they are so encumbered with ridiculous EULAs and patents and DRM and NDAs that even if you have the resources to do so, you are prevented from fixing problems yourself. It's rather like living in a house where you are not allowed to paint, or patch holes, or change lightbulbs, or remodel, or do anything except pay out money year after year to the builder just for the privilege of living there.

Share Tips and Howtos

A wonderful and easy way to support Free Software is to post tips and howtos somewhere. These don't have to be your life's work, just useful things you have discovered in your everyday work. For example, it might be that you figured out some advanced networking configuration options, or a quick OpenSSH tip, or learned some useful but little-known netcat options, or yet another way to filter Ethereal output to quickly home in on what you want to see. Put these on a blog, or post them in appropriate user forums, or anywhere Google can find them. Five minutes of work translates into help for thousands of users.

Send Lawyers, Guns and Money

Well, probably not guns. But money is often welcome, or donations of hardware. In this era of attacking with armies of lawyers instead of guns, good legal assistance for warding off the greedy barbarian hordes is a very nice thing.

These horrid examples of bug reports are brought to you courtesy of Akkana Peck:

- Bug 60632 - Bug is rapid
- Bug 73360 - crashed application
- Bug 133113 - My wish-list i compiled for the last 1 1/2 years i used gimp

Example of a good bug report with a semi-happy ending:
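As a practical footnote to the "What to Put in a Bug Report" checklist above: the environment details are easy to collect automatically. The sketch below (Python, purely illustrative; the program name and output fields are placeholders, not any project's required format) gathers them in one go using only the standard library.

```python
import platform
import sys

def environment_summary(program: str, version: str) -> str:
    """Gather the basic facts every bug report should include."""
    return "\n".join([
        f"Program: {program} {version}",
        f"OS/kernel: {platform.system()} {platform.release()}",
        f"Platform: {platform.platform()}",
        f"Python: {sys.version.split()[0]}",  # relevant when reporting a Python tool
    ])

if __name__ == "__main__":
    # "exampletool 1.2.3" is a made-up program for demonstration.
    print(environment_summary("exampletool", "1.2.3"))
```

Pasting output like this into the report covers the "operating system and version" and "program name and version" items without relying on memory.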
<urn:uuid:3a6855ef-cd6d-43cf-8c18-f8a28abedecf>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/print/netos/article.php/3616661/Cant-Code--Squash-a-Few-Bugs.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00439-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939653
1,350
2.703125
3
NASA's International Space Apps Challenge hacks out 770 proposals

NASA has a mission that requires making sure the public understands the value of its experimental technology and exotic space outings. In April, NASA set a new standard for public involvement when it staged what it called the largest hackathon ever attempted, and the first of its kind to focus on the needs of government. The global innovation potluck drew 9,147 people – 2,200 in virtual settings – and produced 770 proposals.

Participants in the International Space Apps Challenge crowdsourcing event developed software, hardware, data visualizations and mobile or Web applications in one of 58 different categories. "Our space program, more than ever, requires the active engagement of the public to co-create our future," said Nick Skytland, program manager for NASA's Open Innovation Program.

The space agency said many of the submitted solutions "had direct tangible benefits" to existing NASA programs, including 40 apps for NASA's Asteroid program and 37 for its Spot the ISS Station challenge.

Sites from a wide cross section of the world logged into the challenge, including a site in Chile that was the largest in the event; a New York site boasting 50 percent female hackers; and a contingent of high-schoolers from Haiti who were hacking a sustainable-living technology application.

Participants designed mini-satellites (CubeSats) for NASA's Mars mission, data visualizations for the national air traffic control system and the "first interplanetary weather app," using Mars science data. Other highlights included an underwater planetary rover using lights, thrusters and video cams, and a proposal to steer the craft using Skype and a keyboard.

At least two apps from last year's hackathon are currently in use at the agency. One converts the image file format VICAR — used by many NASA employees — to PNG format, and the other is a software platform for NASA's underwater robotic submarines.

While the technology yield was high, so was the project's regard for progressive open data policies. The space agency went all out to enable teams to use existing open data tools when collaborating on app development, including Twitter, Facebook and Google+. For teams that could not tap an existing platform, NASA created an open-source Django application offering centralized registration. The site also offered collaboration pages around each challenge that were designed to comply with government regulations related to paperwork reduction and personal privacy. Code was not hosted natively but linked from GitHub, a Web-based hosting service for software development projects, or other repositories. NASA said it is currently open sourcing the code to the platform for the benefit of other agencies that might want to use it for their own hackathons or collaborations.

Video also was a key technology component of Space Apps, particularly as the judging process went forward. Each team created a 2-minute video telling the story of its project and demonstrating its capabilities. These videos (all available on Vimeo and YouTube) helped projects come alive out of their GitHub repositories.

Incredibly, NASA said a team of only four people envisioned, planned and implemented the project in six months, earning NASA a return on investment estimated at more than $15 million. The final outcome was a win-win for government, NASA said.
“Mass collaborations have not only encouraged citizens to get involved in government, but incentivized agencies to get challenges out to the people and then receive valuable input back from them quickly, inviting them to directly engage the mission in new ways never before considered,” the agency said. Connect with the GCN staff on Twitter @GCNtech.
So the FCC has published its national broadband plan. This plan has many implications for cities, counties and local government. It has implications for public safety and general government, for consumers, for business, for wired and wireless networks. Here's my take on it:

Q: Is this plan really radical or different?

A: The FCC has charted a brave new vision for the United States with this plan. For example, the FCC has set a goal of "one hundred squared" - connecting 100 million households at 100 megabits per second. This is radical because it cannot be accomplished with existing copper wire networks such as the telephone networks or cable TV networks. Such speeds require fiber optic cable to every home and business, a radical change. The speeds copper can carry are quite limited, but fiber-optic lightwave signals theoretically have no upper limit on speed. Incidentally, there are about 114 million households in the U.S.

Q: "100 megabits per second" - 100 million bits per second - is geekspeak. What does it really mean for consumers at home or small business?

A: Let me give you one specific example. Many homes and businesses are buying and installing flat-screen TVs, and most of those are HDTV - high definition. That's cool, and the quality of the image is very detailed. But the signal is one way - you "watch the TV" - you don't really "interact" with it or use it for communications like you use a phone. At the same time, you can buy a video camcorder - even a cheap one like a Flip camera - that shoots HD video. Now, let's suppose you could put the video camcorder next to the HDTV and connect them - all of a sudden you would have a video telephone or a video conferencing setup. You could make video phone calls. You could attend meetings with video. You could attend class at a high school or community college or a university, and actually interact with the teacher or professor - ask questions and participate. You could visit your doctor to talk about a health problem, or work from home. You could visit your local appliance store or clothing store and talk to the owner and have the owner demonstrate what you want to buy. You could play really cool interactive video games.

And think of the implications for quality of life - with this sort of video, grandparents could have dinner with their kids and grandchildren every night via a video phone. They could see their grandchildren from hundreds or thousands of miles away, or from an assisted living or nursing home. But all of this requires super-fast networks with both high quality and almost zero latency - no delay, just like the voice phone network. And that requires fiber, at 100 million bits per second or more, to each home or business.

Q: What are the implications for large cities like Seattle?

A: Seattle has been a leader in thinking about these networks. We've already installed fiber cable connecting every public school, all our college campuses, every fire station, police precinct and every major government building. We have done extensive planning for a fiber optic cable network to every one of the 300,000 homes and businesses in Seattle. We are a high-tech community and we value education. We need such a fiber network for jobs, education and quality of life. Mayor Michael McGinn is very committed to the idea, and a number of departments are working together on a business plan to make it happen.
The visionary goals set by the FCC's broadband plan - 100 million bits per second to 100 million homes - validate that we're following the right path, and we need to move rapidly to stay ahead of other cities in the United States and around the world.

Q: How can we learn more about this Seattle plan?

A: To stay abreast of it or support it, go to http://www.seattle.gov/broadband .

Q: What are the implications of the FCC plan for suburban and rural communities?

A: Suburban communities can be wired with fiber, just as the FCC's plan envisions and Seattle intends to do. Some Seattle-area communities such as Kirkland and Woodinville already have fiber networks installed by Verizon. In rural communities, installing fiber to farms and small towns may not always make economic sense, although in some visionary places like Chelan County, the local PUD is doing it anyway. But the FCC has envisioned an alternative for rural communities - high-speed wireless broadband.

Today's wireless networks are usually called "3G," or 3rd Generation. Fourth Generation - 4G - wireless networks will be available in a few places by the end of 2010. These faster networks require a lot of spectrum. You may recall that in June 2009, all TV broadcast signals became digital - every TV in the nation had to have a wired cable connection or a digital antenna. The FCC mandated this digital transition to take spectrum away from UHF TV use and give it to telecommunications companies to build 3G and 4G networks. The FCC's broadband plan calls for adding another 500 MHz of spectrum dedicated to new, faster wireless networks. The FCC will try to convince TV broadcasters to give up even more of the 300 MHz of spectrum now used for TV. And the government itself controls another 600 MHz of spectrum, some of which could be used for wireless broadband.

Q: The nation faces a number of threats - terrorism, disasters (like earthquakes and hurricanes such as Katrina) and even local tragedies like the shooting of four Lakewood, Washington, police officers in 2009. Will the FCC's national broadband plan help with this problem?

A: Public safety communications were problematic on September 11th in New York City, during Hurricane Katrina and in other disasters. The public cell phone networks won't reliably operate in such disasters or, sometimes, even in daily emergencies like power outages. The FCC has allocated 10 MHz of spectrum in the 700 MHz band for a nationwide public safety broadband network. In the national broadband plan, the FCC proposes putting money where its mouth has been: $6.5 billion in grants to create the public safety network. The City of Seattle is one of only 17 communities nationwide that have asked the FCC for permission to use this spectrum and build such a network. In its plan, the FCC includes a method for setting standards and operating procedures that will allow cities like Seattle, San Francisco, New York and Boston to build, and these municipal or regional public safety wireless broadband networks will interoperate with others nationwide. In fact, under the FCC's plan, the public safety networks will also interoperate with networks being constructed by AT&T, Verizon and T-Mobile. So if a police officer or firefighter can't get a strong signal from the public safety network, the officer could get a signal from a commercial network instead.
Furthermore, Seattle has proposed that other government agencies - our electric utility, Seattle City Light; our water utility, Seattle Public Utilities; our transportation department; and others - also be allowed to use this network. In both daily emergencies and major disasters, such "second responders" are vital to public safety and must interoperate with police and fire to keep the public safe. The national broadband plan recognizes this need as well.

Q: Practically, why do we need a public safety wireless broadband network?

A: I'll give one specific example - video. On October 31, 2009, a Seattle police officer was brutally murdered by an unknown assailant; Christopher Montfort was ultimately charged with the crime. How did the police find Montfort? I've discussed this in more detail in this blog entry, but essentially, every Seattle police patrol vehicle has a video camera which records video of traffic stops. The recording goes to a computer in the police vehicle. It took several days for the police to review all the video footage of traffic stops from Seattle police cars. They noticed, in the background of several such stops, a uniquely shaped vehicle cruising by, which was traced back to Montfort. With a wireless broadband network, such video could be transmitted immediately, in real time, to dispatch centers and other police officers. Furthermore, police and firefighters could receive mugshots, building plans, hazardous material data, and video from a variety of sources to improve their response to both daily incidents and larger disasters.

Q: Are there other implications of the plan?

A: Several are worth mentioning, and there is a bit more detail in an analysis here. In summary, the FCC's plan is visionary. Certainly it was carefully crafted with many competing interests in mind. And it doesn't really provide any good mechanism to encourage competition between private providers; such competition would reduce costs to users. Nevertheless, if it is followed, the plan will materially improve the economy, safety, and quality of life for the people of the United States.
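To make the "one hundred squared" figure discussed above concrete, here is a back-of-the-envelope calculation. The per-stream bitrate and file size are rough assumptions chosen for illustration, not figures from the FCC plan.

```python
# What a 100 Mbps residential link buys, under assumed workloads.
link_mbps = 100

hd_call_mbps = 5        # assumed bitrate of one two-way HD video call
movie_size_gb = 4       # assumed size of an HD movie download

simultaneous_calls = link_mbps // hd_call_mbps
download_seconds = movie_size_gb * 8 * 1000 / link_mbps  # GB -> megabits -> seconds

print(f"~{simultaneous_calls} simultaneous HD video calls")
print(f"~{download_seconds:.0f} seconds to download a {movie_size_gb} GB movie")
```

Under those assumptions, one fiber connection carries roughly twenty HD video calls at once, or pulls down a full movie in about five minutes; copper-era DSL speeds support neither.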
The Presidential Memorandum on Transparency and Open Government, issued on Jan. 21, 2009, called for a participatory and collaborative approach to government and quick disclosure of information that the public can find and use. The related directive, issued on Dec. 8, 2009, requires federal agencies to publish government information online and improve information quality. "The memorandum, which was signed on President Obama's first day in office, reflects the idea that government does not have all the answers and will benefit from citizen participation," says Adelaide O'Brien, research director for IDC Government Insights.

"Go to 2040"

The Data.gov website was a first step to improve public access to federal data. Many states and municipalities began to follow suit, deploying websites that provide access to information such as spending, revenue and demographics at the state and local levels. In addition, many government websites have taken on a distinctly social tone. "Citizens now expect government organizations to keep them well informed, using the full range of social tools that they have become accustomed to," says O'Brien.

The Chicago Metropolitan Agency for Planning (CMAP) is the planning agency for seven counties in the greater Chicago area. The agency carries out comprehensive planning for the region and produces an integrated plan for land use and transportation. In late 2010, CMAP released a report called "Go to 2040," a plan for the next 30 years. The goal of the long-range plan is to improve the quality of life by identifying citizen needs, prioritizing the use of resources and tracking achievements.

As part of the Go to 2040 campaign, a website called "MetroPulse" was developed in partnership with the Chicago Community Trust (cct.org) to provide citizens with a wealth of data about the community and with key performance indicators. "Whether the goal is to develop residential and commercial facilities around public transit or achieve better academic outcomes in a school district, we need solid data in order to measure our performance," says Greg Sanders, Web manager at CMAP.

An array of technology is used to make the data available and meaningful. Business intelligence (BI) tools are a mainstay of open government because they provide the analyses that enable users to interpret large amounts of raw data. CMAP chose WebFOCUS from Information Builders to generate the charts and graphs from the analyses. The charts are rendered in Adobe Flex, an open source application development framework. Maps are used extensively to present community information on the website; they are generated by ESRI's ArcGIS Server and rendered in Adobe Flex. The data is stored in Microsoft SQL Server running on Windows servers, which are virtualized using Citrix XenServer.

The abundance of data brings a mix of opportunities and obstacles. "The amount of available information is staggering," Sanders says. "We have data on building permits, code violations and many other factors relating to property, for example. This is fantastic because we can then roll it up to higher levels and match it to census tracts, which allows for other analyses, but having so many options can also be overwhelming." Because its resources are finite, CMAP prioritizes the analyses in accordance with its most pressing issues. CMAP is obtaining user feedback that will be helpful in planning future modifications to the site.
"We are trying to balance all the suggestions, and we do see some general trends in ways we can go," Sanders explains. "For example, rather than seeing the information centered around a data set, people would like to see it organized around a particular location, so they can get a profile of an area. In addition, they would like to be able to compare that profile to the profiles for other communities. That is something we are working on." In addition to providing information that can guide planning and inform citizens, the wide availability of data has fostered greater citizen involvement. "There is a huge movement now to motivate volunteer use of data," explains Sanders. "For example, app contests have generated many useful apps built on government data. And organizations like Code for America allow civic-minded young programmers to apply their skills at the city level, which in turn generates more open data for public use. This involvement is instilling a sense of public service among ‘data geeks.' We hope to create the passion to make things better." A considerable change has taken place in recent years. In the past, Sanders observed a "fortress mentality" in some government agencies. "There was a hesitancy about whether the information could legitimately be shared, whether there might be legal issues and so forth, and workers were more protective," he says. "With the advent of Data.gov, these issues were largely put to rest, and government workers became more comfortable sharing data. The atmosphere regarding collaboration has undergone a major change, which is fostering both participation and improved decision making." Government organizations have become more innovative in the past few years, according to Michael Corcoran, senior VP and CMO at Information Builders. "A decade ago, government agencies were seen as technology laggards," says Corcoran. "But especially at the state and local level, there is now a much greater understanding of what their customers want." Some of the public-facing initiatives are aimed at crime prevention, and others work on improving social programs or reducing fraud. Metrics have played a significant role in energizing government organizations. "With greater availability of various performance metrics," Corcoran says, "they are taking a more proactive role to improve them." In addition, the user community has broadened considerably. "In the past, analytics were relegated to a few smart people in the back office who could gain an understanding of why a certain trend might be occurring," he adds. "But now that everyone can see the data and come to their own conclusions. the concept of transparency is proving to be very effective." Open source for open government Drupal is an open source content management platform that is being used by over 700,000 developers in more than 200 countries. As is typical of open source products, Drupal exists in a community in which developers are constantly revising and improving the software. Government organizations are using Drupal for the same reason that those in the private sector are: The software is free, flexible and perhaps most importantly in today's market, has built-in social and community elements. Therefore, it has strengths in blogs, wikis and interactive components that were not originally part of all Web content management (WCM) systems. About 150 federal websites were using Drupal as of early 2012. 
The Federal Communications Commission (FCC) regulates interstate and international communications by radio, television, satellite, cable and wire. It was established as an independent agency led by five appointed, bipartisan commissioners. The FCC switched to Drupal last year in part to make use of the community features offered by the product.
As the calendar counts down to the first exascale supercomputer, efforts to resolve the steep technological challenges are increasing in number and urgency. Among the many obstacles inhibiting extreme-scale computing platforms, resilience is one of the most significant. As systems approach billion-way parallelism, the proliferation of errors at current rates just won't do.

In recognition of the severity of this challenge, the federal government is seeking proposals for basic research that addresses the resilience challenges of extreme-scale computing platforms. On July 28, 2014, the Office of Advanced Scientific Computing Research (ASCR) in the Office of Science announced a funding opportunity under the banner of "Resilience for Extreme Scale Supercomputing Systems." The program aims to spur research into fault and error mitigation so that exascale applications can run efficiently to completion, generating correct results in a timely manner.

"The next-generation of scientific discovery will be enabled by research developments that can effectively harness significant or disruptive advances in computing technology," states the official summary. "Applications running on extreme scale computing systems will generate results with orders of magnitude higher resolution and fidelity, achieving a time-to-solution significantly shorter than possible with today's high performance computing platforms. However, indications are that these new systems will experience hard and soft errors with increasing frequency, necessitating research to develop new approaches to resilience that enable applications to run efficiently to completion in a timely manner and achieve correct results."

The authors of the request estimate that at least twenty percent of the computing capacity in large-scale computing systems is wasted due to failures and recoveries. As systems increase in size and complexity, even more capacity will be lost unless new targeted approaches are developed. The DOE is specifically looking for proposals in three areas of focus:

1. Fault Detection and Categorization – fault behavior on current supercomputing systems must be better understood in order to prevent similar behavior on future machines, according to DOE computing experts.

2. Fault Mitigation – this category breaks into two parts: the need for more efficient and effective checkpoint/restart (C/R), and the need for effective alternatives to C/R (a toy illustration of the C/R idea follows this article).

3. Anomaly Detection and Fault Avoidance – using machine learning strategies to anticipate faults far enough in advance to take preemptive measures, such as migrating the running application to another node.

Approximately four to six research awards will be made over a period of three years, with award sizes ranging from $100,000 per year to $1,250,000 per year. Total funding of up to $4,000,000 annually is expected to be available, subject to congressional approval. The pre-application due date is set for August 27, 2014.
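For readers unfamiliar with checkpoint/restart, the sketch below shows the idea at toy scale: state is saved periodically so a long computation can resume after a failure instead of starting over. The file name, state layout and workload are invented for illustration; production HPC C/R operates at vastly larger scale with specialized libraries.

```python
import json
import os

CHECKPOINT = "checkpoint.json"  # illustrative path

def load_state():
    """Resume from the last checkpoint if one exists."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"step": 0, "total": 0.0}

def save_state(state):
    """Write atomically so a crash mid-write cannot corrupt the checkpoint."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CHECKPOINT)

state = load_state()
for step in range(state["step"], 1_000_000):
    state["total"] += step * 1e-6   # stand-in for real computation
    state["step"] = step + 1
    if state["step"] % 100_000 == 0:
        save_state(state)           # periodic checkpoint
save_state(state)
print(f"done: {state['total']:.2f}")
```

The trade-off the solicitation targets is visible even here: checkpoint too often and the job spends its time on I/O; too rarely and a failure discards hours of work.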
If you want to see the future—and who doesn't?—the place to begin your search is now. It's not only that the future is already here, just unevenly distributed, as William Gibson has put it. Sometimes it's all around us and we just haven't noticed.

If you want to see the future of education and knowledge, take a look at how software developers learn. There's a strong case to be made that they have built for themselves the best continuing and rapid education system ever. This is a learning environment that works at every level:

If you don't know which programming language to use, there are pages that compare them, and extensive discussions about their strengths and weaknesses.

If you need to learn a new language, there are tutorials aimed at beginners, at experts and at people who are grounded in one language and now want to learn another.

If you have a question about how to do something in a particular language, you can pose it to a search engine and it is highly likely to turn up a precise and accurate answer, either at a site dedicated to answering developers' questions or on a blog by someone who struggled with the same issue. There are likely to be multiple answers, each with some back-and-forth discussion, and then the code that you need.

If you understand how a particular function works but want to try out some variations in a sandbox, there are sites that let you.

If you run into a bug in your own program that is driving you crazy, you can post it and likely find someone who has already worked out the problem. If it turns out to be a bug in the language's implementation or in a browser or an operating system, there is likely to be a way that you can report it, or read the status of someone else's report.

If you want to reuse some existing code, whether it's for a function or an entire application, you can. Sites like SourceForge and GitHub make it easy to find code, alter it and post your version so others can benefit, as per the Open Source ethic.

If you want to learn about new developments in software engineering, and the development of new tools and places to learn, you can bookmark sites like Hacker News, where the community posts links and discusses the linked posts in an informationally high-density format.

This environment has dramatically increased the productivity of software developers. Because they write the code themselves, it conforms quite precisely to their needs and ethos. Because most lawyers, doctors, academics and other knowledge seekers can't program their own sites, their learning environments are not as highly tuned to their needs. But they will get better, especially if they pay attention to the attributes that make the developers' environment so productive:

The developers' learning environment covers every level of learner, from the newbie to the seasoned veteran. The discussions are appropriate to each level, including in their tone. While newbies can be scorned at times, it is usually not because they're asking beginners' questions, but because they have failed to observe the rules and norms of the particular forum, such as searching for an existing answer before asking a question, or posting the question in multiple subject areas.

The answers in developers' learning environments tend to come from other developers, not from a professional staff of educators. It's an ecosystem of practitioners teaching practitioners.

They have developed a set of tools and conventions for reusing one another's work. It's often just a download or a copy-and-paste of code.
If it requires some commands typed into a terminal, they're often posted in a format suitable for copying and pasting.

Developers share as much code as they can. That's not going to work in every profession, but it is a default to be encouraged.

Their learning environment encourages iterative improvements: small tweaks and optimizations. That also breeds a certain humility about one's code: it can always be improved.

The educational materials developers post usually don't sound like educational materials. They sound like that particular developer. The voice might be funny or feisty, but it is the voice of an individual human being.

And the learning environment created by developers models something profoundly important, I believe: the idea that learning can be and should be a public activity that enriches not only the individual learner, but makes the public space a little smarter as well. The idea that the educational process improves society directly, and not just by creating individuals who can then work for a better society, is one of those notions that seem small and obvious, but could have big long-term effects on how we think about teaching and learning.

Someday we'll all learn like software engineers.
Of all the major breaches that made the headlines in 2015, many are believed to have started with some sort of phishing scam. From Anthem to Sony, human error is often to blame for the security incidents that enterprises experience. Understanding what a phishing scam is, and how and why the organization is being targeted, will help security professionals stay on alert and better train their employees to identify and report potential threats.

Angela Knox, senior director of engineering and threat research at Cloudmark, said, "A phishing scam is when you receive an email or instant message or phone call where the person sending the message is pretending to be someone they are not in order to convince the recipient into giving over information because the receiver is someone they can trust." The problem is that these bad guys are so sophisticated in their tactics that it's difficult to detect the frauds.

In one word, the criminals are successful because of trust. "A lot of times they will use social engineering tactics," said Knox. By creating a sense of urgency, the scammers are able to make end users act in a hurry. "Our brain shuts down from noting that there is something odd and focuses on the urgency, so these criminals use social engineering tactics to get past the normal doubt tactics. The people are experts," Knox said.

Different varieties of phish include smishing, vishing and spear phishing; the goal of each is the same, though the medium used to conduct the scam differs. Knox provided a quick definition of each:

- Smishing takes the phishing techniques of building trust and establishing a sense of urgency and applies them to text (SMS) messaging.
- Vishing uses voice, so it's usually a phone call. The criminal can set the caller ID to any number they want, so the receiver may think the bank is calling because the caller ID shows the bank's phone number.
- Spear phishing is a particularly targeted attack, so it's usually lower volume. The more data criminals have about an organization, the more precisely they can tailor the attack to break into an enterprise network; the "spear" refers to that targeting.

When Knox gave an example of a spear phishing attack that an end user might see, I shook my head in agreement. It was a tale I had heard many times before: an email that appears to come from the CEO is sent to someone in the finance department, asking them to make a wire transfer. I've talked with so many high-level executives from major security companies who noted someone in their own organizations had seen this type of attack. Fortunately for them, the threat was detected before trouble ensued.

"If you look at the domain name, it'll be a similar domain with maybe one letter change," said Knox, and it's really important that end users are trained not to respond to these calls for urgency. Teaching employees how to read the domain name can prevent them from falling victim to deception.

Though phishing itself is not social engineering, bad guys use their understanding of the ways humans behave to develop these scams. Scamming someone is a psychological con, and these bad actors are some of the most skilled con artists because they know how to manipulate human trust. Criminals conduct phishing campaigns by collecting data on sites like www.data.com, where many employee emails, names and titles are listed.
They learn who is in the organization and collect useful data that they use to start a conversation with targeted employees. "They will build up trust before they ask someone to open an attachment," said Knox. "They may talk to someone for weeks, then say, 'Now I'm sending you an attachment.'"

The attachments are where the money making comes in. Some attackers are after data they can sell or reuse. "It could be industrial espionage, or malware that demands someone transfer money out," said Knox. Whether they are collecting credentials, installing bots, or launching spam or DDoS attacks, the techniques differ but the goal is usually money: sensitive data can be sold or traded, and it's a lucrative market for criminals.

Including examples of these types of scams in an ongoing awareness training program is a key step in mitigating the various threats to the security of the cyber seas.
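The "one letter change" trick Knox describes can also be caught mechanically. Below is a minimal sketch of that check; the trusted-domain list and similarity threshold are invented for illustration, and real mail filters use many more signals than string similarity.

```python
from difflib import SequenceMatcher

TRUSTED = {"example-bank.com", "example-corp.com"}  # illustrative allow-list

def looks_like_spoof(domain: str, threshold: float = 0.85) -> bool:
    """Flag domains that nearly match a trusted domain without being one."""
    if domain in TRUSTED:
        return False
    return any(
        SequenceMatcher(None, domain, trusted).ratio() >= threshold
        for trusted in TRUSTED
    )

print(looks_like_spoof("examp1e-bank.com"))  # True: 'l' swapped for '1'
print(looks_like_spoof("example-bank.com"))  # False: exact trusted match
```

The point of the exercise is the one employees should internalize: a domain can be close enough to fool the eye while remaining, technically, a completely different address.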
Do You Hear What I Hear? - Part VIII: Multiprotocol Label Switching

Continuing with our series on Quality of Service (QoS) in VoIP, our previous installments have looked at some of the key factors surrounding the quality of the voice connection:

Part I: Defining QoS
Part II: Key Transmission Impairments
Part III: Dealing with Latency
Part IV: Measuring "Toll Quality"
Part V: Integrated Services
Part VI: Resource Reservation Protocol
Part VII: Differentiated Services

In two recent tutorials, we examined the Integrated Services (intserv) and Differentiated Services (diffserv) projects of the Internet Engineering Task Force (IETF). With intserv, defined in RFC 1633, network resources along the path from sender to receiver are reserved to support the particular application; the Resource Reservation Protocol, or RSVP, defined in RFC 2205, is used to reserve those resources. In contrast, diffserv, defined in RFCs 2474 and 2475, specifies a special field within the IPv4 or IPv6 header, called the Differentiated Services (DS) field, which is marked with a value that identifies the particular class of service. Routers along the path then read the value of the DS field to facilitate their packet-processing decisions.

This tutorial will consider a third IETF-supported QoS solution, known as Multiprotocol Label Switching, or MPLS. The origins of MPLS can be traced to ATM developments in the mid-1990s, which sought to integrate the high-speed Layer 2 switching (from ATM) and Layer 3 routing (from IP) technologies. Many of these early developments were proprietary to specific vendors, including Cisco Systems, IBM, Nokia, and Toshiba, and therefore non-standard. The IETF became involved in the development in 1997 and chartered the MPLS Working Group, described at www.ietf.org/html.charters/mpls-charter.html. The architecture for MPLS is defined in RFC 3031, published in January 2001; however, there are 40 additional RFCs that document various aspects of MPLS operation and implementations. One of the reasons for the large volume of words written on MPLS is its applicability: MPLS is designed for implementation over a wide variety of link-level technologies, including Frame Relay, Asynchronous Transfer Mode (ATM), Packet over SONET, plus LANs such as Ethernet and token ring.

To describe the operation of MPLS, recall that the local network header contains the addressing information necessary to deliver the frame from one node to the next on the local network. Similarly, the IP header contains the addressing information necessary to deliver the packet from the source node to the destination node on the internetwork, which may include a number of intermediate hops. Thus, the IP header contains more than enough information to get the packet from one hop to the next within the internetwork, and the processing time spent examining that entire IP header might be better spent on other tasks.

MPLS treats the choice of the next hop as a two-step function. First, all packets are partitioned into Forwarding Equivalence Classes, or FECs, which are groups of packets that have equivalent forwarding parameters, such as all the packets that share a common source and destination. The second function then maps each FEC to a next hop. Thus, all packets with the same FEC will follow the same path (or set of paths) associated with that FEC.
The assignment of a packet to a particular FEC is performed only once, when the packet enters the network, and a label containing the necessary forwarding information is sent along with the packet. A 32-bit MPLS header (sometimes called the "tag" or the "shim") is placed between the packet's local network and IP headers when it enters the MPLS domain; this header determines how the intervening routers will handle that packet (a sketch of the header layout follows at the end of this tutorial). Because these routers must be able to interpret and act upon this new label, the MPLS-capable routers are called Label Switching Routers, or LSRs. The packets are classified at the entry point to the network, called the ingress LSR, and only that 32-bit header is examined to determine how the packet should be handled within the network. Since an LSR examines only the MPLS header and not the entire IP header, this QoS mechanism can operate independently of the Network Layer protocol (such as IP) that is currently in use. This is the origin of the term multiprotocol in the name MPLS.

Thus, in contrast to intserv, which requires the use of an ancillary protocol (RSVP), or diffserv, which embeds flow information inside the IP header, MPLS acts as an intermediary between Layer 2 and Layer 3, providing a compromise between high-speed Layer 2 LANs and well-understood Layer 3 internetworks. For further study, check out the MPLS Resource Center at www.mplsrc.org. In our next tutorial, we will look at some vendor-developed QoS solutions.

Copyright Acknowledgement: © 2005 DigiNet ® Corporation, All Rights Reserved

Mark A. Miller, P.E. is President of DigiNet ® Corporation, a Denver-based consulting engineering firm. He is the author of many books on networking technologies, including Voice over IP Technologies and Internet Technologies Handbook, both published by John Wiley & Sons.
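As referenced above, the 32-bit shim has a fixed layout, defined in RFC 3032: a 20-bit label value, 3 experimental (class-of-service) bits, a 1-bit bottom-of-stack flag, and an 8-bit TTL. Here is a short sketch of packing and unpacking those fields; Python is used purely for illustration.

```python
def pack_mpls(label: int, exp: int, s: int, ttl: int) -> int:
    """Pack the four fields into a 32-bit MPLS shim header (RFC 3032)."""
    assert label < 2**20 and exp < 2**3 and s < 2 and ttl < 2**8
    return (label << 12) | (exp << 9) | (s << 8) | ttl

def unpack_mpls(header: int) -> dict:
    return {
        "label": (header >> 12) & 0xFFFFF,  # 20-bit label value
        "exp":   (header >> 9) & 0x7,       # 3 experimental (QoS) bits
        "s":     (header >> 8) & 0x1,       # bottom-of-stack flag
        "ttl":   header & 0xFF,             # time to live
    }

shim = pack_mpls(label=1024, exp=5, s=1, ttl=64)
print(unpack_mpls(shim))  # {'label': 1024, 'exp': 5, 's': 1, 'ttl': 64}
```

The brevity of the structure is the point: an LSR forwards on a 20-bit label lookup rather than parsing the full IP header.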
Definition: Arrange items in a predetermined order. There are dozens of algorithms, the choice of which depends on factors such as the number of items relative to working memory, knowledge of the orderliness of the items or the range of the keys, the cost of comparing keys vs. the cost of moving items, etc. Most algorithms can be implemented as an in-place sort, and many can be implemented so they are stable, too.

Formal Definition: The sort operation may be defined in terms of an initial array, S, of N items and a final array, S′, as follows: S′ is a permutation of S, and S′ is ordered, that is, S′[i] ≤ S′[i+1] for 1 ≤ i < N.

Generalization (I am a kind of ...)

Specialization (... is a kind of me.): quicksort, heapsort, Shell sort, comb sort, radix sort, bucket sort, insertion sort, selection sort, merge sort, counting sort, histogram sort, strand sort, J sort, shuffle sort, American flag sort, gnome sort, bubble sort, bidirectional bubble sort, treesort (1), adaptive heap sort, multikey Quicksort, topological sort.

See also external sort, internal sort, comparison sort, distribution sort, easy split, hard merge, hard split, easy merge, derangement.

Note: Any sorting algorithm can be made stable by appending the original position to the key. When otherwise-equal keys are compared, the positions "break the tie" and the original order is maintained (see the sketch following this entry). Knuth notes [Knuth98, 3:1, Chap. 5] that this operation might be called "order". In standard English, "to sort" means to arrange by kind or to classify. The term "sort" came to be used in Computer Science because the earliest automated ordering procedures used punched card machines, which classified cards by their holes, to implement radix sort.

Demonstrations of various sorting algorithms.

Cite this as: Paul E. Black, "sort", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 2 September 2014. Available from: http://www.nist.gov/dads/HTML/sort.html
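The stability note above is easy to demonstrate. In the sketch below (Python chosen for illustration; its built-in sort happens to be stable already, so the decoration is shown purely to exhibit the technique), each item is paired with its original index, so equal keys preserve their input order under any sorting algorithm.

```python
records = [("b", 2), ("a", 1), ("b", 1), ("a", 2)]

# Decorate: append each item's original position to its key.
decorated = [(item[0], i, item) for i, item in enumerate(records)]

# (key, index) pairs are unique, so comparison never reaches the payload,
# and items with equal keys are ordered by their original positions.
decorated.sort()

stable = [item for _, _, item in decorated]
print(stable)  # [('a', 1), ('a', 2), ('b', 2), ('b', 1)]
```

Note how the two "b" records keep their original relative order, ("b", 2) before ("b", 1), which is exactly what stability guarantees.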
NASA and Amazon Web Services have joined forces to provide an easier and more efficient way for researchers to access and process earth science data. A large collection of climate and earth science satellite data produced by the NASA Earth Exchange (NEX) will now be freely available to research and educational users, as well as citizen scientists, through the AWS cloud. The new project is called OpenNEX.

AWS blogger and chief evangelist Jeff Barr writes: "Up until now, it has been logistically difficult for researchers to gain easy access to this data due to its dynamic nature and immense size (tens of terabytes). Limitations on download bandwidth, local storage, and on-premises processing power made in-house processing impractical."

The initial collection includes three NASA NEX datasets – over 20 TB worth of data – plus Amazon Machine Images (AMIs) and tutorials. The datasets are stored in Amazon S3 as part of the AWS Public Data Sets program. NASA will soon be adding virtual workshops with details on how to use the service and how to process the datasets on AWS. Here's a short description of each project:

- Data for Climate Assessment – The NASA Earth Exchange Downscaled Climate Projections provide high-resolution, bias-corrected climate change projections for the 48 contiguous US states. Researchers can use the data to evaluate climate change impacts on processes that are sensitive to finer-scale climate gradients and the effects of local topography on climate conditions.

- Landsat Global Land Survey – Developed with the U.S. Geological Survey, Landsat is the longest continuous space-based record of Earth's land, spanning four decades. Landsat has applications in agriculture, geology, forestry, regional planning, education, mapping, and climate change. It also serves as a resource for emergency and disaster relief efforts.

- MODIS Vegetation Indices – The Moderate Resolution Imaging Spectroradiometer (MODIS) instrument on NASA's Terra and Aqua satellites offers a global view of Earth's surface every 1 to 2 days. Potential applications include global biogeochemical and hydrologic modeling, agricultural monitoring and forecasting, land-use planning, land cover characterization, and land cover change detection.

The NASA Earth Exchange (NEX) is a research platform of the NASA Advanced Supercomputing Facility at the agency's Ames Research Center in Moffett Field, California. NEX brings together advanced supercomputing, earth system modeling, workflow management, and NASA remote-sensing data, enabling users to explore and analyze large earth science data sets, run and share modeling algorithms, collaborate on new or existing projects, and share results.

The OpenNEX initiative continues NASA's tradition of using cloud platforms to support open science, in line with the Obama Administration's Open Data Executive Order. "We are excited to grow an ecosystem of researchers and developers who can help us solve important environmental research problems," reports Rama Nemani, principal scientist for the NEX project. "Our goal is that people can easily gain access to and use a multitude of data analysis services quickly through AWS to add knowledge and open source tools for others' benefit."
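Because the datasets live in public S3 buckets, access from code is straightforward. The sketch below uses the standard boto3 pattern for anonymous access to a public bucket; the bucket and prefix names here are placeholders, not the actual OpenNEX locations, which are listed on the AWS Public Data Sets pages.

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Public data sets require no AWS credentials: sign requests as anonymous.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# Placeholder names; substitute the real OpenNEX bucket and prefix.
resp = s3.list_objects_v2(Bucket="example-nasa-nex", Prefix="modis/", MaxKeys=10)

for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])  # object path and size in bytes
```

From there a researcher can download only the granules of interest, or better, run the analysis on EC2 in the same region so the tens of terabytes never cross the public Internet.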
Last week I posed the question, "What exactly is the World Wide Web Consortium (W3C) for?" It seems that the W3C itself is also preoccupied with these concerns, judging by the series of one-day events it held recently across Europe. Dubbed the Interop Tour, this was ostensibly staged to promote W3C technologies and "to show how they facilitate interoperability on the World Wide Web". But it is hard not to get the feeling that the main point of the tour was to promote the W3C itself, and to justify the new regional offices it is opening around the world.

Promoting interoperability is all very well, but it raises the question: how does the W3C aim to do this, other than through rather expensive one-day events? One answer lies in the odd bits of software that it continues to turn out, such as the Amaya HTML editor browser and Jigsaw, a Java-based server. But the W3C's most important role is to produce authoritative and genuinely useful recommendations. And judging by the W3C news pages, it is certainly doing plenty of that at the moment.

Some of this involves core Web technologies such as Cascading Style Sheets (CSS). CSS has a lovely home page, produced, naturally enough, with CSS technology. As the current-work page indicates, CSS is now up to version three. The useful introduction to CSS3 explains that one key change is the modularisation of CSS's capabilities. Also worth noting is an essay on the W3C's design principles, which have largely informed the development of CSS. One module of CSS3 is needed for advanced linguistic capabilities such as ruby text. Other work that underlines the W3C's concern to make the Web truly worldwide is a character model for the World Wide Web 1.0, which is part of its general internationalisation activity.

Probably of more relevance to general business users is the Document Object Model (Dom) activity. In theory, the ability to address and manipulate individual elements of a Web page is a nice idea, but so far as I am aware, it has no major applications. Undeterred by this apparent lack of interest, the W3C is currently working on Dom Level 3.

Another area that is conspicuous by its absence from everyday Web life is the Resource Description Framework (RDF) language, part of the larger Semantic Web activity. A notable development in this area is the release of IsaViz, a free RDF visual authoring tool written in Java, which allows those of us still confused about what exactly RDF is for to play with it rather than just read the theory.

Other, more specialised work includes Scalable Vector Graphics (SVG) 1.1. SVG is one of the liveliest areas at the W3C, as its home page indicates. Together with news and articles on the subject, there are also a surprising number of full-length books devoted to SVG. The Synchronised Multimedia activity page is less busy, but even here work is continuing.

What may not be apparent from this brief survey of W3C activity is the centrality of XML to every aspect of its work today. Developments in this important area will be the subject of a future column.
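For readers who have never touched the Dom, addressing and manipulating individual page elements looks like this in practice. The sketch uses Python's standard-library DOM implementation on an invented XML fragment; browser scripting does the same thing via JavaScript.

```python
from xml.dom.minidom import parseString

# An invented fragment standing in for a Web page.
doc = parseString("<page><h1>Title</h1><p>Old text</p></page>")

# Address one element within the tree...
para = doc.getElementsByTagName("p")[0]

# ...and manipulate it in place.
para.firstChild.data = "New text"

print(doc.documentElement.toxml())
# <page><h1>Title</h1><p>New text</p></page>
```

The document is exposed as a tree of nodes rather than a string of markup, which is precisely the abstraction the W3C's Dom recommendations standardise across languages.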
The two terms Electronic Medical Record (EMR) and Electronic Health Record (EHR) are used interchangeably by many stakeholders. However, IT leaders in healthcare should realize that these terms describe two different concepts. A clear understanding of the fundamental differences between the two will equip IT leaders with the tools necessary to face the implementation challenges of EMR and, in the near future, EHR.

EMR Is Different from EHR

An EMR is a comprehensive electronic record of a patient's health-related information, including pathological, radiological, and pharmaceutical information, that is shared and managed by healthcare practitioners via a secure network within a Care Delivery Organization (CDO). An EHR is an environment that connects a subset of EMRs from different CDOs within a community, state, or region via a secure and nationally standardized network. The network allows authorized clinicians and healthcare staff access to patients' health-related information across more than one healthcare organization.
The language of the future is digital, and with emerging technologies this future will include interconnected devices that automate the management of the appliances and devices we depend on every day. We're seeing the dawn of an era in which even the most mundane items communicate among themselves with little human interaction. Emerging technologies are turning the Internet of Things (IoT) into the Internet of Everything.

Your thermostat detects movement and automatically adjusts the temperature in your home; over time it senses your preferences and learns when to turn on and off. Your alarm clock sends a signal to your coffee pot, which starts brewing your morning coffee. As your car pulls into your driveway, your home disables the security system and unlocks the front door. The Internet of Things brings to life a vision of the future where the more manual aspects of life can be automated so we can enjoy more meaningful living. Through widespread advancements in smart technologies, a fully connected world is rapidly approaching.

New Technologies Mean New Threats to Data

How we connect devices is changing how we work and live. But while we can manage the intricate details of our lives with the touch of a smart device, the capabilities of these devices carry new security consequences. Unsecured smart devices undermine the convenience of the Internet of Things. Many advancements in technology today arrive without consideration of potential threats, and without the proper security framework in place, the IoT enables new methods of home and privacy invasion.

As the Internet of Things becomes more commonplace in the devices we use daily, it will increase the number of targets for data security threats. And security threats to the Internet of Things aren't theoretical—they're already happening. Recent attacks like the "smart" light bulb password leaks and hacks of Foscam baby monitors, Belkin home automation systems, and smart car systems are just the beginning. As the number of intelligent devices rises, the potential damage that could be caused by lack of security will continue to increase.

PKI Is Poised to Answer Security Needs of the Internet of Things

"What's the best way to get an SSL Certificate on…?" Because IoT is a relatively new field, device developers aren't as experienced with security principles as existing software companies. This is also true for device manufacturers, home security system providers, home automation solutions providers, and industrial systems designers—all of whom have never dealt with the threats associated with data security in networked devices.

For over 20 years, PKI-based solutions have been securely exchanging information across the Internet, and PKI usage has skyrocketed as companies protect more and more data. PKI is already being used to address problems similar to the ones the Internet of Things is likely to experience, as companies use it to secure devices like mobile phones, tablets, printers, and WiFi hotspots.

As a leading Certificate Authority and PKI provider, DigiCert can help secure Internet of Things devices. And as a leader in SSL innovation, DigiCert is uniquely situated to meet the security needs of individuals and organizations as they develop new technologies for Internet of Things devices. DigiCert already provides Internet security products and services to over 80,000 customers in more than 180 countries.
Over the last decade, DigiCert’s reputation for agile and rapid solution development to meet customer needs has made us the fastest growing Certificate Authority in the world. DigiCert has increasingly become the Certificate Authority of choice for emerging markets and for data encryption security in emerging technologies. How to Install an SSL Certificate in the IoT – Internet of Things Devices DigiCert already secures devices for the government and scientific community and is poised to quickly deliver the digital certificates IoT manufacturers need in order to ensure proper data protection for their customers. Using the existing DigiCert Certificate Management System, manufacturers can utilize automatic issuance of digital certificates for a wide array of devices. The managed certificate system can scale from small tests to full-range deployment in mass production and enables simple management throughout the certificate lifecycle. DigiCert engineers also consult with organizations to develop a custom deployment solution designed to work with their existing production infrastructures to add strong data security to their IoT devices. Internet of Things device manufacturers and solutions providers can quickly begin using DigiCert managed certificate systems for Internet of things security. Where other digital certificate providers take days or weeks to set up managed accounts, DigiCert account managers can set up customers immediately—allowing you to begin deploying certificates to secure customer data and devices. DigiCert's Chief Security Officer Jason Sabin develops innovative products and features to simplify SAAS-based digital certificate management. He oversaw Novell's Security Review Board, built their first penetration testing teams, and engineers innovative identity and access management solutions within the cloud. He has also filed over 50 patents, earning him the “Utah Genius” award.
<urn:uuid:eddd1b67-7d82-4769-be33-0e0b87d892e6>
CC-MAIN-2017-04
https://www.digicert.com/news/2014-07-29-internet-of-things.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00504-ip-10-171-10-70.ec2.internal.warc.gz
en
0.921385
1,005
2.75
3
To keep the click count high, more Web sites are offering users a mobile version of existing sites. Due to phones' small screens, cumbersome interface and differing speeds, these mobile sites, often at a URL such as http://m.sitename.com or http://www.sitename.mobi, typically display fewer and lower-resolution graphics and feature hyperlinks to only the most pertinent information. YouTube, for example, has a mobile site at http://m.youtube.com that serves up videos to users on BlackBerrys and iPhones. BlackBerrys and iPhones don't support Flash, the standard video format YouTube uses. So the mobile site streams videos in Real Time Streaming Protocol (RTSP), a format both phones do support. YouTube is just one example of a popular Web site optimized for the mobile user. Google, Yahoo, Facebook and countless others have also optimized their sites. So how has this movement to mobility changed state government Web sites? That's the question Government Technology sought to investigate. In our test, we visited every state Web site on both a desktop PC and a BlackBerry Curve to see which sites were best for smartphone browsing. State sites were graded using an A, B, C, D, F scale. These grades don't reflect an evaluation of any site were it to be visited on a PC or laptop. The grades apply strictly to how the sites function on a BlackBerry Curve. In the interest of full disclosure, our site, www.govtech.com, does not feature a version optimized for mobile phones or a text-only version.
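Sites typically steer phones to these lighter pages with a server-side check of the browser's User-Agent header. The sketch below shows the general idea as a small WSGI app in Python; the keyword list and the m.example.com hostname are illustrative assumptions, not how YouTube, Google, or any particular state site actually implements it.

```python
# Minimal sketch of user-agent-based redirection to a mobile site.
MOBILE_KEYWORDS = ("blackberry", "iphone", "ipod", "android", "windows ce", "palm")

def application(environ, start_response):
    user_agent = environ.get("HTTP_USER_AGENT", "").lower()
    path = environ.get("PATH_INFO", "/")

    if any(keyword in user_agent for keyword in MOBILE_KEYWORDS):
        # Send phones to the stripped-down mobile site (hypothetical hostname).
        start_response("302 Found", [("Location", "http://m.example.com" + path)])
        return [b""]

    # Desktop browsers get the full site.
    start_response("200 OK", [("Content-Type", "text/html")])
    return [b"<html><body>Full desktop site</body></html>"]

if __name__ == "__main__":
    from wsgiref.simple_server import make_server
    make_server("", 8000, application).serve_forever()
```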
<urn:uuid:88ccde38-661c-4b70-9093-61b583bf70f3>
CC-MAIN-2017-04
http://www.govtech.com/e-government/Mobile-Browsing-Can-Your-iPhone-Access.html?page=2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00532-ip-10-171-10-70.ec2.internal.warc.gz
en
0.903465
316
2.96875
3
This appendix is significant in that it clearly defines various types of cloud services and the types of clouds in use, as well as different aspects of cloud services. This post will focus on how this appendix describes the various aspects of cloud services. Appendix C: Service Strategy and the Cloud indicates that there are five characteristics of cloud services. These characteristics are:
- On demand
- Ubiquitous access
- Resource pooling
- Rapid elasticity
- Measured services

I'll provide a brief description of each of these characteristics, as well as how I've recently seen companies offer services that meet these characteristics.

An on demand service is a service that can be accessed when and where it's needed through either the Internet or an intranet. "On demand" literally means it's available when the user demands it. I recently completed an XSLT class. To do the assignments for this class, students activated a host with specific settings. Once an assignment was finished and uploaded to a website, the host was no longer needed, so it was deactivated until the student was ready to begin working on the next assignment. This was helpful because the class required significant configuration to make everything work. Having an on demand service like this meant that students could load a pre-configured environment as needed.

Ubiquitous access means that various types of clients can use the service. Ubiquitous access requires three things:
- The use of standard communication methods and network protocols
- Coarse-grained interfaces
- An effective model for managing security-related aspects

An example of ubiquitous access includes services that allow users to store music, photos, or other information that can then be accessed by various devices such as traditional personal computers, tablets, and smartphones.

Cloud services are often provided through a collection of physical and virtual assets that are managed dynamically according to patterns of business activity and user and customer demand. An example of resource pooling is having a collection of assets that can be quickly arranged as needed to meet customer demand. I once worked for a financial services company that pooled its hardware assets in such a way that we could utilize additional resources from the pool to quickly respond to demand for IT services.

A service that is elastic can be quickly and appropriately sized in line with customer demand. What this means is that as patterns of business activity change, the demand for services is affected. A highly elastic service is able to quickly add more resources in line with increased demand or reduce resources in response to diminished demand. Rapid elasticity has existed in IT for some time. An example of rapid elasticity I was recently exposed to involved an IT organization's network circuits. This organization moves large datasets across a portion of its network. At times it requires more bandwidth, and to meet that need it dynamically reallocates networking circuits and assets. When the need for increased bandwidth diminishes, the organization positions network assets according to normal operating criteria.

Cloud services are often purchased according to a pay-per-use, or pay-per-utilization, model. In order to offer that type of pricing arrangement, it is critical that there is some method to measure use of the service. Measuring the utilization of services is nothing new. We've been doing it for quite a while in IT. 
One example of measuring a service comes from mainframe environments, though the approach has been used in other environments as well. Many organizations will calculate what is called a "MIPS rate". "MIPS" stands for "million instructions per second", and it is the number of instructions that a computer or service can execute per second. A MIPS rate will often not only consider this but also weigh the average time various instructions take, consider the overall cost of providing the service, and ultimately produce a billing rate equal to the cost of 1 MIPS. I've worked with several organizations that billed internally for high-end computing resources in exactly this fashion. In the most recent version of ITIL, quite a bit of coverage is given to cloud-related topics and the impact of cloud technologies on IT service providers. The importance of this is to establish a common meaning for the various aspects of cloud services and how we as service providers can apply these technologies to deliver value to our customers.
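As a rough illustration of the arithmetic behind that kind of chargeback, the sketch below computes a per-MIPS billing rate from an assumed monthly operating cost and rated capacity, then bills each consumer for the capacity it used. The numbers and the simple formula are invented for illustration; real mainframe chargeback schemes weigh instruction mix and many other factors.

```python
# Illustrative chargeback: spread the monthly cost of a shared system
# across consumers in proportion to the MIPS they consumed.

MONTHLY_OPERATING_COST = 250_000.00   # hypothetical total cost of the service ($)
INSTALLED_CAPACITY_MIPS = 2_000       # hypothetical rated capacity of the system

# Measured average utilization per consuming application (hypothetical numbers).
measured_usage_mips = {
    "online-banking": 640,
    "batch-settlement": 410,
    "reporting": 150,
}

def billing_rate_per_mips(total_cost, capacity_mips):
    """Cost of providing one MIPS for the month."""
    return total_cost / capacity_mips

def monthly_bill(usage_mips, rate):
    return usage_mips * rate

rate = billing_rate_per_mips(MONTHLY_OPERATING_COST, INSTALLED_CAPACITY_MIPS)
print(f"Billing rate: ${rate:,.2f} per MIPS per month")

for app, used in measured_usage_mips.items():
    print(f"{app:20s} {used:5d} MIPS  ->  ${monthly_bill(used, rate):>10,.2f}")
```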
<urn:uuid:95800021-1756-41be-a267-89e63aa9a985>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2012/05/09/characteristics-of-cloud-services-as-defined-by-itil/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00440-ip-10-171-10-70.ec2.internal.warc.gz
en
0.961581
888
2.859375
3
Here's a riddle:
- What's hidden in plain sight
- Stretches the entire known world
- Vilified and adored
- Born from a secret US Government project
- Maintains secret societies
- Requires one to pass through a 'secret' gateway to access?

Answer: It is the infamous Darknet. This article will discuss Darknets and what these tools hold for the future of security. This discussion will share how to access and navigate the Darknet for those who are ignorant or otherwise uninitiated to this world.

What is a Darknet?

Under development for years and reportedly in use by thousands (maybe more), the definition of this 'secret' Internet has been elusive, as its evolution is user-defined and user-built. This shifting innovation landscape results in frequent new concepts on how to apply Darknet technology and further navigate the future of cloaked or otherwise 'deep' network relations. As defined by Wikipedia: "A Darknet is an overlay network that can only be accessed with specific software, configurations, or authorization, often using non-standard communications protocols and ports. The purpose of darknets is motivated by the desire to hide content, or even the existence of data and communication, from competing business or government interests. The most widespread darknets are governmental and corporate intranets, the use of which is a standard security practice nowadays, friend-to-friend networks (usually used for file sharing with a peer-to-peer connection) and privacy networks such as Tor." The notion of the Darknet (a software-defined network running 'on top' of your current network) is both intimidating and exhilarating. It is the ultimate Software Defined Network (SDN) that is already established. However real it is today, the whole concept of a 'hidden' Internet existing almost as a parasite on the public Internet is bizarre to most people. It is also true that this network is really only available to those who have gained the knowledge of how to use it, and sometimes there is a 'secret' handshake required to gain access. However, once there, the world is similar to the world of Minecraft, as users build obscure and secret communities. Many suggest that this whole idea is for illicit or illegal activity; however, when one dives into the deep ghettos of the Darknet, these assumptions can be heavily shaken and frankly abandoned.

How does one access the Darknet?

Truth be told, a lot of hooey is made about how to access the Darknet. Many suggest that this world is difficult to access and that it may require some kind of advanced technical acumen, or perhaps a special invitation from a 'club member'. The reality is that accessing the Darknet is actually quite simple, requiring little technical knowledge or invested time. In fact, not unlike Alice in Wonderland, it is so easy to access that it is possible to fall into it accidentally. First let me say that, as described earlier, a Darknet is any network riding on top of the 'surface' Internet that is meant to be obscured from the general population and not readily perceptible to most audiences. It is always software-based. One of the finest examples of how to describe the Darknet is the often-leveraged software program called The Onion Router (TOR) network. However, it may come as a shock to many that TOR is neither synonymous with the Darknet nor exclusively defines the Darknet. 
Normally, when exploring the principles of Darknet anonymity, it is customary to deeply understand the concept of an 'onion network', as this is the mental model most often used. To keep things simple, let's focus on this concept. However, please understand that many Darknets exist within popular software programs such as video games and social media applications, and even more obscure ones exist within popular business applications we could all name. In fact, much has been made public lately about the new ways in which terrorists are communicating plans through modestly developed Darknets built on video games to avoid government monitoring and interference. The US Navy originally designed TOR. In fact, the way TOR works is really straightforward. Normally, when accessing the 'everyday' Internet, your computer directly accesses the server hosting the website you are visiting. Conversely, in an onion network, this direct link is broken, and the data is instead bounced around a number of intermediaries before reaching its destination. The communication registers on the network, but the transport medium is prevented from knowing who is doing the communication. TOR is actually the brand name of a very popular program that leverages the onion router concept, wrapped up in a fairly user-friendly format and scaled to be accessible for most of today's most popular operating systems. Although technically savvy users can find a multitude of different ways to configure and use TOR, the most commonly cited method is to simply download, from the TOR website, a version of the ever-popular Firefox browser that supports TOR natively. This 'TOR' browser can then be used to surf the surface Web anonymously, giving the user added VPN-like protection against everything from advertisements, to government spying, to prying hackers, to corporate security department data collection. Ironically, TOR opens up a whole new world (even on the surface of the Internet), as it allows one to visit websites published anonymously on the Tor network, which are inaccessible to people not using Tor. This is one of the largest and most popular sections of the Darknet. TOR website addresses do not look like ordinary URLs. They are composed of random-looking strings of characters followed by .onion. Here is an example of a hidden website address: http://dppmfxaacucguzpc.onion/. That link will take you to a directory of Darknet websites if you have TOR installed, but if you do not, then it is completely inaccessible to you. Using TOR, you can find directories, wikis and free-for-all link dumps that will help you to find anything you are looking for on the Darknet. TOR is the most popular onion network, but it is not the only one. Another example is The Freenet Project, which offers similar functionality but also allows for the creation of private networks, which means that resources located on a given machine can only be accessed by people who have been manually placed on a 'friends list'. Another Darknet system (or 'privacy network') called I2P (the Invisible Internet Project) is growing in popularity. Although Tor still has many users, there seems to be a shift towards I2P, which offers a range of improvements such as integrated secure email, file storage and file sharing plug-ins, and integrated social features such as blogging and chat.

How does one secure against Darknets?

Many of my security brethren believe deeply that they can protect corporate networks from Darknets. 
However, the truth is that protecting any network from Darknets is nearly impossible, for two reasons. First, most security applications that advertise protection look at two major areas: the download and use of known Darknet executables, and virus- or APT-like signatures in the traffic itself. Both of these models have huge gaps in effectiveness. Second, the network and application behavior of most users leveraging Darknets is almost indistinguishable from that of legitimate users once the software is downloaded. How can users successfully download the applications and get around modern security controls? There are step-by-step guides for every Darknet application, including TOR, and the tasks they describe end up being trivial to accomplish. In the end, Darknets are inevitable and already present in most corporate networks. The question now becomes how to recognize that they are there and ferret them out, like a flu that arrives with a season and mysteriously disappears as the seasons change. To deny that Darknets are there is ignorance in today's world, and finding them will immeasurably assist your defense. Good hunting!
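To make the earlier point concrete, that reaching the Tor network takes little more than routing traffic through a local Tor client, here is a minimal Python sketch. It assumes a Tor daemon is already running locally with its standard SOCKS listener on port 9050 and that the requests library is installed with SOCKS support; the .onion address shown is simply the directory example mentioned above.

```python
import requests  # requires: pip install requests[socks]

# Route both HTTP and HTTPS through the local Tor client's SOCKS listener.
# The "socks5h" scheme makes Tor (not the local machine) resolve hostnames,
# which is what allows .onion names to be reached at all.
TOR_PROXIES = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

def fetch_over_tor(url, timeout=60):
    response = requests.get(url, proxies=TOR_PROXIES, timeout=timeout)
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    # The hidden-service directory cited in the article (reachable only via Tor).
    page = fetch_over_tor("http://dppmfxaacucguzpc.onion/")
    print(page[:500])
```

The same simplicity is what makes detection hard: once the proxy is in place, the application traffic looks like ordinary encrypted connections leaving the machine.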
<urn:uuid:724f5579-ca11-4a11-88a2-42cf1638900b>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2016/02/15/anonymous-networks-101-into-the-heart-of-the-darknet/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00010-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952114
1,666
2.703125
3
The Phishing Problem Phishing is an attempt to fraudulently acquire sensitive information (such as usernames, passwords, and credit card details) by masquerading as a trustworthy entity in an electronic communication. Phishing is typically carried out by email and often directs users to enter details at a fake website. The individuals behind phishing send out millions of emails in the hope that a few recipients will act on them. Any email address that has been made public (in forums, in newsgroups, or on a website) is susceptible to phishing. Mitigating the threats posed by phishing requires a combination of solutions-based, policy-based, and behavioral-based controls. For example, the Cisco Context Adaptive Scanning Engine (CASE) reviews sender reputation, examines the context of the entire message, and filters more accurately than traditional spam-screening techniques. Because security is a never-ending race against threats, it is important to analyze your security infrastructure on a regular basis. Few factors are as important as how often the technology updates itself. The Cisco Email Security Solution Cisco gateway security appliances provide the first line of defense in a comprehensive security approach. Using data from Cisco SenderBase Network, Cisco Email Security technology examines: - What content a message contains - How the message is constructed - Who is sending the message - Where the message's call to action takes you Cisco Email Security technology provides both proactive and reactive protection. Measures such as DomainKeys Identified Mail (DKIM) signing clearly identify mail sent from your organization. At the same time, automatic updates to signature files and preventive security defenses provide the latest protection and information on emerging threats. Multiple built-in antiphishing features include: By combining these elements, Cisco's antiphishing features stop the broadest range of threats with industry-leading accuracy. Cisco Email Security products can protect your infrastructure not only from today's threats but also from those certain to evolve in the future.
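Measures like DKIM can also be checked outside a dedicated appliance. As a rough illustration of what "verifying a DKIM signature" involves, the sketch below uses the open-source dkimpy library; its use here is an assumption made for the example and is not part of Cisco's products. Note that a valid signature only ties the message to the signing domain; it does not by itself prove that domain is trustworthy.

```python
# pip install dkimpy
import dkim

def dkim_passes(raw_message_bytes: bytes) -> bool:
    """Return True if the message carries a DKIM signature that verifies.

    Verification fetches the signing domain's public key from DNS, so this
    needs network access and a message with a DKIM-Signature header.
    """
    try:
        return dkim.verify(raw_message_bytes)
    except Exception:
        # Malformed headers, missing DNS records, etc. count as "not verified".
        return False

if __name__ == "__main__":
    with open("suspect_message.eml", "rb") as handle:  # hypothetical saved message
        message = handle.read()

    if dkim_passes(message):
        print("DKIM verified: the signed headers and body were not altered in transit.")
    else:
        print("No valid DKIM signature: treat the claimed sender with suspicion.")
```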
<urn:uuid:f9d0496a-2db8-4199-8813-2b90a9ae147b>
CC-MAIN-2017-04
http://www.cisco.com/c/en/us/products/security/email-security-appliance/phishing_index.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00496-ip-10-171-10-70.ec2.internal.warc.gz
en
0.92068
407
2.765625
3
In the Beginning

The history of grid computing at SDSC can be traced to 1985, when the National Science Foundation decided to start a supercomputing centers program to make supercomputers available to academic researchers. "Before then, if you wanted to use supercomputers, you really needed to be in defense or in the Department of Energy and doing classified work," Papadopoulos said. "That kind of computing power was not available to the masses."

By 2001, network throughput had increased from 155M bps to 655M bps. SDSC's TeraGrid project was then introduced with a 40G-bit backplane network. Today, all SDSC research is funded by grants and awards. However, in its early days, the center was allowed to resell about 10 percent of its spare cycles in an effort to raise more funds for research.

That program existed for 12 years, and in 1997 a new program, the Partnership for Advanced Computational Infrastructure, was started, aided by the growth of broadband, Papadopoulos said. "Part of the reason it started was that networks went from [56K-bps] networks that interconnected the centers in 1985 to, in 1994, the BBNS [BroadBand Networking Services] at 45M bps. In 1997, the centers were connected at 155M bps. It was enough of a change for a new program to be started. The supercomputer centers no longer had to act as islands."
<urn:uuid:f6f4b461-2a14-4979-86a6-2291af43ec74>
CC-MAIN-2017-04
http://www.eweek.com/c/a/IT-Infrastructure/Grids-Conquest-of-Space/1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00524-ip-10-171-10-70.ec2.internal.warc.gz
en
0.972605
305
3.34375
3
As they imagine the Internet of 2020, computer scientists across the U.S. are starting from scratch and re-thinking everything: from IP addresses to DNS to routing tables to Internet security in general. They're envisioning how the Internet might work without some of the most fundamental features of today's ISP and enterprise networks. To borrow from John Lennon: Imagine there's no latency, no spam or phishing, a community of trust. Imagine all the people, able to get online. This is the kind of utopian network architecture that leading Internet engineers are dreaming about today. As they imagine the Internet of 2020, computer scientists across the country are starting from scratch and re-thinking everything: from IP addresses to DNS to routing tables to Internet security in general. They're envisioning how the Internet might work without some of the most fundamental features of today's ISP and enterprise networks. Their goal is audacious: To create an Internet without so many security breaches, with better trust and built-in identity management. Researchers are trying to build an Internet that's more reliable, higher performing and better able to manage exabytes of content. And they're hoping to build an Internet that extends connectivity to the most remote regions of the world, perhaps to other planets. This high-risk, long-range Internet research will kick into high gear in 2010, as the U.S. federal government ramps up funding to allow a handful of projects to move out of the lab and into prototype. Indeed, the United States is building the world's largest virtual network lab across 14 college campuses and two nationwide backbone networks so that it can engage thousands – perhaps millions – of end users in its experiments. "We're constantly trying to push research 20 years out," says Darleen Fisher, program director of the National Science Foundation's Network Technology and Systems (NeTS) program. "My job is to get people to think creatively potentially with high risk but high payoff. They need to think about how their ideas get implemented, and if implemented how it's going to [affect] the marketplace of ideas and economics." The stakes are high. Some experts fear the Internet will collapse under the weight of ever-increasing cyber attacks, an increasing demand for multimedia content and the requirements for new mobile applications unless a new network architecture is developed. The research comes at a critical juncture for the Internet, which is now so closely intertwined with the global economy that its failure is inconceivable. As more critical infrastructure — such as the banking system, the electric grid and government-to-citizen communications — migrate to the Internet, there's a consensus that the network needs an overhaul. At the heart of all of this research is a desire to make the Internet more secure. "The security is so utterly broken that it's time to wake up now and do it a better way," says Van Jacobson, a Research Fellow at PARC who is pitching a novel approach dubbed content-centric networking. "The model we're using today is just wrong. It can't be made to work. We need a much more information-oriented view of security, where the context of information and the trust of information have to be much more central." NSF ramps up research Futuristic Internet research will reach a major milestone as it moves from theory to prototype in 2010. 
NSF plans to select anywhere from two to four large-scale research projects to receive grants worth as much as $9 million each to prototype future Internet architectures. Bids will be due in the first quarter of 2010, with awards expected in June. "We would like to see over-arching, full-scale network architectures," Fisher says. "The proposals can be fairly simple with small, but profound changes from the current Internet, or they can be really radical changes.'' NSF is challenging researchers to come up with ideas for creating an Internet that's more secure and more available than today's. They've asked researchers to develop more efficient ways to disseminate information and manage users' identities while taking into account emerging wireless and optical technologies. Researchers also must consider the societal impacts of changing the Internet's architecture. NSF wants bidders to consider "economic viability and demonstrate a deep understanding of the social values that are preserved or enabled by whatever future architecture people propose so they don't just think as technicians," Fisher says. "They need to think about the intended and unintended consequences of their design." Key to these proposals is how researchers address Internet security problems. "One of the things we're really concerned about is trustworthiness because all of our critical infrastructure is on the Internet," Fisher says. "The telephone systems are moving from circuits to IP. Our banking system is dependent on IP. And the Internet is vulnerable." NSF says it won't make the same mistake today as was made when the Internet was invented, with security bolted on to the Internet architecture after-the-fact instead of being designed in from the beginning. "We are not going to fund any proposals that don't have security expertise on their teams because we think security is so important," Fisher says. "Typically, network architects design and security people say after the fact how to secure the design. We're trying to get both of these communities to stretch the way they do things and to become better team players." The latest NSF funding is a follow-on to the NSF's Future Internet Design (FIND) efforts, which asked researchers to conduct research as if they were designing the Internet from scratch. Launched in 2006, NSF's FIND program has funded around 50 research projects, with each project receiving $500,000 to $1 million over three to four years. Now, the NSF is narrowing these 50 research projects down to a handful of leading contenders. World's largest Internet testbed The Internet research projects chosen for prototyping will run on a new virtual networking lab being built by BBN Technologies. The lab is dubbed GENI for the Global Environment for Network Innovations. The GENI program has developed experimental network infrastructure that's being installed in U.S. universities. This infrastructure will allow researchers to run large-scale experiments of new Internet architectures in parallel with -- but separated from -- the day-to-day traffic running on today's Internet. "One of the key goals of GENI is to let researchers program very deep into the network," says Chip Elliott, GENI Project Director. "When we use today's Internet, you and I can buy any application program that we want and run it….GENI takes this idea several steps further. It allows you to install any software you want deep into the network anywhere you want. You can program switches and routers." 
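As a loose illustration of what "programming deep into the network" can mean, the sketch below models the kind of match-action rule a researcher might push into a programmable switch: packets matching certain header fields get researcher-defined behavior instead of the vendor's built-in forwarding. The data structures and field names here are invented for illustration; they are not GENI's or any vendor's actual interface.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Match:
    """Header fields a rule matches on; None means 'wildcard'."""
    in_port: Optional[int] = None
    eth_type: Optional[int] = None
    dst_ip: Optional[str] = None

@dataclass
class FlowRule:
    match: Match
    action: Callable[[dict], str]   # researcher-supplied behavior
    priority: int = 0

class ProgrammableSwitch:
    """Toy switch whose forwarding behavior is entirely rule-driven."""
    def __init__(self):
        self.rules = []

    def install(self, rule: FlowRule):
        self.rules.append(rule)
        self.rules.sort(key=lambda r: r.priority, reverse=True)

    def handle(self, packet: dict) -> str:
        for rule in self.rules:
            m = rule.match
            if ((m.in_port is None or m.in_port == packet["in_port"]) and
                (m.eth_type is None or m.eth_type == packet["eth_type"]) and
                (m.dst_ip is None or m.dst_ip == packet["dst_ip"])):
                return rule.action(packet)
        return "send-to-controller"   # unmatched traffic goes to the experiment's controller

# Example: an experimenter mirrors all IPv4 traffic for 10.0.0.7 to a monitoring port.
switch = ProgrammableSwitch()
switch.install(FlowRule(Match(eth_type=0x0800, dst_ip="10.0.0.7"),
                        action=lambda pkt: "mirror-to-port-9", priority=10))
print(switch.handle({"in_port": 1, "eth_type": 0x0800, "dst_ip": "10.0.0.7"}))
```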
BBN was chosen to lead the GENI program in 2007 and has received $45 million from the NSF to build it. BBN received an $11.5 million grant in October to install GENI-enabled platforms on 14 U.S. college campuses and on two research backbone networks: Internet 2 and the National Lambda Rail. These installations will be done by October 2010. "GENI won't be in a little lab on campus. We'd like to take the whole campus network and allow it to run experimental research in addition to the Internet traffic," Elliott says. "Nobody has done this before. It'll take about a year." The GENI project involves enabling three types of network infrastructure to handle large-scale experiments. One type uses the OpenFlow protocol developed by Stanford University to allow deep programming of Ethernet switches from vendors such as HP, Arista, Juniper and Cisco. Another type of GENI-enabled infrastructure is the Internet 2 backbone, which has highly programmable Juniper routers. And the third type of GENI-enabled infrastructure is a WiMAX network for testing mobile and wireless services. Once these GENI-enabled infrastructures are up and running, researchers will begin running large-scale experiments on them. The first four experiments have been selected for the GENI platforms, and they will test novel approaches to cloud computing, first responder networks, social networking services and inter-planetary communications. "All of these experiments are beyond the next-generation Internet," Elliott says. "All of these efforts are targeting the Internet in 10 to 15 years." The benefit of GENI for these projects is that researchers can test them on a very large scale network instead of on a typical testbed. That's why BBN and its partners are GENI-enabling entire campus networks, including dorm rooms. "What's distinctive about GENI is its emphasis on having lots and lots of real people involved in the experiments," Elliott says. "Other countries tend to use traffic generators….We're looking at hundreds or thousands or millions of people engaged in these experiments." Another key aspect of GENI is that it will be used to test new security paradigms. Elliott says the GENI program will fund 10 security-related efforts between now and October 2010. "If I were rank ordering the experiments we are doing, security is the most important," Elliott says. "We need strong authentication of people, forensics and audit trails and automated tools to notice if [performance] is going south." Elliott says GENI will be the best platform for large-scale network research that's been available in 20 years. "You could argue that the Arpanet back in the '70s and early '80s was like this. People simultaneously did research and used the network," Elliott says. "But at some point it got impossible to do experimentation. For the past 20 years or so we have not had an infrastructure like this." Stanford protocol drives GENI platform One idea that GENI is testing is software-defined networking, a concept that is the opposite of today's hardware-driven Internet architecture. Today's routers and switches come with software written by the vendor, and customers can't modify the code. Researchers at Stanford University's Clean Slate Project are proposing — and the GENI program is trialing — an open system that will allow users to program deep into network devices. "The people that buy large amounts of networking equipment want less cost and more control in their networks. 
They want to be able to program networking devices directly," says Guido Appenzeller, head of the Clean Slate Lab. Stanford's answer to this problem is an alternative architecture that removes the intelligence from switches and routers and places these smarts in an external controller. Users can program the central controller using Stanford's OpenFlow, which was developed with NSF funding. "Juniper and Cisco are struggling with lots of customer demand for more flexibility in networks," Appenzeller says. "Juniper has done some steps in that direction with its SDK on top of their switches and routers…But it's harder to do that because of the issues of real-time control. It's easier to do this in an external controller.'' If software-defined networking were to become widespread, enterprises would have more choice in terms of how they buy networking devices. Instead of buying hardware and software from the same vendor, they'd be able to mix and match hardware and software from different vendors. Stanford has demonstrated OpenFlow protocol running on switches from Cisco, Juniper, HP and NEC. With OpenFlow, an external controller manages these switches and makes all the high-level decisions. Appenzeller says the OpenFlow architecture has several advantages from an Internet security perspective because the external controller can view which computers are communicating with each other and make decisions about access control. "OpenFlow is about changing how you innovate in your network," Appenzeller says. "We have several large Internet companies looking at this. We're pretty optimistic that we'll see some deployments." Stanford anticipates publishing Version 1.1 of OpenFlow by early 2010. Already deployed in Stanford's computer sciences buildings, OpenFlow will be installed in seven universities and two research backbone networks through the GENI program build-out in 2010. Tackling routing table growth Among the Internet architectures that will run on the GENI infrastructure is a research project out of Rochester Institute of Technology that is trying to address the issue of routing table growth. Dubbed Floating Cloud Tiered Internet Architecture, the RIT project is one of the few NSF-funded future Internet research projects that has software up and running as well as a corporate sponsor. RIT's Floating Cloud concept was designed to address the problem of routing scalability. At issue is the 300,000 routing table entries that keep growing as more enterprises use multiple carriers to support their network infrastructure. As the routing table grows, the Internet's core routers need more computational power and memory. With the Floating Cloud approach, ISPs would not have to keep buying larger routers to handle ever-growing routing tables. Instead, ISPs would use a new technique to forward packets within their own network clouds. RIT is proposing a flexible, peering structure that would be overlayed on the Internet. The architecture uses forwarding across network clouds, and the clouds are associated with tiers that have number values. When packets are sent across the cloud, only their tier values are used for forwarding, which eliminates the need for global routing within a cloud. "There will be no routers containing the whole routing table. The routing table is going to be residing within the cloud. To forward information across the cloud, you just use the tier value and send it across," explains Dr. Nirmala Shenoy, a professor in the network security and systems administration department at RIT. 
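To illustrate the tier-value idea Shenoy describes, here is a deliberately simplified sketch: each cloud knows only its own tier number and its neighbors' tier numbers, and a packet is handed along by comparing tier values rather than by consulting a global routing table. This is an illustration of the concept only, with invented names and no loop protection; it is not RIT's actual Floating Cloud software.

```python
class NetworkCloud:
    """Toy model of a cloud that forwards by tier value, not by global routes."""

    def __init__(self, name: str, tier: int):
        self.name = name
        self.tier = tier
        self.neighbors = []          # directly peered clouds

    def peer_with(self, other: "NetworkCloud"):
        self.neighbors.append(other)
        other.neighbors.append(self)

    def forward(self, packet: dict, hops: int = 0):
        dest_tier = packet["dest_tier"]
        if self.tier == dest_tier and self.name == packet["dest_cloud"]:
            print(f"{self.name}: delivered after {hops} hops")
            return
        # Pick the neighbor whose tier value moves the packet toward the destination tier.
        next_hop = min(self.neighbors, key=lambda n: abs(n.tier - dest_tier))
        print(f"{self.name} (tier {self.tier}) -> {next_hop.name} (tier {next_hop.tier})")
        next_hop.forward(packet, hops + 1)

# A tiny three-tier topology: a campus cloud, a regional ISP, and a backbone.
campus = NetworkCloud("campus-A", tier=3)
regional = NetworkCloud("regional-ISP", tier=2)
backbone = NetworkCloud("backbone", tier=1)
campus.peer_with(regional)
regional.peer_with(backbone)

campus.forward({"dest_cloud": "backbone", "dest_tier": 1, "payload": "hello"})
```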
The Floating Cloud approach runs over Multi-Protocol Label Switching (MPLS). Shenoy says it completely bypasses current routing protocols within a particular cloud, which is why she refers to it as "Layer 2.5." RIT has been running its Floating Cloud software on a testbed of 12 Linux systems. Shenoy is excited about testing the software on the GENI-enabled platforms operated by Internet 2. "Twelve systems is not the Internet," Shenoy says. "We've been talking to the GENI project people about a more realistic set up." RIT also is collaborating with Level 3 Communications, an ISP that plans to test the Floating Cloud architecture in its backbone network. Shenoy sees many benefits for enterprise network managers in the Floating Cloud approach. "This architecture is affording the flexibility of a defined network cloud, which can be defined to any network granularity," Shenoy says. "What happens in the cloud, is [the responsibility] of the network manager. This cloud structure introduces more economy, or you can make it more granular if you want better control and management." Shenoy says the Floating Cloud approach has some security advantages. "The very fact that I'm going to have control on the cloud size is going to give me more control and management and should positively impact security," Shenoy says. "Also, the fact that I don't have these huge global routing tables, and my packets shouldn't get shunted all over the place. Instead I will have more structured forwarding, and that should impact security." Sometimes-on mobile wireless networks Researchers from Howard University in Washington, D.C. will be experimenting with a new type of mobile wireless network on the GENI platform. The group's research is focused on networks that aren't connected all the time – so called opportunistic networks, which have intermittent network connectivity. "In this kind of opportunistic network…sometimes you are out of the signal range and you cannot talk to the Internet or talk to other mobile devices," explains Jiang Li, an associate professor in the Department of Systems and Computer Science at Howard University. "One example is driving a car on a highway in a remote area." Opportunistic networks would use peer-to-peer communications to transfer communications if the network is unavailable. For example, you may want to send an e-mail from a car in a remote location without network access. With an opportunistic wireless network, your PDA might send that message to a device inside a passing vehicle, which might take the message to a nearby cell tower. Li sees this type of opportunistic network architecture as useful for data transmission and could be a complement to cellular networks. "The most fundamental difference about this architecture is that the network has intermittent connections, as compared to the Internet which assumes you are connected all of the time," Li says. Li says opportunistic networks involve rethinking "everything" about the Internet's architecture. "Seventy to eighty percent of the protocols may have to be redesigned because the current Internet assumes that a connection is always there," Li says. "If the connection is gone for a minute, all of the protocols will be broken." Li says opportunistic mobile networks are useful for emergency response if the network infrastructure is wiped out by a disaster or is unavailable for a period of time. 
Li's research team also has an NSF grant to study the network management aspects of opportunistic networks, which may have long delays between when a message is sent and when it is received. Li says these types of delay-tolerant network management schemes would be useful in developing countries such as India, which isn't covered by traditional wireless infrastructure such as cell towers. "We're trying to extend the current networks to a much broader geographic area," Li says. "Now, if you want to get access to the Internet, you have to have infrastructure, at least a cell tower. If you look at the map to see what's covered by cell towers…we still have lots of red areas that aren't covered." The Facebook-style Internet Davis Social Links uses the format of Facebook — with its friends-based ripple effect of connectivity — to propagate connections on the Internet. That's how it creates connections based on trust and true identities, according to S. Felix Wu, a professor in the Computer Science Department at UC Davis. "If somebody sends you an e-mail, the only information you have about whether this e-mail is valuable is to look at the sender's e-mail which can be faked and then look at the content," Wu says. "If you could provide the receiver of the e-mail with the social relationship with the sender, this will actually help the receiver to set up certain policies about whether the message should be higher or lower priority." Davis Social Links creates an extra layer in the Internet architecture: on top of the network control layer, it creates a social control layer, which explains the social relationship between the sender and the receiver. "Our social network represents our trust and our interest with other parties," Wu explains. "That information should be combined together with the packets we are sending each other." Davis Social Links currently runs on Facebook, but researchers are porting it to the GENI platform. Although based on the popular Facebook application, Davis Social Links represents a radical change over today's Internet. The current Internet is built upon the idea of users being globally addressable. Davis Social Links replaces that idea with social rather than network connectivity. "This is revolutionary change," Wu says. "One of the fundamental principles of today's Internet is that it provides global connectivity. If you have an IP address, you by default can connect to any other IP address. In our architecture, we abandon that concept. We think it's not only unnecessary but also harmful. We see [distributed denial-of-service] attacks as well as some of the spamming activity as a result of global connectivity." Davis Social Links also re-thinks DNS. While it still uses DNS for name resolution, Davis Social Links doesn't require the result of resolution to be an IP address or any unique routable identity. Instead, the result is a social path toward a potential target. "The social control layer interface under Davis Social Links is like a social version of Google. You type some keywords…and the social Google will give you a list of pointers to some of the social content matching the keywords and the social path to that content," Wu explains. Wu suggests that it's better and safer to have connectivity in the application layer than in the network layer. 
Instead of today's sender-oriented architecture – where a person can communicate with anyone whose IP address or e-mail address is known — Davis Social Links uses a social networking system that requires both sides to have a trust relationship and to be willing to communicate with each other. "As humans, we have very robust social networks. With the idea of six degrees of separation, it's very realistic that you will be able to find a way communicate with another," Wu says. Another radical proposal to change the Internet infrastructure is content-centric networking, which is being developed at PARC. This research aims to address the problem of massive amounts of content — increasingly multimedia — that exists on the Internet. Instead of using IP addresses to identify the machines that store content, content-centric networking uses file names and URLs to identify the content itself. The underlying idea is that knowing the content users want to access is more important than knowing the location of the machines used to store it. "There are many exabytes of content floating around the 'Net…but IP wasn't designed for content," Jacobson explains. "We're trying to work around the fact that machines-talking-to-machines isn't important anymore. Moving content is really important. Peer-to-peer networks, content distribution networks, virtual servers and storage are all trying to get around this fact." Jacobson proposes that content — such as a movie, a document or an e-mail message — would receive a structured name that users can search for and retrieve. The data has a name, but not a location, so that end users can find the nearest copy. In this model, trust comes from the data itself, not from the machine it's stored on. Jacobson says this approach is more secure because end users decide what content they want to receive rather than having lots of unwanted content and e-mail messages pushed at them. "Lots of relay attacks and man-in-the-middle attacks are impossible with our approach. You can get rid of spam," Jacobson says. "This is because we're securing the content itself and not the wrapper it's in." Jacobson says content-centric networking is a better fit for today's applications, which require layers of complicated middleware to run on the Internet's host-oriented networking model. He also says this approach scales better when it comes to having millions of people watching multimedia content because it uses broadcast, multi-point communications instead of the point-to-point communications built into today' s Internet. More than anything, content-centric networking hopes to improve the Internet's security posture, Jacobson says. "TCP was designed so it didn't know what it was carrying. It didn't know what the bits were in the pipe," Jacobson explains. "We came up with a security model that we'll armor the pipe, or we'll wrap the bits in SSL, but we still don't know the bits. The attacks are on the bits, not the pipes carrying them. In general, we know that perimeter security doesn't work. We need to move to models where the security and trust come from the data and not from the wrappers or the pipes." PARC has an initial implementation of content-centric networking up and running, and released early code to the Internet engineering community in September. Jacobson says he hopes content-centric networking will be one of the handful of proposals selected by the NSF for a large-scale experiment on the GENI platform. 
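As a rough sketch of the content-centric idea (request data by name, verify the data itself, and serve it from whichever node holds a copy), consider the toy cache below. The naming scheme and the hash-based check are stand-ins chosen for illustration; PARC's actual CCN protocol and its released code bind names, data, and publisher keys with real cryptographic signatures and are considerably more involved.

```python
import hashlib

class ContentStore:
    """Toy named-content cache: consumers ask for names, not for host addresses."""

    def __init__(self):
        self._store = {}   # name -> (data, digest)

    def publish(self, name: str, data: bytes):
        # The "signature" here is just a hash; a real design signs the
        # name-plus-data binding with the publisher's key.
        self._store[name] = (data, hashlib.sha256(data).hexdigest())

    def interest(self, name: str):
        """Serve a named object from the nearest copy, verifying the content itself."""
        if name not in self._store:
            return None                      # would be forwarded toward a publisher
        data, digest = self._store[name]
        if hashlib.sha256(data).hexdigest() != digest:
            raise ValueError("content failed verification")  # trust lives in the data
        return data

node = ContentStore()
node.publish("/parc/videos/demo/seg0", b"first segment of the video")

# Any consumer can ask for the name; where the bytes happen to live is irrelevant.
print(node.interest("/parc/videos/demo/seg0"))
```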
Jacobson says the evolution to content-centric networking would be fairly painless because it would be like middleware, mapping between connection-oriented IP below and the content above. The approach uses multi-point communications and can run over anything: Ethernet, IP, optical or radio. Will the Internet of 2020 include content-centric networking? Jacobson says he isn't sure. But he does believe that the Internet needs a radically different architecture by then, if for no other reason than to improve security. "Security should be coming out of the Web of interactions between information," Jacobson says. "Just like we're using the Web to get information, we should be using it to build up our trust. You can make very usable, very robust security that way, but we keep trying to patch up the current 'Net."
<urn:uuid:55159f85-c845-4163-bfb6-565d3be442d0>
CC-MAIN-2017-04
http://www.networkworld.com/article/2238717/wireless/2020-vision--why-you-won-t-recognize-the--net-in-10-years.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00248-ip-10-171-10-70.ec2.internal.warc.gz
en
0.956762
5,017
2.78125
3
Evolution of the File Property Timestamp - By Raymond Chen In Windows 95, the timestamp information on the file property sheet showed the various timestamps in the local time zone, even if that wasn’t the time zone active at that particular moment in time. For example, right now in Redmond we are on Pacific Standard Time, so Windows 95 would show all timestamps relative to Pacific Standard Time. As a result, the timestamp on a file you created at noon on July 4 will show up as having been created at 11:00 a.m. when you view it during the winter, because noon Pacific Daylight Time refers to the same moment in time as 11:00 a.m. Pacific Standard Time. Mind you, Redmond was not using Pacific Standard Time on July 4, but the information is technically correct (if intuitively wrong). This treatment of times that belong to "the other side" of the Daylight Saving time boundary (what outside the United States often goes by the name Summer Time) is a notable discrepancy between managed and unmanaged code. Unmanaged code historically performs conversion between coordinated universal time (UTC) and local time based on the local time zone at the time the conversion is made, rather than on the time zone that was in effect for the time being converted. One reason for this is that basing the time zone on the time being converted can result in ambiguity or times that cannot be converted. For example, during the transition from standard time to daylight time, the clock jumps ahead from 2:00 a.m. to 3:00 a.m. in the United States. A local time recorded as 2:30 a.m. would have no corresponding UTC time because there was no such thing as 2:30 a.m. locally. Even worse, a local time recorded as 2:30 a.m. during the transition from daylight time to standard time would be ambiguous, because during the transition from daylight time to standard time, the clock regresses one hour, and local time 2:30 a.m. takes place twice. Another reason for not using the time being converted to choose the time zone is that in many cases the OS does not have the information necessary to know what time zone was in effect at some point in the past. The rules for changing between standard time and daylight time are subject to change by local governments. Furthermore, there have been countries (such as Israel and Brazil) that, until recently, did not follow a predictable set of rules but instead decided on the date for the change on a case-by-case basis. Given a point in time in the past or in the future, it is difficult (or in the case of the future, impossible) to determine with certainty what time zone is in effect at that point in time. (And good luck getting anything meaningful for points in time that predate time standardization!) On the other hand, the System.DateTime class in managed code does its best to determine what time zone was in effect for the time being converted, opting to present something more intuitively correct, but which still may fail when presented with the unusual cases that give unmanaged code the heebie-jeebies. Back to the evolution. In Windows 2000, the formatting of timestamps was modified slightly to say Yesterday, Today or Tomorrow for dates that were within one day of the current date. For example, instead of saying Created: Sunday, February 14, 2009, 7:00:00 AM, the property sheet would say Created: Today, February 14, 2009, 7:00:00 AM when viewed on Feb. 14. Nothing essential changed; this was just a little visual tweak to make things prettier. 
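The conversion difference described above is easy to demonstrate with any library that carries historical time zone rules. The Python sketch below (using the standard zoneinfo module, available since Python 3.9, which reads the IANA database) converts the same UTC instant, noon PDT on July 4, two ways: once with the rules in force on that date, and once with a fixed standard-time offset, which is roughly what converting "with whatever offset applies when you look" amounts to in winter. This is an analogy to the Windows behavior, not a reimplementation of it.

```python
from datetime import datetime, timezone, timedelta
from zoneinfo import ZoneInfo

pacific = ZoneInfo("America/Los_Angeles")

# A file created at noon Pacific Daylight Time on July 4, stored (as NTFS does) in UTC.
created_utc = datetime(2009, 7, 4, 19, 0, tzinfo=timezone.utc)   # 12:00 PDT == 19:00 UTC

# Conversion using the rules in effect at that moment (what managed code aims for):
historical = created_utc.astimezone(pacific)

# Conversion using the offset in effect when you are looking, e.g. during the winter,
# approximated here by applying Pacific Standard Time (UTC-8) unconditionally:
viewed_in_winter = created_utc.astimezone(timezone(timedelta(hours=-8), "PST"))

print("Rules for the date itself:", historical.strftime("%Y-%m-%d %I:%M %p %Z"))        # 12:00 PM PDT
print("Offset at viewing time:   ", viewed_in_winter.strftime("%Y-%m-%d %I:%M %p %Z"))  # 11:00 AM PST
```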
In Windows Vista, the time portion of the timestamp was made a bit friendlier: If the time and date refer to the current day, then the time portion of the timestamp is given in relative notation: Created: Today, February 14, 2009, 15 minutes ago. And most recently, in Windows 7, the file property sheet General page shows timestamps based on the time zone that was in effect at your current location when the file was created rather than based on the current time, bringing the property sheet more in alignment with the way managed code presents timestamps. Finally, the file you created at noon on July 4 will show as having been created at noon even during the winter months. This change takes advantage of the new so-called Dynamic Time Zones, which permit the Daylight Saving Time rules for a time zone to vary from year to year. This allows Windows to know that a file created on Oct. 30, 2006, in Redmond was created during Pacific Standard Time, whereas a file created on Oct. 30, 2007, was created during Pacific Daylight Time, thanks to the change in Daylight Saving Time rules in the United States that took effect in 2007. Note, however, that the historical information that comes with Windows does not go back to years before 1987, when the rules were different still, so timestamps prior to 1987 may still end up converted incorrectly. The result of all these changes to the interpretation of timestamps is that the same file, when viewed from different versions of Windows, may end up showing values that differ by an hour either way. The timestamp itself hasn't changed; just the way Windows presents it. [Editor's Note: This article originally appeared in TechNet Magazine.] Raymond Chen's Web site, "The Old New Thing," and identically titled book (Addison-Wesley, 2007) deal with Windows history, Win32 programming, and Krashen's Comprehensible Input Hypothesis.
<urn:uuid:909f0464-7a30-4df8-8acc-0b4452de9bb2>
CC-MAIN-2017-04
https://mcpmag.com/articles/2010/07/06/file-property-timestamp.aspx?admgarea=BDNA
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00368-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959473
1,157
3.09375
3
Note: This article originally appeared in Digital Communities magazine. A set of technologies called Web 2.0 is transforming the Internet. Web sites such as YouTube, MySpace and Facebook, in addition to really simple syndication (RSS) feeds, blogs and wikis attract hundreds of millions of people. Yet this Web 2.0 transformation of government is just beginning. How might it occur? Web 2.0 and government are both about building community and connecting people. Web 2.0 technologies are transforming the Internet into connected communities that allow people to interact with one another in new and distinct ways. Government is, by its very nature, all about community. Government is a group of people - citizens or constituents - doing together what they can't do as individuals or otherwise obtain from private business. I believe most of us wouldn't want individuals or private businesses to manage street networks, maintain parks or operate police and fire departments. In the end, government is community. Therefore, Web 2.0 - community-building tools - seems tailor-made for government, at least theoretically. Potential Web 2.0 Uses How can government use Web 2.0 tools to make a better community? Here are some ideas and examples: MySpace, Facebook, LinkedIn and Second Life have truly broken new ground. These online spaces allow individuals to establish a new presence for interacting with members of their online community. Government also promotes small groups in communities, such as anti-crime block watches, neighborhood disaster recovery groups and legislative districts. Having secure social networking sites for community groups to interact, learn from each other and educate themselves has great promise. Moderated blogs with interactive comments are potentially a good way for elected officials to garner input from constituents and interact with them. They might supplement communities' public meetings. We have many kinks to work out because too many blogs - and public meetings - are monopolized by a few citizen activists. And moderating a blog requires much time and effort for a government agency. Video and Images YouTube is the new groundbreaker in this arena. Governments could use such Web sites to encourage residents and visitors to post videos of their favorite places to visit in the jurisdiction, special events and dangerous places (e.g., intersections, sidewalks and overgrown vegetation). For instance, it could help build community if video was posted of the Northwest Folklife Festival - a popular music and crafts festival held at the Seattle Center each Memorial Day weekend. People could share videos and post "sound off" video bites with their opinions about certain subjects. The Seattle Channel, a local government access TV station, often videotapes people on the street with questions for their elected officials, and then poses those questions online in Ask the Mayor or City Inside/Out: Council Edition. Online surveys via Zoomerang and SurveyMonkey are everywhere. Surveys could help elected officials gauge the mood of a city's residents on a range of topics. Like all online surveys, however, activists and special interest groups can rig the results by voting early and often. Such surveys won't be statistically valid. It might be possible to combine online surveys with traditional surveying techniques (e.g., calling residents by phone, which is itself becoming less valid as people shed their published, landline phone numbers in favor of cell phones). 
Wikis: Internal Processes Wikis certainly hold great promise for government internally. We divide government into departments, each with unique functions. Departments tend to be siloed groups, so cross-departmental communication is difficult. Wikis, or products like Microsoft SharePoint, could be used to standardize many business processes, functions and terms across the entire government. Simple processes, such as "how to process a public disclosure request" and "how to pay a vendor invoice," are inclined to documentation and improvement through wiki. Certainly such procedures can be documented and put on a government intranet's static Web pages. But the advantage of a wiki is that many more employees are involved in creating and editing the content, so the process happens faster and employees actually read and use it because they're involved from the start. Wikis: External Processes I believe there are a couple fundamental uses for external wikis, and one is processes for interacting with government. How do you recycle a computer? What do you do if a refrigerator is found on a boulevard's median? How do citizens apply for and use food stamps? This information can be posted online via public Web pages maintained by government employees. But the advantage of a wiki is that the "whole story" of questions like these can be much broader than a single government agency. In the computer recycling example, many people have many ideas; some are involved in recycling, others are environmentalists and there are employees from multiple agencies who might contribute ideas to "recycling a computer." An interactive wiki will give new dimensions to the ideas. Wikis: External Deciphering Most government workers have at least some idea of how to build a budget and what their own budget contains. But for constituents and residents, government budgets are just gobbledygook. A budget wiki could not only foster voter understanding, but might also provide meaningful input to it, rather than having special interest groups come to the table and demand funding for their unique programs. Individuals inside and outside government could contribute to editing those kinds of wikis. Governments are fundamentally about geography - the city limits or county lines. Much of what government does is geographically based through functions like providing water and solving crimes. Data mash-ups against maps or other information can give new insights. One specific example is mapping 911 calls of fires and medical emergencies in Seattle on My Neighborhood Map, a city-run Web guide to city services. Though it isn't technically Web 2.0, next-generation 911 has many possibilities. Nowadays if you need police, fire or emergency medical services, you call 911. But with cell phone cameras, cheap video cameras, text messages and other ways to connect and interact technologically, 911 has the potential to do much more. The day will come when someone will witness a crime, snap a photo of the criminal and transmits it to the 911 center that sends it to police officers, who make an arrest while rushing to the crime scene. Blogs and Wikis: Customer Service and Feedback Although this isn't technically a single technology, I believe it merits special mention. As government's ability to interact with constituents and customers improves because of Web 2.0 tools, government agencies and employees will get more feedback about things we are doing right and wrong and what we've chosen to do but isn't universally loved. 
Do we really want to be that transparent? Common Web 2.0 Challenges Many Web 2.0 technologies pose special challenges for government that we'll have to work through. The "Frequent Flyer" or "Citizen Activist" Every elected official knows the folks who grab your arm at a public meeting to rant about the crosswalk in their neighborhood or the lack of affordable housing. They monopolize public meetings and rally their supporters with mass e-mail campaigns. Most Web 2.0 tools are susceptible to the same techniques. All I can say is that with these Web 2.0 applications, the "normal" constituent has additional paths for interacting with elected officials. The Digital Divide Many people with limited income often lack access to computers and the Internet. Web 2.0 may give the well-off an even more disproportionate voice in government. Though extra feedback and input are good, it will also require more legislative assistants and other government employees to moderate blogs, dispatch requests for service and respond to constituents. Some people feel compelled to use offensive language to express their ideas or characterize elected officials and government in general. This means blogs, social networking sites and video and photo submissions must be monitored and moderated, which may lead to charges of censorship. Censorship and Public Disclosure Most jurisdictions have Freedom of Information Act (FOIA) or public disclosure laws that require archiving public records. Web 2.0 technologies will increase the volume of material to be archived and potentially turned over to the public through FOIA requests. This will require better and more expensive archival and search technologies. A Balanced Picture & Web 3.0 Elected officials seek constituent input on all public issues. And the response from the public - overwhelmingly - is apathy. Obtaining a true picture of what constituents think, even with Web 2.0, will be difficult. I hasten to add that all techniques have this problem, including traditional ones such as public meetings and e-mail (I guess it's "traditional" now). There are only so many issues an individual or government official can pay attention to. While governments grapple with the possibilities and implications of Web 2.0, it's worth noting that Web 3.0 is hot on our heels. It's a subject for another time, but I'll tantalize you with this tidbit: Truly high-speed broadband is coming with fiber-to-the-premises, 100 Mbps symmetric networks, which would make a whole host of new tools and techniques possible, such as two-way HDTV and high-quality interactive gaming. What a wonderful world the 21st century is becoming.
6 steps to prevent foodborne illnesses
Tuesday, Dec 17th 2013
One in six Americans suffers from food poisoning every year. This means there are approximately 76 million annual cases and 5,000 deaths which come as a result of foodborne illnesses, stated Health Magazine. Therefore, it is important that consumers and foodservice workers take steps to prevent foodborne illnesses.
Recognizing food poisoning
These types of illnesses arise due to a number of factors, including unwashed food items, poor preparation practices and improper refrigeration. However, when an individual has food poisoning, they present certain symptoms which signal the presence of the illness. According to the Mayo Clinic, people who have foodborne illnesses usually experience nausea and vomiting, fever, diarrhea or abdominal cramps or pains, which can last up to 10 days. However, if these symptoms become severe, or if an individual has signs of dehydration, muscle weakness or trouble speaking or swallowing, he or she should seek medical help.
Steps to prevent foodborne illnesses
To avoid food poisoning, customers and foodservice employees should utilize certain practices to ensure edibles will not cause illness.
- At the grocery store, consumers should shop for non-perishables first. Items from refrigerated or frozen sections should be placed in the cart last, so they remain at a proper temperature for as long as possible, according to Health magazine. Once food has been brought home or to another location, cold items should be put away first to prevent them from reaching an unsafe temperature level.
- Experts also advise washing all produce, even items that are peeled before eating. Bacteria or other pathogens can permeate the skin of some foods, so thoroughly washing these items is important to prevent foodborne illnesses. However, Center for Science in the Public Interest staff attorney Sarah Klein recommended not rewashing triple-washed bagged lettuce, as increased handling could cause additional contaminants to be introduced. Furthermore, preparation tools and surfaces should be washed as well, including knives, cutting boards and countertops.
- Keep foods separated during shopping, storage and preparation. Raw meat, poultry, fish and seafood should be kept in individual containers and be isolated from other items to prevent cross contamination.
- In addition, temperature monitoring is a must, especially within the foodservice industry. Restaurants and vendors should have temperature sensors present in refrigerators and all storage units to ensure that inventory is kept at the proper level. Experts advise that refrigerated food be kept below 40 degrees Fahrenheit to prevent bacteria growth that causes foodborne illness. (A rough sketch of such an automated threshold check appears at the end of this article.)
- Furthermore, items should be heated to a safe temperature, as consumption of raw or undercooked foods can result in food poisoning. The Mayo Clinic stated that ground beef should be cooked to 160 degrees Fahrenheit, chicken or turkey should reach 165 degrees Fahrenheit and fish should be cooked to 145 degrees Fahrenheit. Health Magazine stated that bacteria grow fastest between 40 and 140 degrees Fahrenheit.
- Above all, when food seems questionable or an individual is in doubt about an item, he or she should not consume it. As the Mayo Clinic puts it, "when in doubt, throw it out."
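To make the temperature thresholds above concrete, here is a minimal sketch (in PHP) of the kind of automated check an environmental monitoring system might run against refrigerator sensor readings. Only the 40-degree threshold comes from the article; the function name, sensor names and sample readings are invented for illustration.

<?php
// Flag storage-unit readings at or above 40°F, the level cited in the article
// as the point where bacteria growth becomes a concern.
function findUnsafeReadings(array $readingsF, float $maxSafeF = 40.0): array
{
    $alerts = [];
    foreach ($readingsF as $sensorName => $tempF) {
        if ($tempF >= $maxSafeF) {
            $alerts[$sensorName] = $tempF;   // this reading needs attention
        }
    }
    return $alerts;
}

// Hypothetical readings from three storage units, in degrees Fahrenheit.
$readings = ['walk-in cooler' => 37.5, 'prep fridge' => 42.8, 'freezer' => 5.0];

foreach (findUnsafeReadings($readings) as $sensor => $temp) {
    print "ALERT: $sensor is at $temp F, above the 40 F threshold\n";
}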
CHARLOTTETOWN, PRINCE EDWARD ISLAND--(Marketwired - Jan. 9, 2014) - Public Safety Canada The Honourable Gail Shea, Minister of Fisheries and Oceans and Member of Parliament for Egmont, PEI, on behalf of the Honourable Steven Blaney, Minister of Public Safety and Emergency Preparedness, today highlighted the success of the crime prevention program, Girls Circle, which is making a difference for at-risk girls on PEI. Operated by the Women's Network PEI, Girls Circle is helping girls, aged 10 to 12, avoid involvement in criminal activities by providing support services that focus on increasing positive connections, personal strengths and competencies, as well as connecting participants to their community. The Girls Circle program is preventative in that it builds on girls' existing strengths and gives them solid, real-life applicable skills, which help them to build resiliency and good mental health. They have built critical thinking skills, confidence, and feel better equipped to make value-based and healthy decisions about their lives. - The Government of Canada has provided $682,823 over three years to support this important project. - From April 2012 to March 2013, the Government funded 105 community-based crime prevention programs through Public Safety Canada's National Crime Prevention Centre, in which more than 16,000 at-risk youth participated. - Last fall, the Government also committed up to $10 million toward new crime prevention projects under the National Crime Prevention Strategy's Crime Prevention Action Fund. "Through this project, we are offering life skills that help at-risk girls make smart choices in their lives. It is an example of our Government's strong commitment to preventing crime and making our streets and communities' safer places to live, work, and raise our families." -Gail Shea, Minister of Fisheries and Oceans "Girls Circle is a wonderful program that supports our community to work with girls to address risk factors and build protective factors. Things like exposure to addictions, physical and sexual violence, mental health issues, relational bullying, etc. puts youth at risk for both victimization and criminalization." -Michelle MacCallum, Youth Program Manager, Women's Network • Crime Prevention Funding Programs Follow Public Safety Canada (@Safety_Canada) on Twitter. For more information, please visit the website www.publicsafety.gc.ca.
There is an ongoing chess match between law enforcement agencies and civil liberties groups like the American Civil Liberties Union (ACLU). For years, groups like the ACLU have used their power and influence to limit the reach of law enforcement agencies and to protect personal privacy and basic rights. Today, the debates are about emerging technologies like drones and automatic license plate readers, technologies law enforcement often want to use but face resistance in doing so. It’s for that reason the International Association of Chiefs of Police (IACP) released the IACP Technology Policy Framework. The 11-page document outlines standards for how law enforcement agencies are to use and manage such technologies. The hope, the IACP reported, is that such a framework will allow law enforcement to make more effective use of technology while also appeasing civil liberties groups -- to an extent that will in turn allow law enforcement to gain greater access to valuable technology when those groups see that law enforcement agencies are taking the power of such technologies seriously and doing their part to safeguard personal rights.
The framework outlines how to create policies, what things should be considered and provides nine “universal principles” for the creation of technology policies. The universal principles provided by the IACP Technology Policy Framework include:
• Specification of Use
• Policies and Procedures
• Privacy and Data Quality
• Data Minimization and Limitation
• Performance Evaluation
• Transparency and Notice
• Data Retention, Access, and Use
• Auditing and Accountability
The document concludes by saying that the proper management of such technologies is important, and states that ongoing training for new technologies must be maintained so as to protect the privacy of citizens, as well as the security of law enforcement agencies’ systems.
How’s this for a data analytics challenge? Consider the thousand or so variables that an operating jetliner records every second. Add reports written by pilots and others in the air traffic system. Multiply that by nearly 10 million flights a year in the U.S. Your task: extract information from this mountain of data—much of which is in unstructured text files—so you can predict and prevent safety problems. That mission is the responsibility of Ashok Srivastava, Principal Scientist for Data Mining and Systems Health Management at NASA. It’s a daunting challenge, but Srivastava’s team has made enough progress that Southwest Airlines uses the NASA technology in the company’s operational safety program. The airline and NASA have been working together since 2008 on the data mining project. Predictive analytics is an emerging focus in the data analytics field. It’s all about taking massive datasets and looking for precursors to interesting events, said Srivastava. “In our case it’s an aviation safety event, but in other applications it could be looking for predictors for a medical event or it could be looking for predictors of a change in the stock market,” he said. The information is there. The question is developing the right tools to uncover it. In 2010, Srivastava conducted a demonstration of the potential for data mining for safety applications, publishing a NASA analysis of flight data which uncovered instances of a type of mechanical problem—excessive wear of the threads on a critical nut—that caused the fatal crash of an Alaska Airlines flight in 2000. The NASA project uses text analytics—algorithms that automatically identify useful information in text documents. Text analytics is a big part of the big data trend because text data falls outside the bounds of traditional information management tools like relational databases, and because there’s a lot of it. The NASA data miners analyze very large text data sets in the hunt for factors that might contribute to aviation safety incidents. “Let’s say you have 100,000 reports that talk about different things going on in the aviation system,” said Srivastava. “People might be talking about engine problems, they might be talking about problems understanding signage in an airport, or they might be talking about confusing runways,” he said. Srivastava’s team is developing machine learning algorithms that can identify patterns and spot anomalies in large text-based data sets. One of the team’s key algorithms—a multiple kernel learning algorithm—combines information from multiple data sources, such as numerical and text data. The NASA project focuses on data sets on the order of 10 terabytes. “We picked that number based on the number of flights that are occurring within the United States and current computing power,” said Srivastava. The team will scale up to larger data sets as the size and complexity of real-world data sets increases, he said. NASA regularly transfers its technologies to the Federal Aviation Administration, and they use the algorithms on much larger, much more complex data sets, said Srivastava. NASA is also sharing the technology with the aviation industry, including Southwest Airlines, he said. Many of the algorithms are open source and available on NASA’s DASHlink site. “The algorithms we’re developing can discover precursors to aviation safety incidents. We’ve already seen that happen,” said Srivastava. 
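To give a flavor of what hunting for anomalies in flight data can mean at the very simplest level, here is a toy statistical outlier check over a single numeric parameter. This sketch is purely illustrative: it is not NASA's multiple kernel learning algorithm or any part of the DASHlink code, the language choice (PHP) is arbitrary, and the sample readings are invented.

<?php
// Toy outlier detector: flag samples that sit far from the mean of the series.
function findOutliers(array $samples, float $threshold = 2.5): array
{
    $n = count($samples);
    $mean = array_sum($samples) / $n;

    // Population standard deviation of the series.
    $variance = 0.0;
    foreach ($samples as $value) {
        $variance += ($value - $mean) ** 2;
    }
    $stdDev = sqrt($variance / $n);

    $outliers = [];
    foreach ($samples as $index => $value) {
        // Flag readings more than $threshold standard deviations from the mean.
        if ($stdDev > 0 && abs($value - $mean) / $stdDev > $threshold) {
            $outliers[$index] = $value;
        }
    }
    return $outliers;
}

// Hypothetical engine-temperature samples; the spike at index 5 stands out.
$readings = [612, 609, 615, 611, 608, 745, 610, 613, 607, 612];
print_r(findOutliers($readings)); // Array ( [5] => 745 )

Real flight-data mining works over thousands of parameters and unstructured text at once, which is exactly why techniques such as multiple kernel learning, which can combine heterogeneous data sources, matter.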
As NASA refines the algorithms and deploys them on real systems, and as it shares them with air carriers, new trends will be discovered, said Srivastava. “And some of those might have safety consequences,” he said. Parsing Text for Sentiment Analysis Text analytics isn’t just for big government agencies or life-and-death issues. It’s rapidly emerging as a valuable marketing tool. “Text has never been more interesting than it is now with the huge volume of text-based data that exists in social networking sites,” said Jamie Popkin, managing vice president and Gartner Fellow Emeritus at market research firm Gartner Inc. The overwhelming amount of online content in general—social network sites, wikis, blogs, user forums, e-commerce sites—is text-based. A major thrust in commercial data analytics is correlating information derived from text analytics with data from transaction systems, said Popkin. For example, a company might link business intelligence output from a data warehouse with text data from a customer service center or with things that people are saying on social networks, he said. There are two approaches to text analytics: linguistic models and machine learning, said Popkin. The linguistic approach uses natural language processing to attempt to understand the meaning of the text data. Machine learning algorithms like those NASA is developing identify patterns in text-based data. “Most people are finding that there is a hybrid approach: the combination of the machine learning and the linguistic models,” he said. A hot topic in text analytics is sentiment analysis—figuring out from people’s written words what they like and dislike. “I want to know whether you like a particular feature and whether that like of that feature is something that might drive your intent,” said Popkin. One use of sentiment analysis: understanding which preferences push individuals to make purchasing decisions. Sentiment analysis also measures strength of sentiment. “How much do you like this, how much do you hate this, how much would this affect your opinion on something,” said Popkin. Sentiment analysis is also emerging as an important political tool. “Which candidate do you like, what aspects of the candidate do you like, how strongly do you feel about certain positions being taken by one candidate versus another,” said Popkin. And the all-important question, “will this affect your voting position,” he said. Eric Smalley is a freelance writer in Boston. He is a regular contributor to Wired.com. Follow him on Twitter at @ericsmalley.
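As a toy illustration of the word-level end of sentiment analysis, a minimal lexicon-based scorer is sketched below. Commercial products combine linguistic models and machine learning, as Popkin notes; this sketch only shows the simplest counting approach, and the word lists and sample sentence are invented for the example.

<?php
// Toy lexicon-based sentiment scorer: count positive and negative words.
function sentimentScore(string $text): int
{
    $positive = ['love', 'like', 'great', 'excellent', 'good'];
    $negative = ['hate', 'dislike', 'terrible', 'poor', 'bad'];

    // Lowercase the text and split it into words on non-letter characters.
    $words = preg_split('/[^a-z]+/', strtolower($text), -1, PREG_SPLIT_NO_EMPTY);

    $score = 0;
    foreach ($words as $word) {
        if (in_array($word, $positive, true)) {
            $score++;        // each positive word raises the score
        } elseif (in_array($word, $negative, true)) {
            $score--;        // each negative word lowers it
        }
    }
    return $score;           // > 0 leans positive, < 0 leans negative
}

echo sentimentScore('I love the new feature, but the battery life is terrible.');
// Prints 0: one positive and one negative word cancel out, which hints at why
// real systems also weigh strength of sentiment rather than just counting words.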
In this video course, the student will be learning how to design and implement a Server infrastructure. The topics that will be covered in the series are as follows: planning and deploying a Server infrastructure, designing and implementing network infrastructure services, designing and implementing network access services, designing and implementing active directory, and designing and implementing an active directory physical topology. The student will start out learning about automation, enhancements in virtualization technology, Windows Server 2012 R2, and the enhanced features for File and Storage Services. In the next course the student will be looking at three of the infrastructure services that are used in enterprise-level networks. From there, they will take a look at designing a remote access solution with either VPN or DirectAccess, as well as how to design a scalable remote access solution. Learn the design and implementation of the logical structure of Active Directory. In the last course, the student will study how to design replication to occur constantly and efficiently in an Active Directory environment that contains multiple domain controllers. Included in the course are demos and other material that will help demonstrate and reinforce the use of the tools and techniques that you will be learning.
12-Month Online Access
NASA today said an 18-month study featuring teams of aircraft experts from Boeing, Northrop, GE Aviation and the Massachusetts Institute of Technology used all manner of advanced technologies from alloys, ceramic or fiber composites, carbon nanotube and fiber optic cabling to self-healing skin, hybrid electric engines, folding wings, double fuselages and virtual reality windows to come up with a series of aircraft designs that could end up taking you on a business trip by about 2030. NASA highlighted the entries from all four teams. Some of the observations from NASA's report are as follows: The GE Aviation team developed a 20-passenger aircraft that could reduce congestion at major metropolitan hubs by using community airports. The aircraft has an oval-shaped fuselage that seats four across in full-sized seats. Other features include an aircraft shape that smoothes the flow of air over all surfaces, and electricity-generating fuel cells to power advanced electrical systems. The aircraft's advanced turboprop engines sport low-noise propellers and further mitigate noise by providing thrust sufficient for short takeoffs and quick climbs. MIT's 180-passenger D8 "double bubble" strays farthest from the familiar, fusing two aircraft bodies together lengthwise and mounting three turbofan jet engines on the tail. Important components of the MIT concept are the use of composite materials for lower weight and turbofan engines with an ultra high bypass ratio (meaning air flow through the core of the engine is even smaller, while air flow through the duct surrounding the core is substantially larger, than in a conventional engine) for more efficient thrust. In a reversal of current design trends the MIT concept increases the bypass ratio by minimizing expansion of the overall diameter of the engine and shrinking the diameter of the jet exhaust instead. Northrop Grumman foresees the greatest need for a smaller 120-passenger aircraft that is tailored for shorter runways in order to help expand capacity and reduce delays. The team describes its Silent Efficient Low Emissions Commercial Transport, or SELECT, concept as "revolutionary in its performance, if not in its appearance." Ceramic composites, nanotechnology and shape memory alloys figure prominently in the airframe and ultra high bypass ratio propulsion system construction. The aircraft delivers on environmental and operational goals in large part by using smaller airports, with runways as short as 5,000 feet, for a wider geographic distribution of air traffic. The Boeing Company's Subsonic Ultra Green Aircraft Research, or SUGAR, team examined five concepts. The team's preferred concept, the SUGAR Volt, is a twin-engine aircraft with hybrid propulsion technology, a tube-shaped body and a truss-braced wing mounted to the top. Compared to the typical wing used today, the SUGAR Volt wing is longer from tip to tip, shorter from leading edge to trailing edge, and has less sweep. It also may include hinges to fold the wings while parked close together at airport gates. Projected advances in battery technology enable a unique, hybrid turbo-electric propulsion system. The aircraft's engines could use both fuel to burn in the engine's core, and electricity to turn the turbofan when the core is powered down. NASA said it expects to award one or two research contracts for work starting in 2011. NASA began trying to define future passenger aircraft in 2008 asking experts to imagine what the future passenger aviation might look like. 
NASA said its goals for a 2030-era aircraft were:
- Achieve a 71-decibel reduction below current Federal Aviation Administration noise standards
- Reduce nitrogen oxide emissions by 75%
- Reduce fuel burn by 70%
- Exploit what NASA called metroplex concepts that enable optimal use of runways at multiple airports within metropolitan areas, as a means of reducing air traffic congestion and delays.
Follow Michael Cooney on Twitter: nwwlayer8
A Visual Way to See What is Changing Within Wikipedia
July 9, 2012
Wikipedia is a go-to source for quick answers outside the classroom, but many don’t realize Wiki is an ever-evolving information source. Geekosystem’s article “Wikistats Show You What Parts Of Wikipedia Are Changing” provides a visual way to see what is changing within Wikipedia. The program was explained as follows:
“Utilizing technology from Datasift, a social data platform with a specialization in real-time streams, Wikistats lists some clear, concise information you can use to see how Wikipedia is flowing and changing out from under you. Using Natural Language Processing, Wikistats is able to suss realtime trends and updates. In short, Wikistats will show you what pages are being updated the most right now, how many edits they get by how many unique users, and how many lines are being added vs. how many are being deleted.”
Enlightenment was gained when actually viewing the chart below:
This program calculates well-defined reports on Wikipedia’s traffic, and Wiki frequenters might find the above chart surprising. The report in this case shows the reality that Wikipedia is an overflowing pool of information. We are not saying Wikipedia is unreliable, but one should never solely rely on one information source. The chart simply provides a visual way to see what is changing within Wikipedia and helps users understand how data flows. This program’s potential for real-time use on other sites could be tremendous.
Jennifer Shockley, July 9, 2012
Months after sequestration cuts shut down the country's "Space Fence," the Air Force budget unveiled Tuesday preserves $2 billion to build a successor to the program that helps keep space vessels safe from high-speed orbital debris. For the Air Force, keeping its new radar installations off the Pentagon's budget-cut chopping block was essential.
The Space Fence is part of the Air Force's Space Surveillance Network, which tracks many of the roughly half-million pieces of debris that clog Earth's orbit. At 17,000 miles per hour, even a marble-sized object could cause catastrophic damage to a spacecraft. And one collision can cause a debris field that leads to many more—a scenario demonstrated in the movie Gravity. The Space Fence helped decrease that risk. In 2012, the network helped satellite owners make 75 maneuvers to avoid collisions. Last September, the Space Fence—responsible for 40 percent of the network's tracking—shut down. It had been in operation since 1961. The Air Force blamed the closing on "resource constraints caused by sequestration." While the first half of that release laments the loss of Space Fence's information, the last five paragraphs hype the newer, better Space Fence, calling it key to the Air Force's future tracking ability.
So what did we lose, and what will we gain? First, the advantage of Space Fence over other programs on the Space Surveillance Network is its uncued tracking. Rather than following specific objects, it served as a "trip wire" to monitor space events, such as debris-causing collisions. When the new Space Fence is installed, it will bring back that capability—but on a much more powerful scale. The Air Force hopes to have S-band radars ready by 2018. While the current Space Fence could see about 23,000 objects larger than four inches, something like 480,000 smaller objects still threaten spacecraft. S-band will expand our ability to see those pieces of debris. The first S-band radar, the Air Force confirmed in its budget, will be located in the Marshall Islands, allowing better tracking of orbits that cross the Southern Hemisphere. The second is expected to be built in western Australia.
While the Air Force saved $14 million when it shut down Space Fence last year, the new-and-improved version in this year's budget will cost close to $2 billion. And the painful, short-term cuts may have had a role in preserving the program's long-heralded upgrades. Since last year's Space Fence cuts inhibited our ability to track debris in the near-term, wrote Brian Weeden, a veteran of the Air Force's space programs, they made it that much more important to preserve that capacity in the future. By crippling the program's current operations—which Weeden says wasn't necessarily mandated by sequestration—the Air Force added pressure to the Pentagon to approve the S-Band radars and restore the early-warning system. "The Pentagon is under significant pressure to find budget cuts and there was an internal debate over whether the S-Band Space Fence was really worth the investment," Weeden wrote in an email. "...[I]f un-sticking the roadblock to the S-Band Space Fence was the goal, then on the surface [cutting the Space Fence during sequestration] seems it may have worked." He added that there's still no proof the Air Force deliberately cut Space Fence to pave the way for its successor. Air Force spokespeople did not immediately respond to a request for comment.
Floating point numbers can be specified by any of the following syntaxes, as shown below.
$iNum1 = 2.143;
$iNum2 = 2.1e4;
$iNum3 = 2E-10;
Depending on the platform on which PHP is running, the size of a float differs. The 64 bit IEEE format has a precision of roughly up to 14 decimal places. The implicit precision of a normal IEEE 754 double precision number is slightly less than 16 digits, which gives a maximum relative error due to rounding in the order of 1.11e-16. So floating point numbers have very limited precision. Also, rational numbers that can be written exactly in base 10, like 0.1 or 0.7, do not have an exact representation as floating point numbers in base 2, which is used internally, no matter the size of the mantissa. Hence, they cannot be converted into their internal binary counterparts without a small loss of precision. This can lead to confusing results, as shown below.
$iNum1 = 0.1;
$iNum2 = 0.7;
$sResult = floor(($iNum1 + $iNum2)*10);
print $sResult; // Output: 7
If normal arithmetic is taken into account, the output should be 8. However, PHP usually returns 7 because the internal representation of the intermediate result is something like 7.9999999999999991118.
Let us take another example where we compare two floating point numbers, as shown below.
$iVal1 = 9.00 + 2.44 + 1.28 + 3.88; // 16.60
$iVal2 = 16.60;
if ($iVal1 == $iVal2)
    print 'Floating point numbers are equal';
else
    print 'Floating point numbers are not equal';
When the above code is run in PHP it prints "Floating point numbers are not equal". Even though normal arithmetic says both values should be equal, PHP doesn't think they are. This is because, internally, computers use a binary floating-point format that cannot accurately represent a number like 0.1, 0.2 or 0.3 at all. When the code is compiled or interpreted, a floating point literal like "0.1" is already rounded to the nearest number in that format, which results in a small rounding error even before the calculation happens.
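A common way to work around this limitation is to avoid == entirely and instead test whether two floats differ by less than a small tolerance (often called epsilon). The snippet below is a minimal illustration of that approach; the particular epsilon value is only an example and should be chosen to match the precision your application actually needs.

$epsilon = 0.00001;
$iVal1 = 9.00 + 2.44 + 1.28 + 3.88;
$iVal2 = 16.60;
// Treat the two values as equal if they differ by less than the tolerance.
if (abs($iVal1 - $iVal2) < $epsilon)
    print 'Floating point numbers are equal (within tolerance)';
else
    print 'Floating point numbers are not equal';

With this check the rounding error of roughly 1e-15 in the sum no longer matters, and the first branch is taken.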
This course is for anyone involved in Wireless communications from those just starting out, those specifying and designing, through to actual installation engineers. It has been designed to ensure that the student understands the components involved in creating a successful radio link. Looks at how radio transmission works, the technical considerations involved and also comparing microwave and or fiber optic transmission to supply the emerging and ever increasing information/communication demands. No pre-requisite experience is required to take this course as the subjects covered will be of interest to anyone working in the RF and Wireless fields in any aspect. - To provide the student with fundamental information to help them understand the basics of wireless systems and infrastructure. - Understand the terms used in wireless including unit of measurement - Discuss wireless design and the components involved from one end to the other - Look at cellular transmission, with past, current and future technologies from 1G to 5G and WiFi - Investigate different cabling options and look at Antenna theory and design - Show transmission methods including modulation schemes, data rates and spectrum - Discuss methods of increasing antenna coverage to meet ever increasing wireless demands - Explore the complexities of connectivity including Link Budget, Path Loss, Signal to Noise Ratio, Cell Foot Print and coverage - Consider Cell Site Design not only antennas but jumpers, feeders, filters, combiners and amplifier - Find out about RF measurements at a Cell site including line sweeping and VSW - See how important good connections are and if not, resulting possible PIM issue - To provide a look at Microwave backhaul v fiber options and advantages / disadvantages. - Fundamentals of RF Communications - Transmission Theory – radio waves – Units of measurement, Amplitude - Wavelength- Frequency- Velocity- Impedance - Attenuation - Component matching – Phase and Frequency – VSWR-Gain – Amplitude Modulation - Signal to Noise Ratio – Radio Equipment - Frequency – Spectrum and Bands – Cellular Multiple Access Schemes- Cell Types - Standards Relative to Cellular Communications – 1G -5G and WiFi Networks - Path Loss – Propagation – Building Material Density - Link Budgets – Downlink –Uplink Parameters – Intermodulation Distortion – Carrier to Interference Ratio - Free Space Path Loss – Connector and Cable Loss – Link Budgets- Receiver Sensitivity - Dynamic Range - Cell Site Development – Antennas – Splitters – Couplers - Coaxial cable- Connectors- Amplifiers – Filters – Mixers – Oscillators - Circulators – Diplexers - Duplexers - Antenna Types – Dipoles – Antenna Patterns – Polarity – Linear Array- Antenna Gain- Aperture and Beamwidth - Electrical and Mechanical Downtilt – AISG – Adaptive Arrays - MIMO - Market Trends – FTTA – RRH’s – Multiband Antennas - Combiners - Amplification - Tower Mounted Amplifiers- Receiver Multi-couplers – Receiver Desensitization - Horizontal and Vertical Separation – Bandpass Cavity - RF Test and Measurement – PIM and VSWR Fundamentals – Line Sweeping – Insertion Loss - Return Loss - Troubleshooting - Sweeping for Faults – Inspection – PIM Causes and Measurements - Microwave Backhaul – Microwave v Fiber – Microwave Antennas- Frequencies - Microwave Paths – Link Lengths – Antenna Gain – Beamwidth - Front to Back Ratio How will I learn? You will study this course online in a self-paced format. 
The course is made up of a number of webcast lessons and online multiple choice assessments giving immediate feedback. The course content is supplemented by PDF and video support material. It is highly recommended that the student augment their learning with real world design and installation experience, preferably with the help of experienced colleagues or mentors.
Successful completion will require: Any level of pass in the cumulative overall assessment score.
Is this the right course for me?
If you have an interest in RF Wireless communication networks, specification, termination, cabling, antennas and need to understand the principles, then this course is for you. Even if you are unlikely to ever physically do any practical work in your current job role, understanding what is involved is also very important to help in a project. This course is part of the Infrastructure Specialist Certification. By completing this course you will achieve the CommScope RF Wireless Infrastructure Specialist (CRWIS) certification and will be provided a badge and entry on the CommScope certification database.
Upon successful completion you will: Receive a course certificate that may be self-printed.
BICSI CECs: 5
Event ID: OV-COMMS-IL-0315-1
Certificate valid for 3 years
Estimated study time: 5h
Webcast duration: 2h 58m
Geoff Harris is the President of the UK Chapter of the Information Systems Security Association (ISSA), a not-for-profit, international organization of information security professionals and practitioners. In this interview he discusses cyber warfare.
In your opinion, how far are we from an international consensus as to what constitutes an act of cyber warfare, and what issues do you predict will be the major stumbling blocks on the road to such an agreement?
There have been various incidents that could be described as acts of cyber warfare. The cyber-attacks against Estonian systems, starting in April 2007 with data-flooding attacks on key government websites and culminating in coordinated Distributed Denial of Service (DDoS) attacks on key government, financial and media sites in May 2007, certainly would fall under this category. In terms of what constitutes an act of cyber warfare, I would have to refer this to international lawyers and powers such as the United Nations. There has been enough debate around what constitutes an act of war in conventional terms (e.g., Iraq).
Like the definition of “war” itself, the term “cyber war” is complex. The most basic definition is that cyber war simply entails waging war through digital, technological means. The Institute for Advanced Study of Information Warfare (ASIW) has defined it as “the offensive and defensive use of information and information systems to deny, exploit, corrupt, or destroy an adversary’s information, information-based processes, information systems, and computer-based networks while protecting one’s own.” A country will have to be extremely careful when it comes to attributing attacks to a source, because the cyber world is eminently suitable for misdirection and subterfuge, and traces left by attackers are not obvious to the greater public.
Do you think there will be a need for some kind of international court or body that will have final ruling on who’s to blame for attacks and breaches that cross the line between cyberterrorism and cyber espionage and cyber warfare?
In the example above, the Estonian defence authorities traced the sources of the attacks to Russian IP addresses. However, the Russian authorities were unable to provide details on the individuals owning these IP addresses, stating that they had no legal powers to do so and, apparently, that these acts were not illegal in Russia at that time. Organizations such as the Internet Governance Forum (IGF) have been doing some excellent work in areas such as:
- The definition of security threats, international security cooperation, including such issues as cybercrime, cyber terrorism and cyber warfare.
- The relationship between national implementation and international cooperation.
- Cooperation across national boundaries, taking into account different legal policies on privacy, combating crime and security.
- The role of all stakeholders in the implementation of security measures, including security in relation to behavior and uses.
- Security of internet resources.
What are your thoughts about the partnership between Google and NSA?
It is in the interest of national security and law enforcement organizations to work together with major Internet providers such as Google to combat all forms of e-crime and to make the Internet a safer place to work and play. Users have a choice when signing up to use such services and should read privacy and service level contractual terms if they have any concerns.
I would expect other Internet service providers and product vendors to follow.
In some ways, cyber wars will surely emulate real-life ones. Do you see countries that are historical allies banding together?
Absolutely. An example of this is the recent Directive 2008/114/EC “on the identification and designation of European critical infrastructures and the assessment of the need to improve their protection”. In March 2009 the EU Commission published a Communication on Critical National Infrastructure Protection entitled “Protecting Europe from large scale cyber-attacks and disruptions: enhancing preparedness, security and resilience” (COM(2009)149 final, Council document 8375/09). This document was accompanied by 400+ pages of “Impact Assessment” (COM(2009)399 and 400, Council document 8375/09 ADD 1-4) setting out the background to the Commission’s approach to this issue.
What would be the cyber counterpart of gained territory and resources in a meatspace war?
This could be gaining control of systems and networks at administrator privileged access level, or the ability to launch some form of attack from them, e.g., botnets, which are used to perpetrate a host of different attacks including DDoS, keylogging, spamming, phishing, Web-scraping, etc. Botnets are of interest to many bodies, including those with commercial, criminal, military, intelligence or terrorist interests. The scale of the problem and the future potential is large and growing. It demands a coordinated approach by all stakeholders. It cannot be addressed by, for example, law enforcement or military action alone. However, the more covert and subtle intelligence-gathering threats potentially present more of a risk than mass attacks, which are easily detected and traced to their originating sources. Examples of these are Targeted Trojan Email Attacks, as defined in the UK’s National Infrastructure Security Co-ordination Centre advice paper released 5 years ago. In it, reference was made to a series of trojanised email attacks targeting UK Government and companies, with the aim of covertly gathering commercially or economically valuable information. Trojans were delivered in email attachments or through links to websites, using techniques such as social engineering, spoofed sender addresses and sending information relevant to a recipient’s job or interests. Once installed on a user machine, Trojans could be used to obtain passwords, scan networks, exfiltrate information and launch further attacks. The recent targeted attacks on Google users in China and other global organizations are no surprise. The advice above was given in 2005, and attackers intent on commercial, criminal, military, intelligence or terrorist e-crime will use these techniques as well as product vulnerabilities. The January 2010 Internet Explorer vulnerability (Microsoft Security Advisory 979352) used for these attacks earlier this year falls into this category.
Should “political” hackers be considered paramilitary organizations under the leadership of, or at least working with the tacit approval of, the country whose interests they promote?
E-crime is a criminal act as defined by the jurisdiction in which the act takes place. Organizations such as the Internet Governance Forum (IGF) that are addressing the issues of law enforcement and cooperation across national boundaries need to address this, taking into account different legal policies on privacy, combating crime and security.
How important is this event? Will there be a need for more focus on this topic in the near future?
This event, now in its 3rd year, will discuss and raise issues such as those above. It serves as an important event to collect the latest views and thought leadership on International Cyber Defence issues. I can only see this topic growing throughout this decade as all countries' use of the Internet as part of their critical national infrastructure continues to grow.
This article uses economic criteria to define what it means for a project to fail. It then categorizes how projects fail and, finally, it examines common traps that contribute to or accelerate project failure.
The cost, feature, product spiral
Economics of Adding Features
Organizations must consider the cost of adding features to a product. Figure 1 shows a software project whose returns outpace the cost of production, thus producing a positive ROI. Figure 2 depicts a product that initially has a positive ROI, but whose added features cost (marginal cost) more than the amount of return generated by the features. This initially profitable product becomes a drag on the company. Figures 1 and 2 are deceptive because under most software processes, the cost of changing software is not linear, but exponential. Brooks (1) attributes the exponential rise in costs to the cost of communication. Changes to software include new features, bug fixes and scaling. The effects of exponential cost of production can be characterized by three properties. First, new projects are successful because the cost curve is flat. Second, once the costs start increasing, they quickly overcome any additional value added from the new features. Finally, if changes are made after the costs become exponential, the additional costs will quickly overwhelm all returns garnered from the product to date. Figure 3 details the effects of an exponential cost of change.
Software processes are designed to manage the cost of change. An examination of cost management and processes is beyond the scope of this article but will be the topic of a future article. Briefly, processes that follow waterfall and iterative models control costs by reducing the need for change as costs increase. In contrast, processes based on the spiral model ensure that the cost of change is fixed. This article assumes an exponential cost of change, as most projects are based on waterfall or iterative models.
Changes are often unavoidable because there are no successful medium-sized software projects. Successful projects require a significant amount of development and become a company asset. Maximizing ROI means expanding the market and the addition of features which, in turn, increase the investment in the product. If the next version is successful, this increased investment leads to an even greater desire to maximize returns. If the cost of change becomes exponential, high cost makes adding features impractical and development must stop. Unfortunately, most companies do not realize this point exists and spend huge sums on dead products.
Software Failure Modes
Exponential costs of change belie a stark reality: Unless the product is shipped before the cost of change becomes exponential, it will very likely fail. Many projects become races to see if enough features can be created to make a viable product before adding the additional required features becomes too expensive. There are four failure modes that prevent product completion:
Hitting the wall before release: A small team of programmers is making good progress adding features to a product. Before the needed features can be delivered, some event makes the cost of change exponential and all progress stops. These events may include losing a key team member, adding team members to accelerate production, unforeseen difficulties with technology choices, unforeseen requirements, and major changes in target audience/market. Figure 4 shows how the minimum number of features will never be reached.
90% done: A team of programmers is making steady progress but never finishes the required features because of a gradual rise in the cost of change. This failure mode is common because the riskiest features are often put off until last. These features often require so much complexity that their solutions overwhelm the development process. Proper risk mitigation is essential to avoiding this failure mode.
Endless QA: Endless QA occurs when a product ships with all features completed, but still has too many bugs to make it into production. If the cost curve has become exponential, these bugs will take longer and longer to fix. As the cost of change increases, any given change will likely cause more bugs. Figure 6 demonstrates how the fixing of bugs once the product is released to QA can ruin ROI. The higher the cost of change before delivery to QA, the larger the number of bugs. Indeed, the number of bugs at QA is a good indirect metric of the cost of change.
Version 2.0: Most failures of version 2.0 of any product can be traced to exponential cost of change. During version 1.x, the cost of change has become exponential. The new features will never generate high-enough returns to make up for the costs of producing the version. Figure 7 diagrams this effect. What is most frustrating for many teams is that after a successful first version, the costs of change may have become so high that it is unlikely the second version will ever ship.
If costs do increase exponentially, development teams must ensure cost is managed until delivery of the product. If they don't, failure is all but guaranteed. Unfortunately, there are several traps for developers that accelerate the onset of exponential costs of change. Interestingly, all of these techniques are designed to accelerate development at the beginning of the project, but the costs of using them may overwhelm any savings. Here are four of the most common traps:
Prototype trap. Product prototypes are great ways to prove technologies and techniques and reduce risk. However, unless the economics of development are understood, they become liabilities. The problem is how much money is spent on the prototype. If enough resources are spent on any given prototype it becomes too valuable to throw away. Most developers intend to throw away a prototype once it is completed, and the resulting code quickly becomes expensive to change. The prototype trap can be avoided by ensuring that no significant investment is spent on any given prototype. There are many situations where prototypes are necessary, but they must never endanger a project by reducing the amount of resources available to finish.
4GL trap. 4GLs such as Visual Basic (VB), Forte, 4GL, and Magic allow developers to rapidly develop applications by making assumptions about how data will be accessed and displayed. The problem with 4GLs is that the code is very hard to modify after it has been created. This accelerates the cost of change. In addition, a language that makes some applications easy to create becomes a hindrance when the problem domain exceeds the design of that language. Often, the only way around these limitations is to use some other language such as Java or C++ to solve the unsupported problem. The interfaces between multiple languages are notoriously expensive to maintain and extend. Anyone who has tried to make a VB application perform and look like a professional, highly polished standalone application will immediately realize these limitations.
The 4GL trap is easily avoided by understanding the limitations of each language and only using it if all of the features required by the product fit within the assumed model of the language. This is the most insidious part of this trap. Most 4GLs are marketed as being designed for novice programmers with little training. Microsoft has been particularly aggressive in marketing VB to companies as the way to hire 'cheap' programmers. Unfortunately, these are precisely the people who should not be making the decision about when a particular language is adequate for solving a given problem. Choosing the wrong language will ensure that the product will never ship.
Scripting trap. Scripting languages allow the easy creation of sophisticated software by sewing together existing applications. Advanced scripting languages such as Perl are very powerful and can be used for a variety of purposes. Operating systems such as Unix are designed to be easily integrated through scripting languages and have far lower cost of ownership than those whose management tools are grafted on with pretty user interfaces. The trap lies in the sophistication of these languages and the mechanisms that make it easy to write programs. Most scripts are not maintainable or even readable by those who created them. This does not mean that scripts are bad things. They are the perfect solution for integrating existing tools and making small programs. However, since they are always expensive to maintain, the amount of effort put into any single script should be below the threshold of throwaway code: essentially, it is usually cheaper to rewrite the script than to try to modify it.
A stark example of the scripting trap comes from Excite. Excite built its original search and Web serving infrastructure in Perl on Unix machines. Perl allowed Excite to quickly create products that competed with more mature companies such as Yahoo and Web Crawler. However, by 1998, maintenance expenses made it impossible to add new features. Excite had to stop all production and rewrite its infrastructure in Java. This transition took many months and hindered Excite's ability to compete in other markets such as online shopping and video streaming. Avoiding the trap is relatively easy. There are many applications that are small and will remain small forever. These are perfect for scripting languages. If new features are required, this small size makes it easy to rewrite in an OO language to control cost of change.
Integrated Development Environment (IDE) trap. Many companies produce IDEs that allow developers to quickly deploy code that they write. Examples include Microsoft's Visual Interdev Studio and .NET framework, IBM's Visual Age and Oracle's 8i. The problem with these environments is that they make assumptions about the target deployment environment and workgroup configuration. The problem is that companies do not design these tools to help developers, but to lock developers who use their IDEs into their platforms. In the real world of changing requirements, platform restrictions are often deadly. These restrictions include limited OS support, limited APIs that may make certain features impossible, or platform bugs. Often, the only way around these restrictions is to rewrite major amounts of code.
The IDE trap is easily avoided by choosing tools that do not lock you into a vendor's technology. In addition, development teams must deploy to production-style systems early in the development process.
This allows adequate time to develop the necessary scripts and procedures to ensure proper delivery.
Reengineering trap. A reengineering project is designed to address the exponential cost of change of an existing system. Lessons learned in previous versions can be applied to control the cost of change. Reengineering almost always fails because the existing code cannot be easily changed; its cost of change is already exponential. If the cost of change were not exponential, there would be no reason to reengineer. This makes it extremely expensive to work with the existing code. As a result, reengineering usually takes as long or longer to complete than the original product while producing the same set of features. If it took 10 man-years to complete the first product, it will probably take 10 man-years to complete the reengineered version with exactly the same features. Ten man-years for a zero-sum gain. This is why reengineering projects are rarely completed.
The reengineering trap is avoided by developing a migration strategy. All new features must be kept separate from the original code base to avoid the exponential cost of change, and the original code base is mined for completed features. Whenever a bug is encountered in the original code base or an existing feature needs to be extended, the existing code is removed and refactored into the new code base. These migrations are expensive, but there is no way to avoid them. In this way, an organized reengineering of only those sections that are not currently adequate will be performed. The cost of changing these sections will be exponential, but will hopefully be limited.
In a capitalist economic system, software must possess a positive ROI in order to make sense to an organization. Many software products fail not because there is no market, but because the cost of creating the software far outstrips any profit. Exponential costs of change exacerbate this problem. Software processes are designed to manage these costs; however, it is crucial that an organization understand how and when the costs of creating software will outstrip the worth of a product. Fortunately, software products tend to fail in one of four modes. By understanding these modes, organizations can choose the appropriate software process to avoid these failures. Each software process model (waterfall, iterative, spiral) has a different approach to managing costs. How each process attempts to manage costs is beyond the scope of this article. However, understanding how costs contribute to failures is crucial to picking a model and process appropriate for your organization. Finally, regardless of the chosen software process, there are several traps that can accelerate the exponential cost of software production and must be avoided at all costs. The tools that cause these traps are essential to the existence of any software organization, but inappropriate selection will invariably lead to failure. Fortunately, it is usually possible to avoid these traps.
Carmine Mangione has been teaching Agile Methodologies and Extreme Programming (XP) to Fortune 500 companies for the past two years. He has developed materials to show teams how to move from standard methodologies and non-object oriented programming to Extreme Programming and Object Oriented Analysis and Programming. He is currently CTO of X-Spaces, Inc. where he has created an XP team and delivered a peer-to-peer based communications infrastructure.
Mangione is also a professor at Seattle University, where he teaches graduate-level courses in Relational Databases, Object Oriented Design, UI Design, Parallel and Distributed Computing, and Advanced Java Programming. He holds a B.S. in Aerospace Engineering from Cal Poly Institute and earned his M.S. in Computer Science from UC Irvine.
Maintaining the Active Directory Environment

These questions are based on 70-649 – TS.
Objective: Maintaining the Active Directory Environment
Sub-Objective: Configure backup and recovery
Single Answer, Multiple Choice

You are the systems administrator for your company. You install Windows Server 2008 on a computer and configure it as a file server, named FileSrv. The FileSrv computer contains four hard disks that are configured as basic disks. You want to configure Redundant Array of Independent Disks (RAID) 0+1 on FileSrv for performance and fault tolerance of data. To achieve this, you need to convert the basic disks in FileSrv to dynamic disks. Which command should you use?

You should use the Diskpart.exe command.

RAID is commonly implemented for both performance and fault tolerance. There are various RAID levels that you can choose from to provide fault tolerance, performance or both. RAID 0 uses disk striping and offers the fastest read and write performance, but it does not offer any fault tolerance. If a single disk in a RAID 0 array is lost, all data is lost and will need to be recovered from backup. RAID 1 uses disk mirroring with two disks. This configuration produces slow writes, but relatively quick reads, and it provides a means to maintain high data availability on servers because a single disk can be lost without any loss of data. RAID 0+1 combines RAID 0 and RAID 1 and offers the performance of RAID 0 and the fault tolerance of RAID 1.

To be able to configure RAID 0+1, you must have dynamic disks. If your disks are configured as basic disks, you can convert them to dynamic disks with the Diskpart.exe utility. The Diskpart utility enables a superset of the actions that are supported by the Disk Management snap-in. You can use the Diskpart convert dynamic command to change a basic disk into a dynamic disk.

The Chkdsk.exe command cannot be used to convert a basic disk to dynamic disk. Chkdsk.exe is a command-line utility that creates and displays a status report for a disk based on the file system. The Chkdsk utility also lists and corrects errors on the disk.

You should not use the Fsutil.exe command. Fsutil.exe is a command-line utility that can be used to perform many FAT and NTFS file system related tasks, such as managing reparse points, managing sparse files, dismounting a volume or extending a volume. The Fsutil utility cannot be used to convert a basic disk to dynamic disk.

The Fdisk.exe command cannot be used to convert a basic disk to dynamic disk. Fdisk.exe is a command-line utility that can be used to partition a hard disk. You can use the Fdisk utility to create, change, delete or display current partitions on the hard disk and to assign a drive letter to each allocated space on the hard disk.
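For administrators who prefer to script the conversion rather than type Diskpart commands interactively, the sketch below shows one way to drive Diskpart.exe with a script file from Python. It is an illustrative sketch only, not part of the exam material: the disk number is an assumption and must be checked against the output of "list disk" on the actual server, and the script has to run from an elevated prompt.

import subprocess
import tempfile

# Hypothetical sketch: convert a basic disk to a dynamic disk by driving Diskpart.exe
# with a script file (the documented "diskpart /s" mode). Requires administrator
# rights on Windows; the disk number below is an assumption, not taken from FileSrv.
commands = [
    "select disk 1",    # pick the disk to convert (adjust to the real disk number)
    "convert dynamic",  # change the selected basic disk to a dynamic disk
    "exit",
]

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as script:
    script.write("\n".join(commands))
    script_path = script.name

result = subprocess.run(
    ["diskpart", "/s", script_path],  # run Diskpart non-interactively against the script
    capture_output=True,
    text=True,
)
print(result.stdout)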
5G (5th generation mobile networks or 5th generation wireless systems) is the next major phase of mobile telecommunications standards beyond the current 4G/IMT-Advanced standards. In basic terms, 5G will provide significantly faster internet speeds across a range of devices, from smartphones, to smartwatches, to cars, to computers. 5G offers internet speeds between 10 Gbps and 100 Gbps and ultra-low latency of 1ms to 10ms, compared with the 40ms to 60ms that 4G technology offers: buffering during streaming will be a thing of the past.

What will trigger its rise?

According to the report from Global Market Insights, the adoption of 5G is set to soar in the next 7 years, largely due to the surging adoption of internet-enabled devices connected throughout the world by the Internet of Things (IoT). The report suggests that 5G technology growth will be driven by the rising high-speed and large network demands of various industry applications such as autonomous driving, distance learning, video conferencing, telemedicine and augmented reality. 5G may also be necessary to facilitate the foundational infrastructure for smart city development by enhancing mobile network performance capabilities.

The projected growth of 5G comes down to the fact that it provides very high bandwidth for mobiles and advanced features for various smart devices. It is considered to be extremely endurable for WWWW (Wireless World Wide Web), and is likely to support millimetre wave, M2M/IoT and multiple-input and multiple-output (MIMO) applications as well as device-centric network architectures, along with spectrum sharing functionalities, according to the report.

Growing adoption of mobile broadband as well as growing machine-to-machine communication in organisations is expected to fuel the global 5G technology market size over the forecast period. Rapidly growing demand for high internet speed in order to get a real-time response is anticipated to be one of the key driving factors for 5G technology market growth over the next 7 years. The benefits offered include high resolution, bi-directional large bandwidth shaping, supervision tools for fast action, precise traffic statistics and support for as many as 65,000 connections. This in turn is predicted to further propel industry growth from 2016 to 2023, suggests the report.

The U.S. 5G technology market size is forecast to witness particularly high growth rates over the forecast timeframe. The Federal Communications Commission (FCC) has already begun an assessment of the allocation of 5G frequencies, according to the report, while the Asia Pacific 5G technology market share is predicted to exhibit significant growth prospects over the forecast period. Countries such as India, China and Korea are expected to make major contributions owing to numerous ongoing initiatives and developments. South Korea and Japan are expected to showcase 5G innovation at the PyeongChang 2018 Winter Olympics and the Tokyo 2020 Summer Olympics, respectively. Events like these will have an impact on the growth of the 5G technology market size.

Indeed, "the European Union [as well] has promised that by 2020 every European city, town and village will be connected with free wireless internet and will fully deploy 5G mobile networks by 2025," said Robin Kent, director of European operations at Adax. It is a global phenomenon.

A challenge to 5G's growth

The move to 5G may not be as smooth – or as imminent – as this report and other industry observers predict.
Indeed, the report itself highlights that a challenge to this predicted growth is 5G's interdependence on various other technologies, such as millimetre wave propagation (wireless connectivity).

"There is a lot of talk about 5G being the next big thing in the telco world," said Kent. "Media commentators and industry experts alike are making bold predictions about how it will support the vast number of IoT networks, due to the conception that it allows a greater volume of connections than current 4G networks."

"If operators aren't prepared they could face not being able to carry the huge levels of traffic required by the host application to any and all of its possible destinations."

"We are still seeing issues with 3G and 4G networks. The industry hasn't fully utilised them yet, so the move to 5G isn't necessarily going to be as smooth as some might think."

"It is therefore vital that service providers have the right tools in place for 5G to be successfully implemented."
It looks like self-driving cars may be on the road sooner than most people had thought -- at least in Nevada. The state passed Bill 511 last week, authorizing executives at the state's Department of Motor Vehicles to begin coming up with a set of rules of the road for autonomous, or self-driving, vehicles.

This is the first step in what could be a lengthy process in getting autonomous cars, which are designed to use artificial intelligence, computer sensors and GPS instead of human drivers, on the nation's roads. But the move must be seen as good news to companies such as Google and General Motors, along with researchers at institutions such as Stanford, Cornell and Carnegie Mellon University. All of these organizations have been working on autonomous cars.

Just last fall, Google announced that its engineers were working on software for self-driving cars. Google's self-driving car reportedly logged 140,000 miles in California, driving -- with a trained driver and software engineer on board -- around Lake Tahoe, across the Golden Gate Bridge and along the Pacific Coast Highway. And about six months before that, GM showed off a car dubbed the Electric Networked-Vehicle, or EN-V. The two-wheeled, two-seat electric car is designed to be driven either normally or autonomously.

Self-driving cars also have been the focus of high-tech contests sponsored by the U.S. government's military research arm, the Defense Advanced Research Projects Agency, or DARPA. The race pits teams of researchers from universities like Virginia Tech, Stanford and Cornell against each other as they test their robotic vehicles on a long course.

Work on robotic cars has advanced to the point that one Stanford researcher said developing self-driving cars could help the U.S. auto industry take back its global leadership role, and maybe even save the industry as a whole. Technology, specifically artificial intelligence, could revolutionize what automobiles are able to do, said Sebastian Thrun, a professor of computer science and director of the artificial intelligence laboratory at Stanford.

Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. This story, "Nevada paves way to getting robotic cars on the road," was originally published by Computerworld.
What You'll Learn

IBM BigInsights Overview
- Understand the purpose of big data and know why it is important
- List the sources of data (data-at-rest vs data-in-motion)
- Describe the IBM BigInsights offering
- Utilize the various IBM BigInsights tools including Big SQL, BigSheets, Big R, Jaql and AQL for your big data needs.

IBM Open Platform (IOP) with Apache Hadoop
- List and describe the major components of the open-source Apache Hadoop stack and the approach taken by the Open Data Foundation.
- Manage and monitor Hadoop clusters with Apache Ambari and related components
- Explore the Hadoop Distributed File System (HDFS) by running Hadoop commands.
- Understand the differences between Hadoop 1 (with MapReduce 1) and Hadoop 2 (with YARN and MapReduce 2).
- Create and run basic MapReduce jobs using the command line.
- Explain how Spark integrates into the Hadoop ecosystem.
- Execute iterative algorithms using Spark's RDD.
- Explain the role of coordination, management, and governance in the Hadoop ecosystem using Apache Zookeeper, Apache Slider, and Apache Knox.
- Explore common methods for performing data movement
- Configure Flume for data loading of log files
- Move data into HDFS from relational databases using Sqoop
- Understand when to use various data storage formats (flat files, CSV/delimited, Avro/Sequence files, Parquet, etc.).
- Review the differences between the available open-source programming languages typically used with Hadoop (Pig, Hive) and for Data Science (Python, R)
- Query data from Hive.
- Perform random access on data stored in HBase.
- Explore advanced concepts, including Oozie and Solr

Who Needs To Attend

This intermediate training course is for those who want a foundation of IBM BigInsights. This includes:
- Big data engineers
- Data scientists
- Developers or programmers
- Administrators who are interested in learning about IBM's Open Platform with Apache Hadoop.

This course consists of two separate modules. The first module is IBM BigInsights Overview, and it will give you an overview of IBM's big data strategy as well as why it is important to understand and use big data. The second module is IBM Open Platform with Apache Hadoop. IBM Open Platform (IOP) with Apache Hadoop is the first premier collaborative platform to enable Big Data solutions to be developed on the common set of Apache Hadoop technologies.
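As a flavor of the basic MapReduce material listed above, here is a minimal sketch of the classic word-count job written as a Hadoop Streaming mapper and reducer in Python. It is illustrative only and not taken from the course itself; the streaming jar location and HDFS paths in the comment are assumptions that vary between distributions. The point is the stdin/stdout contract that streaming jobs follow.

#!/usr/bin/env python
# Minimal Hadoop Streaming word count (illustrative sketch, not IBM-specific).
# Typical invocation (paths are assumptions):
#   hadoop jar hadoop-streaming.jar -mapper "wordcount.py map" \
#       -reducer "wordcount.py reduce" -input /in -output /out
import sys

def mapper():
    # Emit "word<TAB>1" for every word read from standard input.
    for line in sys.stdin:
        for word in line.strip().split():
            print(f"{word}\t1")

def reducer():
    # Input arrives sorted by key, so counts for a given word are contiguous.
    current, count = None, 0
    for line in sys.stdin:
        word, value = line.rstrip("\n").split("\t")
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(value)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()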
Let’s face the reality that robots will gain cognitive skills. This is not self-awareness. But it is an ability to interact in ways that prompt human emotional attachment. People do get emotionally attached to things. We all know this. But we have little idea how people will ultimately respond to machines that can converse, learn and demonstrate an interest in your life. Robotics has to be in Apple’s development thinking. It is a logical extension of the iPhone Siri capability six or a dozen generations from today. Imagine that Apple will develop a walking, smiling and talking version of your iPhone. It has arms and legs. Its eye cameras recognize you. It will drive your car (and engage in Bullitt-like races with Google’s driverless car), do your grocery shopping, fix dinner and discuss the day’s news. Apple will patent every little nuance the robot is capable of. We know this from its patent lawsuits. If the robot has eyebrows, Apple may file a patent claiming rights to “a robotic device that can raise an eyebrow as a method for expressing skepticism.” But will Apple or a proxy group acting on behalf of the robot industry go further? Much further. Will it argue that these cognitive or social robots deserve rights of their own not unlike the protections extended to pets? Should there be, minimally, anti-cruelty laws that protect robots from turning up on YouTube videos being beaten up? Imagine if it were your robot? The Kantian philosophical argument for preventing cruelty to animals is that our actions towards non-humans reflect our morality — if we treat animals in inhumane ways, we become inhumane persons. This logically extends to the treatment of robotic companions. Granting them protection may encourage us and our children to behave in a way that we generally regard as morally correct, or at least in a way that makes our cohabitation more agreeable or efficient. Darling’s interesting and thoughtful paper also discusses the risks and controversies likely to emerge by giving legal rights to robots. Some argue that the development and dissemination of such technology encourages a society that no longer differentiates between real and fake, thereby potentially undermining values we may want to preserve. Another cost could be the danger of commercial or other exploitation of our emotional bonds to social robots. If Apple or any company can make a robot that leaves the factory with rights the marketing potential, as Darling makes note of, may be significant. But then if corporations are people, why not give rights to their assembly line babies? This is all weird, fascinating, discomforting and academic still, but on its way.
A comment voiced repeatedly by enthusiastic state and federal environmental CIOs about the Environmental Information Exchange Network is that it's changing the way they do business. This Internet-based innovation provides vital data -- such as hazardous waste disposal, water quality and air pollution statistics -- to all levels of U.S. government, and does so accurately and quickly enough to foster an effective response.

The project is a collaborative effort between the U.S. Environmental Protection Agency (EPA) and the Environmental Council of the States (ECOS), and is referred to as simply "the exchange network" or even just "the network." Its name isn't set in stone -- in the evolving technological environment, they're making it up as they go. Already, it is the largest network of its kind anywhere and reflects a fresh approach.

"The exchange network exemplifies the benefits of collaboration -- different parties working together to create a better solution for all -- in this case, a national environmental computer network," said Kim Nelson, the EPA's CIO and assistant administrator for environmental information. "The EPA needs timely environmental information to make informed policy decisions. The states and other partners require accurate data to monitor their progress toward cleaner water and air, and the American public is entitled to view the latest and best available data on their communities. By combining resources, all parties get what they need quicker and at less cost to the taxpayer."

The point of the network is to foster timely sharing of vital environmental data between the states and the federal government better, faster and cheaper than was possible even a decade ago. Replacing the old linear hierarchy of states sending data to Washington, D.C., the network enables universal exchanges between all levels of state and federal government, in almost real time. And the information is critical. The country spends billions of dollars every year to protect the environment -- the EPA's budget for fiscal 2006 is $7.5 billion. Cleanup costs from past problems add up to billions more, in addition to costs from health care and the economy. The list goes on. Nelson points out that without good information -- and good information being exchanged in a timely fashion -- those costs could be even higher.

Clogged Information Arteries

It's no mystery how the arteries of environmental data became clogged. The EPA was created in December 1970, by cobbling together a variety of programs from different federal agencies that all had different ways of doing things -- much like the more recently created Department of Homeland Security. Over the years, the EPA's mandates and consequent demands for data from the states increased. Each state responded in its own way, collecting information as it saw fit. There was no uniformity. By the time data was keyed into the EPA system, it was often ancient history.

"There was a lot of frustration getting data to the EPA," recalled Molly O'Neill, executive project manager of the ECOS Network Steering Board. Moreover, by the late 1990s, states started moving away from the clumsy EPA system, desiring more advanced internal technologies. In December 1993, 20 frustrated states established ECOS to improve the environment by asserting state roles in environmental management, by providing for interstate exchange of ideas and coordinating environmental management, and by dealing with the federal government.
With more states on board in 1995, ECOS opened a permanent office in Washington, D.C. Although it formed in response to EPA shortcomings, ECOS developed into a partner rather than an adversary. Both the EPA and ECOS established mostly collegial relations in the mid-1990s, when information management issues came to the forefront. These practices developed haphazardly, and everyone agreed that the jerry-built system was incoherent and antiquated. As reporting requirements multiplied and missions crept, states responded on paper, by punch cards or on large floppy disks, as their respective systems permitted. As state agencies struggled to supply more data, more hands had to transcribe more, thus increasing the potential for errors. Deadlines varied from almost daily to every few years, further complicating the system. Each state responded in its own fashion. To chart a better course, the states and the EPA initiated an information management work group to tackle major problems that grew in the past decade, including: burdensome reporting requirements; error-prone transcription of data from punch cards, paper and floppy disks; obsolescence of data by the time it became available; and high cost. Along came the Internet. In the late 1990s, the EPA and ECOS grasped the Internet's potential to resolve their nagging hindrances through a uniform system. The concept of an exchange network jelled. It would cost money up front, but the savings over time were promising. In 1999, while this idea was advancing, the EPA created its Office of Environmental Information, which brought together diffused data, and has a mission of better managing information for the public. The office's key responsibilities included information quality, integration, infrastructure and partnerships, along with improving public access to this data. Mirroring the concerns of ECOS, the notion of an information exchange network found a nest. By 2000 there was a conceptual design for working with the Internet, ready for implementation by 2002. In fiscal 2002, Congress helped get things going by appropriating $25 million for the Exchange Network Grant Program, sustaining it for a total of $85 million into fiscal 2006. These appropriations fund competitive grants to states so they change their databases to flow over the network. Participants use the funding to cover setup costs associated with joining the network, including getting their internal data fit to share, which is a problem in some of the smaller, more recently established agencies. Over four years, about one-half of the state network applications -- or nodes -- have been funded. Two Native American nations recently joined the system, with one-third close behind. Thirty-eight states currently have operational nodes in the network, and another seven have nodes still in development. Five states and the District of Columbia have decided not to participate. It is misleading, however, to read too much into states slow to enter the system. State environmental CIO offices are small -- sometimes one-person operations stretched to respond to their governors' priorities. Some must work to clean up their own data before they can share it. Many agencies are too new; others have been reorganized too many times. The District of Columbia doesn't even have an environmental department, although there has been talk of developing one. 
Exchange network nodes -- the exchange interface on the network -- enable two-way communication between individual states and the EPA, relying on a common set of data elements shared by all participants. As the system developed, states innovated using their nodes to communicate internally and with one another -- and possibilities multiplied. The system ballooned so fast that the EPA isn't yet ready to absorb all of each state's data.

Unfortunately, states not yet on the system are clustered in the southeast -- the area most impacted by Hurricane Katrina. Mississippi is operational, but its node was knocked out of action by the storm, and Louisiana and Alabama's programs are still in development. Consequently, hurricane impact data will be rather old by the time it's available. If the state network nodes had been fully operational, data would be available from as recently as the day before the hurricane. Some southeastern states were late in receiving the EPA's grants -- too late for the recent disaster.

Ironically, states getting on board late do it faster and cheaper than the pioneers, benefiting from their predecessors' experiences. Oklahoma got its node up and running in just one week.

The best technical decision, in retrospect, was the much-debated idea of sticking with eXtensible Markup Language (XML), a computer language uniquely readable both by people and computers. Five years ago, some observers feared that this was a dead-end technology, doomed to rapid obsolescence. Vendors marketed "new and improved" products persuasively, but the EPA and ECOS stuck with XML, which has helped overcome system incompatibilities. Fortunately the risk was worth taking. While other technologies were initially adopted, they were quickly dropped. The central XML decision has paid off, but it might not pay off forever. "One design principle is [that] it's going to change constantly," said Mark Luttner, director of the EPA's Office of Information Collection. "We know it will be different five years from now." Luttner works directly with Andrew Battin, director of the EPA's Information Exchange and Services Division, which Luttner said does all technical, administrative and support work that develops and maintains the network from the EPA side. Battin, who has been with the program more than two years, is regarded by the states as a key participant.

As it's developed, the exchange program has pursued four objectives: to improve timeliness by removing bottlenecks; to improve the quality of data; to lower costs and administrative burdens; and to improve public and employee access to information. States own the data they generate, and provide additional data on demand. Since the information need not be transcribed, errors don't creep in casually -- this efficiency removes the traditional bottlenecks of merging data in various forms into the EPA's system. Naturally, information arrives cheaper and faster, since it is immediately accessible through the Internet.

"In the long run, it's going to change the way we do business rather than just simplify things," said Mitch West, information services manager of the Oregon Department of Environmental Quality. He cites relations with the state of Washington, which shares the same watershed, in the portion of the network that deals with exchanging hazardous waste management data. Each state records its data in a slightly different format for internal purposes, but converting the information for intergovernmental sharing is a one-time process, and each state reformats its own data.
Thus, Oregon regulates producers shipping hazardous wastes to receivers in Washington by providing access to balancing documents that show the materials arrived where they were supposed to. Much of Oregon's waste ends up in Arkansas, however, where such confirmation is not yet available. The exchange network's development promises to tighten that loose end, achieving the "cradle-to-grave" hazardous waste management optimistically mandated by the Resource Conservation and Recovery Act of 1976. As it developed, the network suggested other imaginative applications of its potential, such as the New Jersey Beach Monitoring System (NJBMS), a partnership pioneered in 2002 between the state's Department of Environmental Protection and Earth 911, a clearing-house for community-specific environmental information. Local monitors can record water sampling results on handheld PDAs, instantly transmitting them to the state's data management system through a completely paperless process, so it's ready for immediate laboratory analysis. The system provides recommendations on beach closings to local authorities, who make their decisions instantly available to the public. The NJBMS resulted from a grant to New Jersey, Delaware, North Carolina, Georgia and California to develop an automated exchange between the states and the EPA. In some ways, the program's growth and extension of its beneficial results parallel those of the exchange network itself. Health officials, the public and other states enjoy free and timely access to its data. The Garden State has been monitoring beach conditions since 1974. Its Cooperative Coastal Monitoring Program (CCMP), with the participation of local environmental health agencies, provides a model for the EPA's Beaches Environmental Assessment and Coastal Health Act. Now technology allows it to work faster, and with better data. "Now historical conditions at monitoring stations can be documented and reviewed as necessary," said Research Scientist Virginia Loftin, manager of the CCMP. "The NJBMS and the exchange network have enhanced and streamlined the operations of the beach monitoring program, and automated data reporting to EPA." As New Jersey's program shows, the network already reaches beyond environmental agencies to address related concerns, such as those of health departments, which can now access environmental data. O'Neill points to the Centers for Disease Control Public Health Tracking Program as a complementary initiative to bring together data that environmental and health agencies must share to make well informed decisions. Such vital interagency sharing, forwarded by the exchange network, has emerged in Washington, Oregon and New Jersey initiatives, as well as some others. State agriculture or natural resource agencies are candidates to partner as well. The exchange is clearly evolving at a remarkably fast pace, spinning off applications as it develops in a kind of creative chaos. When the EPA's Nelson came into office in late 2001, she claims she was "given the ball on the 2-yard line," but observers think her claim regarding the exchange is modest. She helped create the exchange as a Pennsylvania state official prior to moving to Washington, D.C., where she is now working to bring it to maturity. But Luttner is puzzled by the lack of curiosity among other federal agencies to consider the exchange as a model. "It's the most advanced system of its kind," he said. "We'd be delighted to share our experience." So far, there have been no takers. 
In a way, success has been overwhelming. The problem now is keeping up with the growth and the avalanche of data, said O'Neill. The exchange network has become the poster child for how agencies can craft an intergovernmental solution. Its Web site is neither a federal nor a state Web site, but a site run by officials communicating "out of the box."
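To give a concrete feel for the XML-based exchanges described above, here is a deliberately simplified sketch. The element names and values are invented for illustration; they are not taken from the actual exchange network schemas, which are defined by the EPA and its partners. The point is only the property the article highlights: the same XML payload is readable by people and by programs at both ends of the exchange.

import xml.etree.ElementTree as ET

# Build a tiny, invented water-quality payload of the sort a state node might publish.
# Element and attribute names are illustrative only; real exchange network schemas differ.
report = ET.Element("WaterQualityReport", attrib={"state": "NJ"})
sample = ET.SubElement(report, "Sample", attrib={"stationId": "BEACH-042"})
ET.SubElement(sample, "Parameter", attrib={"name": "Enterococci", "units": "cfu/100mL"}).text = "35"
ET.SubElement(sample, "CollectedOn").text = "2005-07-14"

payload = ET.tostring(report, encoding="unicode")
print(payload)  # human-readable XML, ready to send to another node

# A receiving node can parse the same payload back into structured data.
parsed = ET.fromstring(payload)
for s in parsed.findall("Sample"):
    value = s.find("Parameter").text
    print(s.get("stationId"), "reading:", value)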
The netstat command in Linux is a very useful tool when dealing with networking issues. This command is capable of producing information related to network connections, routing tables, interface statistics etc. The utility also helps network administrators keep an eye on invalid or suspicious network connections. In this article we will understand the basics of this command using some practical examples.

The syntax of this command is: netstat [options]...

1. Display routing information maintained by kernel

This... [More]
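Although the article is cut off at this point, the first use case is easy to illustrate. The sketch below, which is not part of the original article, simply wraps netstat with Python's subprocess module; it assumes a Linux host where the net-tools package (which provides netstat) is installed.

import subprocess

# Illustrative sketch: capture the kernel routing table via "netstat -rn".
# Assumes a Linux host with the net-tools package installed.
def routing_table() -> str:
    result = subprocess.run(
        ["netstat", "-rn"],  # -r: routing table, -n: numeric addresses (no DNS lookups)
        capture_output=True,
        text=True,
        check=True,          # raise CalledProcessError if netstat exits non-zero
    )
    return result.stdout

if __name__ == "__main__":
    print(routing_table())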
This year, an estimated five billion consumer objects worldwide will become "smart"—connected to the internet—with 25 billion predicted by 2020. For those still unaware, this expanding network of physical objects connected wirelessly to the Internet is what makes up the Internet of Things. They are objects which can be used for an endless list of uses (think cars, heaters, door-locks etc.), but all share the talent of being capable of sharing data seamlessly across the Internet of Things by chattering away over the airwaves to one another.

For consumers who fall within its wireless omnipotence, the Internet of Things brings countless benefits which are already being felt: the ability to transition seamlessly across devices, being able to control home security from the poolside, or turning up the heat at home whilst on the evening commute, to name a few common examples.

There are huge benefits for businesses too. For example, in the Retail industry it is becoming increasingly common for warehouses and dispatch processes to be performed by robots, leading to significant supply chain improvements. Just last month, Blue Prism became the first British company using Robotic Process Automation software to list on the AIM, as further evidence that big businesses are more than willing to bring big data, AI and the Internet of Things into their operations.

Is it reasonable to think that we can derive some sort of value from even the most mundane objects by connecting them to the Internet of Things? Even something as uninteresting as the bin, perhaps?

The Smart Bin

The Bin. Not the recycling bin shortcut that sits castaway in the top left-hand corner of your desktop, but our humble trash can, the unglamorous garbage container, the distant descendant of the first municipal landfill established on the outskirts of Athens in 400 B.C. A cursory glance would show that the technology and engineering of the bin has changed very little in the past centuries, other than maybe our decision-making processes around its use: considering whether our rubbish is recyclable or non-recyclable, and putting it into the appropriate compartment. Does the evolution of the bin have anything more to add?

Nevertheless, the Accenture Innovation Program has taken the principles of innovation and the Internet of Things and applied them to our bins, changing the ways we treat waste disposal and inventory management, and compelling us to reevaluate our own behaviors. The Smart Bin has now become a part of the Internet of Things.

The Innovation Program has developed a Smart Bin which can aid inventory management for businesses, and help a user to understand what can be recycled and ensure they are recycling the optimum amount. Imagine a fast food chain that is constantly restocking with the same ingredients. For example, a fast-food burger chain is in a constant cycle of inventory monitoring and restocking burger buns. As employees on the burger assembly line use and finish each packet of burger buns, they throw the packaging into the Accenture concept Smart Bin. The bin scans the bar code on the packaging as it enters the bin and removes this item from the store's online inventory. If the online inventory of burger buns falls below a certain point, it can be automatically reordered, ensuring the outlet always has enough burger buns for its needs.
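As a purely hypothetical sketch of the decrement-and-reorder flow just described (Accenture's actual implementation is not detailed in this article), the logic might look something like the following. The item codes, threshold and reorder quantity are invented for illustration only.

# Hypothetical sketch of a Smart Bin inventory hook: each scanned barcode decrements
# stock, and a reorder is triggered when stock falls below a threshold.
# Item codes, thresholds and quantities are invented for illustration.
inventory = {"BUN-8PACK": 40}          # current stock, keyed by barcode
REORDER_THRESHOLD = {"BUN-8PACK": 12}  # reorder point per item
REORDER_QUANTITY = {"BUN-8PACK": 50}   # how much to order when triggered
pending_orders = set()                 # items with an order already in flight

def place_order(item: str, quantity: int) -> None:
    # Stand-in for a call to the real ordering system.
    print(f"Reorder placed: {quantity} x {item}")

def on_barcode_scanned(item: str) -> None:
    """Called by the bin each time a discarded package's barcode is read."""
    if item not in inventory:
        return  # unknown packaging: nothing to adjust
    inventory[item] = max(0, inventory[item] - 1)
    if inventory[item] < REORDER_THRESHOLD[item] and item not in pending_orders:
        pending_orders.add(item)  # avoid duplicate orders until the stock arrives
        place_order(item, REORDER_QUANTITY[item])

if __name__ == "__main__":
    for _ in range(30):  # simulate a busy shift of discarded packets
        on_barcode_scanned("BUN-8PACK")
    print("Remaining stock:", inventory["BUN-8PACK"])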
The bin can also use the barcode of the packaging to identify if the packaging can be recycled, and inform the user if the packaging has been placed in the wrong compartment, helping a company to reduce its ecological footprint. The value of this becomes strikingly apparent when considering that although up to 70 percent of business waste is recyclable, only slightly over 5 percent finds its way to a recycling facility; the remaining misallocated waste goes on to comprise the 25 percent of the UK's total annual waste output generated by our commercial sector. The Accenture Smart Bin can help to improve these figures. The benefits outlined here are applicable to any business or organization which relies on a high turnover of stock; for example online retailers (who use a considerable amount of packaging materials), cleaning services and the NHS, to name but a few.

Helping Cities to be Smart

Smart Bins can be given other functionalities too, to enable them to become another part of the Smart City. In London, for example, advertising firm Renew used their Smart Recycling Bins to track mobile devices via Wi-Fi as their owners moved throughout the city of London in order to sell tailored advertising opportunities. Across the pond in New York, waste management firm BigBelly developed their own Smart Bin which tracks how full it is; this then notifies BigBelly which bins need collecting and emptying and which do not, giving actionable data that drives operational effectiveness. These bins essentially are able to calculate their own remaining capacity and share this information with cleaning crews on their network, who then allocate and dispense resources only when there is a necessitated requirement. This kind of Smart Bin is also currently being trialed in Bangkok.

Using Smart Bins to Change Behaviors

The Smart Bin has the potential to be used as a lever to change consumer behaviors. Recently, two graduates in Mumbai devised a Smart Bin which provides free Wi-Fi as a reward for those who put rubbish in the bin, empowering users in an area with a sparse network while tackling environmental issues stemming from poor waste management. We can imagine how else Smart Bins could be developed to help change behaviors; for example a Smart Bin in a family home could measure calorie intake and make suggestions to improve the family's health, or a Smart Bin in an office could measure paper usage throughout the year and check this against the company's target. As much of what we consume is packaged, the Smart Bin can understand our consumption habits and is well positioned to be a powerful tool to help steer us towards healthier habits both in the home and in the office.

Closing the Lid

Smart Bins are in their infancy, yet it is already clear that they could deliver significant value for the consumer, businesses, and the environment. As the Internet of Things becomes a greater part of everyday life, our bins will be more than just rubbish.

By: George E. Goldhagen
For the last several years, cryptographer Karsten Nohl and his team at Security Research Labs in Berlin have tested about 1,000 SIM cards for vulnerabilities. Give this German cryptographer two minutes on a PC and he can send a secret text message that contains a "virus" to a mobile phone's SIM card, and then basically get "root" and take over the phone. That text can allow him to eavesdrop, make purchases via mobile payment systems and otherwise "trick mobile phones into granting access to the device's location, SMS functions and allow changes to a person's voicemail number." Nohl will present his research during "Rooting SIM cards" at the Black Hat security conference in Las Vegas.

"We can remotely install software on a handset that operates completely independently from your phone," Nohl told The New York Times. "We can spy on you. We know your encryption keys for calls. We can read your SMS's. More than just spying, we can steal data from the SIM card, your mobile identity, and charge to your account."

While it's not something you see happening, mobile operators can push out updates by sending hidden text messages to the SIM card, which is like a tiny computer with its own operating system and software. The SIM has a Java Card that runs Java-based programs as if it were a Java virtual machine. Although there are about seven billion SIM cards in "active use," Nohl estimates "as many as 750 million phones may be vulnerable to attacks."

"Give me any phone number and there is some chance I will, a few minutes later, be able to remotely control this SIM card and even make a copy of it," he told Forbes.

His team sent a deliberately false binary code via SMS to a phone using a SIM with a weak encryption standard called DES (Data Encryption Standard) that has been around since the 1970s. The code didn't include the right cryptographic signature, so the command wasn't understood and it wouldn't run. However, when the SIM rejected that code, it sent back an error code carrying a cryptographic signature made with its 56-bit private DES key. Using a rainbow table, that private DES key was cracked. The whole process takes about two minutes. Now knowing the private DES key, an attacker can pretend to be the mobile operator and push out malicious software updates to the device, effectively gaining "root" on the SIM. This allowed him to "eavesdrop on a caller, make purchases through mobile payment systems and even impersonate the phone's owner." He could "send premium text messages, collect location data, make premium calls or re-route calls," reported Forbes. "A malicious hacker could eavesdrop on calls, albeit with the SIM owner probably noticing some suspiciously-slow connections."

He told The New York Times that "in three-quarters of messages sent to mobile phones using DES encryption, the handset recognized the false signature and ended communication." Only about a quarter sent the error code with its encrypted digital signature, but that's also equal to about 750 million vulnerable phones. It's a double whammy; he found a way to exploit a flaw in DES as well as another way to exploit the Java software in the SIM. "Through over-the-air (OTA) updates deployed via SMS, the cards are even extensible through custom Java software," Nohl wrote on his company's blog.
“While this extensibility is rarely used so far, its existence already poses a critical hacking risk.” Nohl explained the second bug to Forbes as “unrelated to the weak encryption key,” but that “it allows even deeper hacking on SIMs” thanks to “a mistake on the part of SIM card manufacturers.” Java Card uses a concept called sandboxing, in which pre-installed programs like a Visa or PayPal app are shielded from one another and the rest of the SIM card. The term comes from the idea of only allowing programs to “play with their own toys, in their own sandbox,” says Nohl. “This sandboxing mechanism is broken in the most widely-used SIM cards.” The researcher says he found a few instances where the protocols on the SIM card allowed the virus he had sent to a SIM, to check the files of a payment app that was also installed on the card. Nohl believes badly-configured Java Card sandboxing “affects every operator who uses cards from two main vendors,” including carriers like AT&T and Verizon who use robust encryption standards. Are SIM cards with these 3DES standards vulnerable? Nohl suggests they might be, and that he’ll expound on the details at Black Hat.
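To put the weakness of single DES in perspective, a back-of-envelope calculation (ours, not Nohl's) shows why a 56-bit key is considered crackable while 3DES is not. The guessing rate below is an arbitrary assumption for illustration; as described above, real attacks on DES-signed SIM responses use precomputed rainbow tables rather than pure brute force.

# Back-of-envelope comparison of DES and 3DES keyspaces (illustrative only).
# The guess rate is an arbitrary assumption, not a measured figure.
des_keys = 2 ** 56           # single DES: 56-bit key
triple_des_keys = 2 ** 112   # two-key 3DES: 112-bit key

guesses_per_second = 1e9     # assume one billion key guesses per second
seconds_per_year = 60 * 60 * 24 * 365

des_years = des_keys / guesses_per_second / seconds_per_year
tdes_years = triple_des_keys / guesses_per_second / seconds_per_year

print(f"DES keyspace:  {des_keys:.2e} keys, ~{des_years:.1f} years to exhaust")
print(f"3DES keyspace: {triple_des_keys:.2e} keys, ~{tdes_years:.2e} years to exhaust")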
University research regenerates mouse heart Wednesday, Aug 14th 2013 Scientists from the University of Pittsburgh School of Medicine may have found a way to artificially create working organs after using human cells to regenerate a mouse's heart. As a delicate operation, the procedure has made great improvements in the effort to reconstruct human organs. Using proper environmental control systems, research labs can be kept at appropriate temperatures needed for regenerative experiments. Without it, the samples could be contaminated and made unusable. The efforts of the university have centered around the fact that heart disease is the highest cause of death in the U.S., and current therapies are ineffective for over half of patients with the condition, according to Health Canal. The experiment has created an opportunity for additional testing in order to make regeneration a reality. The scientists were able to completely rebuild the mouse's heart with human cells. The mouse's heart had been completely stripped of all cells and repopulated with the replacements, according to Science World Report. The heart was able to contract at 40 to 50 beats per minute after a few weeks, and research is continuing to ensure the organ is strong enough to pump blood. Thanks to this recent development, the future may produce regenerated organs that can be used in transplants, making up for a lack of available donors. The method could also be used to test drug effects and heart development. "One of our next goals is to see if it's feasible to make a patch of human heart muscle," said Dr. Lei Yang, the university's assistant professor of developmental biology. "We could use patches to replace a region damaged by a heart attack. That might be easier to achieve because it won't require as many cells as a whole human-sized organ would." Temperature is paramount The environmental conditions of lab samples could hurt the research process if they are not kept in the appropriate temperature. Having enough humidity and air flow is also important in the samples' usability. Biological tests, like that of the heart regeneration, are extremely sensitive, and temperature monitoring has become important in the research process, according to Medical Laboratory Observer. Temperature sensors are very accurate in measuring the environment, and some take less than a second in response time during changes. Samples have differing ranges they can be in, and doing research under the recommended conditions will prevent them from being compromised. The ability to store varying materials in their optimal environments has allowed scientists to explore other options to meet their objectives. For example, the university researchers used multipotential cardiovascular progenitor cells taken from a human skin biopsy as the replacement for the removed cells in the mouse heart. "Nobody has tried using these MCPs for heart regeneration before," Yang said. "It turns out that the heart's extracellular matrix – the material that is the substrate of heart scaffold – can send signals to guide the MCPs into becoming the specialized cells that are needed for proper heart function." Lab research tends to contain delicate processes that could be ruined in an unsuitable environment. Investing in monitoring and control systems will help keep samples usable and aid in the advancement of new procedures.
The horrendous aftermath of last Friday’s 9.0 earthquake off the east coast of Japan is still unfolding. The quake was the largest ever for the island nation and was described by Prime Minister Naoto Kan as his nation’s ”worst crisis since the end of the war 65 years ago.” The ensuing destruction from tsunamis, infrastructure collapse, fires and now nuclear plant radiation is being tracked and analyzed, some with the help of computer technology designed for just such an event. In certain cases, this technology worked quite well. As reported by Computerworld, the National Oceanic and Atmospheric Administration’s (NOAA) Center for Tsunami Research was able to track the earthquake-spawned tsunamis in real-time as they spread east and south across the Pacific Ocean. The models predicted both the timing and intensity of the waves as they made their way to the west coasts of North and South America. According to the NOAA Center for Tsunami Research, approximately 25 minutes after the earthquake, the tsunami was recorded at DART buoy 21418, which was near the epicenter of the quake off the coast of Japan. DART, which stands for Deep-ocean Assessment and Reporting of Tsunamis, uses dozens of buoys scattered across the ocean to collect tsunami-spawned wave action and beam the data to warning centers around the country. NOAA, as well as a number of other research institutions, rely on the MOST model for tsunami forecasting. MOST (Method of Splitting Tsunami) is a suite of numerical codes capable of simulating three processes of tsunami evolution: earthquake, transoceanic propagation, and inundation of dry land. The code was developed by Vasily Titov of the Pacific Marine Environmental Laboratory’s and Costas Emmanuel Synolakis of the University of Southern California. The model was used to generate an animation of last Friday’s earthquake, which illustrates how the tsunami propagated across the Pacific Ocean. Detailed analysis of the results have not yet been completed, but according to Donald Denbo of the University of Washington, for past tsunami events they’ve achieved 85 percent accuracy in predicting maximum wave height. As the animation shows, the strongest wave energy was directed southward away from North America, which was reflected in the relatively light damage to Hawaii and the West Coast of the US. The model was run on six Dell PowerEdge dual-socket servers with Intel Xeon X5670 CPUs (2.93 GHz) , 32GB of RAM, and 16TB of disk storage. The animation runs were pre-computed, with the results saved on disk as compressed files. There are a total of 1691 pre-computed runs, each taking about 8 hours. The animation was generated from these compressed files and generated with MATLAB on a dual-processor laptop in about 4 hours. Although the model is designed to work in real time, it’s of limited use for coastal communities in close proximity to the earthquake. Residents along the northeast coast of Japan were warned immediately following the quake, but there simply wasn’t enough time to perform a mass evacuation of the shoreline communities. In this case, the first 20-foot-plus waves hit the coast about an hour-and-a-half after the earthquake and traveled inland up to six miles, inundating entire communities. In fact, most of the earthquake fatalities tallied so far appear to be tsunami related, with whole towns destroyed by the giant waves. As of Wednesday, the official death toll stood at over 4,300, with more than 8,000 people still missing. Both numbers are expected to rise. 
While the tsunami aftermath was being tallied, another crisis developed around the Fukushima Daiichi nuclear power plant in northeastern Japan. The number two, three and four reactors of the plant's six reactors were seriously damaged during the quake, causing fires and subsequent explosions that released radioactive material into the air. The problem stems from damage to the water cooling systems, which has caused the reactor cores to become exposed to the air. This allows the nuclear materials to overheat and generate hydrogen gas, resulting in dangerous pressure buildup. A 12-mile radius around the plant has been evacuated, although the chairman of the United States Nuclear Regulatory Commission is advising evacuation of a much larger area. The current danger level appears to be somewhere between the Chernobyl nuclear disaster in 1986 and the Three Mile Island accident in 1979. As this article goes to press, the situation is still in flux as fears of a containment breach and core meltdown are still real threats.

The spikes in airborne radiation that have accompanied the explosions are being tracked, mainly for the purpose of reducing exposure to on-site rescue workers and the local populace, but also to ensure that these radioactive clouds don't threaten other areas of Japan or even further afield. Since winds tend to disperse the radioactive clouds rather rapidly, the danger to areas outside of northeast Japan, and especially to other countries, is rather small at this point. One of the radiation releases produced a local reading of 10,850 microsieverts per hour, or about 5,000 times the normal background level. Average human exposure for an entire year is on the order of 6,200 microsieverts, while acute radiation sickness doesn't occur until total doses reach the 1,000,000 microsievert range.

Nextgov.com reports that the National Nuclear Security Administration (NNSA) is lending a hand with a team from the National Atmospheric Release Advisory Center (NARAC) that is helping to provide real-time estimates associated with the radiation leaks. According to the report: "The squad's specialists plug data in to supercomputer algorithms on radiation doses, exposure, hazard areas, meteorological conditions and other factors to produce predictive models." The idea is to repurpose some of the same codes used for safeguarding US nuclear materials for the nuclear plant disaster. The NARAC team runs out of Lawrence Livermore in California and presumably has access to some of the big Blue Gene supercomputers there (Dawn, for example), but according to Livermore officials, these resources are not being used for this effort.

Half a world away, the Viennese Central Institute for Meteorology and Geodynamics is also tracking these radioactive clouds. The institute has modeled the dispersal of radioactive Iodine and Cesium as it streams across the Pacific Ocean, including an animation of the dispersal of Iodine-131 associated with one of the radiation releases.

One unfortunate irony of this particular earthquake is that it has incapacitated Japan's own ability to do much of this modeling work on its own. Because of the damage or shutdown of many of the nation's power plants — nuclear or otherwise — Japan is undergoing rolling blackouts across the country. Some areas in northern Japan are completely without power, and 850,000 households are still without electricity.
Major computing centers are barely operational, with facilities like the ones at RIKEN and the Tokyo Institute of Technology (Tokyo Tech) operating intermittently. Cycles for the multi-petaflop TSUBAME 2.0 system at Tokyo Tech will be especially hard to come by, given that the machine requires several hours for booting up and shutting down. Presumably after the power situation stabilizes (which may not be for some time), Japanese supercomputers will be working overtime running post-mortem scenarios of the tsunami and nuclear plant disasters.

Given the country's risky seismic profile, the episode may spur Japan to re-evaluate its nuclear energy strategy. Beyond that, the disaster will certainly refocus the country's efforts to provide more sophisticated disaster preparedness systems and robust infrastructure for its populace. But for a country that prides itself on how well it has overcome its precarious geography, this was a harsh lesson indeed.
"My circuitry will need to adapt. It will take some time before they can compensate for your input."
Artificial life form and Lt. Commander Data on Star Trek: The Next Generation
Translation: "Geordi, I will miss you."

I am having trouble writing this essay. Somehow my computer has a mind of its own today. He is a bit slow and isn't really very responsive. I don't know what's wrong with him today. Maybe tomorrow he'll feel like himself again.

Have you ever caught yourself attributing human characteristics to your computer or to your car? (Lots of people, including myself, even give their car a name and can describe its personality.) You most likely do so to your cat, dog or even your goldfish. We all do this, and it is perfectly normal. We pat our computers on the back, we tell customers on the phone that "he doesn't want to" today, or – in the spur of a techno-anger moment – we even hit our computers. YouTube is full of funny videos of people doing all this. Does it help? Does the computer behave better? What would happen if you would ignore your computer for a week after he (it) let you down? Or even better, could you make your Windows-based computer jealous by threatening to move to Apple?

Nothing would happen, of course. We know that. Still, it seems to be deeply human to describe things around us in our own – human – terms. Perhaps this is also the reason why the question "Can computers think?" is such a popular one in modern philosophy.

The most influential person who has reflected on this question is undoubtedly Alan Turing (1912-1954). Turing was a pioneer in computer science and was responsible for cracking the German Enigma encryption machine during World War II. In a 1950 paper that is a remarkably good read, "Computing Machinery and Intelligence," Turing introduced what is now known as the Turing test. Turing replaces the question of whether computers can think with a more practical one: Is it imaginable that a computer could fool a human being, and be taken for a human being as well?

The test that Turing devised described – and I am summarizing here – a situation in which a test person could ask a question to both another human being and a computer, without being able to see who was who. They would communicate through a computer screen. The test person would be allowed to ask questions, and the other human being and the computer would give answers. Both the other human being and the computer would even be allowed to cheat and respond with statements such as, "Don't listen to him. I am the real human being." To emulate the slower human speed, the computer would also be allowed to wait before responding to mathematical questions, for instance. If the test person could not distinguish the difference, the computer passed the Turing test and would seemingly be able to think – in other words, display intelligence.

The Turing test provides a very practical solution to a very hard philosophical problem. What is intelligence anyway, and what does it mean to think? But the Turing test has been widely criticized, too. Because of its practical solution, it equates intelligent behavior with human behavior. This is not necessarily the case. Humans can display extremely unintelligent behavior (like hitting their computer and thinking it helps). And who says that human intelligence or the human way of thinking is the only way of thinking? People, and their cognitive capabilities, cannot be separated from their bodies and their senses.
Why would computers have to be limited to means of communication such as language? Maybe the concept of thinking can be defined in completely non-human ways. Ironically, if computers perform at their best thinking capacity in non-human ways (and you can argue computers do this all the time already), they would completely fail the Turing test instead of acing all intelligence tests.

Turing wrote his paper in 1950, so what he describes was pretty far-out thinking for his day. Nevertheless, he was very concrete in his predictions. He expected that by the year 2000, computers would pass the Turing test at about 70%. In his view, storage was the bottleneck, and he expected it to be about 10^9 by 2000. Assuming he was counting in bytes, this would be roughly a gigabyte. It seems reality did overtake this prediction considerably! But what about the prediction of getting it right by 70%?

IBM, Chess and Jeopardy!

Turing ends his paper with a small discussion on where to start emulating human intelligence. Turing suggests an abstract activity first, such as playing chess; and, in fact, this is exactly what happened. IBM’s computer, Deep Blue, did defeat chess champion Garry Kasparov. The programming of Deep Blue was largely one of brute force, calculating an unbelievable 200 million positions per second and “thinking through” six to eight moves in advance. Additionally, Deep Blue contained a huge library of 700,000 chess games. The developments didn’t stop with Deep Blue’s victory. Today’s chess software running on standard PCs may not calculate as many moves per second, but it contains much smarter algorithms.

Moreover, IBM has successfully moved beyond chess and has succeeded in a much, much harder domain. In 2011, the IBM Watson computer won the American game show Jeopardy!, beating the two best contestants the show ever had. At the core of Watson was an engine to parse language, including trying to understand clever wordplay and slang, on which many of the show’s questions hinge. The computer’s programming was a combination of many different styles of algorithms trying to interpret the questions, in combination with four terabytes of semantically structured information – an infinitely more complex task than playing chess. Still, Watson is far from passing the Turing test. It may interpret language better than any other computer, but it is focused on providing answers instead of full conversation.

Returning to the original question, what does it mean for a computer to think, or to display intelligence? To even define what thinking really means is challenging already. Taking a very rational approach, thinking comes very close to reasoning and problem solving, where thinking describes the process of going through the various steps. Consequently, the more complex the problems you can solve, the more intelligent you are. According to the IQ tests, at least. Using this definition, it is hard to deny that computers can think; in fact, they think much better than human beings do. Take, for instance, the “logic grid” puzzles that have been so popular for a while. Based on a list of logical clues, one fills in a grid that shows the ages, hobbies and favorite colors of Casper, Rosaly, Emilie and Wilhelmine.

[Figure: a logic grid puzzle]

As this is a fairly simple algorithm, computers crack these puzzles in a millisecond. Moreover, they can do it on a much more abstract level than we can. Where we need real-world clues, like phone numbers, names and hobbies, to imagine the logic, computers just need a label, and Rosaly as a label works just as well as C2.
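To make this concrete, here is a minimal sketch of the brute-force approach: enumerate every possible assignment of ages and hobbies and keep the ones that satisfy the clues. The names come from the example above, but the clues themselves are invented purely for illustration; a serious solver would use constraint propagation, yet for a grid this small naive enumeration is instantaneous.

```python
from itertools import permutations

people = ["Casper", "Rosaly", "Emilie"]
ages = [8, 9, 10]
hobbies = ["chess", "painting", "karate"]

def karate_player(hobby_of):
    # The person assigned to karate in this candidate solution.
    return next(p for p, h in hobby_of.items() if h == "karate")

def satisfies(age_of, hobby_of):
    # Invented clues, purely for illustration:
    return (
        age_of["Rosaly"] == 9                        # 1. Rosaly is nine years old.
        and hobby_of["Rosaly"] == "painting"         # 2. Rosaly's hobby is painting.
        and hobby_of["Casper"] != "chess"            # 3. Casper does not play chess.
        and age_of[karate_player(hobby_of)] == 10    # 4. The karate player is ten.
    )

# Try every combination of ages and hobbies and print the ones that fit.
for age_perm in permutations(ages):
    for hobby_perm in permutations(hobbies):
        age_of = dict(zip(people, age_perm))
        hobby_of = dict(zip(people, hobby_perm))
        if satisfies(age_of, hobby_of):
            print(age_of, hobby_of)
```

With these four made-up clues there is exactly one satisfying assignment, and the program finds it by checking all 36 combinations in well under a millisecond.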
The What and How of Thinking

Others define thinking as a much more organic process, full of lateral steps, and also include thinking in terms of images, emotions, and so forth. When thinking includes imagination and inventive approaches to problems, this is where we human beings excel. I remember hearing a story that the American military was using pattern recognition software for visual processing. In particular, it was trying to create software that would recognize tanks, to avoid shooting at the wrong ones. The software was fed as many pictures as possible of American tanks, as well as other tanks, and through learning algorithms the software got better and better at recognizing them – until the software had to perform on live feeds instead of pictures, and failed. The software engineers later found out what had gone wrong. Instead of recognizing the patterns of the actual tanks in the pictures, the software had started to recognize the resolution of the pictures. Pictures of American tanks had higher resolution than the pictures of foreign ones.

Thus it seems there are two ways to think about thinking: the “what” way and the “how” way. Turing chooses the “what” way; he only focuses on the outcome, an intelligent result. If we follow this way of thinking, we cannot escape the conclusion that computers can think. In fact, they can think much better than we can. They can reason better, faster and deeper than human beings, with much more precision. Getting to the same broad level of thinking as humans is simply a matter of time. Support for this thought comes from the analytical philosophers, a twentieth-century school of thought led by Bertrand Russell (1872-1970) and Ludwig Wittgenstein (1889-1951). In their view, people can only think what they have words for. If there is no word for it, it cannot be thought. (Even if this is not true, the moment something new is thought, in order to express it, it needs a word.) In essence, although we are not that far yet, you can codify all thought; and if it can be codified, it can be fed to a computer.

The “how” way presents a very surprising view. It seems there is a stronger tendency in research to explain human thinking in terms of computer science than the other way around. Many scientists and philosophers currently describe the world as consisting of matter only, subjected to the laws of nature. Applied to the human brain, according to some (not all) neurologists and psychologists, this means it is nothing more than an incredibly complex neurocomputer. Human behavior is simply the result of neural stimuli. In fact, as the brain doesn’t have a central command center, but consists of many different pieces interacting with each other, one currently popular belief is that the brain doesn’t even make most decisions. The body has decided already it wants to eat the lovely smelling food even before the brain has interpreted the scent. The body already reacts by withdrawing the hand from a hot surface fractions of a second before the information reaches the brain. Recent research shows that, in some cases, the brain gets involved only slightly after the body gears towards action. As some put it in extreme terms, for a large part of our daily behavior, the brain is a “chatterbox” that rationalizes behavior and actions after the fact.
Seen this way, there is no reason to suggest that computers cannot think. Decision making is a distributed mechanism involving many centers in the brain. Thinking would be comparable to an internal dialogue. Seen this way, computers would again be better at it than human beings. In fact, this is exactly how the IBM Watson computer was programmed, with multiple algorithms for grammatical analysis, information retrieval, information comparison, and formulating answers. Even better than a human being, a computer could contain algorithms (bots) following different paradigms (whereas most human beings have trouble handling multiple, or even conflicting, paradigms at the same time). Such a distributed and diverse process could lead to much more balanced outcomes (or a much more serious version of schizophrenia, now that I come to think of it).

In many ways, defining the brain as a large and complex neurocomputer represents a full circle from the Age of Enlightenment, which was at its height in the 18th century.[1] Philosophers aired an unshakable belief in the power of science. The world and the universe were seen as a machine – an incredibly complicated one, but a machine nevertheless. Our job, then, is simply to figure out the rules, just as it is with the brain today. It is an incredibly complicated neurocomputer, and it is up to us to figure out how it works.

Continuing this discussion, we’ll come to the counterintuitive conclusion that a truly intelligent computer, the one we can really trust, is the one that can make mistakes. As shown earlier in this article, computers can think, but somehow the conclusion doesn’t really satisfy me. It somehow feels wrong that our human thinking can be reduced to pure reasoning. I am more than happy to accept that computers can reason much better than we can, and infinitely faster. But there is a clue in the “logic grid” puzzles I described. Computers can do this in pure mathematical form, while human beings benefit from labels such as names and hobbies. Labels make us understand what we are thinking about. Do computers have that understanding too?

When a person understands something, its meaning is clear to him or her. When is a meaning clear? Perhaps things, ideas or concepts can have inherent meaning, worth and significance in their own right. But I think it is more helpful to think of understanding and meaning as relations between objects or subjects in the world and ourselves. The moment we can relate to them, they start to have meaning. And if we can define the relationship we have, or can even predict the behavior of the object, we understand it. For example, I understand how to drive a car; I can see how my actions relate to the behavior of the car while driving it. However, my understanding is more limited than the understanding of a mechanic, who can relate to putting the various components together.

The keyword in all this is “ourselves.” Said another way, we need to be self-aware. Self-awareness means that we can be the objects of our own thoughts. We can reflect on our own being, characteristics, behaviors, thoughts and actions. We can step outside of ourselves and look at ourselves. This can be very shallow, as when we look in the mirror and decide we don’t look that bad. Self-awareness can also go very deep, creating an understanding of who we truly are and what we believe in, and then we can consciously decide on how to behave.
We have the will and power to stop intuitive reactions and behaviors and react the way we believe we should react, in a more appropriate manner. This is what has been missing in the “can computers think” discussion so far. So, can computers be self-aware?

This has been an important theme in science fiction, at least. The Terminator movies describe the war between humanity and Skynet, a computer network so advanced that it became self-aware. The system engineers who designed Skynet realized the consequences and tried to shut it down. Skynet saw this as a threat to its own existence (realizing your own existence, and being able to grasp the concept of death, are key aspects of self-awareness), and struck back – Judgment Day. Or consider The Matrix, in which computers run the world and use comatose human bodies as batteries. It turns out that even Neo himself, escaping the dream world to live in the real world fighting the Matrix, is a product of the Matrix. The Matrix has the self-awareness to realize it needs an external stimulus to reinvent itself and become a better version of itself time and time again.

In fact, every system that realizes it needs to renew itself in order to survive is self-aware. In order to renew yourself, you need to be able to learn. And this is the argument that opponents always bring forward: a computer only does what it is told. The argument is easy to counter. IBM’s Deep Blue played better chess than its programmers ever could. It learned so much about chess that it beat Garry Kasparov, the reigning world champion. IBM’s Watson built so much knowledge that it beat the world champions on Jeopardy!. Fraud detection systems contain self-learning algorithms; in fact, self-learning is a complete branch of an IT discipline called data mining. Learning is the cognitive process of acquiring skill or knowledge, and very much the domain of computers.

Can computers rewrite their own programming? This would have to be part of computers renewing themselves. In fact, there is an established term for it: metamorphic code. It is a technique used in computer viruses in order to remain undetected. Most computer viruses are recognized by a certain footprint, a combination of code. By continuously changing it, computer viruses become harder to detect. Each generation, the virus reproduces a slightly different, but still functioning, version of itself. In principle, this is not different from human evolution. You could call the self-evolution of computers, as witnessed through viruses, an early stage of evolution. It’s far from the evolution the human race has experienced, but it is entirely conceivable that computers will ultimately evolve into a similar or much more powerful organism than human beings. Given that the evolution takes place in the digital world, it is even likely this form of evolution goes infinitely faster than evolution in the real world.

In general, you can even argue computers can be self-aware in a much better way than human beings. Computers can make themselves the subject of their analysis completely dispassionately and objectively. They can run a self-diagnostic and report what they believe is malfunctioning in their system (a toy sketch of such a self-check follows below). Modern mathematics helps computers to judge the quality of their own program. Computers don't kid themselves like people do (when people are asked if they belong to the top 50% of students or drivers, invariably more than 50% say they do). At the same time, dispassion is also the issue.
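As a toy illustration of that self-diagnostic idea – and nothing more than a toy – the sketch below shows a program that inspects itself: it fingerprints its own source code, compares it against a hypothetical known-good fingerprint, and runs a few trivial sanity checks. Every constant and check here is invented for the example.

```python
import hashlib
import pathlib
import platform
import sys

# Hypothetical "known good" fingerprint, recorded when the program was installed.
# None means no fingerprint has been recorded yet.
EXPECTED_SOURCE_SHA256 = None

def self_diagnostic():
    report = {}
    # Fingerprint our own source file.
    source = pathlib.Path(__file__).read_bytes()
    digest = hashlib.sha256(source).hexdigest()
    report["source_sha256"] = digest
    report["source_unchanged"] = (
        EXPECTED_SOURCE_SHA256 is None or digest == EXPECTED_SOURCE_SHA256
    )
    # A couple of trivial environment and sanity checks.
    report["python_version_ok"] = sys.version_info >= (3, 8)
    report["platform"] = platform.platform()
    report["arithmetic_ok"] = (2 + 2 == 4)
    return report

if __name__ == "__main__":
    for check, outcome in self_diagnostic().items():
        print(f"{check}: {outcome}")
```

The program reports on itself without embarrassment or excuses, which is precisely the dispassion discussed above.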
How self-aware is such an analysis, then, if it doesn't differentiate between itself and another computer in the outside world? It’s not. Furthermore, can computers self-reflect on their self-reflection? Maybe, if they are programmed to do so, there can be a diagnostic of diagnostics. A meta-diagnostic is not hard to imagine. But let’s continue this. Could computers self-reflect on the self-reflection of their self-reflection? Here we hit an interesting point. What does it mean to self-reflect on the self-reflection of your self-reflection? Most people would struggle with it, and that is exactly the point. This is why we human beings have invented the concept of the soul. The soul is the “metalevel of being” that we don’t even grasp ourselves. So likewise, a computer doesn’t have to fully understand itself to still be self-aware. After all, do we? Our brains are not capable of fully fathoming themselves. We can map what happens in our brain during all kinds of activities, but it doesn’t mean we can truly understand it. By definition, we cannot step outside the paradigm in which we live. It is no use to theorize about what came before the Big Bang. The Big Bang created time, space and causality, and we need time, space and causality to think. Anything related to the absence of time, space and causality is therefore unthinkable. As such, a computer can’t think outside of its own universe either.

I Think, Therefore I Am

In trying to think this through, perhaps we are approaching the matter from the wrong angle. As human beings, we feel superior to computers. We have created computers, so we are the computer’s god. How could computers be better than we are? Every time we come to the conclusion, through reasoning, that conceptually computers are not very different from us – that they can think, and that they can be self-aware – we come up with a new reason why we are different. The killer argument is that computers do not create and invent things like we do. Computers haven’t created any true art simply because they felt like it. Computers haven’t displayed altruistic behavior. Computers don’t make weird lateral thinking steps and invent Post-it Notes when confronted with glue that doesn’t really stick, or invent penicillin by mistake.

And there we are… mistake. That is the keyword. We, human beings, are special because we are deeply flawed. We make mistakes, we don’t always think rationally, our programming over many, many years of evolution is full of code that doesn’t make any sense, and so forth. We are special because we are imperfect. In a paradoxical way, our superiority – today(!) – is in our imperfection. Because we don’t know anything for sure, we have to keep trying to come up with better ways and better ideas. As long as we keep doubting ourselves (which only a self-aware person can), we improve and sustain our state of being.

For computers to pass the Turing test and become superior (at least from a human point of view), they need to take uncertainty into account. Computers have no issue with probabilistic reasoning, but should rely more on fuzzy interpretation.[3] To put it in provocative terms, they should become more imperfect. They should be able to doubt, be uncertain and reflect on their own thinking. From here it is only a small step to Descartes. René Descartes (1596-1650), a French philosopher, tried to establish a fundamental set of principles of what is true.
He looked at phenomena and the world around him and asked whether different explanations were possible – a way of testing whether the existing explanations were correct. The safest way of asking these questions is to have no preconceptions at all, to doubt absolutely everything. The only way to establish truth is to reach a certain sound foundation or, in other words, an ontology from which the rest can be derived. Descartes eventually reached the conclusion that everything can be doubted, except doubt itself. You cannot doubt your doubt, because that would mean all would be certain, and that is what you are doubting. The thought of doubt itself proves that doubt cannot be doubted. And because you cannot separate a person from his thoughts, therefore, cogito ergo sum: I think (doubt), therefore I am.

If you doubt things, it means you are not sure. You are aware of your shortcomings in grasping the truth. And the only thing you can do to evolve your understanding is to doubt what you think you know. For computers to learn organically, break free of their programming and evolve, become creative, and be able to deal with unknown, unprogrammed situations,[4] computers need to become less perfect. Turing would have loved the thought. Computers that can think, can doubt. So, computers that can truly think, at least in this definition, are to a certain extent unreliable.

In fact, we can even take it a step further. To try to beat Deep Blue, Kasparov played a very intimidating game. Unfortunately, Deep Blue couldn’t be intimidated, and the tactic had no effect – or at least not the effect it would have had on a human being. A really smart computer would have been able to look beyond the chess board and interpret the behavior of the opponent. Interpretation is not an exact science. Sometimes interpretations are wrong. You could argue that only stupid computers win all the time. IBM’s Deep Blue would have been really smart if it had been able to lose to Garry Kasparov, too.

Perhaps Google is actually a good example of the non-perfect computing paradigm. Google doesn’t claim to have a single version of the truth, or to possess the ultimate knowledge and wisdom. On the contrary, panta rhei, as Heraclitus (535 – 475 BC) said: everything flows. Google’s underlying data is continuously changing, and googling for something twice might very well lead to two different results. Google also gives multiple answers, a non-exact response to usually pretty non-exact questions. Still, it’s a pretty crude process. Some search engines use fuzzy logic and also search for information that is “round and about” what the user is asking for. If, for instance, you are looking for a second-hand Mercedes, preferably black, with not more than 50,000 miles on the odometer, the search engine may also return a dark blue Lexus with a mileage of 52,000.

Once we are a generation further down the road and the semantic Web becomes a reality, information retrieval and processing in general will become a bit more intelligent. On the semantic Web, computers will be able to understand the meaning of the information that flows around, based on ontological data structures. An ontology is a formal representation of knowledge as a set of concepts within a domain and the relationships between those concepts. If a human formulates a search in an ambiguous way, search engines will be able to ask intelligent questions in order to provide a better result.
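As a rough sketch of what such an ontological data structure can look like – the classes, properties and the listing are all invented for this example, and no real vocabulary such as RDF Schema or OWL is used – knowledge can be written down as subject–predicate–object triples over which simple inferences are drawn:

```python
# A toy ontology as subject-predicate-object triples (all names are invented).
TRIPLES = {
    ("Mercedes", "subClassOf", "Car"),
    ("Lexus", "subClassOf", "Car"),
    ("Car", "subClassOf", "Vehicle"),
    ("listing42", "instanceOf", "Mercedes"),
    ("listing42", "hasColor", "black"),
    ("listing42", "hasMileage", "48000"),
}

def superclasses(cls):
    """All classes reachable from cls via subClassOf (transitive closure)."""
    found = set()
    frontier = {cls}
    while frontier:
        step = {o for (s, p, o) in TRIPLES if p == "subClassOf" and s in frontier}
        frontier = step - found
        found |= step
    return found

def instances_of(cls):
    """Instances declared as cls or as any subclass of cls."""
    return {
        s for (s, p, o) in TRIPLES
        if p == "instanceOf" and (o == cls or cls in superclasses(o))
    }

print(instances_of("Vehicle"))  # {'listing42'} - inferred, never stated directly
```

Even this toy version can answer a question that was never stated explicitly – that the listing is a vehicle – which is the kind of inference the semantic Web is meant to make routine.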
Moreover, computers will be able to meaningfully process information without any human interaction or intervention.

Big Data and Big Process

So, congratulations, dear reader, for coming this far in this essay. Mostly it has been intellectual play. And I didn’t even dive into singularity thinking, which predicts far-reaching coalescence between humans and machines. Is there any practical value to the whole discussion of whether computers can think? The answer might be more obvious than you would think.

We have confirmed that computers can think. A thought a computer may have is nothing else but everything it derives from processing data, such as a correlation, a segmentation, or any other type of step towards calculating a result. We have even confirmed that computers can be self-aware. Let’s answer the following question: Can computers be individuals? Sure, they can be individualized, with all kinds of settings, but can computers have a mind of their own? To get where I’d like to take this, we need to separate computer and content, which is impossible for human beings, but business as usual for computers operating in the cloud. For the purposes of determining if computers can be like individuals, we’ll focus on large data sets.

The overload of information is growing, and it is potentially growing faster than computers and Internet infrastructure. Big data is one of the most significant trends in IT. If data sets become too big to be copied within reasonable time frames, you effectively cannot copy them anymore. They become unique. Data collections become individuals in the literal sense of the word: they exist just once. Two collections of data may be similar or related, like siblings, but can never be identical. Furthermore, their complexity in terms of volume, variety and velocity is so high it cannot be understood by normal human beings.

With a little bit of imagination, you can argue that data sets become person-like.[5] They grow and mature over time. Data sets develop unique behaviors that they display when you interact with them. They could even develop dysfunctions and have disorders, being trained by the data and the analyses the systems perform.[6] The complexity means that we simply have to trust the answers the systems give, because the moment we try to audit the answers, the data has already changed. Effectively, like people, systems just offer a subjective point of view that is sometimes hard to verify. In this scenario, information managers are further away from the “one version of the truth” they strive for than ever before.

Perhaps information managers should leave their Era of Enlightenment behind. Perhaps the idea that there is a single truth, and all that needs to happen is to discover it and roll it out, is not realistic. Perhaps it is time for a new wave – the days of “postmodern information management.” Postmodernism, a term used in architecture, literature, and philosophy, has its roots in the late 19th century. Fronted by philosophers such as Martin Heidegger (1889–1976) and Michel Foucault (1926–1984), postmodernism has declared the “death of truth.” Postmodernism is a reaction to the “modernist” and “enlightened” scientific approach to the world. According to postmodernists, reality is nothing more than a viewpoint, and every person has a different viewpoint. This means there are many realities. Reality is not objective, but subjective.
And realities may very well conflict (something we notice in practical life every day as we sit in meetings discussing problems and solutions). Although debated (which school of thought isn’t?) and not the only trend in 20th-century philosophy (analytic philosophers disagree fundamentally with postmodernists), I think it is safe to say that in the Western world, postmodernism is deeply entrenched in society. In a liberal and democratic world, we are all entitled to our opinions; and although some opinions are more equal than others, our individual voices are heard and have an influence in debates. (Except in information management.)

What would, for instance, postmodern “business intelligence” look like? If computers can think, even be self-aware, and if datasets can have a certain individuality, computers might as well express their opinions. Their opinions, as unique individuals, may differ from the opinions of another data source. Managers need to think for themselves again and interpret the outcome of querying different sources, forming their particular picture of reality – not based on “the numbers that speak for themselves” or on fact-based analysis, but based on synthesizing multiple points of view to construct a story.

Just what would postmodern “business process management” look like? It would not be possible to define and document every single process that flows through our organization. After all, every instance of every process would be unique, the result of a specific interaction between you and a customer or any other stakeholder. What is needed is an understanding that different people have different requirements, and a way to structure those requirements in an ontological approach. In a postmodern world, we base our conversations on a meta-understanding. We understand that everyone has a different understanding. Of course, as we do today, we can automate those interactions as well. Once we have semantic interoperability between data sets, processes, systems and computers in the form of master data management, metadata management, and communication standards based on XML (in human terms: “language”), systems can exchange viewpoints, negotiate, triangulate and form a common opinion. Most likely, given the multiple viewpoints, the outcome would be better than one provided by the traditional “single version of the truth” approach.

Thinking this through, could it be that postmodern information management and postmodern process management are here today? Could that be the reason why most “single version of the truth” approaches have failed so miserably over the last twenty (or more) years? Did reality already overtake the old IT philosophy? One thing is clear: before we are able to embrace postmodernism in IT, we need to seriously re-architect our systems, tools, applications and methodologies.

In my mind, perhaps the ultimate test of whether computers can think is a variation on Turing’s test. I pose the following question: Do computers have a sense of humor? This was the one thing Lt. Commander Data always struggled with in Star Trek. He had read everything about humor that was ever published, but still wasn’t able to interpret the simplest joke.

End Notes:
1. Also see my article “Medieval IT Best Practices” as published on the BeyeNETWORK.
2. Arthur C. Clarke’s “The City and the Stars” (1956) is a story that describes exactly the same dynamic as told in The Matrix. Alvin, a “unique” as it is called, is created to leave the city of Diaspar and explore.
3. Humans have a so-called mirror gene. If we see someone else cry, the center in our brain that controls crying is activated too. If someone else eats, we can become hungry too. In interpreting the behavior of others, we reach within ourselves. This is what computers don’t have. Intelligence does not have to be human, but inhuman intelligence will have trouble interpreting human behavior.
4. Although I don’t really subscribe to this school of thinking, analytic philosophy comes to our aid again. There is an old story that tells how easy it was for the Europeans to conquer Native Americans. As the Native Americans did not have any concept of sailboats, and no words for them, they simply didn’t register the sailboats on the horizon. It shows humans don’t know how to deal with unprogrammed situations as much as we would like to believe we can.
5. I’d like to recognize Roland Rambau, a colleague of mine when I worked at Oracle, for coming up with this idea.
6. My career recommendation for the years to come is to become a “data therapist.”

SOURCE: Can Computers Think? by Frank Buytendijk
IT around the world: Peeking over the Great Firewall of China

Visitors from many nations flock to China to view one of the world’s architectural marvels: the Great Wall of China. China’s imperial rulers first erected this monumental defensive barrier more than 2,000 years ago to keep out invading armies. Today, China uses a more high-tech wall that functions in reverse, keeping Chinese citizens “inside” when they attempt to access the Internet, as well as limiting foreign influences: the Great Firewall of China.

The Great Firewall, also known as the Golden Shield, isn’t a single security device. Rather, it is a collection of technologies from many different companies designed to filter Internet traffic entering and leaving China. While it may be primarily designed to restrict the activity of Chinese citizens, it also impacts the ability of business travelers and tourists seeking to use the Internet to reach back home.

History of the Great Firewall

The Internet came late to China, arriving in 1993, several years after almost every other country on the planet connected. The Chinese government immediately realized the impact that widespread communication could have on their closed society. Government censors reacted in 1997 when the Ministry for Public Security released a set of regulations governing Internet use in the country. One section, translated by the Congressional Research Service, sums up the Chinese government’s approach well. It reads:

“Users are prohibited from using the Internet to create, replicate, retrieve, or transmit information that incites resistance to the PRC Constitution, laws, or administrative regulations; promotes the overthrow of the government or socialist system; undermines national unification; distorts the truth, spreads rumors, or destroys social order; or provides sexually suggestive material or encourages gambling, violence, or murder.”

The passage of this law also marked the beginning of China’s Golden Shield project, an effort to filter and censor all Internet traffic in the world’s largest nation. The project began in 1998 and went into full production mode in 2003. The system remains fully operational today and is highly effective, censoring traffic on a wide range of topics deemed offensive by the government. The OpenNet Initiative, a nonprofit organization focused on Internet filtering and surveillance, describes China’s Great Firewall as “one of the most pervasive and sophisticated regimes of Internet filtering and information control in the world.”

Building the Great Firewall

The effectiveness of the Great Firewall lies in its diverse approach to Internet filtering. The system uses a wide range of technologies designed to censor offensive web traffic and defeat the many circumvention methods publicized on the Internet. As hackers develop new approaches to work around the Great Firewall, the Chinese government builds new countermeasures to defeat those workarounds. The old security adage of defense-in-depth describes the importance of using multiple overlapping controls to achieve important security objectives, and the Chinese government certainly embraced that principle in its design.

The most basic mechanism used by the Great Firewall is simple IP address blocking. The Chinese government maintains a blacklist of known undesirable IP addresses and simply blocks all access to those addresses. They may use this technique to ban access to web servers, proxy servers and other devices that jeopardize Chinese government objectives.
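Conceptually, this kind of blocking is trivial. The sketch below shows the idea of checking an address against a blacklist of networks; the entries are reserved documentation ranges used purely as stand-ins, since real blacklists are vastly larger, change constantly, and are enforced in network equipment rather than in application code.

```python
import ipaddress

# Entirely made-up entries; these are reserved documentation ranges,
# used here only as stand-ins for a real blacklist.
BLOCKED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.17/32"),
]

def is_blocked(address: str) -> bool:
    """Return True if the address falls inside any blacklisted network."""
    addr = ipaddress.ip_address(address)
    return any(addr in net for net in BLOCKED_NETWORKS)

print(is_blocked("203.0.113.45"))  # True  - inside the /24 block
print(is_blocked("192.0.2.1"))     # False - not on the list
```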
In many cases, web sites use many different IP addresses and may rotate those addresses frequently. This is especially true in the era of Infrastructure-as-a-Service, where websites may temporarily lease IP addresses from cloud service providers and then release those addresses when they are no longer needed. The Chinese government uses DNS poisoning to block sites where a simple IP block isn’t effective. When a user requests the IP address of a blocked domain name, the Chinese DNS servers return poisoned, invalid results, preventing the user from reaching the site and redirecting them to a site considered harmless.

The use of DNS poisoning can impact third parties in surprising ways. When the Chinese government poisons DNS results, they provide a false IP address in response to a DNS query. The unfortunate website located behind that IP address may find itself quickly overwhelmed by a flood of traffic generated by Chinese web users seeking to access blocked content.

Indexing the entire web is a mammoth undertaking, and it simply isn’t possible for any organization, even one with the resources of the Chinese government, to build a complete blacklist of undesirable sites. In an attempt to overcome this limitation, the Great Firewall also employs URL filtering that searches the names of web pages requested by users for terms considered subversive. The Great Firewall then blocks access to those pages.

Some users attempt to defeat the Great Firewall by using encrypted HTTPS connections to websites. The thought is that if the communication between the web browser and server is encrypted, the Chinese government won’t be able to see the contents of the communication and filter it. The Great Firewall also has a workaround for this technology: the man-in-the-middle attack. In this attack, the Great Firewall pretends to be the web server that the user wishes to view and presents a false digital certificate to the user’s browser. If the user is fooled into accepting the certificate, he or she communicates with the Great Firewall, which then passes communications on to the desired website. This position in the middle of the communication allows the Chinese government to eavesdrop on the connection. (A minimal sketch of how a client might notice such a certificate substitution appears at the end of this section.)

Over the past few months, observers noted an increase in the censorship performed by the Great Firewall. In February, The New York Times reported that the Chinese government added Instagram and Line to the list of blocked social media sites, and users behind the Great Firewall reported stepped-up controls that blocked popular techniques used to bypass Chinese filtering.
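Returning to the man-in-the-middle technique described above, the following is a minimal sketch of how a client might notice a substituted certificate by comparing what the network presents against a fingerprint obtained out of band (certificate pinning). The host and the pinned fingerprint are placeholders; this illustrates the idea only and is not a description of any particular product.

```python
import hashlib
import socket
import ssl

# Hypothetical fingerprint of the site's real certificate, obtained out of band
# (for example, recorded while on a network known not to interfere with TLS).
PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def presented_cert_fingerprint(host: str, port: int = 443) -> str:
    """Fetch whatever certificate the network presents for host and hash it."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # accept anything, so we can inspect it
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der_cert).hexdigest()

if __name__ == "__main__":
    fingerprint = presented_cert_fingerprint("example.org")
    if fingerprint != PINNED_SHA256:
        print("Warning: presented certificate does not match the pinned copy.")
        print("Presented:", fingerprint)
```

If the reported fingerprint differs from the pinned value, something between the client and the server – a corporate proxy or a national firewall – is terminating the TLS connection.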
Defeating the Great Firewall

As long as the Chinese government has attempted to filter communication into and out of China, activists have worked to defeat those controls and provide unfettered access to Chinese citizens and foreigners visiting the country. Some of those efforts have been more successful than others, but they all illustrate the difficulty of cutting off free and open access to information in a technologically advanced world.

One of the primary mechanisms used to defeat the Great Firewall is Virtual Private Networking (VPN). VPNs use encryption to build a secure tunnel between two computing systems or networks. Companies often deploy VPNs that allow traveling users to securely access corporate networks. Travelers to China often use VPNs to connect back to their home networks and then use the unrestricted Internet access from that location to access the rest of the Internet.

The VPN approach was so successful for business users that it quickly spread to Chinese citizens who purchased accounts on commercial VPN services. These services allow Chinese citizens to establish a secure connection to an uncensored country, such as the United States or the United Kingdom, and then access the Internet as if they were physically located in those countries. The Chinese government noticed this use and recently took technological measures designed to detect and block VPN connections.

Individuals seeking to access the Internet from China also make use of the anonymous Tor network. Like VPNs, Tor uses encryption to obscure the content of Internet communications. It also adds anonymity to those communications by bouncing requests off of several anonymous servers located around the world. While the Chinese government blocks known Tor servers, many activists operate secret Tor sites, known as “hidden nodes.” These servers, advertised within dissident communities, operate in secret, providing access to the Tor network and, by extension, the open Internet.

The battle between the Chinese government and Internet users is a constant struggle between hackers seeking to undermine the Great Firewall and government programmers seeking to upgrade it to continue its censorship. Businesses operating in China and IT professionals supporting Chinese users must be aware of the firewall, the technologies it employs and the mechanisms that they may use to defeat the Great Firewall’s filtering and censorship.