A network tester is designed to measure how well your high-speed network cables are performing. Poor performance in these cables can result in lost work, poor Internet access, and general disruption to the network.

Main Features Of A Network Tester

A network tester has two separate boxes: a transmitter and a receiver. A basic tester consists of a source of electrical current, a measuring device that shows whether the cable is good, and a connection between the two, usually the cable itself. A network cable tester can be a simple apparatus that merely identifies whether current flows through the cable, or it may be a professional-level, complex device that gives additional information to help identify the problem. Professional-level network cable testers may not only tell whether an open circuit exists but also identify where the break is located. Some also identify the gauge of wire used and can generate their own signal to test for interference.

Common Problems Of Network Cables

A number of problems can be sorted out using a network tester, and it will certainly save you time otherwise spent hunting for software or hardware causes. If a network isn't working correctly, the problem is frequently user error or some other issue; it will rarely be a faulty cable. A network cable tester is more frequently used to tell whether a patch cable will work before it is connected. The cabling should first be examined visually to identify any obvious problems. If everything looks correct, a network cable testing device may then be used. Basic network cable testers can check for simple connectivity issues but may not identify other problems that cause the cable to malfunction. Cabling may not work when it is near a source of interference or if the cable is too long. Intermittent faults may develop that do not show up when the cable is tested; sometimes the problem is not sustained long enough to register on the tester.

How To Use A Network Tester

Step 1 – Connect the tester
Plug the network cable into both boxes, one end into the transmitter and the other into the receiver. These boxes model your computer network, so you should be able to easily tell which box is which. Make sure that the cable is fully plugged in before you proceed to the test.

Step 2 – Turn on the tester
Turn on the network cable tester. It will send a signal from one box to the other; this signal is the message relayed through the cable, much as would occur in your computer network. Keep the tester connected to the cable at all times during the test; otherwise it will not work.

Step 3 – Read the report from the tester
While the message is being passed from one end of the cable to the other, the tester examines it for faults and checks that it has been properly received at the other end. If the tester concludes that something has gone wrong, it displays a number of red lights. The meaning of the red lights, and of the green lights that indicate all is working well, varies from tester to tester, but you should be able to work out exactly what the problem is with your cable by reading the instructions.

Step 4 – Read the problem
The different lights on the tester signal results to the person operating it. Your manual will give you a complete run-down of what the different lighting patterns mean and whether there is any problem with your cable that should be addressed. Once you have found the problem, remove the cable from the boxes and return the tester to a safe place.

For cable testing, the network tester provides full cabling testing, displaying wire map, ID, and faults, including shorts, opens, miswires, split pairs, and reverses. A full-featured network tester also measures cable length and generates tone levels for signal tracing and cable identification on all pairs, a selected pair, or a selected pin. Fiberstore is a professional manufacturer supplying all kinds of fiber optic tools, including network cable testers, cable strippers, punch down tools, telephone line testers, fiber cable strippers, cable crimping tools, etc.
The UN Sustainable Development Goals (SDGs) envisage a world transformed from today. A world where poverty and gender inequality no longer exist. Where good healthcare and education are available for all. And where economic growth no longer harms the environment. The ambition is huge. Every country in the world falls short on more than half of the 17 SDGs. And a quarter of the world’s countries fall short on all 17 of the goals. Find out how digital technologies can hugely accelerate progress towards achieving every single one of the SDGs by 2030. In developed and developing countries alike.
Authentication is any process by which a system verifies the identity of a User who wishes to access it. Since Access Control is normally based on the identity of the User who requests access to a resource, Authentication is essential to effective Security. Authentication may be implemented using Credentials, each of which is composed of a User ID and Password. Alternately, Authentication may be implemented with Smart Cards, an Authentication Server or even a Public Key Infrastructure. Users are frequently assigned (with or without their knowledge) Tickets, which are used to track their Authentication state. This helps various systems manage Access Control without frequently asking for new Authentication information.
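The credential-plus-ticket flow described above can be made concrete with a short sketch. This is a minimal illustration, not Hitachi ID's implementation: the in-memory user store, the PBKDF2 parameters, and the payload-plus-signature ticket format are all assumptions chosen for brevity.

```python
import hashlib
import hmac
import os
import secrets
import time
from typing import Optional

# Hypothetical in-memory user store and server key, for illustration only.
# A real deployment keeps credentials in a directory or database and
# manages signing keys properly.
_USERS: dict = {}
_SERVER_KEY = secrets.token_bytes(32)

def register(user_id: str, password: str) -> None:
    """Store a salted password hash -- never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    _USERS[user_id] = (salt, digest)

def authenticate(user_id: str, password: str) -> Optional[str]:
    """Verify credentials; on success, issue a signed ticket."""
    record = _USERS.get(user_id)
    if record is None:
        return None
    salt, expected = record
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    if not hmac.compare_digest(digest, expected):
        return None
    # The ticket records who authenticated and until when; the HMAC
    # signature lets other components trust it without re-checking the
    # password. (User IDs in this sketch must not contain '|'.)
    payload = f"{user_id}|{int(time.time()) + 3600}"
    sig = hmac.new(_SERVER_KEY, payload.encode(), "sha256").hexdigest()
    return f"{payload}|{sig}"

def check_ticket(ticket: str) -> Optional[str]:
    """Access-control path: accept a valid ticket instead of credentials."""
    payload, _, sig = ticket.rpartition("|")
    expected = hmac.new(_SERVER_KEY, payload.encode(), "sha256").hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    user_id, expiry = payload.split("|")
    return user_id if int(expiry) > time.time() else None

register("alice", "correct horse battery staple")
ticket = authenticate("alice", "correct horse battery staple")
assert ticket is not None and check_ticket(ticket) == "alice"
```

The ticket is the key design point made above: downstream systems can verify the signature with the shared key instead of asking the user for new authentication information.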
From the very beginning of the computer era in the early '60s, there was one major rule: the price per unit of performance of mainframe processors would decline in direct proportion to the performance increase. So, if the performance from one processor generation to the next doubled, then the price per MIPS (or KIPS in the early days) would halve. This meant that from one generation to the next, despite a doubling of the capacity, the largest system consistently cost around $3 million, not unlike the PC scenario where, despite huge performance increases, each new model was roughly the same price.

Behind this was the basic manufacturing process, where performance increases were largely driven by the miniaturization of components, so that to double the performance the component size had to be halved. Consequently, a production line that produced x units of performance could, after halving the size of the components, produce 2x units of performance for the same cost. Therefore, assuming the full capacity could be sold, the price to the user could be halved at no loss of profit to the supplier. Obviously, the supplier could increase its profitability by reducing the price by somewhat less, but until recently, the existence of competition facing the same economics ensured all suppliers reduced prices in line with the technology improvements.

This model resulted in the price per MIPS falling from $10 million in 1960 to $400,000 in 1980, as technology advanced at a rate that allowed the price to fall by 15 percent per annum. This rate of decline accelerated to 26 percent per annum during the '80s, with the price per MIPS falling from $400,000 to $20,000, as competition increased with the arrival of PCM suppliers Amdahl and Hitachi. This competition resulted in technology advancing more rapidly to sustain the higher level of price decline. This was good for the users, as only more rapid technology advancement could enable IBM to sustain its market share. As a result, both technology advancement and price declines were accelerated, allowing an explosion in the use of technology.

However, as software costs became more important in the '90s, IBM began to exploit its control of the software to limit the competitiveness of the PCM suppliers (for example, through the Parallel Sysplex pricing model). This enabled it to slow the speed of technology advance and therefore the consequent price decline. The result was that during the '90s, prices fell by only 20 percent per annum, from $20,000 to $2,000 per MIPS. This was obviously not so good for the users, but worse was to come, as IBM's use of software pricing to reduce the competitiveness of the PCMs eventually forced them out of the market in the late '90s.

This has allowed IBM since 2000, now with no mainframe processor competition, to completely separate the technology advancement and price decline curves for the first time in the history of the mainframe. To illustrate just how dramatic the change has been: over the 2000 to 2005 period, the technology improvement was more than 300 percent, which would support a price decline of more than 20 percent per annum. Yet IBM delivered closer to a 13 percent per annum price decline, lowering the price from $2,000 to $1,000 per MIPS. This represented a significant slowdown for the user. But it became even worse when the z990 was introduced at the same price as its z900 predecessor, despite a doubling of the technology that would have supported a 50 percent price reduction. This was followed by the z9, introduced at around 10 percent higher pricing than the z990, despite a 30 percent technology improvement that should have supported a 15 percent price reduction.

The net result of the slowdown that has occurred since the '90s, when IBM first began to limit and ultimately eradicated all mainframe processor competition, is that the price per MIPS today is approximately six times higher than the $165 per MIPS that the traditional technology/price decline link would have produced. As a result, the largest systems today cost closer to $18 million than the $3 million they cost throughout most of the mainframe's history, a very high price to the user for the elimination of competition. Interestingly, if you go back to the PC comparison at the beginning of this column, the opposite has occurred with PCs, where increased competition has meant that in addition to large performance increases from one model to the next, the price of the typical PC is now declining.
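As a back-of-the-envelope check, the compound annual decline rates quoted above can be replayed in a few lines, using only the column's own figures; the small residuals reflect the author's rounding:

```python
# Each row: (start price per MIPS, annual decline, years, price reported above).
eras = [
    (10_000_000, 0.15, 20, 400_000),  # 1960-1980
    (400_000, 0.26, 10, 20_000),      # 1980-1990, PCM competition
    (20_000, 0.20, 10, 2_000),        # 1990-2000
    (2_000, 0.13, 5, 1_000),          # 2000-2005, no PCM competition
]
for start, rate, years, reported in eras:
    implied = start * (1 - rate) ** years
    print(f"${start:,} at -{rate:.0%}/yr over {years} yrs -> "
          f"${implied:,.0f} (column reports ${reported:,})")
# The closing claim also checks out: $1,000 per MIPS is about six times
# the $165 the historical decline curve would have produced (1000/165 = 6.06).
```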
The Global Positioning System (GPS) is made up of 24 satellites orbiting the earth at speeds around 7,000 mph. At that speed, you could travel across the entire United States in about 30 minutes. Did you know that GPS is used on golf courses to measure the distance between golfers and the pin? Farmers also rely on GPS systems to program tractors to automatically plow, fertilize, and harvest their fields. Here are five ways GPS technology can help you cut costs within your government agency and provide better public service.

First, with GPS you can automatically collect location data and transmit the information to any computer or device with a browser. Managers can see where field employees are and produce reports showing where a worker has stopped, for how long, at what time, and on which date. Reducing idle time equates to improved efficiency and often less overtime, a cost savings no agency can ignore.

Second, customer service representatives handle thousands of non-emergency 311 calls related to city streets; sidewalk curbs; signs and pavement markings; bicycle and pedestrian programs; and more. Being able to generate job information from the field allows for faster response to 311 requests. A prompt reply may even reduce the number of repeat calls, a win-win for everyone.

Third, when storms or disasters hit, using mobile forms means departments can note what types of crews and equipment are needed and where, depending on the nature and location of the disaster area. This data can be transmitted back to headquarters, where a detailed analysis can be done to set tasks for the next shift and adequately schedule work crews. In the event of a disaster, a prompt reply is expected of any city or state government. GPS and forms can ease the stress of an emergency by helping place the required help in the hardest-hit areas.

Fourth, many city workers and contractors have to clock in and clock out for work. This often requires extra time simply to submit essential paperwork. GPS technology can automate timesheets and payroll processing, shaving minutes from the day of hundreds of workers. The best part is that the timesheets can sync with your accounting program, making this a truly seamless process for both workers and administration.

Fifth, a geofence is a virtual barrier around a specified geographic area. Programs that use geofencing allow managers to set up triggers that fire when a vehicle enters or exits the defined boundaries. There are two common uses of this technology.

a. Employee safety: Managers are alerted if an employee enters or leaves a specified area. With many city workers working in the field, it is critical to ensure their safety, but not knowing where everyone is located makes this nearly impossible. GPS technology can alert dispatchers if a code enforcer has not made contact and send backup if necessary.

b. Reducing fuel costs: Studies have found that with GPS tracking installed on fleets, employees are less likely to make personal stops, therefore reducing fuel costs.
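To make the geofencing idea concrete, here is a minimal sketch of a circular geofence trigger. It is illustrative only; the coordinates, radius, and haversine-distance approach are assumptions, not any particular vendor's implementation.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

class CircularGeofence:
    """Fires 'enter'/'exit' events as successive GPS fixes cross the boundary."""
    def __init__(self, lat, lon, radius_m):
        self.lat, self.lon, self.radius_m = lat, lon, radius_m
        self.inside = None  # unknown until the first fix arrives

    def update(self, lat, lon):
        now_inside = haversine_m(self.lat, self.lon, lat, lon) <= self.radius_m
        event = None
        if self.inside is not None and now_inside != self.inside:
            event = "enter" if now_inside else "exit"
        self.inside = now_inside
        return event

# A hypothetical 500 m work zone around a depot:
fence = CircularGeofence(35.2271, -80.8431, 500)
for fix in [(35.2275, -80.8435), (35.2400, -80.8431)]:
    evt = fence.update(*fix)
    if evt:
        print(f"vehicle {evt}s geofence at {fix}")  # alert a dispatcher here
```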
Anne Bonaparte is president and CEO of mobile workforce management company Xora.
By Avijit Ghosh

The Need for Blind-Spot Monitoring Systems

Each year, more than 826,000 vehicles in North America are involved in lane-change blind-spot accidents, according to the National Highway Traffic Safety Administration. Although the fatality rate is only 1 percent, low compared to other types of accidents, the extent of property damage and injury is high. This has therefore remained a cause of concern for years. Although several preventive measures have already been taken, they have not drastically reduced the number of these accidents. By 2007, automakers are expected to introduce vehicles equipped with high-tech radar systems that can see beyond the driver's peripheral vision, into the dangerous blind spots.

Blind spots are areas in adjacent lanes of traffic that are blocked by various structures in the automobile. The physical constraints on eye movement and head and body rotation make certain areas invisible to the driver. The blind spots in cars depend on their construction. The direct blind spots are:

- The area covered by the A-pillar, between the front door and windshield
- The area covered by the B-pillar, behind the front door
- The area covered by the C-pillar, ahead of the rear windshield

Apart from these, there are certain indirect blind spots, like the region between the driver's peripheral vision on the sides and the area covered by the rear-view mirror. The unseen areas are immense for drivers of medium and heavy trucks compared to drivers of passenger vehicles. Areas directly to the right of the cab extending past the trailer, directly behind the trailer, to the immediate left of the cab, and directly in front of the cab are blind spots for truck drivers. Drivers of cars should try to stay out of these areas.

Highway congestion is the main cause of blind spot-related accidents. According to the U.S. Department of Transportation, highway congestion will outpace new road construction by 13 percent every year. This means more cars on the same lane-miles of road, thereby increasing the probability of blind spot-related accidents. Hence automakers are now searching for a suitable technological solution. Radar systems are likely to be the most effective alternative.

Blind-Spot Detection Using High-Tech Radar Systems

Valeo Raytheon Systems Inc., one of the world's top-ten automotive suppliers, has come up with a system for preventing blind spot-related lane-change accidents. Valeo Raytheon's lane-change assistance system consists of two radar sensors joined to a control module and two LED warning indicators mounted in the side rear-view mirrors. The sensors continuously monitor the presence, direction, and velocity of vehicles in the lanes adjacent to the vehicle, creating a digital picture of the vehicle's surroundings. The central control module processes the digital information. When any vehicle moves into the blind spot, the control module alerts the driver by lighting the warning indicator in the appropriate side mirror. As the system is currently designed, if a turn signal is kept engaged, the central controller can turn on an audio alarm inside the cockpit to give an additional warning. Therefore, this system can help reduce the number of lane-change accidents and accidents when merging to join a stream of traffic. According to Valeo Raytheon, this system, expected in 2007, would be priced around $450 to $500 for vehicle buyers.
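The alert logic just described (LED on when a detected vehicle sits in the blind-spot zone, audio alarm only when the turn signal points at an occupied side) can be sketched in a few lines. The threshold and track fields below are illustrative assumptions, not the production system's parameters.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """One radar track: a detected vehicle relative to our own car."""
    side: str           # "left" or "right"
    range_m: float      # distance from our rear corner, meters

BLIND_SPOT_MAX_RANGE_M = 5.0  # illustrative threshold, not Valeo's spec

def lane_change_warnings(tracks, turn_signal):
    """Return (sides with LED lit, audio alarm on?) per the scheme above."""
    led_sides = {t.side for t in tracks if t.range_m <= BLIND_SPOT_MAX_RANGE_M}
    # Audio alarm only when the driver signals toward an occupied blind spot.
    audio = turn_signal in led_sides
    return led_sides, audio

tracks = [Track("left", 3.2), Track("right", 40.0)]
print(lane_change_warnings(tracks, turn_signal="left"))  # ({'left'}, True)
```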
Other Possible Technological Solutions

Other technologies, both high-tech and low-tech, are also being explored. Multi-radius mirrors with a 40-degree field of view have been a popular option for consumers in Europe and Japan for more than 20 years. But in the US this cannot be the solution, since government regulations permit only flat mirrors, which have a 15-degree field of view. Regulations permitting multi-radius mirrors could be a low-tech solution to the blind-spot problem.

A different blind-spot detection device has been developed by Advanced Technology Products of Toronto, Ontario. The system uses a patented passive infrared sensor technology, which the company claims can sense thermal energy radiating from the tires of a moving vehicle. This temperature difference is used to trigger a flashing red light to warn the driver of the hazard.

Netherlands-based Mobileye NV has displayed a computer chip named EyeQ, which processes images from a small mirror-mounted camera. Apart from detecting nearby objects, it can determine shapes as well. If the driver starts moving into another lane occupied by a vehicle, both visual and audio alerts are generated. The whole system consists of the EyeQ processing unit and one or two complementary metal-oxide semiconductor (CMOS) video cameras, compact enough to fit into a side rear-view mirror. The chip possesses the equivalent computing power of two powerful Pentium computers and is completely programmable to accommodate a wide range of visual processing applications. This system will be available as an aftermarket item for fleet customers beginning this June, and for the consumer market shortly thereafter. Pricing has not yet been set.

Michigan-based Magna Donnelly Corp. has developed panoramic vision displays involving three cameras, which can give an image of both sides and of the back of the vehicle, covering a 70-degree field of view with almost no blind spots. The three cameras replace the exterior and interior rear-view mirrors. This is likely to be introduced in concept vehicles in 2005, whereas the impact of radar-based systems can only be known in 2007. In mid-2004, Volvo will introduce cars with cameras fitted in the outside rear-view mirrors to detect obstacles in blind spots.

Future of Radar-Based Technology

Most automakers see radar-based systems as the future, since the need for extended coverage favors this technology. These systems will have sensors covering a wide radius from the side to the back of the vehicle, and will be capable of differentiating between mobile and stationary objects.

Can radar-based technology make an impact? Whether consumers will embrace this blind-spot detection device as a legitimate safety feature worth the price remains to be seen. This is a serious question, because blind-spot detectors are hardly technologically unique in an environment of lane-departure warning systems, night-vision cameras, and airplane-like "black boxes" that keep running logs of driving activity. But the concept could take off if the price is right. Since vehicle buyers are strongly aware of the danger of accidents due to blind spots, blind-spot monitoring systems could become a major selling point, much like side-impact airbags and electronic stability control systems.
Popular Inventions - Quiz Questions & Answers

1. Which Italian inventor succeeded in transmitting wireless waves over long distances for the first time?
2. Who set up the first English printing press?
3. Galileo Galilei has two famous inventions to his credit. What are they?
4. Four inventions that were to change the history of the world came from China. What were they?
5. Which great painter of the Renaissance was also a famous inventor?
6. This great American patented the most inventions, the most illuminating being the incandescent electric bulb, the phonograph and the motion-picture projector. Who was he?
7. Match the following with their inventors:
(a) Sir James Dewar - Telephone
(b) Alfred Nobel - Dynamo
(c) Alexander Graham Bell - Thermos Flask
(d) Michael Faraday - Dynamite
(e) William Sholes - Photography
(f) Louis Daguerre - Typewriter
8. Name the inventors of the following: (a) fountain pen (b) ball-point pen
9. Elias Howe made the first sewing machine. Who improved it?
10. Who invented the computer?
11. Who is credited with the invention of the magnifying glass?
12. For what invention are the following noted: (a) Alastair Pilkington (b) Henry Bessemer (c) E.G. Otis
13. What did Alessandro Volta invent in 1800?
14. What was invented solely for the use of King Louis XIV of France?
15. Who invented these articles of everyday use: (a) Safety razor (b) Ironing board (c) Friction matches (d) Video-tape recorder
16. In 1760, an Englishman, John Spilsbury, invented a popular game. What was it?
17. For what invention is Sir Humphry Davy famous?
18. Who invented xerography (instant copying)?
19. Which famous science fiction writer first suggested the use of satellites as a means of communication?
20. Who produced the first really useful TV system?

Answers to the Popular Inventions Quiz Questions

1. Guglielmo Marconi
2. William Caxton
3. The thermometer and the telescope
4. Paper, printing, gunpowder and the magnetic compass
5. Leonardo da Vinci
6. Thomas Alva Edison
7. (a) Sir James Dewar - Thermos flask; (b) Alfred Nobel - Dynamite; (c) Alexander Graham Bell - Telephone; (d) Michael Faraday - Dynamo; (e) William Sholes - Typewriter; (f) Louis Daguerre - Photography
8. (a) Lewis Waterman (b) Lazlo Biro
9. Isaac Singer
10. Vannevar Bush invented the analog computer in 1930; Howard Aiken invented the digital computer in 1944.
11. Roger Bacon
12. (a) Float glass (b) The steel-making (Bessemer) process (c) The safety elevator
13. The electric battery
14. The lift or elevator
15. (a) Gillette in 1901 (b) Sarah Boone in 1892 (c) John Walker in 1827 (d) Charles P. Ginsberg, Shelby Anderson Jr. and Ray Dolby in 1960
16. The jigsaw puzzle
17. The safety lamp for miners
18. Chester Carlson in 1938
19. Arthur C. Clarke
20. J. L. Baird
Fiber optic cleaving is the process of scribing and breaking an optical fiber endface. Fiber optic technicians need some training in order to gain the skills necessary for the best possible results. The goal of fiber cleaving is to produce a mirror-like fiber endface for fiber splicing, either fusion splicing or mechanical splicing. Incorrect or poor cleaving technique will result in lips and hackles, which make good fiber splicing impossible. A bad cleave usually has to be redone.

The tools needed for fiber cleaving are called fiber optic cleavers. A fiber optic cleaver is used to cut the fiberglass to make a good end face; the quality of the bare fiber end face determines the quality of the joint between fibers in the fusion process, and the quality of that joint determines the attenuation of the fiber connection. So the fiber optic cleaver is very important in the fiber splicing process; it works together with the fusion splicer to meet the end needs. There are two types available on the market: high-precision single fiber cleavers, used with fusion splicers, and field fiber cleavers, used for fiber optic mechanical splices. High-precision fiber cleavers cost from $1,000 to $5,000, while field fiber cleavers cost from $100 to $500.

The design of fiber optic cleavers varies among manufacturers such as AFL, Corning, Fujikura, or York, but the working principle is the same. Here is a typical workflow for optical fiber cleavers:

1. Strip the fiber to its cladding size; the standard optical fiber cladding size is 125 um. The strip length depends on your application.
2. Clean the fiber with lint-free wipes moistened with isopropyl alcohol.
3. Place the stripped and cleaned bare fiber into the fiber cleaver.
4. Scribe the bare fiber with either a cutting wheel or a blade.
5. Break the fiber with the built-in mechanism on the cleaver.
6. Remove the fiber scrap and put it into a fiber disposal unit.

This semi-automated process produces high-quality cleaving in a minimum of steps. It has been used widely in the fiber optic communication industry. FiberStore is a worldwide leading manufacturer and supplier of fiber optic communication products. Learn more about fiber optic cleaving, fiber optic tools and fiber optic testers on FiberStore.com.
App Review Project
By: Staci Hiatt

Standards addressed:
*K.E.1.3 - Compare weather patterns that occur from season to season.
*1.OA.1 - Use addition and subtraction within 20 to solve word problems involving situations of adding to, taking from, putting together, taking apart, and comparing, with unknowns in all positions, e.g., by using objects, drawings, and equations with a symbol for the unknown number to represent the problem.
*2.G.1.1 - Interpret maps of the school and community that contain symbols, legends and cardinal directions.
*3.NF.3(B) - Recognize and generate simple equivalent fractions (e.g., 1/2 = 2/4, 4/6 = 2/3). Explain why the fractions are equivalent, e.g., by using a visual fraction model.
*5.E.2.1 - Explain the importance of developing a basic budget for spending and saving.

Facilitate and Inspire Student Learning and Creativity - "Teachers use their knowledge of subject matter, teaching and learning, and technology to facilitate experiences that advance student learning, creativity, and innovation in both face-to-face and virtual environments."
*The mobile software applications use technology to enhance learning in the classroom.

Design and Develop Digital Age Learning Experiences and Assessments - "Teachers design, develop, and evaluate authentic learning experiences and assessments incorporating contemporary tools and resources to maximize content learning in context and to develop the knowledge, skills, and attitudes identified in the Standards."
*The mobile software applications use digital tools for expanded learning.

Model Digital Age Work and Learning - "Teachers exhibit knowledge, skills, and work processes representative of an innovative professional in a global and digital society."
*The teacher MUST understand the technology the students are being asked to use. They must know it well enough to teach their students how to use it efficiently.

Promote and Model Digital Citizenship and Responsibility - "Teachers understand local and global societal issues and responsibilities in an evolving digital culture and exhibit legal and ethical behavior in their professional practices."
*These applications will enhance students' online responsibilities.

Engage in Professional Growth and Leadership - "Teachers continuously improve their professional practice, model lifelong learning, and exhibit leadership in their school and professional community by promoting and demonstrating the effective use of digital tools and resources."
*These applications are designed, developed, and distributed by entities in an effort to improve student learning by giving resources to enhance learning with practice.

Empowered Learner - "Students leverage technology to take an active role in choosing, achieving and demonstrating competency in their learning goals, informed by the learning sciences."
1a. Students articulate and set personal learning goals, develop strategies leveraging technology to achieve them and reflect on the learning process itself to improve learning outcomes.
1b. Students build networks and customize their learning environments in ways that support the learning process.
1c. Students use technology to seek feedback that informs and improves their practice and to demonstrate their learning in a variety of ways.
1d. Students understand the fundamental concepts of technology operations, demonstrate the ability to choose, use and troubleshoot current technologies and are able to transfer their knowledge to explore emerging technologies.

Digital Citizen - "Students recognize the rights, responsibilities and opportunities of living, learning and working in an interconnected digital world, and they act and model in ways that are safe, legal and ethical."
2a. Students cultivate and manage their digital identity and reputation and are aware of the permanence of their actions in the digital world.
2b. Students engage in positive, safe, legal and ethical behavior when using technology, including social interactions online or when using networked devices.
2c. Students demonstrate an understanding of and respect for the rights and obligations of using and sharing intellectual property.
2d. Students manage their personal data to maintain digital privacy and security and are aware of data-collection technology used to track their navigation online.

Knowledge Constructor - "Students critically curate a variety of resources using digital tools to construct knowledge, produce creative artifacts, and make meaningful learning experiences for themselves and others."
3b. Students evaluate the accuracy, perspective, credibility and relevance of information, media, data or other resources.

Innovative Designer - "Students use a variety of technologies within a design process to identify and solve problems by creating new, useful or imaginative solutions."
4b. Students select and use digital tools to plan and manage a design process that considers design constraints and calculated risks.

Computational Thinker - "Students develop and employ strategies for understanding and solving problems in ways that leverage the power of technological methods to develop and test solutions."
5b. Students collect data or identify relevant data sets, use digital tools to analyze them, and represent data in various ways to facilitate problem-solving and decision-making.
5c. Students break problems into component parts, extract key information, and develop descriptive models to understand complex systems or facilitate problem-solving.

Creative Communicator - "Students communicate clearly and express themselves creatively for a variety of purposes using the platforms, tools, styles, formats and digital media appropriate to their goals."
6a. Students choose the appropriate platforms and tools for meeting the desired objectives of their creation or communication.
6b. Students create original works or responsibly repurpose or remix digital resources into new creations.

Global Collaborator - "Students use digital tools to broaden their perspectives and enrich their learning by collaborating with others and working effectively in teams locally and globally."
7a. Students use digital tools to connect with learners from a variety of backgrounds and cultures, engaging with them in ways that broaden mutual understanding and learning.
7c. Students contribute constructively to project teams, assuming various roles and responsibilities to work effectively toward a common goal.

North Carolina Professional Teaching Standards

Standard 1: Teachers Demonstrate Leadership
Teachers lead in their classrooms. Teachers demonstrate leadership in the school. Teachers lead the teaching profession. Teachers advocate for schools and students. Teachers demonstrate high ethical standards.
*By using digital tools, teachers are giving students skills to prepare them for life in the 21st century.

Standard 2: Teachers Establish a Respectful Environment for a Diverse Population of Students
Teachers provide an environment in which each child has a positive, nurturing relationship with caring adults. Teachers embrace diversity in the school community and in the world. Teachers treat students as individuals. Teachers work collaboratively with the families and significant adults in the lives of their students.
*These applications can be used individually and in groups, and some are offered in different languages.

Standard 3: Teachers Know the Content They Teach
Teachers align their instruction with the North Carolina Standard Course of Study. Teachers know the content appropriate to their teaching specialty. Teachers recognize the interconnectedness of content areas/disciplines.
*The teachers MUST understand the technology the students are being asked to use. They must know it well enough to teach their students how to use it safely.

Standard 4: Teachers Facilitate Learning for Their Students
Teachers know the ways in which learning takes place, and they know the appropriate levels of intellectual, physical, social, and emotional development of their students. Teachers plan instruction appropriate for their students. Teachers use a variety of instructional methods. Teachers integrate and utilize technology in their instruction. Teachers help students develop critical thinking and problem-solving skills. Teachers help students work in teams and develop leadership qualities. Teachers communicate effectively.
*Teachers can help with understanding the application and offer assistance.

Standard 5
Teachers function effectively in a complex, dynamic environment. Teachers understand that change is constant. Teachers actively investigate and consider new ideas that improve teaching and learning. Teachers adapt their practice based on research and data to best meet the needs of their students.
*These applications serve to improve teaching and learning. They incorporate Web 2.0 skills to teach skills necessary for using the internet and technology tools.

Standard 6: Teachers Contribute to the Academic Success of Students
Teachers integrate and utilize technology in their instruction. Teachers use a variety of instructional methods. Technology provides another way for students to learn and communicate with each other and with people around the world.
NASA intends to start a "large-scale fire" in space, but unless something goes horrifically wrong, it's not like you will look up and see a fireball overhead. Maybe there have been too many movies showing how a fire in a spacecraft leads to disaster (action films which allegedly got it wrong) for NASA's plan to start a fire in space not to sound alarming.

Nevertheless, on March 22, NASA will launch an unmanned Cygnus spacecraft, via a United Launch Alliance Atlas V rocket, to the ISS for a resupply mission; after undocking, NASA will kick off the first of three Spacecraft Fire Experiments (Saffire). What do the ISS astronauts, who have probably seen the same movies as we have, think? Dan Tani, a former astronaut who flew two space shuttle missions and spent 120 days aboard the ISS, said igniting "a relatively large-scale fire" in a zero-gravity environment is "a big deal."

Although NASA mentioned the fire-in-space experiment during a recent press call, NASA Glenn Research Center posted a video last year to explain how the Saffire experiment will go down. Once the supplies are offloaded and Cygnus is loaded with the astronauts' trash, it will fly a "safe distance" away from the ISS ("about four hours away" and "on a different orbit," according to Gizmodo); then NASA control engineers in Dulles, Virginia, will remotely spark the fire. Cygnus will be put into "free drift" during the Saffire experiment, which is expected to take up to 2.5 hours.

Cygnus, along with the Saffire experiment hardware, will burn up when re-entering Earth's atmosphere, but not before the data and video from the experiment are "downlinked to several ground stations across the globe" and then transferred to NASA's Glenn Research team in Cleveland, Ohio. "Saffire I, II, and III will launch separately in 2016 aboard resupply missions."

Despite plans for three fire-in-space tests, NASA explained, "The experiment is very limited in the amount of data and test conditions that can be investigated. In Saffire-I and -III, the sample material is a single large sample (approx. 0.4 m wide by 0.94 m tall) to demonstrate the development and spread of a large-scale low-gravity fire. Once started, the entire burn of each of these samples is recorded, the data compressed, and downlinked."

An image NASA released shows the Saffire Experiment Module with the cover removed. The "hardware consists of a flow duct containing the sample card and an avionics bay. All power, computer, and data acquisition modules are contained in the bay. Dimensions are approximately 53- by 90- by 133-cm."

Saffire II will include "nine smaller samples" with dimensions of "5 cm wide x 25 cm long" being burned. NASA added, "These are burned sequentially with the camera recording images only from the sample being burned. Once started, these experiments run automatically. Because of limitations in time available for downlinking, a maximum of 20 gigabits of data can be downlinked."

This is far from the first fire to be studied in space or even on the ISS. Three years ago, during an ISS experiment dubbed FLEX, astronauts studied "how to put out fires in microgravity." The flames went out as planned, "but unexpectedly the droplets of fuel continued burning." They "seemed to be burning without flames," a phenomenon called "cool flames," which "can burn for long minutes." Those flames are trippy-looking, as seen in a NASA video which showed the difference between a flame on Earth and a flame in space.
NASA’s “large-scale fire” in space is even expected to have benefits for Earthlings, such as helping to understand fire behavior “inside mines, airplanes or submarines.” NASA makes the Saffire experiments sound less alarming than what we might conjure up after hearing NASA intends to set a “large-scale fire” in space. Here’s hoping the words, “Houston, we have a problem” are not uttered and that we don’t look up and see a fireball in the night sky.
What do clouds and a distant red planet with a thin atmosphere have in common? Dr. Jose Luis Vazquez-Poletti from Universidad Complutense de Madrid explains how cloud computing is being deployed in innovative space missions that take aim at Mars. He reports on the outcome of a meeting of the Mars MetNet Mission, which was held at the Finnish Meteorological Institute headquarters in Helsinki, and describes in detail some of the cutting-edge research that is making use of cloud-based resources to handle the massive data expected.

The MetNet project aims to go where no other Mars missions have gone before, at least in terms of the way it will gather and then process data. This mission to Mars will be based on the power of a new type of dandelion seed-shaped, semi-hard landing vehicle called the MetNet Lander. The leaders of the mission hope to deploy several of these oddly-shaped landers on Martian soil. While these lofty goals will take shape over a number of years, the first step is to launch a MetNet Mars precursor mission, with the first few landers being deployed in the coming year.

The main idea behind these vehicles is that by using state-of-the-art inflatable entry and descent systems (instead of rigid heat shields and parachutes like those from the earlier semi-hard landing devices), the ratio of the payload mass to the overall mass is optimized. This means that more mass and volume resources are spared for the science payload. The scientific payload of the Mars MetNet Mission encompasses separate instrument packages for the Martian surface operation phase. At the Martian surface, the lander will take panoramic pictures and will also perform observations of pressure, temperature, humidity, magnetism, as well as atmospheric optical depth. The network of MetNet landers will provide valuable scientific data, decisive for studying the Martian atmosphere and its phenomena. Countries involved are Finland (Finnish Meteorological Institute), Russia (Lavochkin Space Association and Russian Space Institute) and Spain (Instituto Nacional de Técnica Aerospacial).

The collaboration developed in Mars MetNet by our group, the Distributed Systems Architecture Research Group led by Prof. Ignacio M. Llorente from the Universidad Complutense de Madrid, has much to do with cloud computing. In fact, the collaborative effort is dedicated to using cloud computing for boosting all possible applications pertaining to the Mars mission, as will be explained in greater detail in a moment.

We began this collaboration with the Mars MetNet Mission more than a year ago, when the team was dealing with the tracing of Phobos, the biggest Martian moon, which orbits at about 9,400 km (5,800 miles) from the planet's center, completing its cycle nearly 3 times a day (a Martian day lasts 24:39 hours). The prediction of each Phobos eclipse is important for the onboard instruments, which obviously depend on the landing coordinates. The challenge arises because the approximate landing area is not known until two hours before touchdown. For this reason, an application for tracing Phobos was developed by the Meiga-MetNet Team to provide a Phobos cyclogram, which is the trajectory of the Martian moon in astronomy terms, using coordinates, dates and time intervals as input. This way, the MetNet lander would establish its exact location on the Martian surface by comparing the position of Phobos against the cyclogram, which is sent to the probe before the landing procedure.
We performed an initial parallelization of the application so that the complete set of coordinates pertaining to the approximate landing area can be processed at a desired granularity. This profiling brought us to the conclusion that the needed hardware could be too expensive for an HPC application executed only twice a year, and we had no way of knowing whether there would be other uses for this costly hardware. For this reason we turned to Amazon EC2, the de facto standard public cloud, attracted by its high-speed deployment and its pay-as-you-go pricing. Because of all the possible setups that Amazon EC2 offers by means of instance types and numbers, we crafted and validated an execution model for the application considering time, cost, and a metric combining both. This way, the optimal infrastructure can be obtained for a given problem size.

Considering one of the possible setups, its bare-metal equivalent would be a cluster consisting of 37 nodes of the latest HP ProLiant DL170 G6 server (for example). Taking its web price of $4,909 per node, we would get our machines for $181,633, without considering any other expenses like shipping or insurance. Great, but what about electricity? The administrator's salary? Startup time? Even more, are we going to use this infrastructure at full power in a 24x7 fashion? Probably not. On the other hand, and according to our model, Amazon EC2 provides the needed infrastructure for $7.50.
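Using only the figures quoted above, the comparison is easy to replay; this is a back-of-the-envelope sketch in which power, staffing and depreciation are deliberately ignored, which only strengthens the cloud case:

```python
# Figures taken from the article; ancillary costs deliberately ignored.
nodes = 37
price_per_node = 4_909            # HP ProLiant DL170 G6 web price, USD
cluster_capex = nodes * price_per_node
ec2_cost_per_run = 7.50           # from the validated execution model
runs_per_year = 2                 # the cyclogram is only computed twice a year

print(f"bare-metal cluster: ${cluster_capex:,}")                          # $181,633
print(f"EC2 for a year's runs: ${ec2_cost_per_run * runs_per_year:.2f}")  # $15.00
# At $7.50 per run, the cluster's purchase price alone buys roughly
# 24,000 runs -- millennia of twice-yearly executions.
print(round(cluster_capex / ec2_cost_per_run))                            # ~24218
```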
During the meeting, I gave a comprehensive presentation explaining what cloud computing is, and its elements, to the rest of the Mars MetNet scientific team. The best way to make a scientist understand cloud computing is to provide a good assortment of working examples and success stories. Of course, I recommended "HPC in the Cloud" as one of the main sources of news about our favorite technology.

Among these examples was the NASA case. NASA began with the Nebula initiative in 2009, providing an alternative to the costly construction of additional data centers whenever NASA scientists or engineers require additional processing. This is accomplished in a fancy way and, in my point of view, following a real-life "on demand" definition, as truck containers are delivered to the demanding research centers. These shipping containers can hold up to 15,000 CPU cores or 15 petabytes of storage while proving 50 percent more energy efficient than traditional data centers. However, NASA decided in December 2010 to take another step on its cloud path: it started to use the Amazon public cloud for its ATHLETE (All-Terrain Hex-Limbed Extra-Terrestrial Explorer), to be commissioned to future Mars exploration missions. Machine instances from Amazon EC2 are used for processing satellite high-definition images in order to make navigation decisions. But one year before NASA, the Mars MetNet Mission was already using Amazon EC2, as I explained at the beginning of this article.

The results obtained for the locations of the different Martian probes were presented during the meeting, and the detection of eclipses was confirmed by the experimental (and historic) data retrieved. This confirmed that the Phobos tracing model will help the Mars MetNet Mission and that cloud computing will be an indispensable tool, due to the huge amount of computational power needed in a very short span of time.

After my presentation, a new application was proposed. This time it has to do with processing the meteorological data from landers pertaining to previous Mars missions. This work has much to do with what could be called "archaeological computing," because much of the raw data is about 30 years old! Despite its age, the meteorological information obtained from the landers will be very useful for the Mars MetNet Mission. The amount of data is huge, and parallelization may solve some of the problems, considering several processes which respond to certain parameters. These parameters are provided by a meteorological model developed within the Finnish Meteorological Institute. However, computing resource availability is another thing to take into account given the numerous application executions needed, so this is where a public cloud infrastructure helps reach the goal. But the advantages go further, because the final framework is intended to be used with the data obtained by the Mars MetNet probe itself, and it will grow as more probes from the meteorological network become part of the Martian landscape.

To conclude, space missions are bringing many HPC challenges, and adopting cloud computing is a decisive move for meeting them. Additionally, all research done on cloud computing to fulfill space missions' demands will benefit other areas, as other achievements from outside computing already have, like lyophilized food or Velcro straps. If you are curious about the landing procedure of these dandelion-seed-shaped landers, I really encourage you to visit the Mars MetNet Mission website and watch the animation.

About the Author

Dr. Jose Luis Vazquez-Poletti is Assistant Professor in Computer Architecture at Universidad Complutense de Madrid (Spain), and a cloud computing researcher. He is (and has been) directly involved in EU-funded projects, such as EGEE (grid computing) and 4CaaSt (PaaS cloud), as well as many Spanish national initiatives. His interests lie mainly in how the cloud benefits real-life applications, especially those pertaining to the High Performance Computing domain. Dr. Vazquez-Poletti is also the author of a popular article that appeared in HPC in the Cloud describing a range of upcoming cloud computing research projects pending in Europe.

Reference: J. L. Vázquez-Poletti, G. Barderas, I. M. Llorente and P. Romero, "A Model for Efficient Onboard Actualization of an Instrumental Cyclogram for the Mars MetNet Mission on a Public Cloud Infrastructure," PARA2010: State of the Art in Scientific and Parallel Computing, Reykjavík (Iceland), June 2010. Proceedings to appear in Lecture Notes in Computer Science (LNCS).
Password synchronization is any process or technology that helps users to maintain a single password, subject to a single security policy, across multiple systems.

Password synchronization is an effective mechanism for addressing password management problems in medium to large organizations:
- Users with fewer passwords tend to remember them.
- Simpler password management means fewer problems and fewer help desk calls.
- Users with fewer passwords are less likely to write them down.

There are two ways to implement password synchronization:
- Transparent password synchronization, where native password changes that already take place on a common system (example: Active Directory) are automatically propagated through the password management system to other systems and applications.
- Web-based password synchronization, where users change all of their passwords at once, using a web application.

One of the core features of Hitachi ID Password Manager is password synchronization. Password Manager implements both transparent and web-based password synchronization.
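A minimal sketch of the transparent variant follows, assuming a hypothetical connector interface: a password change captured on the authoritative system is pushed to each connected target, and each target may still enforce its own policy. Real products use each system's native administration APIs and queue failed updates for retry.

```python
# Illustrative only: Target stands in for a connector to one managed system.
class Target:
    def __init__(self, name):
        self.name = name
        self.passwords = {}

    def set_password(self, user, new_password):
        # Each target may enforce its own local policy before accepting.
        if len(new_password) < 12:
            raise ValueError(f"{self.name}: password too short")
        self.passwords[user] = new_password

def propagate_change(user, new_password, targets):
    """Push one native password change out to every connected system."""
    results = {}
    for t in targets:
        try:
            t.set_password(user, new_password)
            results[t.name] = "ok"
        except ValueError as e:
            results[t.name] = f"failed: {e}"  # a real product would queue a retry
    return results

systems = [Target("ldap"), Target("erp"), Target("mainframe")]
print(propagate_change("jsmith", "correct-horse-battery", systems))
```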
<urn:uuid:e745a7e2-5815-40d5-a180-a5d4ac9c5dd1>
CC-MAIN-2017-04
http://hitachi-id.com/resource/concepts/password-synchronization.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00299-ip-10-171-10-70.ec2.internal.warc.gz
en
0.861398
190
2.78125
3
Police know that most crimes are committed by repeat offenders, so a staple of police work is identifying patterns that link crimes together and tracing them to specific individuals. This becomes difficult when analysts look at thousands of cases each year. Though the human brain excels at identifying patterns, it can succumb to information overload. Chicago Police Department crime analysts sometimes look at 100 cases a week, sifting through hundreds of data elements trying to unravel patterns that might lead to a break. It requires a lot of legwork and some luck. However, police brass hope a new neural network system will take some chance out of the equation and add more of a scientific formula to solving crimes.

"The average cases kind of blend together after a while," said Steve Maris, assistant director for Information Services for the Chicago Police Department. "This would be able to segment those for us."

The Classification System for Serial Criminal Patterns (CSSCP) is the brainchild of Dr. Tom Muscarello, an assistant professor at DePaul University. It's different from other crime-analysis systems being used by law enforcement in that the CSSCP thinks 24 hours a day, seven days a week -- not just when prompted by an analyst. It can, however, be prompted to search for a particular data set, analyze data from multiple crimes and find patterns that link crimes without human intervention. Running 24/7, the system combs through police department IT systems, searching for patterns or clusters of data elements that might tie together a string of crimes and give police the data they need to find the perpetrators.

The system assigns numerical values to different data elements in each crime, including crime type, suspect description and profile, getaway vehicle and so forth. The system uses pattern-recognition software that is "trained" to find those clusters of data. Neural networks are considered artificial intelligence -- the networks attempt to imitate the human brain in the way the brain programs data structures and recognizes patterns. Neural networks function by creating connections between processing elements, which are the equivalent of neurons to the computer system. These networks become adept at predicting events when they have a large database of examples from which to draw, and are typically "trained" by being fed large amounts of data and "taught" rules about interpreting relationships between that data.

"It cuts down on manual intervention," Maris said. "[A detective] reads 100 cases this week, he reads 100 next week. Can he go back and remember which case belongs where?

"Right now, we have a lack of computer tools to assist the crime analyst," he continued. "Crime analysts are doing a lot of legwork -- reading lots of cases, using text searches to find cases. Nothing is grouping the cases by offender patterns, MO [modus operandi], things like that."

Also, since cases are often assigned arbitrarily, communication between detectives may not be what it should be, and links between cases may go uncovered without such a system. "Some sergeant is passing [case assignments] out to people," Muscarello said. "If [the sergeant] doesn't know right off the top of his head that it sounds like a case that's related to others, it's kind of a round-robin thing; 'Well, Joe, you got two yesterday so I'm going to give these to Frank.' A lot of times people don't communicate with each other as often as you'd think people in an office environment would."
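The CSSCP's internals are not public, so the following is only an illustration of the general idea described above: crime attributes are assigned numerical values, and cases whose resulting vectors sit close together are flagged as candidate serial patterns. The encoding, the features, and the threshold here are all hypothetical.

```python
import math

def encode(case):
    """Map categorical crime attributes onto a crude numeric vector.
    Ordinal indexing is a deliberate simplification for illustration."""
    crime_types = ["burglary", "robbery", "auto theft"]
    vehicles = ["none", "sedan", "van", "truck"]
    return [
        crime_types.index(case["type"]),
        vehicles.index(case["vehicle"]),
        case["suspect_height_cm"] / 100.0,
        case["hour_of_day"] / 24.0,
    ]

def similarity(a, b):
    """Cosine similarity between two encoded cases."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

c1 = {"type": "burglary", "vehicle": "van", "suspect_height_cm": 180, "hour_of_day": 2}
c2 = {"type": "burglary", "vehicle": "van", "suspect_height_cm": 178, "hour_of_day": 3}
if similarity(encode(c1), encode(c2)) > 0.99:
    print("candidate serial pattern: review these cases together")
```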
Modeling the Brain
The CSSCP has been in the works for a decade, and after considerable tinkering, should be ready for the Chicago Police Department this year. Muscarello said changes in leadership at the police department and adjustments to the system have delayed its advance.
Subnetting IPv6 sounds very complex, but in truth it is very easy! All you need to do is understand the basics of IPv6 addressing – how an address is formed and how to use CIDR notation efficiently.

Firstly, what does an IPv6 address look like? (Good to clear up fundamentals first!) An IPv6 address has 8 sections separated by colons, and each section carries 4 hexadecimal digits. So an IPv6 address is something like xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx, where each x can take a hexadecimal value, i.e. from 0 to 9 and a to f – 16 possible values per digit. Since 16 = 2x2x2x2, each hexadecimal digit represents 4 bits, so a section of four digits carries 4x4 = 16 bits. With 16 bits per section and 8 sections in total, that works out to 16 + 16 + 16 + 16 + 16 + 16 + 16 + 16 = 128 bits. This is why an IPv6 address has 128 bits, and why the total IPv6 address space holds 2^128 = 340 282 366 920 938 463 463 374 607 431 768 211 456 addresses.

Next, an important point to remember: IPv6 clients are mostly given /64 subnets, meaning the first 64 bits form the network part while the remaining 64 bits form the host part, i.e. the addresses actually allocated to end machines.

Now, back to the main question of how to subnet IPv6. In most cases RIRs like ARIN/APNIC allocate a /32 IPv6 block. This means the first 2 sections (16 + 16 bits) are fixed, and the remaining 6 sections, i.e. 128 - 32 = 96 bits, are available for use. E.g. let's pick the example of Google's block. Google has an allocation of 2404:6800::/32 from APNIC in Asia, which is a HUGE chunk. First, let's understand what the range of 2404:6800::/32 looks like. The :: here means that zeros are skipped, so we can fill the zeros back in to understand the block. 2404:6800::/32 means 2404:6800:0000:0000:0000:0000:0000:0000/32, and since only the first 32 bits (16 bits per section) are fixed, the first 2 sections are reserved while the remaining 6 sections can take any hexadecimal value. Thus the block 2404:6800::/32 runs from 2404:6800:0000:0000:0000:0000:0000:0000 to 2404:6800:FFFF:FFFF:FFFF:FFFF:FFFF:FFFF.

That is a huge amount of address space. You can count it simply as 2 to the power 96 (128 - 32), which is 79 228 162 514 264 337 593 543 950 336 unique possible addresses!

Breaking it down further…
- If you have a multi-datacenter setup, you will very likely want to use your IPv6 space across multiple locations, so announcing the whole /32 from one place over BGP isn't a very good idea.
- Many people on the NANOG mailing list suggested using /48 blocks, as /48 works well with BGP and most ISPs accept a /48 announcement.
- Most servers are allocated a /64 block further down.

So ideally you would break your /32 allocation into multiple /48s – which you can announce via BGP – and further carve /64s out of each /48 for allocation per server or per client.

At this point you will probably wonder how many such smaller blocks are possible out of the main bigger block. Here's the answer: you can break a /32 into 65,536 /48s, each of which can represent a separate network behind a BGP session. Next, you can break each /48 into 65,536 /64s, and each /64 can be allocated to a client. Each client will thus have 2^64 addresses, i.e. 18 446 744 073 709 551 616 addresses!

Let's break it! Coming back to the example of Google's block, 2404:6800::/32 – to get /48s out of the block, all you need to do is change the 3rd section. Remember, as each section represents 16 bits, the first three sections together give 16 + 16 + 16 = 48 bits. Thus the possible /48s out of 2404:6800::/32 run from 2404:6800:0000::/48 through 2404:6800:ffff::/48 – and since the section takes hexadecimal values, we can also use a, b, c, d, e and f. Using the complete combination of all 4 digits, the third section (XXXX) can take 65,536 hexadecimal values. Next, in a similar manner, altering the 4th section gives the /64s out of each /48, e.g. 2404:6800:1:0000::/64 through 2404:6800:1:ffff::/64.

Finally, each client can alter the last 4 sections and generate a ton of unique IPv6 addresses, e.g. 2404:6800:1:1::1 (which is 2404:6800:1:1:0000:0000:0000:0001), 2404:6800:1b11:21dd:00ab:0030:0020:0001, or just about anything!

Quick points to remember:
- If you alter JUST the last (i.e. 8th) section you get 65,536 (2^16) IPs.
- If letters in hexadecimal values confuse you, you can simply take last-section values from 0 to 9999, i.e. 10k possible IPs, by altering just the last section without using hexadecimal letters.
- It's a good idea to alter just the last section and fill zeros in the 5th, 6th and 7th sections, because 10k IPs would be sufficient per server and one can always add more later.
- When filling 0 in the 5th, 6th and 7th sections, one can simply use the double-colon notation, i.e. 2404:6800:1:1:0000:0000:0000:0001 can be written as 2404:6800:1:1::1, skipping all the zeros!

Well, that's all about IPv6 addressing. Hope you will find it useful! 🙂
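If you would rather not do this arithmetic by hand, Python's standard ipaddress module does the same carving. A minimal sketch, using Google's block from the example above (the particular subnets printed are simply the first ones the generator yields):

    import ipaddress

    block = ipaddress.ip_network("2404:6800::/32")

    # Carve the /32 into /48s -- 2**(48 - 32) = 65,536 of them.
    site = next(block.subnets(new_prefix=48))   # 2404:6800::/48
    print(site, "is one of", 2 ** (48 - 32), "/48s")

    # Carve one /48 into /64s -- again 65,536 of them, each holding
    # 2**64 = 18,446,744,073,709,551,616 addresses.
    lan = next(site.subnets(new_prefix=64))     # 2404:6800::/64
    print(lan, "is one of", 2 ** (64 - 48), "/64s")

    # Exploded vs. double-colon notation for a single address.
    addr = ipaddress.ip_address("2404:6800:1:1::1")
    print(addr.exploded)    # 2404:6800:0001:0001:0000:0000:0000:0001
    print(addr.compressed)  # 2404:6800:1:1::1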
The life expectancy of an unpatched Linux system has lengthened dramatically since 2001 and 2002, the Honeypot Project said, from a mere 72 hours two and three years ago to an average of three months today. The Honeypot Project is a non-profit that, as its name suggests, connects vulnerable systems to the Internet in the hope of drawing attacks so that they can be studied. To figure out the lifespan of a Linux system, the group set up a dozen "honeynets" -- the project's term for a system that hosts numerous virtual honeypot machines -- in eight countries, then tracked the time it took for those machines to be compromised. "What's surprising is that even though threats and activity are reported as increasing, we see the life expectancy of Linux increasing against random attacks," said the group's report. In comparison, unpatched Windows systems are often hacked within minutes of connecting to the Internet. Late last month, similar "honeypot" research done by AvanteGarde tallied the average survival time of several versions of Windows at just four minutes.
A think tank has called on the United Nations to draw up a universal 'Declaration of Data Rights' in order to ensure that enterprises, countries and citizens are able to enjoy the benefits of big data, while still balancing this with individuals' rights to privacy. The suggestion was one of a number of recommendations made in a report by the Institute of Development Studies (IDS), which examined what will be needed to ensure that the developing world does not miss out on the opportunities afforded by the technology. It noted that now is the time for action in this area, as many major decisions that will influence the future direction of big data are currently being taken.

With an estimated 90 per cent of digital data having been created in the last two years – and the quantity of information doubling every two years – the area is attracting much more attention on an international level. For instance, the report observed that the recent European Court of Justice ruling that invalidated the Safe Harbor agreement between the EU and the US will open up "fundamental questions" about how personal data is used. At the same time, issues of data sharing and privacy are part of the Transatlantic Trade and Investment Partnership (TTIP) negotiations between the EU and US, and the 24-nation Trade in Services Agreements (TISA) discussions. "The outcome of these negotiations will shape big data impacts for years to come," the report said. It also noted that the fact that deals such as TTIP and TISA are being negotiated in secret makes it very difficult for citizens to engage, as does the fact that the long-term implications of big data are still not well understood.

IDS' study observed that there are four key areas where big data will have an impact on developing nations as they increasingly embrace the technology. These are its economic impact, its effect on human development through advancing health or education, its implications for human rights, and how it can reduce the strain on environmental resources. Dr Stephen Spratt, research fellow at the IDS, noted that developing countries face particular challenges when it comes to the implementation of big data, as in many cases protections for civil liberties have not been encouraging. "A worst case scenario is one where a government can see citizen data but information on government activities remains closed, and where corporations offering internet access to people in developing countries do so on the condition of targeted advertising and the right to use data in exchange," he said.

Dr Spratt stated that "much more needs to be done" to minimise the risks facing these nations and ensure that the benefits of big data are shared equally, rather than just among large corporations, the richest individuals and developed countries. The report called for the UN to establish a panel of social science, ethics, legal and technical experts to draft new guidelines that will "enshrine citizens' rights to access data on their government's activities in the process, and a citizen's right to see and control the information held about them by governments and corporations." Other recommendations in the report included improving funding for public research into the implications of the increasing use of automated decision-making and learning algorithms, and requiring large enterprises based in developed countries to apply the same approach to data privacy in all countries in which they operate.
The age of traditional teaching methods is behind us now. We no longer perceive the teaching-learning process as a one-way street. On the contrary, it has been shown that knowledge is acquired best when there is interaction between the teacher and the students. In order to make that interaction more effective and enhance the process of knowledge acquisition, we use technology. However, the thing with technology is that it changes rapidly, and teachers need to constantly adapt to new processes. So how is technology already in use, and how is it going to be used in the future? Let's take a look.

There are no more traditional textbooks
Textbooks are not interesting anymore because we now have interactive applications which enable students to acquire knowledge much faster. The reason why they work much better is simple – focus. Interactive and captivating apps make the student focus more on the current topic and turn on their brain in the process of solving the problem offered by the app. Apart from apps, there are interactive games as well as millions of hours of educational videos that can all be used as part of the teaching process and replace textbooks.

Teaching is not conducted in the classroom anymore
The space that we use to make a successful transfer of knowledge has expanded from the classroom to the whole world. Thanks to the world wide web, which all of us have been using for more than 25 years now, we have managed to take communication across distances to the next level. The advantages of Information and Communication Technologies are endless. Internet connections throughout the world are now much faster and more reliable. Therefore, students who are not physically able to attend can follow the class via Skype. One more creative way to use Skype is to call professionals in the subject you are teaching and have them present something on the big screen to the students.

Blackboards are big computers
The most concrete example of how technology is slowly becoming part of every aspect of the class is the blackboard. Although still available in just a few schools, interactive blackboards are threatening to take over the world. They can do everything that a computer can do – play videos, flash animations, presentations, etc. Apart from all of the new features, they can still be written on with either a special marker or a finger – because they are touchscreens. This way you can turn your classroom into a playground of a sort, where students can play games on the board, watch YouTube videos and show their homework that they have previously submitted to the cloud.

Homework is more interactive
Everybody hates homework when there are a lot of assignments to be done, especially when it comes to writing tasks. But there is one secret word that we all know, and we can use it and thank technology for it – research. Homework nowadays is not just repeating what was done in class but an active acquisition of new knowledge on the topic by researching the internet. Homework can also have different submission procedures, because not all homework has to be brought into class; it can be e-mailed, uploaded to a cloud server, etc.

Teachers get to express more creativity
The last point incorporates all the previous things mentioned here into one big creative process that every teacher should go through in order to convey knowledge in the best manner.
This also means that the teacher should actively be learning about all the new pieces of technology that are invented and try to use them in the classroom. A problem may arise with older teachers who feel that they are falling behind with technological advancements, but that can be solved by someone younger teaching them how to use it. In other words, a teacher nowadays must take on the roles of a coder, an IT person, a researcher and a technology enthusiast in order to make the classroom an unforgettable experience for the students.

To sum up, teaching methods are not constant, and whenever something new is invented, teachers should try to use it for the advancement of knowledge transfer. Some aspects of the traditional classroom are gone, and things such as applications, computers, interactive blackboards and the Internet are actively used in teaching. It is up to the teacher to come up with the most efficient way to combine these and make the students not only learn everything but enjoy the interactive process of teaching as well.

Scott Ragin is an online tutor and experienced high school educator. Scott is always trying to create all the necessary conditions for the development of a well-integrated personality in his students. He loves guiding other people through their teaching and provides assignment help at Aussie writer. Feel free to contact him on Facebook.
At the bottom of the world, atop an ice sheet 2 miles thick, sits the new Amundsen-Scott South Pole Station. This elevated research station is by far the largest of the three that have been constructed since 1956. Besides sitting above the frozen Antarctic surface, the station has a winglike underside that forces winds blowing underneath it to speed up, which naturally disperses snow that would otherwise build up around the structure. Scientists living at the station are studying, among other things, climate change, seismology and astronomy.
What is digital citizenship? If you asked 10 people to define it, you'd most likely receive 10 different answers. It's complicated, and the terms "digital" and "citizenship" are broad. As a pioneer in the field, Dr. Mike Ribble created the foundation for digital citizenship by establishing its nine elements, which are widely used. Often referred to as the "Godfather of Digital Citizenship," Ribble defines digital citizenship as "the norms of appropriate, responsible behavior with regards to technology use." His nine elements of digital citizenship are digital access, digital commerce, digital communication, digital literacy, digital etiquette, digital law, digital rights and responsibilities, digital health and wellness, and digital security. His seminal work led the way for others to explore the safe, savvy and ethical use of technology. My own definition is based on Dr. Ribble's research and is a direct result of working with students on the iCitizen Project. My ultimate goal is to help students think and act at a local, global and digital level simultaneously.

Ways to Help Students Understand Digital Citizenship
The first step to helping students become responsible digital citizens is to make digital citizenship a verb. We can't just read or write about it; we need to do digital citizenship every day. We need to encourage our students to use social media to solve problems and change the world. Encourage your students to think of themselves as skipping stones: your actions both on and offline have a ripple effect. Each day, choose to send the most positive actions out into the world. Skip your stones, and send out a chain reaction of goodness and kindness. Lead with empathy, and solve problems in your own community. When you change your own communities, you change other communities in the process. It is the ripple effect in action.

1. Change the design of your classroom.
In her article entitled "Why the 21st Century Classroom May Remind You of Starbucks," author Kayla Delzer details how she redesigned her classroom to be more conducive to a collaborative learning environment for her second grade students. The flexible seating and open floor space empower her students to choose the space that works best for them to learn. This is a game-changer. Providing the choice and ability to move ensures that the student who could not sit still in a traditional classroom will no longer be penalized for disrupting the class.

2. Add student voices to your classroom.
Your classroom should be full of student insight and opinion. Invite your students to solve problems and create solutions on topics that connect to the nine elements of digital citizenship. Carve out time for students to participate in Genius Hour and Makerspace projects. Amplifying student choice and voice in the classroom will bring out the genius and maker in every student and is the key to making learning meaningful.

3. Provide teachers with the chance to connect.
Encourage your teachers to attend an Edcamp, and then ask them to bring the experience back to the district on a professional development day. Lift blocks and bans in your school that prevent teachers from connecting with each other. Encourage teachers to join other connected educators during a Twitter chat. Let them be content creators and learn alongside our students. The possibilities are endless. Social media as a learning tool has the potential to revolutionize how students communicate, connect, network, solve problems, collaborate and learn.
The bottom line is that we need to embed digital citizenship into our curriculum and instruction every single day. This way, our young people understand it, practice responsible digital citizenship and use technology in impactful, positive ways. Our students' futures depend on us to teach these lessons.

Marialice B.F.X. Curran, Ph.D., is the co-founder of the Digital Citizenship Summit and is instrumental in the development of digital citizenship curriculum and instruction in K-12 education, teacher education and professional development. She developed and created the first three-credit digital citizenship course in the country and is committed to student voice around the safe, savvy and ethical use of technology. She is passionate about empowering students around the world with positive and practical solutions. An international speaker named one of the Top 10 Digital Citizenship bloggers to follow in 2014 by Common Sense Media, Curran co-founded (in 2011) and moderates the digital citizenship #digcit chat on Twitter and also serves on the leadership team for the Digital Citizenship PLN through ISTE.
A new Wall Street Journal Online/Harris Interactive Healthcare Poll finds that only one-third (33 percent) of U.S. adults are very confident in their physicians and other healthcare providers having a complete and accurate picture of their medical history. However, this confidence increases to half (50 percent) for those who have an electronic medical record. About one-fourth (26 percent) of adults say they use some form of electronic medical record, mainly one kept by their physician. These are just some of the results of an online survey of 2,153 U.S. adults ages 18 and over conducted by Harris Interactive between November 12 and 14, 2007 for The Wall Street Journal Online's Health Industry Edition. Katherine Binns, Division President for Healthcare Research at Harris Interactive, comments, "There has been more and more talk lately about electronic medical records -- from inclusion in Presidential frontrunners' healthcare reform plans to Microsoft announcing a consumer Website to store and share health information. Insurance companies and employers are also jumping on this bandwagon. It is estimated that each year billions of dollars are spent on redundant tests, and that many otherwise avoidable injuries are caused by medical reporting errors. And it is assumed that much of this could be eliminated with online health systems that communicate with each other." One key concept is that patients would have control over an Internet-based medical record and they would decide with whom and when to share that information. But, as when banking or shopping first went online, there have been issues of privacy concerns regarding healthcare data as well. As things become more common though, these concerns tend to wane, evidenced by a 10-point drop this year (from 61 percent in 2006 to 51 percent) in those who say electronic records make it difficult to ensure privacy. When it comes to other online medical services, three-fourths of adults feel that patients should be able to schedule an appointment with their physician via email or the Internet (77 percent) and communicate with their physician via email (75 percent). These online applications are big first steps in overcoming privacy concerns. More adults (60 percent) feel that the benefits outweigh the privacy risks than those who do not (40 percent). Majorities agree that electronic medical records could reduce healthcare costs (55 percent), decrease medical errors (63 percent), and reduce redundant tests (67 percent) -- similar to 2006 results. Even more (74 percent) believe that patients could receive better care if doctors and researchers were able to share information more easily. However, about one-quarter of adults are just not sure that electronic medical records could provide any of these benefits, indicating a need for continued talk about this matter.
The so-called armored fiber optic cable is an optical cable whose fiber is wrapped in an additional protective layer of "armor." It is mainly used to meet requirements such as rodent resistance and moisture protection. An armored cable is made up of two or more conductors, generally held together by an overall sheath. Cables of this kind with a heavy protective covering are used for transmission, especially for underground wiring needs, though they may also be installed as permanent wiring within buildings, buried in the ground, run overhead, or even left exposed. They are available as single-conductor as well as multi-conductor cables. To be more precise, armored cables can be described as cables with stainless steel or galvanized wire wound over the conductors and insulation. They often have an outer plastic sheath for main distribution supply and buried feeders.

The main role of armored fiber optic cable
Armored optical fiber has important applications in long-distance telecommunications lines and trunk transmission. The armored fiber that general network administrators usually come into contact with, however, is used for internal connections between fiber optic equipment inside a building or equipment room. This armored fiber is relatively short and is often referred to as an armored jumper. A typical armored jumper has a layer of metal armor inside the jacket that protects the fiber core, resists crushing and pulling, and prevents damage from rodents and the like.

The types of armored fiber cable
By installation environment:
Indoor armored cable comes in single-armored and double-armored variants. A single-core indoor armored cable is structured as: tight-buffered fiber + Kevlar (for tensile strength) + stainless steel hose (for crush resistance, bend protection and rodent resistance) + stainless steel braided wire (for torsion resistance) + outer sheath (usually PVC or, depending on fire-safety requirements, flame-retardant PVC, LSZH, Teflon, silicone tube, etc.). Single-armored cable omits the stainless steel braided wire; double-armored cable includes both the stainless steel hose and the braided wire.
Advantages: high tensile and compressive strength; resistance to rodent bites; resistance to damage from improper twisting and bending; simple installation, saving maintenance costs; suitability for harsh environments and resistance to man-made damage.
Disadvantages: heavier than common fiber optic cable, and more expensive.
Outdoor armored cable is divided into light armored and heavy armored. Light armor (steel tape or aluminum) adds strength and rodent resistance; heavy armor is wrapped with round steel wires and is generally used in riverbeds and submarine installations. On the market, steel and aluminum armored cables are generally cheap – much cheaper than aramid (Kevlar), which is mainly used for special applications.
By armor material:
There are steel armored and aluminum armored cables. A cable usually has a layer of metal armor around the core to protect against excessive bending, crushing and strong pulling, providing excellent cable protection and safety.
Aerial optical fiber cable
For outdoor aerial optical fiber cable, armor protects the core against harsh environments and human or animal damage (fibers being severed when someone fires a shotgun at birds, for example, is not uncommon). Steel light armor is recommended: it is cheap and durable. There are two general outdoor aerial cable designs: one is the central (beam) tube type; the other is stranded. For durability, aerial cable is run with a single sheath layer, while direct-burial cable is safer with two layers of sheathing.
Research out of the University of London shows that bees are on par with supercomputers when it comes to solving certain complex mathematical problems. Despite having brains the size of a grass seed, bees can learn to fly the shortest route possible between flowers even if they discover the flowers in a different order. They are in essence solving the “traveling salesman’s” shortest route problem, and are the first animals shown to have this ability. The traveling salesman puzzle goes like this. The salesperson must figure out the shortest path that will make possible a visit to all locations on a given route. Computers solve this problem by comparing all possible routes and selecting the shortest. Bees, however, are able to do the same task sans computer assistance. Dr Nigel Raine, from the School of Biological Sciences at Royal Holloway, explains that finding the shortest route allows bees to conserve their energy by keeping flight time to a minimum. For their study, the scientists used computer-controlled artificial flowers to test whether the bees would follow a simple route defined by the order in which they found the flowers, or would look for the shortest route. After an initial exploration of the flower locations, the bees quickly determined the shortest route. Scientists are eager to understand how bees are able to solve the traveling salesman problem without a computer. Hidden in the bee movements are algorithms that can potentially be repurposed for the benefit of human-related endeavors. Breaking the bee code could lead to more effective management of network flow problems with reduced reliance on computers. This makes sense for a variety of real-time tasks that take place in the field where access to big computers is impractical. Targeted applications are likely to include traffic control, network flow, and business supply chains.
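The exhaustive strategy the article attributes to computers -- compare all possible routes and select the shortest -- is easy to sketch. The Python below is only an illustration with made-up flower coordinates; the number of routes grows factorially with the number of stops, which is exactly why the bees' shortcut is so intriguing.

    import itertools
    import math

    # Hypothetical flower positions on a flat field.
    flowers = {"A": (0, 0), "B": (3, 0), "C": (3, 4), "D": (0, 4)}

    def route_length(order):
        """Total distance of a round trip visiting the flowers in `order`."""
        stops = list(order) + [order[0]]  # return to the starting flower
        return sum(math.dist(flowers[a], flowers[b])
                   for a, b in zip(stops, stops[1:]))

    # Brute force: examine every possible visiting order, keep the shortest.
    best = min(itertools.permutations(flowers), key=route_length)
    print(best, round(route_length(best), 2))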
Introduction to Family Safety
Protecting your children while they are online can be a difficult and scary task for any parent. While it is important to introduce children to computers and the Internet, it is also just as important to do this in a safe environment. To help you with this there is Family Safety, a parental control application that monitors your children's online activity and, if necessary, controls what they can do on the computer. By using a tool like Family Safety a parent can feel comfortable allowing their child to use the computer and the Internet while knowing exactly what they are doing and where they are going. Using Family Safety a parent can restrict what web sites a child visits, what applications they can use, and when they are allowed to use the computer. Family Safety will also send out daily and weekly reports to the parents that contain detailed information about a particular child's activity on the computer. You can also view these reports and change restrictions on a particular child's account via the http://familysafety.microsoft.com/ web site. This tutorial will provide in-depth guidance on how to properly set up Family Safety on a Windows 8 computer to create a safe environment for your children. We will discuss how to create a new account with parental controls enabled, activate Family Safety on an existing account, and provide detailed explanations of each of the available configuration options.

Create a new child's account and enable Family Safety
The first step in enabling Family Safety is to create a new account for your child and enable parental controls. To create an account, type Add User when you are at the Windows Start Screen. When the search results appear, click on the Settings category as shown below.

Figure 1. Add User search

Now click on the option labeled Give other users access to this computer, which will open the User Settings screen.

Figure 2. User's Settings screen

Scroll down and click on the Add User option as shown above. You will now be at a screen prompting you to enter the user's email address. When creating an account in Windows 8 you can either set it up as a local account or use a Windows Live account to integrate online services such as SkyDrive into Windows 8. If you wish to use a Microsoft account, please enter the child's email address or create a Windows Live account for them. On the other hand, if you wish to just create a local account, click on the Sign in without a Microsoft account option. You will then be brought to a new screen asking you again whether you want to use a Microsoft or local account. At this screen you should select Local Account. You will now be at a screen prompting you for the user's account name, password, and a password hint that can be used to recover the password in the future. Please fill in this information and click on the Next button. You will now be at a confirmation screen that lets you review the account that will be created as shown below.

Figure 3. Adding a new user confirmation

To enable Family Safety monitoring on this child's account, put a check mark in the checkbox labeled Is this a child's account? and then click on the Finish button. The account is now created and Family Safety is now configured for that account. For information on how to configure Family Safety settings for this new account, please skip to the configuration section below.
Enable Family Safety on an existing account
If you wish to enable Family Safety on an existing account, type Family Safety when you are at the Windows Start Screen and then click on the Settings category as shown below.

Figure 4. Searching for the Family Safety setting

Now click on the option labeled Family Safety, which will open the Family Safety screen.

Figure 5. Family Safety screen

To enable Family Safety on an account, click on it once with your mouse. This will open the Family Safety User Settings screen.

Figure 6. Family Safety settings for a particular user

To monitor this child's activity, select the On, enforce current settings button. Family Safety is now enabled on this user's account and you will receive weekly reports on their activity. You can now close the User Settings window. For information on how to configure Family Safety settings for this account, please see the next section.

Configuring Family Safety restrictions
Microsoft's Family Safety approach is to monitor first and then restrict as necessary. Using this approach, when a child is enrolled in Family Safety there will be no restrictions placed on the account by default. Instead they will be able to use the full capabilities of the computer without limitation, and you will receive daily and weekly reports about the child's activity. As you review their behavior and find that you need to restrict them in some way, you can then modify the Family Safety settings to limit their usage of the Internet or of the computer. To configure restrictions on a child's account, you need to do so from the Family Safety control panel. To get there, type Family Safety when you are at the Windows Start Screen and then click on Settings. When you click on the Settings category, you should see an option on the left called Family Safety. Please click on that setting and you will now be at the Family Safety control panel.

Figure 7. Family Safety screen

To change a particular child's settings, click on the child's name. This will open the User Settings screen as shown below.

Figure 8. User Settings

On this screen you will see a variety of options that you can configure. By default, activity reporting is enabled so that you receive daily and weekly reports on your children's activities. You should not disable this feature. Under the Windows settings category you will see four categories that you can configure to limit your child's activity on the computer and the Internet. These categories are:

Web filtering
The web filtering category allows you to restrict what web sites the child can visit. For more detailed information about this setting, please read the Web Filtering section below.

Time limits
This section allows you to configure how much time a child can use the computer in a given day, or when they are allowed to use it. For more detailed information on how to configure this setting, please read the Time Limits section below.

Windows Store and game restrictions
This category allows you to restrict what types of games and Windows Store apps the child is allowed to play. For more detailed information on the Windows Store and game restrictions, please see the corresponding section below.

App restrictions
This section allows you to place restrictions on what applications a child can use. For more detailed information on how to configure this restriction, please see the App Restrictions section below.

If you wish to make changes to a user's settings, please left-click once on the category that you wish to modify. It is also possible to manage these settings remotely via http://familysafety.microsoft.com/.
This allows you to add restrictions even when you are not home, in the event that you find something concerning in a child's activity report. Please note that to use the Family Safety web site, you must be using a Microsoft account on your computer rather than a local account. For more information about all of the settings in these categories, please read the following sections.

Web filtering
By default, Family Safety is configured to allow access to all web sites, but you can modify this behavior so that your child can only visit certain web site categories or even specific sites. If you place restrictions on a web site and your child visits it, the site will be blocked. They will then be presented with a message stating that their parent needs to provide permission to access the site. The parent can then enter their login info to allow the child to continue to the site. If you wish to restrict web sites, you can select the Child can only use the websites I allow option, which will enable two other settings called Set web filtering level and Allow or block specific websites. If you click on Set web filtering level you will be brought to a screen where you can select the restriction level you wish to use for web sites.

Figure 9. Web Restrictions

At this screen you should select the level that you wish to use for your child. You can also select the Allow list only option, which will only allow the web sites you specifically allow in the allow list, described next. Furthermore, you can override restriction level settings by specifically adding a site to the allow list. The Allow or block specific websites setting allows you to specify the particular sites that your child can or cannot visit. The sites you enter in this section override the web restrictions you previously set up. This means that if you add a site to the allow list and it's not in the restriction level you selected, the child will still be able to visit it. The reverse holds true as well for sites you add to the block list: even if the site is part of a category you allow, the child will not be able to visit it because you specifically blocked it.

Figure 10. Allow or Block Websites

To allow or block a site, simply enter the address in the field and press the Allow or Block button. When entering the address, you can enter the domain (example.com) and every page on that domain will be covered. If you wish to specify a particular page, then you should enter the entire address and press the desired permission. For example, if you block example.com then every web page on that domain will be blocked. On the other hand, if you block example.com/games/ only web pages under the games folder will be blocked, but the child can access other pages on the example.com domain such as example.com/forums/.

Time limits
The Time Limits section is used to configure specific times or the amount of time a child can use the computer. By default, Family Safety allows a child to use the computer whenever they want and for as long as they want. To restrict their usage you can use the Set time allowance and Curfew settings. If you click on Set time allowance you will be brought to a screen where you can specify how much time per day they are allowed to use the computer.

Figure 11. Time Allowance

Please note that you can specify different allowances for weekdays and weekends in case you wish to give them a bit more time during the weekend.
If you select the Curfew option you will be brought to a screen where you can specify the hours of a particular day during which the child cannot use the computer.

Figure 12. Curfew Settings

To select the time frame per day during which they cannot use the computer, simply left-click on the beginning time and drag to the right. The time frame during which they are not allowed to use the computer will become highlighted in blue, and the child will not be able to use the computer during those times.

Windows Store and game restrictions
The Windows Store and game restrictions section allows you to specify the games that you wish to allow your child to play. By default, Windows 8 will allow your child to play every game installed on the computer. To change this setting you should select the Child can only use games and Windows Store apps that I allow option. This will enable two other options called Set game and Windows Store ratings and Allow or block specific games. The Set game and Windows Store ratings section allows you to select the game rating level that is appropriate for your child.

Figure 13. Game Rating Level

You can also specify whether or not you wish to block games that do not have a rating. To change the rating system Family Safety uses, please see the Rating Systems section below. If you wish to allow or block specific games regardless of their rating, you can select the Allow or block specific games option. This will open a screen that allows you to specify how you wish a particular game to be handled.

Figure 14. Allow or Block Games

This screen will contain a list of all installed games and, for each, whether you wish to use the rating setting, always allow it, or always block it. Once you make the desired changes, you can close the Allow or Block Games screen.

App restrictions
This category allows you to specify what applications on the computer the child is allowed to use. By default, this setting is configured so that a child can use all apps on the computer. If you wish to restrict the applications a child can use, please select the Child can only use the apps I allow option. When you select this option, a list of available programs will be displayed.

Figure 15. App Restrictions

To allow a child to use a particular application, simply place a check mark next to the desired app. If the app you wish them to use is not listed, you can click on the Browse button to select the executable (.exe) file that you wish the child to use. When you are done selecting the applications, you can go ahead and close the App Restrictions window.

Rating systems
It is possible to change the game rating system that will be used by Family Safety. As different countries use different rating systems, it is important to select the one that is appropriate for your child and location. To do so, please click on the Rating Systems link shown in the main Family Safety screen as seen in Figure 5. This will open a screen showing a list of all the rating systems supported by Family Safety.

Figure 16. Rating systems supported by Windows 8

Scroll through the list of rating systems and select the one you wish to use. Family Safety will now use this rating system when restricting access to a particular game.

Viewing activity reports
To view the activity reports for a particular child, you need to go to their Family Safety settings screen as shown in Figure 8. Once in that screen you should see an option to View activity reports. When you click on that option, a screen will open that shows the past week's activity for the child.

Figure 17.
Activity Report

You can then look through the different activity categories in the left-hand navigation bar to see what other activity the child engaged in this week. You can also access your child's activity through the familysafety.microsoft.com web site. In order to use this site, you must be using a Microsoft account instead of a local account. Once you are logged into the Family Safety web site, you have full access to your children's activity reports as well as the ability to modify their restrictions remotely.

Frequently asked questions

Question: How do I enable Family Safety on a computer connected to a domain?
Windows is configured not to display Family Safety options on a computer connected to a domain. To enable Family Safety on a domain you can use the Group Policy Editor and enable the Make Family Safety control panel visible on a Domain policy. This policy can be found under Local Computer Policy\Administrative Templates\Family Safety.

Question: How do I add another parent to Family Safety so that they receive activity reports?
If you want another parent to receive activity reports, you can add them through the http://familysafety.microsoft.com/ web site. Once logged in, you will see two links under the list of family members. The first link, Add a new parent, allows you to add a parent who does not have an account on the Windows 8 computer. The second link, Make a family member a parent, will allow you to select an existing account to add as a parent. Once you add a parent, they will start to receive activity reports as well.
Most data recovery laboratories tend to highlight cases that involved physical damage and needed repair in an environment free of dust and contaminants. These are great cases, because they are usually dramatic, have visual appeal, and they highlight an expensive investment for a data recovery lab – a clean room. We like these stories too, but what we want to confront with this post is the “logical data recovery.” This is where the data can’t be accessed because some link in the chain of instructions a computer expects to follow is missing or unreadable. Maybe you turn on your computer, and the drive sounds normal, but you can’t get past a blue screen. Maybe you plug in your external drive, the light comes on, it hums along as usual, but you no longer can access your files. The data on a hard drive has to be thoroughly organized for a computer to access it reliably. Think of a library: it’s functional because it has a very methodical system for cataloging its contents and allowing a reader to logically determine where information lies. Electronic storage devices are even more reliant on a precise and complete organizational system that can direct a machine to an exact location. A hard drive’s organizational structure typically starts at Sector 0 with a partition table, which is part of something called a Master Boot Record. The partition table shows the basic divisions within the hard drive. True to its name, you can think of a partition as a boundary — one room within your hard drive. If you haven’t done any partitioning of your hard drive, it’s likely it has one main partition with your operating system and all your files, and a backup partition with the original factory settings. These used to come on a set of DVDs; now manufacturers typically place this data on a restore partition. But maybe you’ve partitioned your drive several times, and you have one partition that’s formatted differently to hold your old files from a Mac computer. Whatever the case, these partitions are distinct. From a logical standpoint, they are almost like separate drives. At the beginning of any partition is a boot sector. A boot sector gives directions within the boundaries of the partition. It doesn’t know anything about the world outside the partition, such as where the partition lives on the hard drive. If you picture reading a partition as entering a room, imagine the boot sector just inside the door, ready to guide you. It doesn’t know where the room is, but it knows the room itself; it can tell you to go 20 paces from the door to find a map of the information you seek. In real terms, the boot sector gives the location of the master file table, the root directory and the bitmap in relation to the boundaries of the partition. Taking those new terms in order, the master file table is constantly changing with the data held on the partition. It is a record of all the file names and where they live. The master file table exists in chunks or segments; it’s usually scattered around the partition. And again, it changes each time you add, alter or delete a file. The root directory is the beginning of the structure you control to organize your files. You probably have noticed paths to get to your files. The first step of this path is the root directory. Thirdly, the bitmap tells your hard drive where data has been written and where there’s available space. It doesn’t tell what’s where, it simply shows what’s been used. 
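None of the structures described above are exotic. The first of them, the classic MBR partition table, sits in a documented spot -- 64 bytes at offset 446 of sector 0, holding four 16-byte entries -- and can be read with a few lines of Python. This is only an illustrative sketch, not anyone's recovery tooling: disk.img is a placeholder path (a raw device such as /dev/sda needs administrator rights), and it covers the classic MBR layout, not newer GPT disks.

    import struct

    # Read sector 0, which holds the Master Boot Record.
    with open("disk.img", "rb") as disk:
        sector0 = disk.read(512)

    # A valid MBR ends with the two-byte boot signature 0x55 0xAA.
    assert sector0[510:512] == b"\x55\xaa", "missing MBR boot signature"

    # Four 16-byte partition entries start at offset 446.
    for i in range(4):
        entry = sector0[446 + 16 * i : 446 + 16 * (i + 1)]
        # status byte, 3-byte CHS start (skipped), type byte,
        # 3-byte CHS end (skipped), 32-bit start LBA, 32-bit sector count
        status, ptype, lba_start, sectors = struct.unpack("<B3xB3xII", entry)
        if ptype == 0:
            continue  # empty slot
        print(f"partition {i}: type=0x{ptype:02x}, "
              f"starts at sector {lba_start}, length {sectors} sectors")

If those 64 bytes are zeroed or overwritten, the loop above prints nothing at all -- the "room" described above has lost its walls, which is exactly the situation a logical recovery starts from.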
There are a lot of other things to explore and detail, but if this is your introduction to how data is organized on a hard drive, you deserve credit for getting through that. It's enough, I hope, to illustrate some of the issues with logical data recoveries. You can probably already guess how this relates to data recovery: when some of the information we discussed above is gone, your information is suddenly inaccessible. It could have happened due to an unexpected shutdown, or due to data being written over something important. Whatever the reason, perhaps there is no longer any partition table, so the hard drive doesn't know how to find the boot sectors. Or the boot sectors themselves are blown away. The more of this metadata that is lost, the more interesting it is to try to find and make sense of the data written to the drive.

Now when you walk into the room, it's dark. There is no guide to point you to the map of your information. You're not even sure you're in the room, to be honest. You pick up a note that says some useful information is 20 paces from the wall, but there's no longer a wall. That's the picture as the metadata of a hard drive vanishes. Logical obstacles, such as bad sectors and other data corruption, will sometimes make data recovery more difficult than the visually appealing and more popular mechanical repair cases. Mechanical repairs are only half of the data recovery process. After the drive is restored to a workable state, our engineers still have a critical task ahead of them: retrieving the actual data from the failed device. Here at Gillware, we have our own data recovery software platform to help retrieve this type of data and make your life and our engineers' lives easier. If you are experiencing logical problems with your device, give us a call. We will be more than happy to consult with you and provide a free evaluation.
When it comes to agriculture and farming, success is always dependent on cycles of one kind or another. Seasons change, precipitation varies, climate patterns shift… In short, there are times of plenty and there are times of great demand, so farmers need to scale their resources automatically and provision according to immediate needs and their own spending, yield and other policies. Does that model sound familiar – like perhaps the case for cloud computing? While the shifting, cyclical demand creates plenty of uncertainty for farmers, the sporadic, constantly changing nature of needs versus spare capacity creates an ideal environment for cloud computing models to thrive.

Cloud computing is catching on for large providers of agricultural services and as a tool to help agricultural researchers in the field and in the lab. This year there should be a wealth of new use cases that highlight the way a number of technologies come together – everything from application development for mobile devices operating in the cloud to new sensors that send data to remote resources – all of which are either enabled or enhanced by cloud. Agricultural cloud computing use cases are wide-ranging, from the refinement of planting and harvesting operations to research based on the integration of global positioning data and in-field studies. What is interesting about this field is that cloud computing in agriculture is benefiting from improvements in mobile, sensor, GIS, GPS and other technologies in tandem. Use cases for agricultural production and research projects go far beyond simple remote hosting.

Of the synthesis of new technologies enabled by the cloud, Emily Padfield states, "Radio-frequency identification tags (RFIDs), which can hold and automatically download a mass of data, are becoming part of agriculture. Bale tagging systems that hold data on the bale's moisture content, weight and GPS position of where they came from in the field already exist, but in the future, micro-tags of the size of soil particles will be deployed extensively in fields measuring such things as moisture, disease burden and even whether the crop is ready to harvest or not."

Padfield continues about the merger of cloud computing with a number of newer technologies, noting: "Mountains of GPS-sourced 3D data is now being gathered on farms, but instead of experts analyzing the data, the job is being done by automated computer systems that allows farmers to benefit from new techniques almost immediately… And all of that information coming from farm mapping, machines working in the field and remote sensors will be transferred to remote servers for access anywhere."

Mobile phones and tablets are leveraging cloud computing live from the field (as in the case of the USDA's new Object Modeling System for agricultural research, for example) and large internal IT systems are being virtualized to improve efficiency, given that needs literally change with the season, as do requirements for compute capacity. Some large farm-focused companies have already looked to streamline their operations using cloud computing. Monsanto, a global provider of agricultural products, is among the first large-scale agriculture companies to jump on board with cloud computing solutions. During planting season, Monsanto sees massive increases in its need for IT services, which makes it an ideal candidate for scalable systems that change with demand.
In October, Monsanto selected BMC Software and the Cisco Unified Computing System to create an IT environment that would scale automatically and help Monsanto reach its stated goal of 70 percent server virtualization.

In April of last year, Fujitsu announced that it would be rolling out a series of cloud computing services for Japan's agriculture industry. This was deemed a good fit for the country because "agricultural producers and corporations throughout Japan are scattered and given the relatively limited size of their operations, most lack ICT skills and dedicated professionals." The Fujitsu agriculture services were hosted in one of Fujitsu's Japanese datacenters and ranged from farm management and accounting tasks to agricultural product safety.

Supporting agriculture at the national level as Fujitsu did in Japan, especially for a population that lacks access to sophisticated computational power and tools, is one way that we might see cloud computing in agriculture explode over the coming years. It is a perfect fit: demand fluctuates, software-as-a-service provides the tools needed without installation on machines that would require capital investment, and most farmers are not IT experts and need abstraction from the technical layers.

Big agricultural companies and focused research efforts to aid in collaboration and better farming techniques are seeing some benefit in a shift from on-site systems to remotely hosted platforms. While a great deal outside of research is not necessarily in the HPC realm, it is interesting to watch how a "traditional" enterprise is making use of new paradigms in IT.
Some of the world's biggest IT companies and their suppliers are contaminating rivers and underground wells in developing countries with a wide range of hazardous chemicals, according to Greenpeace.

The environmental campaigning group has released a report called 'Cutting Edge Contamination: A study of environmental pollution during the manufacture of electronic products'. Analysis of samples taken from industrial estates in China, Mexico, the Philippines and Thailand reveals the release of hazardous chemicals in each of the three sectors investigated: printed wiring board (PWB) manufacture, semiconductor chip manufacture and component assembly.

Most noteworthy, said Greenpeace, was the discovery at most of the investigated sites of polybrominated diphenyl ethers (PBDEs), a group of brominated chemicals used as flame retardants, and of phthalates, chemicals used in a wide range of processes and materials, though most commonly as plasticisers (softeners) in some plastics.

"Over recent years we have seen an increasing concern over the use of hazardous chemicals in electronic products but attention has focused on the contamination released during disposal or 'recycling of electronic waste'," said Dr Kevin Brigden from the Greenpeace Research Laboratories. "Our findings of contamination arising during the manufacturing stage make it clear that only when we factor in the complete lifecycle will the full environmental costs of electronic devices begin to emerge."

Zeina Al-Hajj, toxics campaigner for Greenpeace International, said: "There is shockingly little information on precisely which major brand companies are supplied by which manufacturing facilities. Responsibility for the contamination lies as much with those brands as with the facilities themselves. There has to be full transparency regarding the supply chain within the electronics industry, so that brand owners are forced to take responsibility for the environmental impacts of producing their goods."

The study also documents the contamination of groundwater aquifers at a number of sites, particularly around semiconductor manufacturers, with toxic chlorinated volatile organic chemicals (VOCs) and toxic metals including nickel. Contamination of groundwater is of particular concern, said Greenpeace, since local communities in many places use groundwater for drinking water.

At one site, the Cavite Export Processing Zone (CEPZA) in the Philippines, three samples contained chlorinated VOCs above World Health Organisation (WHO) limits for drinking water. One sample contained tetrachloroethene at nine times the WHO guidance values for exposure limits and 70 times the US Environmental Protection Agency maximum contaminant level for drinking water. Elevated levels of metals, particularly copper, nickel and zinc, were also found in groundwater samples at some sites.

The use of such toxic chemicals in manufacturing processes also poses potential risks to workers through workplace exposure. Wastewater discharged from an IBM site in Guadalajara, Mexico contained hazardous compounds, including some (such as the potent hormone disruptor nonylphenol) that were not found at other sites. IBM's Supplier Conduct Principles Guidelines state that suppliers should operate in a manner that is protective of the environment.
"IBM should act upon our findings and investigate activities at the site in order to prevent any releases of persistent organic compounds from the Guadalajara site," Al-Hajj said.

IBM has so far not responded to the Greenpeace report.
Supercomputer architectures have evolved considerably over the last 20 years, particularly in the number of processors that are linked together. One aspect of HPC architecture that hasn't changed is the MPI programming model. To get around the bottleneck that MPI poses to exascale computing, developers are banking on the new GPI programming model to unlock the potential of future parallel architectures.

GPI, which stands for Global Address Space Programming Interface, takes an entirely different approach from MPI to enabling communication among processors in a supercomputer. The model implements an asynchronous communication paradigm that's based on remote completion, according to a story in Phys.org. Each processor in a parallel HPC system can directly access all data, regardless of where it resides and without affecting other parallel processes. This gives GPI the potential to scale beyond what's possible with MPI, and to fully exploit today's highly parallel clusters of multicore systems, using traditional HPC programming languages such as C and Fortran.

The effort to create GPI was spearheaded by Dr. Carsten Lojewski from the Fraunhofer Institute for Industrial Mathematics ITWM. Lojewski was working on an HPC problem involving seismic data, and the existing methods weren't working. "The problems were a lack of scalability, the restriction to bulk-synchronous, two-sided communication, and the lack of fault tolerance," Lojewski tells Phys.org. "So out of my own curiosity I began to develop a new programming model."

The GPI model, which was first unveiled at the ISC 2010 conference in Hamburg, continues to be developed by dozens of developers around the world, including Rui Machado from Fraunhofer ITWM and Dr. Christian Simmendinger from T-Systems Solutions. Together with Lojewski, the two were awarded the Joseph von Fraunhofer prize this year for their work.

GPI is also finding its way into production as development continues. According to Simmendinger, the European aerospace industry worked with the German Aerospace Center (DLR) to port an aerospace HPC program called TAU to use GPI. The results have been impressive. "GPI allowed us to significantly increase parallel efficiency," Simmendinger tells Phys.org.

GPI is not a plug-in replacement for MPI, and it requires developers to port applications to the new low-level API. Squeezing the most benefit from GPI also requires applications to be multi-threaded, which may bring additional work. But based on early reports, GPI has a promising future as the communications protocol for tomorrow's exascale supercomputers.
Today there are over 200,000 cable Wi-Fi hotspots. They're hidden in and around cities, among telephone poles and streetlights, and they're delivering broadband to our ever-growing collection of Internet-connected devices. But because there are so many devices, many running bandwidth-hungry applications, Wi-Fi resources are becoming dangerously congested. Publicly available Wi-Fi is struggling to keep up with demand. The solution is access to additional spectrum.

Wi-Fi is critical to economic growth and innovation, and the new "Gigabit Wi-Fi" standard can only offer Americans gigabit speeds if the FCC makes available the 160-MHz-wide channels on which it depends. Fortunately, the FCC is on the verge of opening the 5.1 GHz band (U-NII-1) to outdoor Wi-Fi, moving us substantially closer to President Obama's commitment to find 500 MHz of new broadband spectrum.

At present, the only objection to rule changes that would permit more effective Wi-Fi use of the U-NII-1 band has come from Globalstar, an incumbent user of U-NII-1 frequencies, which asserts that sharing with Wi-Fi is infeasible. But according to a new study from researchers at CableLabs and the University of Colorado, Wi-Fi operating with valid system characteristics will not cause harmful interference. The researchers also provided the FCC with a second, more intensive type of interference study, which again confirmed this conclusion.

The study goes on to explain that Globalstar uses the 100 MHz-wide band for only four U.S. feeder links serving 85,000 duplex customers. With such light traffic, Globalstar's system can share with Wi-Fi without any customer impact. There is no reason for the FCC to delay action in changing rules to permit wider sharing of frequencies in the U-NII-1 band with current and next-generation Wi-Fi devices.

As a country, we have the opportunity to establish ourselves as a global leader in public Wi-Fi availability, speed, and scale. The possibilities are practically endless if we are willing to work together in common purpose. With the analysis provided in the new study, the FCC should be able to move expeditiously to resolve remaining issues and to render a decision that will speed the development of next-generation Wi-Fi.
Tech Glossary – L to O

Linux
Linux (pronounced "lih-nux", not "lie-nux") is a freely distributed Unix-like operating system (OS) created by Linus Torvalds.

IP Address
Also known as an "IP number" or simply an "IP," this is a code made up of numbers separated by three dots that identifies a particular computer on the Internet. Every computer, whether it be a Web server or the computer you're using right now, requires an IP address to connect to the Internet. IP addresses consist of four sets of numbers from 0 to 255, separated by three dots, for example "220.127.116.11" or "18.104.22.168".

MAC Address (Media Access Control Address)
A MAC address is a hardware identification number that uniquely identifies each device on a network. The MAC address is manufactured into every network card, such as an Ethernet card or Wi-Fi card, and therefore cannot be changed.

Mac OS
This is the operating system that runs on Macintosh computers.

Megabyte
A megabyte is 2 to the 20th power, or 1,048,576 bytes. It can be estimated as 10 to the 6th power, or one million (1,000,000) bytes. A megabyte is 1,024 kilobytes and precedes the gigabyte unit of measurement. Large computer files are typically measured in megabytes.

Name Server
A name server translates domain names into IP addresses. This makes it possible for a user to access a website by typing in the domain name instead of the website's actual IP address. For example, when you type in "www.microsoft.com," the request gets sent to Microsoft's name server, which returns the IP address of the Microsoft website.

OEM (Original Equipment Manufacturer)
This refers to a company that produces hardware to be marketed under another company's brand name. For example, if Sony makes a monitor that will be marketed by Dell, a "Dell" label will get stuck on the front, but the OEM of the monitor is Sony.
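A couple of these terms are easy to see in action from a Unix-like shell. The sketch below is illustrative only; command availability varies by system, and the domain is just an example:

# Name server in action: ask DNS for the IP address behind a domain name
dig +short www.microsoft.com        # or: nslookup www.microsoft.com

# MAC address: list each network interface and its hardware address
ip link show                        # Linux; on BSD/macOS, use: ifconfig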
Any day now, trucks will begin arriving weekdays at the University of Texas (UT) at Austin to begin hauling away thousands of books. The books will be ferried to undisclosed locations to be scanned and put into a database posted on Google.

By scanning books from libraries, Google is creating the largest digital database of books in the world through the Google Books Library Project. The effort began in December 2004 and makes books searchable online, the same way Google makes Web sites searchable. For copyrighted books, users see only bibliographic information and book snippets. Books in the public domain, however, can be downloaded from cover to cover.

Because of the promise of digital archives and preservation of books, libraries nationwide are participating in the program. However, the project has been bogged down by controversy and copyright infringement lawsuits. While Google says the project is only meant to make books more accessible to the public and to generate interest to the benefit of authors and publishers, critics argue Google's scanning of copyrighted material - although it's not publicly accessible - is entirely illegal.

Full Speed Ahead

Despite ongoing court cases, Google is moving forward with its project, scanning and digitizing thousands of volumes of books each day from libraries worldwide, including the New York Public Library, the Complutense University of Madrid and the National Library of Catalonia (and four affiliate Catalonian libraries). With millions of books available electronically on Google Book Search, the company's Web site purports that it's "expanding the frontiers of human knowledge."

"Google's mission is to organize the world's information and make it universally accessible and useful," said Adam Smith, product management director at Google. "That mission would be incomplete if we did not include books. There's an incredible wealth of knowledge held on bookshelves in libraries and publishing houses, and we want to help people find it."

Google is also piloting the World Digital Library with the Library of Congress. The project is an online collection of rare books, manuscripts, maps, posters, stamps and other library materials. Google has contributed $3 million to the project.

The UT Libraries system, which includes the Austin UT Library, is the fifth largest academic library system in the United States, containing more than 9 million books. The UT Libraries joined the Google Library Project in January 2007 with an initial six-year contract agreement to digitize 1 million books, although all 9 million books will be considered for digitization. Library officials are ironing out details of book handling with representatives from Google, who are particularly interested in UT's Latin America collection, regarded as the nation's most comprehensive.

Similar to other library partnerships, books will be scanned by Google and approved by the university. In return for their involvement in the project, the UT Libraries will receive digital files of scanned books, which will help with long-term preservation of the volumes. All books, no matter how carefully they are handled, will deteriorate over time, said Doug Barnett, chief of staff for the UT Libraries.

"There are many ways we go about safeguarding the books, and there are different levels of safeguards depending on how valuable or rare or fragile the item is, but to the degree we let anybody use them, there's always a risk of damage.
And in terms of the very long term, even carefully handled, the materials will eventually deteriorate," Barnett said. "It's not a complete solution by any means, or the only way the libraries are approaching this, but it does have the value of helping preserve the information. Even if the item itself was to somehow be lost or destroyed, there would be a digital copy of it."

Librarians are eager to participate in the Google Books Library Project because it coincides with their goal of making books more visible and accessible, Barnett said. Google is also working with more than 10,000 publishers to copy books and give limited previews, which is expected to help the book industry by giving more exposure to new and out-of-print books.

"We feel like there are incentives for many parts of the book industry to become involved here, both libraries in terms of sharing information, but also authors and publishers, by making people more aware of work and providing opportunities to find them in the library or buy them," Barnett said.

Books from the UT Libraries scanned for the project are expected to be in Google Book Search next year, but some of the titles previously digitized are already available at the beta version of the Google Book Search Web site <http://books.google.com>. Books on the Web site have already been integrated into standard Google searches.

When a search is entered in Google Book Search, several books are typically revealed, with bibliographic data: title, author, publication data, length and subject. If a book is out of copyright - for instance, literary classics like Moby Dick and Sense and Sensibility - the book can be read online and downloaded. Key terms entered in the book search are highlighted throughout the book to assist in information search. True to old library books, some of the scanned books contain marks, such as underlines and notes from previous users. For copyrighted books, an entry similar to a card catalog is shown with basic information about the book. Either way, the search engine directs users to places they can buy or borrow the book.

Google's is not the first book digitization project. Carnegie Mellon University hosts a project called the Universal Library, which has scanned nearly 1.5 million books; the Open Content Alliance, an association of technology, nonprofit and governmental organizations and several major college libraries, has scanned more than 100,000 books; Microsoft's Windows Live Book Search service is considered a response to Google's project; the Library of Congress has the American Memory project; Amazon.com has digitized hundreds of thousands of books it sells; and there are many smaller projects.

Yet because of the Google project's magnitude, two lawsuits filed in a federal court in New York have challenged it - one suit coming from several writers and the Authors Guild, and the other from a group of publishers, including McGraw-Hill, Penguin Group, Simon and Schuster, and Pearson Education. The lawsuits contend that Google's copying of complete volumes under copyright constitutes infringement, even though Google does not make the full texts of copyrighted material available through its Web site. The publishers who filed suit are actually collaborators on the initiative, who acknowledge that the Google Books Library Project will help promote and sell books but remain opposed to the copying of copyrighted work without consent.
The Authors Guild is also concerned with the vulnerability of the massive digital databases of books held by Google and partner universities. "In our view, Google needs a license to do what it wants to do," said Paul Aiken, executive director of the Authors Guild. "There are additional concerns with security, since authors and publishers have not been contacted to audit security measures to make sure the database is reasonably hacker-proof and the data center is secure." The Authors Guild is also concerned that Google could set a precedent for other scanning ventures, with a widespread proliferation of copyrighted text available for viewing, Aiken said.

Google argues that Google Book Search, which is designed to comply with international copyright laws, helps people find books and increases the incentive of publishers to publish them. "Because Google Book Search makes all of the knowledge contained within the world's books searchable online, it exposes readers to information they might not otherwise see, and it provides authors and publishers with a new way to be discovered," Google's Smith said. "Many of our publishing partners are reporting increased sales and traffic to their sites since joining the program."

Libraries across the country have known for some time that they need to adapt to the digital age, and they developed their own digitization projects before Google's. Stanford University founded HighWire Press in 1995, which provides electronic access to more than 1,000 scholarly journals. When Stanford digitized its card catalog a few years later, its book circulation increased by 50 percent. Around the same time, the library at Princeton University digitized its card catalog. Other libraries nationwide are digitizing their collections. Some publishers who are partnering with Google have talked with the company about making their books available for purchase online through Google Book Search.

Regardless of the outcome of the lawsuits, the Google Books Library Project signifies a major shift for libraries into the digital world. Google will help reacquaint a new generation with books in libraries and direct students and other researchers to information that is more reliable than Web sites, according to the American Library Association. However, the association warns that Google's search index could allow viewers to forgo library research and instead rely on the information snippets provided by Google Book Search.

Regardless, librarians across the country, including Barnett and his staff at the UT Libraries, are waiting to see how Google's project will fit into the future of libraries and research. "I don't think this is the only library of the future, but this sort of digital collaboration is part of the future," Barnett said. "Certainly working with various technology and various partners to make information easier to find and more widely accessible is a major part of the future."
One of the challenges in mobile computing is battery life. It's hard to be productive with a dead battery, so IT personnel and users alike need to think about maximizing run time between charges. Optimizing the power conservation settings of a mobile computer or communicator - including dimming the display when on battery, turning off the display and hard drive after a pre-set period of time, suspending (keeping memory alive but the computer otherwise powered down) and hibernating (writing the image of main memory to disk for later resumption) - helps in getting the most out of any given charge. (Read a related story on how to get the most out of your battery.)

There are also power conservation settings in most Wi-Fi adapters that (at first glance, anyway) are intended to allow a high degree of control over the power consumed by the wireless network interface card (NIC) found in almost all notebooks and many handhelds as well. In gross terms, wireless power conservation involves turning off the radio, synchronously or asynchronously with the fixed infrastructure, for a portion of time - a technique used in various forms on essentially all production wireless systems today, including WANs.

But this technique motivates an interesting and fundamental question: do Wi-Fi power-conservation techniques, when enabled, actually save a meaningful amount of energy or have any negative impact on throughput? We set out to define a simple test to answer these questions as they pertain to 802.11's Power Save Mode (PSM), the most common form of Wi-Fi power saving implemented today. We do note that there are several new power-saving mechanisms defined for 802.11n gear (see related story on standards), but we have not found those to be widely implemented, so we could not assess them at this juncture.

Vendors have delivered a number of PSM variants, with the primary difference being how quickly and how often the adapter wakes up. Having a NIC wake up faster could negatively affect power consumption, the fundamental tradeoff in this strategy, although it could theoretically improve throughput. The opposite of PSM is Constantly Awake Mode (CAM), in which PSM is disabled. Our test compared various forms and implementations of PSM against CAM and, for good measure, a wired gigabit Ethernet baseline.

Using PSM in our tests produced only a marginal benefit in terms of battery life (and was even slightly worse than CAM in one test). In terms of throughput, the results ranged from marginally positive to a very negative impact in two of the cases tested. Bottom line: PSM isn't likely to be of any value in contemporary implementations, and it may even hurt performance.

We contacted all vendors whose products were included in this test regarding the results. Only Broadcom's PR department would comment, saying that its internal testing showed that battery-life gains from PSM implementations in notebooks vary between brands, sometimes showing that PSM can maximize battery life with no impact on throughput.

Test configuration and procedures

The basic test strategy was to copy a file of roughly 1MB (1,095,680 bytes, to be precise) from a source computer to a destination computer as many times as possible, beginning with a fully charged battery and ending each test run when the notebook computer went into hibernation as a result of near exhaustion of the battery, defined in this case as 5% battery charge remaining.
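In outline, the harness behaves like the following bash sketch; the actual test used a simple DOS .bat file, described next, and the file paths here are assumptions, standing in for a share exported by the notebook under test:

#!/bin/bash
# Hedged re-creation of the battery-rundown loop. It runs on the
# mains-powered destination machine and pulls a ~1MB file from the
# battery-powered notebook until the notebook's battery gives out.
SRC=/mnt/notebook/testfile-1mb.bin   # hypothetical mount of the notebook's share
DEST=/tmp/testfile-1mb.bin
i=0
while true; do
    date                    # print the time of day
    cp "$SRC" "$DEST"       # copy the file from source to destination
    sleep 3                 # pause so the Wi-Fi radio can drop into PSM
    i=$((i + 1))
    echo "iteration $i"     # increment and display the loop counter
done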
The test was driven by a simple DOS .bat file, the logic of which was to print the time of day, copy the file from source to destination, pause for three seconds, increment and display a counter indicating the loop iteration, and then run continuously until the battery gave out. The purpose of the pause was to allow more than enough time for the notebook to go into PSM and to simulate a fairly low Wi-Fi usage duty cycle, so as to maximize the time the radio was asleep. The test script was run on the destination computer so that transactions would be initiated and recorded by the mains-powered computer.

The destination in all test cases was a Dell 4500 desktop upgraded with a PCI gigabit Ethernet adapter and running Windows XP Pro with all current updates applied as of the date that testing began. Power conservation features on this machine were disabled for all test runs, as it was operating on AC power.

We used two different source computers, both notebooks: an Acer Aspire 5920 equipped with Vista Ultimate (including all updates available as of the date of the test run, but not including SP1) and featuring both gigabit Ethernet ports and an integral Intel 4965 a/g/n wireless adapter; and an HP Compaq nx6125 with gigabit Ethernet and a Broadcom 802.11 b/g radio. The two machines were directly connected to each other either via a gigabit Ethernet link (for baseline testing) or via a wireless connection (using an access point). At no time were any elements connected to any other network or the Internet.

The Acer's built-in wireless adapter was used in 802.11g mode only, with a Netgear WNR854T router (used only as an access point in this test) forced to 802.11g mode as well. The Acer internal adapter was also used in 802.11n mode with a Linksys WRT350N router (again, as an access point). Similar 802.11g testing was performed with the HP notebook, using the Linksys AP, and also with two external 802.11n adapters (a Linksys PC card and an SMC USB adapter) connected to the Linksys AP. All of this provided a good variety of test cases.

Our test procedure involved first establishing a baseline for performance in terms of throughput. We then repeated the test with each Wi-Fi client adapter/access point pair, in each case the only variable being the level of client Wi-Fi power conservation. Both notebooks kept the hard drive on all the time, and the Acer was set to 50% display brightness while the HP's display was kept all the way up. We used a spectrum analyzer to monitor for any high-amplitude interference that might affect results throughout all test runs, and none was observed.

Less than satisfying results

What jumps out almost immediately from this data is that PSM in any form delivered very little in terms of additional run time, and occasionally had a major detrimental impact on throughput. The best improvement in runtime that we saw was a little over 8%, in the case of the Linksys AP/Linksys adapter running on our HP notebook with PSM enabled. That said, this combination also simply decimated throughput, to less than half that of the CAM case. Interestingly, this same combination of gear with "Fast" PSM enabled still produced 4% better run time and yielded a 0.5% gain in throughput. Overall, though, it was clear that PSM was not contributing to significantly longer runtimes, and thus appears to have a negligible impact on notebook battery life. Moreover, in most cases, throughput was adversely affected and, where it was not, no real benefit was noted.
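Readers who want to try a similar comparison on Linux hardware can toggle PSM and CAM from the command line. A minimal sketch, assuming a wireless interface named wlan0 (the interface name, and whether your particular driver honors the setting, are both assumptions):

# Disable 802.11 power management -- the equivalent of CAM
iwconfig wlan0 power off

# Re-enable Power Save Mode
iwconfig wlan0 power on

# Confirm the current setting
iwconfig wlan0 | grep -i "power management"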
The reason PSM saves so little is the relatively large amount of power consumed by a modern notebook in comparison with the energy used by today's Wi-Fi adapters. The 802.11 standard was initially developed at a time when processor clocks were in the 100MHz to 200MHz range, and initial WLAN designs involved a significant number of power-hungry components. Today, however, Wi-Fi adapters are highly integrated -- meaning fewer chips are required to implement a Wi-Fi solution -- and designs are more power-efficient, while the notebook's other components -- most notably the processor (because of higher clock rates) and the display and backlighting (due to much higher resolutions) -- often consume more energy than in the past. Notebook designers have compensated with larger batteries, a continual emphasis on power-conservative designs, and, in many cases, provisions for a high degree of end-user control over power conservation settings, but the proportion of energy consumed by the computer versus the WLAN adapter has clearly flipped.

As a consequence, it would be hard to encourage users to enable PSM in their daily operations. PSM is mostly harmless, but it can also have very negative performance impacts. We also noted, in testing some of the power-save modes on the Intel adapter, that test runs would not complete, timing out with an error message indicating that the notebook was simply not responding fast enough to meet application demands. Users thus need to be cautioned about setting PSM options without some knowledge of the possible consequences.

Saving energy in any form is, as Martha Stewart might say, a good thing. But, more importantly, anyone who is mobile knows that, after dropping one's mobile computer or communicator on a concrete floor, the most likely failure mode for these devices is a battery going dead. While I still recommend carrying a fully charged spare battery for all critical mobile devices essentially everywhere (understanding that this is problematic with notebook batteries, which tend to be large, heavy, and expensive), anything we can do to optimize battery life without creating a significant impact on network throughput needs to be considered, if not implemented as a matter of policy. Our tests show, however, that a slam-dunk case for Wi-Fi Power Save Mode cannot be made.

As a final note, it's also important to point out that we've only been considering the client-side elements of power conservation. While infrastructure plays a critical role in the implementation of the protocol-related elements of WLAN power management, it also makes sense to examine the power - and thus the cooling and cost - impacts of all WLAN infrastructure-side equipment, most importantly access points. Not all 802.11n access points, for example, can run on 802.3af Power over Ethernet, so it is wise to consider access point power consumption when evaluating new equipment. While this may not be the deciding factor in a purchasing decision, it should at least be an item in the RFP.

Mathias is a principal with Farpoint Group, an advisory firm specializing in wireless networking and mobile computing. He can be reached at email@example.com.
If you're reading this, you're a brave man or woman indeed. Yes, you're one of life's risk takers. But before you start picturing yourself alongside the likes of a highwire walker, trapeze artist or sky diver, a little context is required.

It seems that working in computers is, in itself, a potentially life-threatening occupation, and that the more time you spend working in front of computers, the more dangerous it is. Who would have thought that field sales would be less dangerous than programming, for example, or even accounts? Turns out it is, at least if a recent study by Emmanuel Stamatakis at University College London is to be believed.

The study found that people spending four or more hours in front of a screen (TV or computer) had a 48% higher risk of mortality (death), and the risk of heart-related disease was more than doubled for anyone who spent more than two hours sitting watching TV, playing a video game or in front of a computer.

The only caveat I have is that the study was based on respondents to the Scottish Health Survey 2003. As a Scot, even I know Scotland is a country infamous for its unhealthy diet. So much so, I wouldn't be surprised if some people thought adding crisps to a plate of sandwiches made a potato salad.

Anyway, just to be on the safe side, it might be a good idea to get up and stretch your legs a bit. Go get a cup of tea or coffee - and don't take the lift.
Malware has been employing anti-researcher and detection-evading tactics almost since the beginning of malicious code. And while phishing and spam have been using detection-evading techniques for ages, anti-researcher tactics seem to be a new tool in their arsenal. Both phishing and spam have long favored the shotgun approach, preferring quantity to quality when it comes to finding victims. According to an article in SC Magazine, this is beginning to change.

The change employs a simple tactic that's commonly used in emails sent by companies to existing customers: it includes a link that can only be accessed by the targeted user. Anyone else accessing the link will be given an error message. By using this technique, attackers make it difficult for anyone who isn't the targeted user to view the phishing email, and it makes adding the phish to anti-phishing detection potentially more difficult.

But if the history of anti-malware has taught us anything, "more difficult" does not by any stretch of the imagination mean "impossible." Sometimes, the evasion itself can give detection methods a solid hint that something is up to no good. In the case of polymorphic viruses, AV software can often use the code that generates the virus' changes to identify it. Legitimate software seldom tries to do such squirrelly things as changing its own code. Because companies commonly use dynamic mass emails, it might be difficult to exclude this behavior generically. But phishing emails that lead to a unique site and push the sort of code that would be useful for a zero-day exploit would be very clearly problematic.

Have you seen any of this new phishing behavior? Or are all of the questionable emails you receive caught in spam filters or by security software?
polynomial-time reduction

Definition: A transformation of one problem into another which is computable in polynomial time.

See also NP-complete, Turing reduction, Cook reduction, Karp reduction, l-reduction, many-one reduction.

Note: From Algorithms and Theory of Computation Handbook, page 24-19, Copyright © 1999 by CRC Press LLC. Appearing in the Dictionary of Computer Science, Engineering and Technology, Copyright © 2000 CRC Press LLC.

If you have suggestions, corrections, or comments, please get in touch with Paul Black. Entry modified 17 December 2004. HTML page formatted Mon Feb 2 13:10:40 2015.

Cite this as: Algorithms and Theory of Computation Handbook, CRC Press LLC, 1999, "polynomial-time reduction", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/polynomtredc.html
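In symbols, the many-one (Karp) variant listed above is commonly written as follows, where \mathrm{FP} denotes the class of polynomial-time computable functions:

% Many-one (Karp) polynomial-time reduction of A to B
A \le_p B \iff \exists f \in \mathrm{FP}\;\; \forall x \; \bigl( x \in A \Leftrightarrow f(x) \in B \bigr)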
R*-tree

Definition: A spatial access method which splits space into hierarchically nested, possibly overlapping, boxes. The tree is height-balanced. It is similar to the R-tree, but on overflow it first reinserts entries rather than splitting immediately.

Note: After [GG98].

If you have suggestions, corrections, or comments, please get in touch with Paul Black. Entry modified 25 November 2013. HTML page formatted Mon Feb 2 13:10:40 2015.

Cite this as: Paul E. Black, "R*-tree", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 25 November 2013. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/rstartree.html
Military Drones Present And Future: Visual Tour

The Pentagon's growing fleet of unmanned aerial vehicles ranges from hand-launched machines to the Air Force's experimental X-37B space plane.

Aurora Flight Sciences' Excalibur is intended to fill a gap between piloted fighter jets and armed drones that require remote piloting. An advanced flight control system operates the Excalibur with a high degree of autonomy, so ground-based operators can focus on finding targets instead of flying. The aircraft is designed to carry Hellfire air-to-surface missiles and other weapons. Its design allows for vertical takeoffs and landings. (Image credit: Aurora Flight Sciences; slide 10 of 22 in the original gallery.)
What exactly is data? Many define it as a set of values that belong to a set of items. Essentially, data is all around us, and we use it on a daily basis. Data by itself is largely useless - think of the last time you saw a spreadsheet without any labels. But through tools that help us analyze, interpret and manipulate data, we are able to turn it into something useful: information. The question is, what are the useful data analysis tools? Here's a brief overview of five data analysis tools that you could use in your business.

BigML

One of the more common uses of data is to help a business manager make predictions. We all know predictions are among the hardest things to do. Enterprises hire staff and invest in systems solely with the aim of making predictions. If you're a small business, you likely don't need expensive software that is hard to use. Enter BigML. You define, upload and format a set of data; BigML then takes that data and helps you create a prediction model, to which you can apply 'what-if' variables and have it generate predictions. The site runs on credits: you pay for a set amount of credits, and each part of the process - dataset, model and prediction - is worth a certain amount of credits. Prices start at around USD$6.50 for credits, which gives you 10MB of data, 5MB worth of models and 10K predictions based on this data.

Wolfram|Alpha's Facebook Reports

Wolfram|Alpha is a search engine that collects data and uses algorithms to interpret it. One feature of this site is that you can develop reports, one of the more useful being Facebook Reports, which you can access from the Wolfram|Alpha website by searching for Facebook. This report provides users with a glimpse into their Facebook Page's information: who the most active posters are, how many shares and likes you get, and other useful information in easy-to-read charts and graphs. The key here is that the report can show you how customers access your Page and where they come from. You could use this information to see which posts users liked and didn't like, and provide more engaging content. The basic version of the report is free. More advanced controls and data analysis are available for USD$4.99 a month.

Many Eyes

Many Eyes is a data analysis and visualization tool developed by IBM Research. If you already have data sets, you can upload them to the website and use one of the many different visualization tools to create charts, graphs, etc. A cool feature of this site is its ability to analyze written documents. Say, for example, you are writing new content for your website: you can copy and paste the content and get a visual representation of the words you use, how you connect words, and so on. If you have a set of keywords you would like to use for SEO and search purposes, you can manually compare them with the visualization. If you notice that an important keyword is missing, or not represented enough, you can go through and re-write the copy a bit. Best of all, it's free.

Tableau Public

If you have an idea about business intelligence, or have worked with data on a regular basis and have sets that are structured, Tableau Public is probably the most powerful free analysis tool available for small businesses. While powerful, it isn't the most user-friendly of options. To get the most out of this program, you are going to need to know the basics behind data analysis.
If you feel comfortable with the basics, you'll be creating dashboards, charts, interactive graphs, maps, etc. that look great and can be embedded on your blog or website. Oh yes, did we mention it's free?

Spreadsheets

Big data is all the rage these days; it's hard not to hear techies and data specialists talk about it. While it is an important part of many large businesses' data analysis practices, the truth is many small businesses don't need big data just yet. If you have simple data to analyze - e.g., how many hours your five employees worked this month - why not stick with simple spreadsheets like Excel or Google Spreadsheet? As long as you have data entered in a logical way, you can easily create graphs and charts that help you visualize and analyze your data (a shell sketch of the same idea appears below).

If you would like help establishing a system that can help you track and analyze your data, please contact us today; we may have a solution that works for you.
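For the simple-spreadsheet case above, even a one-line shell command can do the arithmetic. A minimal sketch, assuming a hypothetical hours.csv with a header row followed by name,hours records:

# Total hours worked per employee this month (hours.csv is an assumed file)
awk -F, 'NR > 1 { total[$1] += $2 }
         END { for (n in total) print n, total[n] }' hours.csv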
About this series

The typical UNIX® administrator has a key range of utilities, tricks, and systems he or she uses regularly to aid in the process of administration. There are key utilities, command-line chains, and scripts that are used to simplify different processes. Some of these tools come with the operating system, but a majority of the tricks come through years of experience and a desire to ease the system administrator's life. The focus of this series is on getting the most from the available tools across a range of different UNIX environments, including methods of simplifying administration in a heterogeneous environment.

Simplifying remote login

Secure Shell (SSH) tools provide a secure method for logging in and exchanging information with a remote host. A number of different tools are provided, including the general-purpose SSH tool (which provides a remote terminal connection), SCP (a secure, host-to-host copy solution), and SFTP, a secure file copy solution that works in a similar fashion to the standard FTP tools. All of these tools are secure in that the information exchanged is encrypted. In addition, the authentication of connections is secured using a public/private key mechanism.

One of the main benefits of SSH is that you can bypass the normal login and password exchange by copying your public key to a remote machine. Although this is useful when using SSH to log in to a remote machine (as it means you don't have to provide a password), it is even more useful when performing remote administration. Having to type in a password can make automated remote administration (for example, running a command through cron) impossible, because in an automated script you won't be around to type in the password! When using SSH to run commands across multiple machines without exchanging your public key, you need to type in your password for each machine.

A quick and simple way of setting this up is to create a public key:

$ ssh-keygen -t rsa

Follow the on-screen instructions, but don't set a password when prompted, as you would then need to enter that password each time you want to use the key. This creates a private and a public key file. Now you just need to take the contents of the public key file, .ssh/id_rsa.pub, and append it to the .ssh/authorized_keys file of the user you want to use when logging in to the remote host. You need to append the public key file contents on each machine you want to log in to automatically (a concrete sketch of the sequence appears below, after Listing 1).

Running a remote command

There are many ways in which you can run a remote command. You can run a single remote command by adding the command you want to run to SSH after the login or host information. For example, to get the disk information for a remote host, you might use the command and get the output in Listing 1 below.

Listing 1. Running a simple command through SSH

$ ssh mc@gentoo.vm df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/hda3 14544820 3611520 10194464 27% /
udev 128044 564 127480 1% /dev
/dev/hdc1 1968872 50340 1818516 3% /var/tmp
/dev/hdc2 1968904 1482220 386668 80% /usr/portage
/dev/hdc3 1968904 35760 1833128 2% /home/build
shm 128044 0 128044 0% /dev/shm

Bear in mind that the sequence in Listing 1 requires you to enter a password if you haven't already exchanged your public key with the remote host. You can also execute a sequence of commands by separating each command with a semicolon and then placing the entire sequence of commands into quotes so that it is identified as a single argument.
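As promised above, here is the key-distribution sequence as a concrete sketch. The host name is a placeholder for whatever remote machine you administer; on systems that ship ssh-copy-id, that single command performs the same append:

# Generate the key pair (press Enter at the passphrase prompts)
$ ssh-keygen -t rsa

# Append the public key to the remote account's authorized_keys
$ cat ~/.ssh/id_rsa.pub | ssh mc@gentoo.vm 'cat >> ~/.ssh/authorized_keys'

# Subsequent logins and remote commands should no longer prompt
$ ssh mc@gentoo.vm uptime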
Returning to command chaining, an example of executing both a disk check and an uptime check is shown in Listing 2.

Listing 2. Executing a disk and an uptime check

$ ssh mc@gentoo.vm "df;uptime"
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/hda3 14544820 3611520 10194464 27% /
udev 128044 564 127480 1% /dev
/dev/hdc1 1968872 50340 1818516 3% /var/tmp
/dev/hdc2 1968904 1488100 380788 80% /usr/portage
/dev/hdc3 1968904 35760 1833128 2% /home/build
shm 128044 0 128044 0% /dev/shm
14:31:27 up 12 min, 2 users, load average: 0.01, 0.05, 0.06

You can string as many commands as you like into this operation. Filtering, for example using grep or other tools, is also possible, but you need to make sure to embed the entire remote command expression in the quotes (see Listing 3).

Listing 3. Filtering using grep

$ ssh root@gentoo.vm "cat /var/log/messages|grep 'su\['"
Dec 17 18:05:37 localhost su: pam_authenticate: Permission denied
Dec 17 18:05:37 localhost su: FAILED su for root by mc
Dec 17 18:05:37 localhost su: - pts/1 mc:root
Dec 17 18:06:31 localhost su: pam_authenticate: Permission denied
Dec 17 18:06:31 localhost su: FAILED su for root by mc
Dec 17 18:06:31 localhost su: - pts/1 mc:root
Dec 17 18:06:40 localhost su: pam_authenticate: Permission denied
Dec 17 18:06:40 localhost su: FAILED su for root by mc
...

The first item to note about Listing 3 is that you are logging in directly to the remote machine as root. This is because the file you want to view is only accessible to the superuser, and you must ensure that your system is configured to allow remote root logins for this to work. The second important note about this example is that you've performed the grep operation remotely. In actual fact, you don't need to do this. The standard input and output of the remote host are replicated to the local machine, so the output from the command can be filtered locally, as shown in Listing 4.

Listing 4. Output filtered locally

$ ssh root@gentoo.vm "cat /var/log/messages" | grep 'su\['
Dec 17 18:05:37 localhost su: pam_authenticate: Permission denied
Dec 17 18:05:37 localhost su: FAILED su for root by mc
Dec 17 18:05:37 localhost su: - pts/1 mc:root
Dec 17 18:06:31 localhost su: pam_authenticate: Permission denied
Dec 17 18:06:31 localhost su: FAILED su for root by mc
Dec 17 18:06:31 localhost su: - pts/1 mc:root
Dec 17 18:06:40 localhost su: pam_authenticate: Permission denied
Dec 17 18:06:40 localhost su: FAILED su for root by mc
Dec 17 18:06:40 localhost su: - pts/1 mc:root

Of course, the effect is essentially the same. Using the remote pipe method, though, is useful when the information or command that you want to pipe with is remote. For example, you can use ls in combination with xargs and du to determine the disk usage of different directories with the command shown in Listing 5.

Listing 5. Determining disk usage of different directories

$ ssh mc@gentoo.vm "ls -d /usr/local/* | xargs du -sh"
Password:
4.0K /usr/local/bin
4.0K /usr/local/games
4.0K /usr/local/lib
0 /usr/local/man
4.0K /usr/local/sbin
12K /usr/local/share
4.0K /usr/local/src

Before moving on to redistributing these techniques to multiple machines, there's a quick trick for running remote interactive sessions directly without having to log in first.

Direct interactive sessions

As shown previously, you can directly run a number of different commands and chains of commands.
One of the benefits of the SSH solution is that although the command itself is executed remotely, the input and output of the command are sourced from the calling machine. You can use this as a method for exchanging information between the two machines relating to the commands that you want to execute. The commands that you execute can cover almost anything. However, because you are running commands directly from the command line, there are limits to what you can execute directly with this method. For example, trying to edit a remote file with an editor using the techniques shown above usually fails (see Listing 6).

Listing 6. Editing a remote file fails

$ ssh mc@gentoo.vm "emacs /etc/amavisd.conf"
emacs: standard input is not a tty

You can resolve this by forcing SSH to allocate a pseudo-tty device, using the -t option (for example, $ ssh -t mc@gentoo.vm "emacs /etc/amavisd.conf"), so that you can interact directly with the remote application.

Running a remote command across multiple machines

So far, you have concentrated on running a single command or command string on a single remote machine. Although the interactive session trick is useful when performing remote administration directly with SSH, it is likely that you will want to automate the process, which means that the interactive element is unlikely to be of much use. To run the same command remotely across a number of machines, you need to build a simple wrapper around the SSH command and the remote command that you want to run so that the process is repeated on each remote machine. You can do this with a very simple for loop, as demonstrated in Listing 7 below.

Listing 7. for loop to run the command remotely

$ for remote in mc@gentoo.vm mc@redhat; do echo $remote; ssh $remote 'df -h'; done
mc@gentoo.vm
Filesystem Size Used Avail Use% Mounted on
/dev/hda3 14G 4.1G 9.2G 31% /
udev 126M 564K 125M 1% /dev
/dev/hdc1 1.9G 56M 1.8G 4% /var/tmp
/dev/hdc2 1.9G 1.3G 558M 70% /usr/portage
/dev/hdc3 1.9G 35M 1.8G 2% /home/build
shm 126M 0 126M 0% /dev/shm
mc@redhat
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00 7.1G 5.5G 1.3G 82% /
/dev/hda1 99M 13M 82M 14% /boot
none 125M 0 125M 0% /dev/shm

You can easily turn this into a simple script, as shown in Listing 8.

Listing 8. Reducing the for loop to a simple command

#!/bin/bash
# Script to run a command across multiple machines

# Global options
TIMEOUT=10
ERRLOG=/tmp/remote-err-$$.log
OUTLOG=/tmp/remote-out-$$.log

# Extract the command line
MACHINES=$1;shift
COMMAND=$1;shift

for machine in $MACHINES
do
    echo $machine
    ssh -oConnectTimeout=$TIMEOUT $machine $COMMAND >>$OUTLOG 2>>$ERRLOG
done

cat $OUTLOG
cat $ERRLOG >&2

rm -f $OUTLOG $ERRLOG

The MACHINES and COMMAND values are used "as-is" as you extract them from the command line. When using the script, you must put the user/host combinations and the command into double quotes to ensure each is identified as a single argument. The only other addition is the TIMEOUT option, which passes the ConnectTimeout option to SSH to ensure that when running a command you don't needlessly wait to connect to a host that might not be available. The default is set at the head of the script and should ensure you don't wait too long. When running the commands, you send the output to a couple of log files, one for standard output and the other for standard error, and then output these individually to the appropriate location.
This highlights one of the benefits of SSH: the remote machine redirects to the same channels (standard output, standard error), so you can redirect locally while retaining the meaning of the output. For example, you can repeat the df check using this script:

$ runremote.sh "gentoo redhat" "df -h"

Because you redirected the standard output and error, you can even generate a log of the whole process:

$ runremote.sh "gentoo redhat" "df -h" 2>/tmp/error.log

Using remote execution for performance monitoring

When using runremote.sh, you might want to tune the exact timeout value, depending on what you are doing. For example, if you were using this script to get a snapshot of the current status by running uptime across a bunch of machines, you wouldn't want to wait too long for the connection and command to take place; otherwise, the snapshot would be imprecise.

Also, the script as it stands runs the command sequentially. Not only does this take a long time if you have a large number of machines, but the delay between the first machine and the last executing the chosen command might be so significant that correlation across machines becomes impossible. A slightly adjusted script, runremote2.sh, is shown in Listing 9. It executes the remote command almost simultaneously (by running it in the background) and pipes the output to individual log files.

Listing 9. Script that executes the remote command almost simultaneously

#!/bin/bash
# Script to run a command across multiple machines
# Global options
TIMEOUT=10
ERRLOG=/tmp/remote-err-$$.log
OUTLOG=/tmp/remote-out-$$.log
# Extract the command line
MACHINES=$1;shift
COMMAND=$1;shift
for machine in $MACHINES
do
    echo $machine >>$OUTLOG.$machine
    ssh -oConnectTimeout=$TIMEOUT $machine $COMMAND >>$OUTLOG.$machine 2>>$ERRLOG.$machine &
done
# Wait for children to finish
wait
cat $OUTLOG.*
cat $ERRLOG.* >&2
rm -f $OUTLOG.* $ERRLOG.*

In this script, you also echo the machine name out to the command log (unique for each machine supplied). To ensure that the script doesn't exit before all the remote commands have executed, you add a wait command so that the script waits for its child processes to finish. Now you can use the script to check multiple machines simultaneously (see Listing 10).

Listing 10. Using the script to check multiple machines simultaneously

$ runremote2.sh "narcissus gentoo.vm droopy@nostromo mcbrown@nautilus" 'uptime'
droopy@nostromo
19:15 up 9 days, 23:42, 1 user, load averages: 0.01 0.03 0.00
gentoo.vm
18:10:23 up 1 day, 10:02, 2 users, load average: 1.72, 1.84, 1.79
mcbrown@nautilus
19:15 up 10:08, 4 users, load averages: 0.40 0.37 0.29
narcissus
19:15 up 8 days, 7:04, 4 users, load averages: 0.53 0.54 0.57

This kind of monitoring can be useful when you want a whole-network picture, for example, to check a problem with a group or cluster of machines running Web or database services, where you want to identify potential spikes or issues simultaneously across that group of machines. Be aware, however, that there will still be delays, especially if a machine is particularly busy; the time for the connection to be made and the command to be executed could still introduce significant time differences across machines.
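If you need to judge how large that skew is, one option, assuming the clocks on the hosts are reasonably synchronized, is to have each machine report its own timestamp alongside the measurement by reusing runremote2.sh:

$ runremote2.sh "gentoo redhat" "date '+%H:%M:%S'; uptime"

Comparing the reported times across the per-machine output then shows how far apart the samples really were.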
Running the same operation across multiple machines

Creating users across a number of machines can be a pain. There are obviously plenty of solutions for resolving the difficulty, from single sign-on utilities, such as the Network Information Service (NIS), to LDAP-based solutions, but you don't always have to synchronize the users in this way. You could use SSH to do this for you by running the adduser command across multiple machines. Under Solaris, the command is called useradd instead, but the command-line options are largely the same, so you could use runremote.sh twice (see Listing 11).

Listing 11. Running runremote.sh twice

$ runremote.sh "gentoo redhat" "adduser -u 1000 -G sales,marketing mcbrown"
$ runremote.sh "solaris solaris-x86" "useradd -u 1000 -G sales,marketing mcbrown"

You've now created the same user across a number of machines with the same groups and the same user ID, but this is hardly practical. A much better way would be to use the tips demonstrated in the "System Administration Toolkit: Standardizing your UNIX command-line tools" article (see Resources) to use the same command across multiple machines:

$ runremote.sh "gentoo solaris" "adduser.sh -u 1000 -G sales,marketing mcbrown"

In this article, you've examined a simple but powerful method for running commands on a remote machine. Although the basics of the process are straightforward, you can also build additional functionality on top to complete some robust, automated remote administration tasks (for example, the ability to redirect and pipe remote and local input and output together). By implementing some simple shell script tricks, you can even use the system to remotely administer a number of machines simultaneously, simplifying many repetitive tasks and performance-monitoring chores.

Resources

- System Administration Toolkit: Check out other parts in this series.
- "System Administration Toolkit: Standardizing your UNIX command-line tools" (Martin Brown, developerWorks, May 2006): Read this article to learn how to use the same command across multiple machines.
- "Scheduling recurring tasks in Java" (Tom White, developerWorks, November 2003): Learn how to build a simple, general scheduling framework for task execution conforming to an arbitrarily complex schedule.
- Read the Wikipedia pages on crontab.
- "The road to better programming: Chapter 11. Crontab management with cfperl" (Teodor Zlatanov, developerWorks, June 2003): Part 11 of an article series on developing a cfengine interpreter written in Perl, showing how crontab entries can be added or deleted easily.
- For an article series that teaches you how to program in bash, see:
- "Bash by example, Part 1: Fundamental programming in the Bourne again shell (bash)" (Daniel Robbins, developerWorks, March 2000)
- "Bash by example, Part 2: More bash programming fundamentals" (Daniel Robbins, developerWorks, April 2000)
- "Bash by example, Part 3: Exploring the ebuild system" (Daniel Robbins, developerWorks, May 2000)
- "Making UNIX and Linux work together" (Martin Brown, developerWorks, April 2006): This article is a guide to getting traditional UNIX distributions and Linux® working together.
- IBM Redbooks: Different systems use different tools, and Solaris to Linux Migration: A Guide for System Administrators helps you identify some key tools.
- "Exploring the Linux memory model" (Vikram Shukla, developerWorks, January 2006): This article helps you understand how Linux uses memory, swap space, and exchanges pages and processes between the two. - Popular content: See what AIX® and UNIX content your peers find interesting. - Check out other articles and tutorials written by Martin Brown: - AIX and UNIX: The AIX and UNIX developerWorks zone provides a wealth of information relating to all aspects of AIX systems administration and expanding your UNIX skills. - New to AIX and UNIX?: Visit the "New to AIX and UNIX" page to learn more about AIX and UNIX. - AIX 5L™ Wiki: A collaborative environment for technical information related to AIX. - Search the AIX and UNIX library by topic: - Safari bookstore: Visit this e-reference library to find specific technical resources. - developerWorks technical events and webcasts: Stay current with developerWorks technical events and webcasts. - Podcasts: Tune in and catch up with IBM technical experts. Get products and technologies - IBM trial software: Build your next development project with software for download directly from developerWorks. - Participate in the developerWorks blogs and get involved in the developerWorks community. - Participate in the AIX and UNIX forums:
The advent and exponential growth of Internet-connected devices has the potential to change the world – but not always in benign ways. As everyday objects from our cars to our refrigerators to our thermostats begin to act more and more like smartphones, IoT devices have the potential to automate and simplify many areas of our everyday lives. Smart homes will adjust the temperature and turn on the lights for us, smart cars will help us reach our destinations and call for help when there's a problem, and smart appliances will make sure we never run out of milk and schedule their own maintenance.

But with new technologies, there is almost always a cost, and the Internet of Things is no exception. As the IoT expands rapidly, it seems as though we will pay for the convenience that it creates for us with increased risks to our security. At least at first.

In the past, people had to worry about data security only as it applied to a desktop computer and maybe a laptop. We were running anti-virus software on these devices to protect ourselves against attackers, and that was pretty much all we needed to do. Today, we have many more devices connected to the Internet, and the number is growing rapidly. The risks that accompany this trend are big. According to an HP security survey, more than 70 percent of the most commonly used IoT devices have serious security vulnerabilities. Innovators and entrepreneurs are rushing to bring the next big thing to the market and, in their haste, they are putting security concerns second (or sometimes even last) to creating the device and cornering the market.

But as IoT devices gather more and more data about us and our lives, we as consumers should be extremely concerned about these vulnerabilities. We may not think about it very much, but these IoT devices have collected a lot of information about our private lives. The refrigerator that orders your milk must have some sort of payment method set up with the grocer. Your thermostat knows when you are likely to be at home – and also when you are not. And your smart watch or wearable fitness tracker may have private information about your health and habits that you wouldn't want anyone but your doctor to know.

Last year, the Federal Trade Commission (FTC) released a report urging IoT manufacturers to put security first with these new technologies. The report recommends having a defensive security plan in place, rather than reacting to security threats after the fact, and recommends that companies train employees in how to secure customer data. Furthermore, the FTC argues that companies should be transparent about how they collect and use data, and offer users the choice to opt out.

Hacking Smart Devices

Of course, data theft isn't the only threat from IoT devices. In 2015, hackers proved that they could take control of a Jeep's braking, steering, and transmission systems remotely while it was being operated. A similar hack took over a Tesla. Connected cars are among the most anticipated and hyped innovations that the IoT promises to deliver, but the more automated cars become, the more vulnerable they are to dangerous attacks.

Smart TVs also made news last year when it was revealed that the Samsung SmartTV can listen to your conversations, record them, and send the data to Samsung's servers for analysis. And if the corporation can do that, a hacker also could access those features to spy on you. Even our children aren't immune to attacks on vulnerable IoT-connected devices.
Reports of creeps hacking into baby monitors led to studies that revealed that none of the top Internet-connected baby monitors passed security tests. And that harmless-looking Hello Barbie that's recording your child's conversations? If Mattel can listen in, so can a clever hacker.

And this isn't idle speculation about a potential problem that, in the real world, isn't being exploited. Attacks on cloud-connected devices jumped 152 percent last year. For now, the best recourse for consumers is to be educated, be cautious, and demand more security features from the companies with which you choose to do business.

Bernard Marr is a bestselling author, keynote speaker, strategic performance consultant, and analytics, KPI, and big data guru. In addition, he is a member of the Data Informed Board of Advisers. He helps companies to better manage, measure, report, and analyze performance. His leading-edge work with major companies, organizations, and governments across the globe makes him an acclaimed and award-winning keynote speaker, researcher, consultant, and teacher.
Cisco 827 Configurations for Voice Service

Figure 6-3 shows a theoretical network with VoIP configured on the 827-4V. The voice capability of the 827-4V starts with the same configurations as for the 827 without voice ports.

Figure 6-3. Cisco 827-4V with Basic VoIP

As discussed earlier, setting up voice on the router actually includes two configurations: one for data and one for voice. This section leads you through the data configurations first and then the voice configurations. These consist of the following steps:

Step 1. Configure the data network:
- Configure the class map, route map (optional), and policy map
- Configure the Ethernet interface
- Configure the ATM interface
- Configure Enhanced IGRP

Step 2. Configure the voice network:
- Configure the POTS (plain old telephone service, or basic telephone service) dial peers
- Configure VoIP dial peers for H.323 signaling

The following sections discuss the details of each of these steps.

Configuring the Data Network

The data network depends on a specific traffic classification policy that allocates bandwidth and interface access by priority according to traffic type. Traffic types include digitized voice service, standard IP-type packets, and various traffic types whose priorities fall between high-priority voice traffic and standard-priority routine data traffic. Defining the traffic policy determines how many types of packets (the number of classes) are to be differentiated from one another. Packets are matched to each other, forming classes based on protocols, access control lists, and input interfaces; these three are the usual match criteria.

Before starting the configurations themselves, you must understand the class options. To characterize a class, you can specify the queue limit for that class, which is the maximum number of packets allowed to accumulate in the class's queue. Packets belonging to a class are subject to the bandwidth and queue limits that characterize the class. After a queue has reached its configured queue limit, enqueuing additional packets to the traffic class causes either tail drop or weighted random early detection (WRED) drop to take effect, depending on how the service policy is configured.

Tail drop is a means of avoiding congestion that treats all traffic equally and does not differentiate between classes of service. Queues fill during periods of congestion. When the output queue is full and tail drop is in effect, packets are dropped until the congestion is eliminated and the queue is no longer full.

WRED drops packets selectively based on IP precedence. Packets with a higher IP precedence are less likely to be dropped than packets with a lower precedence. Thus, higher-priority traffic is delivered with a higher probability than lower-priority traffic in the default scenario. However, packets with a lower IP precedence are less likely to be dropped than packets with a higher IP precedence in certain WRED configurations.

Flow classification is standard weighted fair queuing (WFQ) treatment. That is, packets with the same source IP address, destination IP address, source TCP or User Datagram Protocol (UDP) port, or destination TCP or UDP port are classified as belonging to the same flow. WFQ allocates an equal share of bandwidth to each flow. Flow-based WFQ is also called fair queuing because all flows are equally weighted. WFQ can speed up handling for high-precedence traffic at congestion points. There are two levels of queuing: ATM queues and IOS queues.
Class-based weighted fair queuing (CBWFQ) is applied to IOS queues. It extends the standard WFQ functionality in support of user-defined traffic classes. For CBWFQ, you define traffic classes based on match criteria including protocols, access control lists (ACLs), and input interfaces. Packets satisfying the match criteria for a class constitute the traffic for that class. Each class has a weight derived from the bandwidth you assigned to the class when you configured it. The weight specified for the class becomes the weight of each packet that meets the class's match criteria. Packets that arrive at the output interface are classified according to the match criteria filters you define, and then each one is assigned the appropriate weight. After a packet's weight is assigned, the packet is enqueued in the appropriate class queue. CBWFQ uses the weights assigned to the queued packets to ensure that the class queue is serviced fairly.

Tail drop is used for CBWFQ traffic classes unless you explicitly configure a service policy to use WRED to drop packets as a means of avoiding congestion. Note that if you use WRED packet drop instead of tail drop for one or more traffic classes making up a service policy, you must ensure that WRED is not configured on the interface to which you attach that service policy.

If a default class is configured, all unclassified traffic is treated as belonging to the default class. If no default class is configured, traffic that does not match any of the configured classes is flow-classified and given best-effort treatment. As soon as a packet is classified, all the standard mechanisms that can be used to differentiate service among the classes apply.

A first-in, first-out (FIFO) IOS queue is automatically created when a PVC is created. If you use CBWFQ to create classes and attach them to a PVC, a queue is created for each class. CBWFQ ensures that queues have sufficient bandwidth and that traffic gets predictable service. Low-volume traffic streams are preferred; high-volume traffic streams share the remaining capacity, obtaining equal or proportional bandwidth. Bandwidth for the policy map may not exceed 75 percent of the total PVC bandwidth.

Resource Reservation Protocol (RSVP) can be used in conjunction with CBWFQ. When both RSVP and CBWFQ are configured for an interface, RSVP and CBWFQ act independently, exhibiting the same behavior that they would if each were running alone. RSVP continues to work as it does when CBWFQ is not present, even in regard to bandwidth availability assessment and allocation. RSVP works well on PPP, HDLC, and similar serial-line interfaces. It does not work well on multiaccess LANs. RSVP can be equated to a dynamic access list for packet flows. You should configure RSVP to ensure QoS (a minimal configuration sketch follows the list below) if the following conditions describe your network:

- Small-scale voice network implementation
- Links slower than 2 Mbps
- Links with high utilization
- You need the best possible voice quality
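Enabling RSVP is a per-interface change. The ip rsvp bandwidth command is standard IOS, but the figures shown here (total reservable bandwidth and largest single flow, both in kbps) are illustrative assumptions that you would size to your own PVC and codec:

interface ATM0
 ! reserve up to 176 kbps in total, at most 44 kbps per flow
 ip rsvp bandwidth 176 44

The 44-kbps figure matches the per-call G.729 bandwidth implied elsewhere in this chapter (an SCR of 176 kbps for four simultaneous calls); adjust both values for your line rate and expected call volume.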
Configuring the Traffic Policy: Traffic Precedence, Class Maps, Policy Maps

After considering the queuing and prioritization techniques explained previously, you can begin designing the traffic policy configuration for the Cisco 827. Starting with the classification of traffic types, there are two principal aspects to configuring the traffic policy:

- Class maps, which define the traffic classes
- Policy maps, which associate the policies (traffic classes) with interfaces

The first step in building the class map is to configure the access list, including setting an IP precedence, to associate with the class map. As per RFC 791, there are eight classes of service, although later RFCs provide more independence in proprietary precedence definitions. Cisco Systems endeavors to conform to RFC 791, meaning that you can partition traffic into up to six classes of service using IP precedence; the other two are reserved for internal network use. The network queuing technologies then use this IP precedence definition to expedite traffic handling. The original, RFC 791-defined classes are as follows, in order from lowest priority to highest priority:

- Traffic class (TC) = 0: Routine (uncharacterized traffic). If otherwise undefined, these packets are assigned the lowest-priority value and are delivered based on the available bandwidth. Non-TCP/IP traffic is assigned to this traffic class. There is a very high possibility of packet drop in the event of congestion.
- TC = 1: Priority. There is a high possibility of packet drop if congestion is encountered.
- TC = 2: Immediate. There is a medium possibility of packet drop in the event of congestion.
- TC = 3: Flash. There is a low possibility of packet drop in the event of congestion.
- TC = 4: Flash-override. There is a very low possibility of packet drop compared to the lower-priority classes.
- TC = 5: Critical. Cisco recommends this class for voice traffic.
- TC = 6 and 7: For internetwork control and network control traffic, respectively. Examples include signaling protocols.

IP precedence is not a queuing method, but it gives other queuing methods (WFQ, WRED) the capability to prioritize based on the packet's IP precedence. The network gives priority (or some type of expedited handling) to the marked traffic through the application of WFQ or WRED at points downstream in the network. The mapping from keywords such as routine and priority to a precedence value is useful in only some instances; in other words, the use of the precedence bits is evolving. Bear in mind that IP precedences can be used to establish classes of service that do not necessarily correspond numerically to better or worse handling in the network.

The ip precedence setting is used by the Cisco 827 router to differentiate voice traffic from data traffic and to assign voice packets a higher priority. Here is an example of this command applied on a Cisco 827 DSL router for voice service:

Router(config)#access-list 101 permit ip any any precedence 5

This command builds an extended access list numbered 101, which permits IP traffic from any source to any destination and assigns this permitted traffic the IP precedence of 5 for voice packets. You can also use the plain-text priority designations themselves rather than the numbers:

Router(config)#access-list 101 permit ip any any precedence critical

Features such as policy-based routing and committed access rate (CAR) can be used to set precedence based on extended access lists.

The next step in building the voice configuration on the 827 is to configure the class map called VOICE. The command class-map VOICE defines a traffic class and the match criteria that are used to identify traffic as belonging to that class.
match statements can include criteria such as an ACL, an IP precedence value, or a Differentiated Services Code Point (DSCP) value. The DSCP is a designation by the Internet Engineering Task Force (IETF) of the 6 most significant bits of the 1-byte IP Type of Service (ToS) field. The match criteria are defined with match statements entered in class-map configuration mode. Here is an example of what might be in the class-map VOICE definition:

Router(config)#class-map VOICE
Router(config-cmap)#match ip rtp 16384 16383
Router(config-cmap)#match access-group 101

In the first command, IP Real-Time Protocol (RTP) traffic is configured as a match criterion: the two parameters specify a starting port of 16384 and a port range of 16383, so the class matches UDP ports 16384 through 32767. In the second command, access list 101 is matched with the class map. That is, the class map is now associated with IP packets whose IP precedence is 5, the recommended voice packet precedence you defined earlier with the command access-list 101 permit ip any any precedence 5.

Policy maps group one or more class maps (up to 64 different classes of service) for later association with a particular interface. The policy map thereby confers all its referenced class map values onto the interface. In the commands discussed in the preceding section, you first defined an access list and assigned permitted traffic the precedence of 5. Then you referenced that access list, 101, in defining the class map called VOICE. The class map also matches only RTP traffic in the port range 16384 through 32767. Because that is a relatively narrow definition, the policy map should also contain a class for other types of traffic, although this is more of a security consideration than a voice configuration consideration.

The commands in the following listing define the policy map named MYPOLICY, which associates the class maps VOICE and class-default (the default for unreferenced traffic). As an example of one option, the command priority 176 guarantees 176 kbps of bandwidth for the priority traffic. Beyond the guaranteed bandwidth, the priority traffic is dropped in the event of congestion to ensure that the nonpriority traffic is not starved. Another option is to define the guaranteed bandwidth as a percentage of the overall interface bandwidth. A third option is to specify a maximum burst size in bytes to be tolerated before dropping traffic.

policy-map MYPOLICY
 class VOICE
  priority 176
 class class-default

You have finished configuring the class map and policy map, the first steps in configuring the data network, leading to the final voice service configuration.
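Once the service policy has been attached to the ATM interface or PVC (as shown in the next section), you can verify that packets are matching the intended classes. This verification command is standard IOS, though the exact counters displayed vary by release:

Router# show policy-map interface atm0

The output lists each class in the attached policy (here, VOICE and class-default) with packet match and drop counters, making it easy to confirm that voice traffic is actually landing in the priority queue.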
Now you will adjust the interface configurations. You learned earlier about configuring the Ethernet interface for the TCP/IP architectures common to DSL. The ATM interface has some details beyond the earlier, basic ATM interface commands for the Cisco 827. These new ATM configuration commands provide for voice service and draw on the concepts already explained in this chapter, as well as some more specific details.

ATM Interface Configuration

The next step is to configure the ATM interface. Here are the commands to do so:

interface ATM0
 mtu 300
 ip address 192.168.2.1 255.255.255.0
 no atm ilmi-keepalive
 pvc 1/32
  service-policy out MYPOLICY
  vbr-rt 640 640 10
  encapsulation aal5snap

The first step in configuring the ATM interface is to adjust the size of the MTU. If you are configuring PPP, either PPPoA or PPPoE, you should decrease the ATM interface's MTU size so that large data packets are fragmented. It is recommended that you use 300 for the MTU size because it is larger than the size of the voice packets generated by the different codecs. With multiclass multilink PPP interleaving, large packets can be multilink-encapsulated and fragmented into smaller packets to satisfy the delay requirements of real-time voice traffic. Small real-time packets, which are not multilink-encapsulated, are transmitted between fragments of the large packets. The interleaving feature also provides a special transmit queue for the smaller, delay-sensitive packets, enabling them to be transmitted earlier than other flows. Interleaving provides the delay bounds for delay-sensitive voice packets on a slow link that is also used for other best-effort traffic.

Next, the policy map named MYPOLICY that you created earlier is associated with the PVC in the outbound direction. You can then specify the PVC's service class. In this case, the command vbr-rt 640 640 10 defines Variable Bit Rate Real Time with a peak cell rate (PCR) of 640 kbps, a sustained cell rate (SCR) of 640 kbps, and a maximum burst size of ten cells in a single burst. You should configure the SCR to be at least four times the particular codec's per-call bandwidth when all four voice ports are used. For example, if you have a 640-kbps upstream PVC running the G.729 codec, you could configure the PVC with an SCR of 176. Finally, this PVC is assigned the encapsulation of aal5snap.

Enhanced IGRP Configuration

Continuing with configuring the data aspects of the Cisco 827, you should enter router configuration mode and enable Enhanced IGRP. The autonomous system number identifies the router to other Enhanced IGRP routers and is used to tag the Enhanced IGRP information. Specify the network number for each directly connected network. The following configuration shows the Enhanced IGRP routing protocol enabled in IP networks 10.0.0.0 and 172.17.0.0, with the Enhanced IGRP autonomous system number assigned as 100:

Config#router eigrp 100
Config-router#network 10.0.0.0
Config-router#network 172.17.0.0

You can now proceed to the voice-specific configurations.

Voice Network Configuration

Following is the voice-specific configuration:

!(lines omitted)
voice-port 1
 timing hookflash-in 0
voice-port 2
 timing hookflash-in 0
voice-port 3
 timing hookflash-in 0
voice-port 4
 timing hookflash-in 0
!(lines omitted)
scheduler max-task-time 5000
dial-peer voice 1 pots
 destination-pattern 1001
 port 1
!
dial-peer voice 10 voip
 destination-pattern 2...
 session target ipv4:192.168.2.8
 codec g711ulaw
! (the codec command is optional)

The commands voice-port X and timing hookflash-in 0 turn off any hookflash indications that the gateway could generate on an FXO interface. Currently the Cisco 827-4V does not support hookflash indications, although that support is probably pending, because it is already available on other Cisco platforms with H.323 Version 2 Phase 2. On an analog phone, hookflash means pressing the switchhook for a moment (about one-half second) to produce a special stutter dial tone. This engages supplemental services, such as call waiting.

The command scheduler max-task-time 5000 is not specific to the 827. It sets how long, in milliseconds, a specific process is handled by the CPU before it reports debugging information; in this case, 5 seconds.

Dial Peer Configuration

Dial peers enable outgoing calls from a particular telephony device. All the voice technologies use dial peers to define the characteristics associated with a call leg.
A call leg is a discrete segment of a call connection that lies between two points in the connection. Bear in mind that these terms are defined from the router's perspective: an inbound call leg means that an incoming call comes to the router, and an outbound call leg means that an outgoing call is placed from the router.

Two kinds of dial peers can be configured for each voice port, POTS and VoIP:

- POTS associates a physical voice port with a local telephone device. The destination-pattern command defines the telephone number associated with the POTS dial peer. The port command associates the POTS dial peer with a specific logical dial interface, normally the voice port connecting the 827-4V to the POTS network. You can expand an extension number into a particular destination pattern with the command num-exp, and you can use the show num-exp command to verify that you have mapped the telephone numbers correctly.
- The VoIP dial peer associates a telephone number with an IP address. The key configuration commands are the same destination-pattern command used with the POTS dial peer, plus the session target command. The former is the same as with the POTS dial peer, defining a telephone number. The session target command specifies a destination IP address for the VoIP dial peer and must be used in conjunction with the destination-pattern command. Going further than the POTS dial peer, you can use VoIP dial peers to define characteristics such as IP precedence, QoS parameters, and codecs. For instance, you can optionally specify a different codec than the default codec of G.729.

For both POTS and VoIP, after you have configured dial peers and assigned destination patterns to them, you can use the show dialplan number command to see how a telephone number maps to a dial peer.

When a router receives a voice call, it selects an outbound dial peer by comparing the called number (the full E.164 telephone number) in the call information with the number configured as the destination pattern for the POTS peer. The router then strips the left-justified numbers corresponding to the destination pattern matching the called number. On POTS dial peers, the only digits that are sent to the other end are the ones specified with the wildcard character (.) in the destination-pattern string. The POTS dial peer command prefix string can be used to include a dial-out prefix that the system enters automatically instead of having people dial it. If you have configured a prefix, it is put in front of the remaining numbers, creating a dial string, which the router then dials. If all the numbers in the destination pattern are stripped, the user receives (depending on the attached equipment) a dial tone.

For example, suppose there is a voice call whose E.164 called number is 1 (310) 555-2222. If you configure a destination pattern of 1310555 and a prefix of 9 followed by a comma, the router strips 1310555 from the E.164 telephone number, leaving the extension number 2222. It then appends the prefix to the front of the remaining numbers, so that the actual digits dialed are 9,2222. The comma in this example means that the router pauses for 1 second between dialing the 9 and dialing the first 2 to allow for a secondary dial tone.
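To see how the router resolves a particular number against the configured dial peers, you can query the dial plan directly. The command is standard IOS; the number shown is the POTS peer's pattern from the running example:

Router# show dialplan number 1001

The output reports which dial peer matched (dial-peer voice 1 pots in this configuration) and the attributes the call would use, which is a quick way to catch destination-pattern typos before placing test calls.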
Earlier, you defined a class called VOICE with the class-map command. You matched access control list 101 with this class of service using the match access-group command. You also defined a policy map with the policy-map command. Those commands are shown here, along with some new options:

class-map VOICE
 match access-group 101
!
policy-map POLICY
 class VOICE
  priority 480
!
pvc 1/32
 service-policy out POLICY
 vbr-rt 640 640 10
 encapsulation aal5snap
!
bundle-enable
!
dial-peer voice 1 pots
 destination-pattern 1001
 port 1
!
dial-peer voice 10 voip
 destination-pattern 2...
 session target ipv4:192.168.2.8
 ip precedence 5
!
access-list 101 permit ip any any precedence critical

The command priority 480 defines the priority of the VOICE class in terms of guaranteed bandwidth. In this case, if there is congestion on the network, even this priority traffic is dropped when it exceeds 480 kbps. This ensures that the nonpriority traffic is not starved. The command service-policy out POLICY attaches the policy map to this particular PVC, 1/32; the policy map could also be attached to an interface, either inbound or outbound. The command vbr-rt 640 640 10 defines Variable Bit Rate Real Time (suitable for this voice traffic) with a PCR of 640 kbps, an SCR of 640 kbps, and a maximum burst size of ten cells in a single burst. This PVC's encapsulation type is aal5snap, suitable for either RFC 2684 bridging (IRB or RBE) or PPPoE. The command bundle-enable creates ATM PVC bundles, about which you learned earlier.

The command dial-peer voice 10 voip uses one of the two dial peer types explained earlier. Two values at a minimum are required to configure a VoIP peer: the associated destination telephone number and a destination IP address. The command destination-pattern defines the destination telephone number; in this configuration example, the last digits in the VoIP dial peer's destination pattern are replaced with wildcards. The POTS peer's pattern is associated with port 1. The ip precedence command sets precedence 5, preferred for voice. Last, the access-list command clears the way through access list 101 for any IP traffic and sets that traffic's precedence as critical.

Returning to check the steps in configuring the 827-4V for data and voice, you are now ready for the last step, which completes the process by configuring the VoIP dial peers for H.323 signaling.

VoIP Dial Peers for H.323 Signaling

The H.323 signaling protocol was explained in Chapter 4, "Cisco DSL Products." Following is the configuration:

interface ATM0
 h323-gateway voip interface
 h323-gateway voip id GATEKEEPER ipaddr 192.168.1.2 1719
 h323-gateway voip h323-id GATEWAY
!
!(lines omitted: define telephone number, specify port number)
!
dial-peer voice 10 voip
 destination-pattern +.T
 session target ras
gateway

The first H.323-related command in this configuration, h323-gateway voip interface, identifies the interface ATM0 as the gateway interface. The command h323-gateway voip id GATEKEEPER ipaddr 192.168.1.2 1719 defines the name and location (IP address) of the gatekeeper for this gateway. The next command, h323-gateway voip h323-id GATEWAY, defines the gateway's H.323 name, identifying this gateway to its associated gatekeeper. The command destination-pattern +.T introduces a new value: the plus sign (+) indicates an E.164 standard number, and the T indicates the default route. The command session target ras specifies the destination as having Registration, Admission, and Status (RAS) functionality, providing gateway-to-gatekeeper communication. Finally, the one-word command gateway defines this 827-4V as the H.323 gateway device.

Completing the 827-4V Configuration

You can now complete the configuration of the Cisco 827-4V.
Figure 6-4 shows the use of the Cisco 827-4V configured for RFC 2684 bridging (IRB) and VoIP.

Figure 6-4. Cisco 827-4V Using IRB for VoIP

When the Cisco 827 replaces existing bridged DSL modems, the IRB configuration is a typical starting point. Although the Cisco 827-4V supports voice service in all the other previously discussed architectures, in which the network scheme would be different, IRB is shown here simply as an example. The new commands are explained after the configuration listing. Here is the configuration required for the 827-4V in this legacy replacement scenario:

version 12.1
service timestamps debug datetime msec
service timestamps log datetime msec
!
hostname R1
!
bridge irb
!
interface Ethernet0
 no ip mroute-cache
!
interface ATM0
 no ip address
 no atm ilmi-keepalive
 pvc 1/150
  encapsulation aal5snap
 bundle-enable
 bridge-group 1
 hold-queue 224 in
!
interface BVI1
 ip address 172.16.0.1 255.255.0.0
!
ip classless
ip route 0.0.0.0 0.0.0.0 BVI1
no ip http server
!
bridge 1 protocol ieee
bridge 1 route ip
!
voice-port 1
 timing hookflash-in 0
!
voice-port 2
 timing hookflash-in 0
!
voice-port 3
 timing hookflash-in 0
!
voice-port 4
 timing hookflash-in 0
!
dial-peer voice 1 pots
 destination-pattern 2222
 port 1
!
dial-peer voice 2 voip
 destination-pattern 1111
 session target ipv4:172.16.0.3
!

The command bridge irb enables RFC 2684 bridging (IRB) for the whole Cisco 827-4V router. The command no ip mroute-cache disables IP multicast fast switching; in this Cisco 827-4V, it is disabled on the Ethernet interface. When packets arrive on this Ethernet interface for a multicast routing table entry with mroute caching disabled, those packets are sent at process level for all interfaces in the outgoing interface list. When packets leave via this Ethernet interface for a multicast routing table entry, the packet is process-level-switched for this interface, but it may be fast-switched for other interfaces in the outgoing interface list. The command bridge-group 1 specifies the bridge group to which the interface belongs. The command interface BVI1 creates a BVI and assigns a corresponding bridge group number to that BVI, as discussed earlier in this chapter. The command bridge 1 protocol ieee is an IOS standard specifying the Spanning Tree Protocol for bridge group 1. The command bridge 1 route ip lets the BVI accept and route routable packets received from its corresponding bridge group. You must enter this command for each routed protocol (such as IPX) that you want the BVI to route from its corresponding bridge group to other routed interfaces.

You are now done configuring the Cisco 827-4V for IRB and VoIP. Look again at Figure 6-3 and consider the more complex explanation of the use of your new, complete configuration. This figure shows a voice scenario configuration using the Cisco 827-4V router in an H.323 signaling environment. Traffic is routed through the 827 router and then switched onto the ATM interface. The 827 router is connected through the ATM interface through one PVC, which is associated with the QoS policy called MYPOLICY. Data traffic coming from the Ethernet must have an IP precedence below 5 (critical) to distinguish it from voice traffic. NAT (represented by the dashed line at the edge of the 827 routers) signifies two addressing domains and the inside source address. The source list defines how the packet travels through the network.

Now that you have configured the 827-4V as a voice-carrying router, you need to configure the PVC endpoint.
An interesting option is to use multiple PVCs. Multiple PVCs, separating voice and data, create an easily expandable, easily traced configuration, although this is not required for minimal functionality. Here is that configuration:

!(lines omitted)
interface ATM0.1 point-to-point
 ip address 192.168.2.1 255.255.255.0
 pvc 1/35
  protocol ip 192.168.2.2 broadcast
  vbr-rt 424 424 5
  encapsulation aal5snap
!
interface ATM0.2 point-to-point
 pvc 1/36
  ! data PVC
  protocol ip 192.168.3.2 broadcast
  encapsulation aal5snap
!
dial-peer voice 1 pots
 destination-pattern 1001
 port 1
!
dial-peer voice 10 voip
 destination-pattern 2...
 session target ipv4:192.168.2.8

In this configuration, the first PVC is for voice service. It is configured on a point-to-point subinterface, ATM0.1. This IP PVC has a point-to-point IP address of 192.168.2.1, with a subnet mask of 255.255.255.0. Then the service class of Variable Bit Rate Real Time is set, with a PCR of 424 kbps, an SCR of 424 kbps, and a maximum burst size of five cells in a single burst. This voice PVC's encapsulation is aal5snap.

Troubleshooting the Cisco 827

The first thing you should do when troubleshooting the Cisco 827 is check the front panel CD LED. If the light is not on, no ADSL carrier is detected. Usually this is a physical problem, probably due to a bad cable or a problem with the ADSL line or WAN service. You can try replacing the cable, but you will probably have to contact the DSL provider.

Another simple solution to 827 problems might lie with the ATM interface. To verify its status, you can enter the command show interface ATM 0. If the status is up/down, the Cisco 827 sees the ADSL carrier but cannot train up with the central office (CO)/exchange IP-DSL switch properly. In this case, check the cable itself. The Cisco 827 uses pins 3 and 4 of the ADSL cable. The ADSL cable must be 10BASE-T Category 5 unshielded twisted-pair (UTP) cable; using regular telephone cable can introduce line errors. Contact your ADSL line or service provider to determine whether there is a problem.

If the Cisco 827 does not establish a satisfactory DSL circuit to the CO/exchange ADSL port, you can observe the process of DSL synchronization as the 827 trains up to help isolate the problem. The following are the normal stages of the synchronization, so that you can verify which steps are occurring correctly. To observe the training process, enter the command debug atm events and watch for these outputs.

Normal activation state changes:
- STOP: In shutdown state
- DLOAD_1: Initialized and downloading first image
- DLOAD_2: Downloading second image
- DO_OPEN: Requesting activation with CO

In the DO_OPEN state, look at the modem state for the progress information:
- Modem state = 0x0: Modem down
- Modem state = 0x8: Modem waiting to hear from CO
- Modem state = 0x10: Modem heard from CO and now is training
- Modem state = 0x20: Activation completed and link is up
- SHOWTIME: Activation succeeded
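Alongside debug atm events, the 827 can summarize the line state on demand. The following show command exists on the 827's ADSL-capable IOS images, though the exact fields reported vary by release:

Router# show dsl interface atm 0

The output includes the modem state and, once the line has trained, the negotiated upstream and downstream rates, which is usually enough to distinguish a cabling fault from a provider-side problem.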
The format of the 'Virus Top Twenty' reports from Kaspersky Lab has changed as of July 2008. The previous method used to compile these reports and to assess the current threat landscape was based on data generated by analysing email traffic and the files checked using our Online Scanner. However, this method no longer provides an accurate reflection of the changing nature of malicious threats; email is no longer the main attack vector, and our data shows that malicious programs make up a very small proportion of all mail traffic.

From July 2008 onwards, the Top Twenty will be compiled using data generated by Kaspersky Security Network (KSN), a new technology implemented in the 2009 personal product line. This data not only makes it possible for Kaspersky Lab to get timely information about threats and to track their evolution, but also makes it possible for us to detect unknown threats, and roll out protection to users, as quickly as possible.

The 2009 personal products haven't yet been officially launched in all countries, for example, in Russia and the USA. The data presented in this report therefore provides an objective reflection of the threat landscape in the majority of European and Asian countries. However, in the near future, such reports will include data provided by users in other countries of the world.

The data received from KSN in July 2008 has been used to compile the following rankings. The first is a ranking of the most widespread malicious, advertising, and potentially unwanted programs; the figures given are a percentage of the number of computers on which threats were detected. As the rating is compiled using data received during the course of a single month only, it's very hard to make any predictions. However, future reports will include such forecasts.

Nonetheless, it is possible to divide all the malicious and potentially unwanted programs shown above into the fundamental classes used by Kaspersky Lab in its classification: TrojWare, VirWare, AdWare and Other MalWare. Clearly, most of the time, victim machines are attacked by a wide range of Trojan programs. Overall, in July 2008, there were 20,704 unique malicious, advertising, and potentially unwanted programs detected on users' computers; our data indicates that approximately 20,000 of these were found in the wild.

The second Top Twenty provides figures on the most common malicious programs among all infected objects detected. The majority of the programs listed are able to infect files. These figures are interesting because they indicate the spread of threats that need to be disinfected, rather than simply dealt with by deleting infected objects. GetCodec.d, a program we talked about recently, is among the malicious programs in the rankings. We recently issued an announcement (http://www.kaspersky.com/news?id=207575664) about this worm, which infects audio files; its presence in the Top Twenty indicates that it is spreading actively.

Details of change in position, and the proportion of all malicious, advertising, and potentially unwanted programs, as shown in previous reports, will be provided from August onwards.
Climate Economy: Why Weather Means Business

For biologists, the year 2000 is remembered for the unveiling of the human genome. For climate scientists, the start of the new millennium was famous for another reason – that is, until recently. The streak from November 1999 to October 2000 is no longer the hottest 12-month period on record, according to the U.S. National Oceanic and Atmospheric Administration (NOAA). The recent mild winter and warm spring in the U.S. were a pleasant surprise for some, but they were also historic: May 2011 to April 2012 was the warmest 12-month stretch ever recorded in the U.S. (since records began in 1895), edging out the previous milestone by 0.1°F and beating the 20th-century average by 2.8°F.

But it isn't only weather in the U.S. making headlines in 2012. In March, Scotland felt its warmest temperature ever recorded; Victoria, Australia, broke monthly rainfall records in one day; and Norway had its warmest March since national records began in 1900. With extreme weather becoming more commonplace, what used to be the province of the scientific community is increasingly moving mainstream, as public and private sectors search for better climate insights and techniques to mitigate risk, limit disruptions and adapt.

Adapting to change

"Whatever the cause, climate and weather patterns have fundamentally changed," says Sharon Hays, vice president of CSC's Office of Science and Engineering and former deputy director for science in the White House's Office of Science and Technology Policy. "Even if you don't want to think about why they're changing, you have to at least be asking how your business might be affected."

CSC supports climate research and helps manage some of the world's largest climate and weather data collection and storage systems. "Climate change can also represent opportunity," says Dan Walker, CSC's chief climate scientist. For example, exploiting knowledge of how emerging climatic conditions may favor certain crops could maximize production even in the face of change. "People are thinking about how to adapt to change," adds Walker, "but what they always stub their toe on is how do they spend their resources most wisely?"

The finances of weather

The financial community certainly thinks about the subject, as evidenced by the U.S. Securities and Exchange Commission's (SEC) issuance of the world's first economy-wide climate risk disclosure requirement. In the SEC's interpretive guidance, it directs companies to evaluate for disclosure purposes the actual and potential material impacts of environmental matters on their business.

NOAA's Comprehensive Large Array-data Stewardship System, a secure and evolutionary environmental data storage and distribution system that CSC helps maintain, has seen an increase in inquiries for weather data, with property and casualty insurers, reinsurers and catastrophe modelers being among the familiar groups of requesters. Commodities traders, in their own right, keep a close eye on this information as well.

"Any perturbation in the climate has ripple effects on everything from grain and cocoa prices all the way through to meat and dairy. These industries are feeling the change in climate more deeply than the rest," says Guy Turner, director of commodity market research at Bloomberg New Energy Finance, noting the power markets' sensitivity to weather, too.
“Businesses that depend on or are highly sensitive to the weather are having to look at those risks and put plans in place to mitigate the potential impact.” CSC supports the insurance industry's interest in weather and climate through a new breed of predictive analytics that can reduce uncertainty of next year’s extreme weather events. Our analytics are solidly grounded in peer-reviewed science and previously untapped climate data from NASA and NOAA, many managed by CSC on the behalf of the United States federal government. A recent Lloyd's of London Emerging Risk report describes the very approach used by ClimatEdge for General Insurance. Impact on infrastructure While many sectors keep a semi-annual or seasonal focus on the weather, others consider a decade a short run. Those involved in infrastructure typically factor the climate, as opposed to weather, into their calculations, and the potential for variability in future patterns has them looking for better data and knowledge. “There’s been a lot of discussion around climate change for quite some time, but traditionally that’s been around energy efficiency and greenhouse gas mitigation,” says Ben Preston, deputy director of the Oak Ridge National Laboratory’s (ORNL) Climate Change Science Institute. “What’s more recent is the thinking about climate adaptation and risk management, particularly over long time scales, and what actions we can implement now that make us more resilient 10, 20 or 50 years down the road.” To help water utilities assess local climate threats and incorporate critical infrastructure needs associated with climate change into long-term capital planning strategies, a CSC team developed a climate resilience evaluation and awareness tool (CREAT) in support of the U.S. Environmental Protection Agency. “This climate data-driven, adaptation decision support tool, which has been piloted at some of the nation’s largest water utilities, will save both water and wastewater utilities — and their public rate-payers — hundreds of millions of dollars in avoided infrastructure impacts due to emerging risks,” says Shalini Jayasundera, CSC program management principal leader. Climate risks also affect other global issues, such as food security, biodiversity and pollution. CSC and ORNL, which is the U.S. Department of Energy’s largest science and energy laboratory, have agreed to collaborate on solutions to help customers address potential effects of climate change. One key challenge is the sheer quantity of the data. “When you’re talking about 100 terabytes of data, that’s orders of magnitude beyond what many people out there in the user community can use or need to use,” Preston says. “Then the question is how we synthesize those data streams in order to get at key bits of information. That’s where folks like CSC, who are quite comfortable working with large data streams, can play a helpful part." JENNY MANGELSDORF is a writer for CSC’s digital marketing team.
Scientists develop a material that enables robots to transform from solid to squishy.

In the movie Terminator 2, the shape-shifting T-1000 robot morphs into a liquid state to squeeze through tight spaces or to repair itself when harmed. A phase-changing material built from wax and foam, capable of switching between hard and soft states, could allow even low-cost robots to perform a similar feat. This new, real-life material could be used to build deformable surgical robots that move through the body to reach a particular point without damaging any of the organs or vessels along the way.

Robots built from the material – developed by Anette Hosoi, a professor of mechanical engineering and applied mathematics at MIT, and her former graduate student Nadia Cheng, alongside researchers at the Max Planck Institute for Dynamics and Self-Organization and Stony Brook University – could also be used in search-and-rescue operations to squeeze through rubble looking for survivors, Hosoi said.

Working with robotics company Boston Dynamics, the researchers began developing the material as part of the Chemical Robots programme of the Defense Advanced Research Projects Agency (DARPA). The agency was interested in 'squishy' robots capable of squeezing through tight spaces and then expanding again to move around a given area, Hosoi said – rather like octopuses do.

But if a robot is going to perform meaningful tasks, it needs to be able to exert a reasonable amount of force on its surroundings, she explained. "You can't just create a bowl of Jell-O, because if the Jell-O has to manipulate an object, it would simply deform without applying significant pressure to the thing it was trying to move."

Also, controlling a very soft structure is extremely difficult: it is much harder to predict how the material will move, and what shapes it will form, than it is with a rigid robot. So the researchers decided that the only way to build a deformable robot would be to develop a material that can switch between a soft and hard state. Hosoi added: "If you're trying to squeeze under a door, for example, you should opt for a soft state, but if you want to pick up a hammer or open a window, you need at least part of the machine to be rigid."
Columbus Debunker Sets Sights on Leonardo da Vinci

By Reuters | Posted 2008-07-29

Gavin Menzies sparked headlines across the globe with the claim that Chinese sailors reached America 70 years before Christopher Columbus. Now he says a Chinese fleet brought encyclopedias of technology undiscovered by the West to Italy in 1434.

LONDON (Reuters) - Leonardo da Vinci's drawings of machines are uncannily similar to Chinese originals and were undoubtedly derived from them, a British amateur historian says in a newly published book.

Gavin Menzies sparked headlines across the globe in 2002 with the claim that Chinese sailors reached America 70 years before Christopher Columbus. Now he says a Chinese fleet brought encyclopedias of technology undiscovered by the West to Italy in 1434, laying the foundation for engineering marvels such as the flying machines later drawn by the Italian polymath Leonardo.

"Everything known to the Chinese by the year 1430 was brought to Venice," said Menzies, a retired Royal Navy submarine commander, in an interview at his north London home. From Venice, a Chinese ambassador went to Florence and presented the material to Pope Eugenius IV, Menzies says. "I argue in the book that this was the spark that really ignited the Renaissance and that Leonardo and (Italian astronomer) Galileo built on what was brought to them by the Chinese.

"Leonardo basically redrew everything in three dimensions, which made a vast improvement."

If accepted, the claim would force an "agonizing reappraisal of the Eurocentric view of history", Menzies says in his book "1434: The Year A Magnificent Chinese Fleet Sailed To Italy and Ignited The Renaissance".

The urbane 70-year-old sold more than a million copies of his first book, "1421", which argued that Chinese sailors mapped the world in the early 1400s shortly before abandoning global seafaring. His theories are dismissed as nonsense by many academics -- Menzies says Chinese fleets reached Australia and New Zealand as well as America before European explorers -- but have gained an international following among readers. "This whole fantasy about Europe discovering the world is just nonsense," said Menzies.

In his latest book -- published in the United States in June and this month in Britain -- Menzies says four ships from the same Chinese expeditions reached Venice, bringing with them world maps, astronomical charts and encyclopedias far in advance of anything available in Europe at the time. Menzies says Leonardo's designs for machines can be traced back to this transfer of Chinese knowledge.

Leonardo, born in 1452, is perhaps best known for his enigmatic "Mona Lisa" portrait of a woman, in Paris's Louvre Museum, but he also left journals filled with intricate engineering and anatomical illustrations. Menzies says designs for gears, waterwheels and other devices contained in Chinese encyclopedias reached Leonardo after being copied and modified by his Italian antecedents Taccola and Francesco di Giorgio.

To support his argument, Menzies publishes drawings of siege weapons, mills and pumps from a 1313 Chinese agricultural treatise, the Nung Shu, and from other pre-1430 Chinese books, next to apparently similar illustrations by Leonardo, Di Giorgio and Taccola. "By comparing Leonardo's drawings with the Nung Shu we have verified that each element of a machine superbly illustrated by Leonardo had previously been illustrated by the Chinese in a much simpler manual," Menzies writes.
"It's very suggestive, very interesting, but the hard work remains to be done," said Martin Kemp, Professor of the History of Art at Oxford University and author of books on Leonardo. "He (Menzies) says something is a copy just because they look similar. He says two things are almost identical when they are not," Kemp said. "It's not strong on historical method," he added. But Kemp said he would look out for any signs that Leonardo had access to Chinese material, directly or indirectly, when studying his manuscripts in future.
Chapter 3: Common IPv6 Coexistence Mechanisms

As the name suggests, transition mechanisms help in the transition from one protocol to another. From the perspective of IPv6, transition basically means moving from IPv4 to IPv6. One day, IPv6 networks will completely replace today's IPv4 networks, but for the near term a number of transition mechanisms are required to enable both protocols to operate simultaneously. Some of the most widely used transition mechanisms are discussed in the following sections.
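One of the most common coexistence approaches is dual stack, where a node runs IPv4 and IPv6 side by side. As a minimal sketch (my illustration, not code from the book), a POSIX server can serve both protocol families from a single IPv6 socket by clearing the IPV6_V6ONLY option; the port number here is arbitrary:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET6, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Clear IPV6_V6ONLY so the socket also accepts IPv4 clients
       via IPv4-mapped IPv6 addresses (::ffff:a.b.c.d). */
    int off = 0;
    setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &off, sizeof(off));

    struct sockaddr_in6 addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin6_family = AF_INET6;
    addr.sin6_addr   = in6addr_any;      /* listen on :: */
    addr.sin6_port   = htons(8080);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        close(fd);
        return 1;
    }
    listen(fd, 16);
    /* ... accept() now serves IPv4 and IPv6 clients alike ... */
    close(fd);
    return 0;
}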
A relatively new email-hijacking virus affecting Gmail accounts is spreading online and turning unsuspecting Gmail users into spammers. Known as Gmail Filter Virus, it takes control of your account and starts sending out spam emails on your behalf without you even realizing it.

If you have been using Gmail for a while, you should be familiar with the filter feature. Filters organize your mail into different categories and are also used to automate the actions taken on the mail you receive. Gmail Filter Virus creates filters for spam email messages.

Gmail Filter Virus targets your settings by adding filter rules for incoming email. It automates a process to receive messages without adding them to the inbox. Once a message arrives, the virus marks it as read and forwards it to your contacts. The message is later deleted, so that you do not notice that this unwanted activity has ever occurred.

The origin of this infection is not clear. It should be noted, however, that there is no specific malicious file that spreads the virus; it is a vulnerability within Gmail accounts. This means that you could infect your email when logging on from a different computer as well. The good news is that Gmail Filter Virus cannot corrupt or steal your personal data.

If you want to check whether Gmail Filter Virus has infiltrated your email account, log into Gmail and go to Settings. Once there, select the Filters tab and check whether any filters are listed that you do not recognize. If there are, the virus has infiltrated your account.

Google is aware of this vulnerability and is taking steps to resolve the issue on the server; however, there are a few tips you can follow to take care of your account. First, change your password and your secret question and answer in your Gmail account as soon as you can. Second, delete all filters created by the virus. And finally, scan your PC with a reliable anti-malware utility. It is important to have a reputable malware prevention and removal tool installed on your computer, as it is the only way to be sure that malicious programs do not damage your system and steal your private data.
According to cvedetails.com, more than 1,305 vulnerabilities have been found in the Linux kernel since 1999, sixty-eight of them in 2015. Most of them do not cause many problems (they are marked as Local and Low), and some may cause problems only in combination with certain applications or OS settings. In reality these numbers are not that big, but the kernel is not the entire OS: vulnerabilities are also found in GNU Coreutils, Binutils, glibc and, of course, user applications. Let's take a look at the most interesting of the bunch.

Vulnerabilities in the Linux kernel


The vulnerability found in June in the "__driver_rfc4106_decrypt" function of the "arch/x86/crypto/aesni-intel_glue.c" file in the Linux kernel is related to the fact that the RFC4106 implementation for x86 processors supporting the AES-NI instruction set extension (Intel Advanced Encryption Standard New Instructions) miscalculates buffer addresses in certain cases. If an IPsec tunnel is set to use this mode (the AES algorithm with CONFIG_CRYPTO_AES_NI_INTEL), the vulnerability may lead to corrupted memory contents, crashes and, potentially, remote execution of CryptoAPI code. Most notably, this problem can appear by itself in fully legal traffic, without any external intrusion. As of the moment of publication, this problem has been fixed.

Five vulnerabilities have been identified in the "ozwpan" driver of Linux 4.0.5, which has experimental status. Four of them allow a DoS attack that crashes the kernel by sending specially crafted packets. The problem is connected with buffer overflow due to incorrect handling of signed integers, where the calculation between "required_size" and "offset" passed to "memcpy" returned a negative figure, with the result that data was copied onto the heap. It is found in the "oz_hcd_get_desc_cnf" function in "drivers/staging/ozwpan/ozhcd.c" and in the "oz_usb_rx" and "oz_usb_handle_ep_data" functions of the "drivers/staging/ozwpan/ozusbsvc1.c" file. The other vulnerabilities involve possible division by zero, system looping, and reads from areas beyond the bounds of the allocated buffer.

The ozwpan driver, one of the newer additions to Linux, can be linked to existing wireless devices compatible with Ozmo Devices (Wi-Fi Direct) technology. It provides a USB host controller, but the tricky thing is that instead of a physical connection, the peripherals interact via Wi-Fi. The driver accepts network packets of type "0x892e" (ethertype), deconstructs them and transfers them to different parts of the USB functionality. It is still rarely used, so it can be disabled by unloading the "ozwpan.ko" module.
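The ozwpan bugs follow a classic pattern: a signed length computation goes negative and is then converted to the unsigned size_t that memcpy expects. A minimal, hypothetical illustration of the pattern (not the driver's actual code; all names are invented):

#include <stdio.h>
#include <string.h>

#define BUF_SIZE 64

/* Hypothetical packet handler illustrating the signed-size pitfall. */
static void handle_chunk(const char *payload, int required_size, int offset)
{
    char buf[BUF_SIZE];
    int len = required_size - offset;   /* both values attacker-controlled */

    /* A negative len sails past this check... */
    if (len > BUF_SIZE)
        return;

    /* ...and becomes a huge unsigned value here: converting -1 to
       size_t yields 0xFFFFFFFFFFFFFFFF on 64-bit systems. */
    memcpy(buf, payload, (size_t)len);
    (void)buf;
}

int main(void)
{
    const char payload[8] = "data";
    handle_chunk(payload, 4, 0);        /* benign call */
    /* handle_chunk(payload, 0, 1); would request a copy of
       (size_t)-1 bytes and corrupt memory. */
    puts("the fix: reject len < 0, or validate using size_t throughout");
    return 0;
}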
OS: Linux Ubuntu 12.04-15.04 (kernels before June 15, 2015)

A critical vulnerability in the OverlayFS file system lets users gain root rights on Ubuntu systems where OverlayFS partition mounting by unprivileged users is allowed. The default settings required to exploit the vulnerability are present in all Ubuntu 12.04-15.04 branches. OverlayFS itself appeared in the Linux kernel rather recently, in "3.18-rc2" (2014); it is a SUSE development intended to replace UnionFS and AUFS. OverlayFS allows the creation of a virtual multi-layer file system that combines several parts of other file systems. The FS is created from a lower and an upper layer, each attached to a different directory. The lower layer is read-only and may be any Linux-supported FS, including network ones. The upper layer is usually writable and overlays the lower layer's data where files are duplicated. OverlayFS is used in live distributions, in container virtualization systems and for organizing containerized operation of several desktop applications.

User namespaces allow the creation of container-specific sets of user and group IDs. The vulnerability is caused by incorrect checks of access rights during the creation of new files in the lower FS directory. If the kernel is built with the "CONFIG_USER_NS=y" parameter (enabling user namespaces) and the "FS_USERNS_MOUNT" flag is set during mounting, OverlayFS may be mounted by a regular user in another namespace, including one in which root-rights operations are permitted. In that case, file operations performed with root rights in such a namespace get the same privileges as actions on the underlying FS. It is therefore possible to mount any FS partition and view or modify any file or directory.

As of the moment of publication, a kernel update with the corrected OverlayFS module is available from Ubuntu, so if the system is updated there shouldn't be any problems. Where updating is impossible, a temporary measure is to stop using OverlayFS by removing the "overlayfs.ko" module.

Vulnerabilities in main applications

Vector: Local, Remote

A dangerous vulnerability was found in the standard GNU glibc library, which is a core part of Linux, and in certain versions of Oracle Communications Applications and Oracle Pillar Axiom. It was identified during a code audit by researchers from Qualys and has received the code name GHOST. The vulnerability is a buffer overflow inside the "__nss_hostname_digits_dots()" function, which is used to resolve node names by such glibc functions as "gethostbyname()" and "gethostbyname2()" (hence the name GetHOST). To exploit the vulnerability, one needs to cause a buffer overflow by supplying an invalid host name argument to an application that performs name resolution via DNS. Theoretically, the vulnerability can be exploited in any application that uses the network to some extent, it can be triggered locally or remotely, and it allows arbitrary code execution.

The most interesting thing is that the bug was corrected back in May 2013, and a patch was introduced between glibc releases 2.17 and 2.18, but the problem was not classified as a security fix, so nobody paid attention to it. As a result, many distributions remained vulnerable. It was initially reported that the first vulnerable version was 2.2 (November 10, 2000), but the flaw might stretch all the way back to version 2.0. Among others, RHEL/CentOS 5.x-7.x, Debian 7 and Ubuntu 12.04 LTS were exposed; corrections are now available. The Qualys researchers also offered a utility that demonstrates the nature of the vulnerability and helps users check their systems. Everything is fine in Ubuntu 12.04.4 LTS:

$ wget https://goo.gl/RuunlE
$ gcc gistfile1.c -o CVE-2015-0235

A module for Metasploit was released almost instantly that allows remote execution of code on x86 and x86_64 Linux with a working Exim mail server (with the "helo_try_verify_hosts" or "helo_verify_hosts" option activated). Other uses then appeared, for instance a Metasploit module (http://goo.gl/SuXP2I) for scanning WordPress blogs.
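A quick, non-authoritative first check is simply to ask which glibc a system runs; a minimal sketch (my illustration, not the Qualys tester; note that distributions backported the fix, so the version alone is not conclusive and the published test utility remains authoritative):

#include <gnu/libc-version.h>
#include <stdio.h>

int main(void)
{
    /* Unpatched upstream releases before 2.18 were affected by GHOST,
       but distro builds may carry backported fixes: treat as a hint. */
    puts(gnu_get_libc_version());   /* e.g. "2.15" */
    return 0;
}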
A bit later in 2015, three other vulnerabilities were discovered in GNU glibc that let remote users perform a DoS attack or rewrite memory cells beyond the stack: CVE-2015-1472, CVE-2015-1473, CVE-2015-1781.

OS: Linux (GNU Coreutils)

Vector: Local, Remote

GNU Coreutils is one of the main *nix packages, including pretty much all the basic utilities (cat, ls, rm, date...). The problem was found in "date". A bug in the "parse_datetime" function allows a remote user without an account on the system to cause a denial of service and, possibly, execute arbitrary code, using a specially formulated date string with a time zone. The vulnerability looks like this:

$ touch '--date=TZ="123"345" @1'
$ date -d 'TZ="Europe/Moscow" "00:00 + 1 hour"'
$ date '--date=TZ="123"345" @1'
*** Error in `date': free(): invalid pointer: 0xbfc11414 ***

If there is no vulnerability, we get a message about an incorrect date format. Almost all Linux distribution developers have already addressed the vulnerability, and an update is available.

OS: Linux (grep 2.19-2.21)

Vulnerabilities in the "grep" utility (used for searching text against a pattern) are rarely found. However, this utility is often invoked by other software, including system components, which is why vulnerabilities in it are much more problematic than they first seem. An error in the "bmexec_trans" function in "kwset.c" may lead to the reading of uninitialized data from an area beyond the allocated buffer, or to application failure. A hacker could exploit this by crafting a special data set and feeding it to the application via "grep -F". Updates are available. There are no known exploits for this vulnerability, nor a Metasploit module.

Vulnerability in FreeBSD

Vector: Local, Remote

CVE: CVE-2014-0998, CVE-2014-8612, CVE-2014-8613

Far fewer FreeBSD vulnerabilities appear in the CVE database for 2015: just six, to be precise. Three of them were found at once in FreeBSD 8.4-10.1 at the end of January 2015 by researchers from the Core Exploit Writers Team. CVE-2014-0998 relates to the VT (Newcons) console driver, which provides several virtual terminals and is enabled by the "kern.vty=vt" parameter in /boot/loader.conf.

CVE-2014-8612 manifests when the SCTP protocol is used and is caused by a bug in checking the SCTP stream ID on SCTP sockets (local port 4444), resulting in memory overflow in the "sctp_setopt()" function (sys/netinet/sctp_userreq.c). It lets a local, unprivileged user write or read 16 bits of kernel memory and escalate privileges on the system, expose confidential data or disrupt the system.

CVE-2014-8613 triggers a null-pointer dereference during the processing of an externally received SCTP packet while the SCTP_SS_VALUE option of an SCTP socket is being set. In contrast to the previous issues, CVE-2014-8613 can be used to crash the kernel remotely by sending specially crafted packets. In FreeBSD 10.1 one can protect oneself by setting "net.inet.sctp.reconfig_enable" to 0, thereby prohibiting the processing of RE_CONFIG blocks, or simply prohibit applications (browsers, email clients, etc.) from using SCTP connections. In any case, the developers released an update as of the moment of publication.

Vulnerability in OpenSSL

A critical "Heartbleed" vulnerability was found in 2014 in OpenSSL, the widely used cryptographic package for SSL/TLS.
The incident caused massive criticism of the code quality at the time, leading to alternatives such as LibreSSL, but it also spurred the developers themselves to get down to business. This year's critical vulnerability was identified by Adam Langley of Google and David Benjamin of BoringSSL. The changes introduced in OpenSSL versions 1.0.1n and 1.0.2b made OpenSSL try to find an alternative certificate verification chain when the first attempt to construct a chain of trust failed. An error in that logic allowed an attacker to bypass the certificate verification procedure and establish a "verified" connection using a false certificate. In other words, users could be quietly lured to fake websites or email servers, or subjected to any MITM attack in which a certificate is used. After the vulnerability was detected, the developers released versions 1.0.1p and 1.0.2d on July 9, in which the problem is rectified. The vulnerability is absent from versions 0.9.8 and 1.0.0.

The end of fall was marked by the appearance of a whole range of encryption ransomware: first Linux.Encoder.0, followed by the Linux.Encoder.1 and Linux.Encoder.2 modifications, which infected more than 2,500 websites. According to antivirus companies, Linux and FreeBSD servers were attacked through websites running various CMSes, including WordPress, Magento CMS, Joomla and others. The attackers exploited an unidentified vulnerability to plant a shell script (the error.php file), which was then used for further actions through a browser. In particular, it launched the Linux.Encoder trojan, which determined the OS architecture and started the encryptor. The encryptor ran with web-server rights (www-data on Ubuntu), which is enough to encrypt the files in the directories storing the website's files and CMS components. Encrypted files receive the ".encrypted" extension. The encryptor also tries to traverse other OS directories; if rights are configured incorrectly, it can go well beyond the borders of the website. A "README_FOR_DECRYPT.txt" file containing the file decryption instructions and the attackers' demands was then saved in the directory.

Antivirus companies have already provided utilities for decrypting the affected directories, for instance one from Bitdefender. However, it should be remembered that no decryption utility removes the planted web shell, so the problem may recur. Considering that many users who develop or experiment with website administration often install a web server on a home PC, security measures should be taken: prohibit external access, keep software updated, and run experiments on VMs. Not to mention that the same idea may be used in the future to attack home systems.

No sophisticated software is free of bugs, so we have to accept that vulnerabilities will always be found. However, not all of them present serious problems. Users can make their systems safer by taking a few simple steps: uninstall unused software, keep track of new vulnerabilities, always install security updates, set up a firewall, and install an antivirus. Moreover, never forget special technologies such as SELinux, which are quite effective if a daemon or a user application is compromised.
Here's a short history of computer science student enrollments. Leading up to the dot-com bust, computer science enrollments soared to new highs, and then they plunged. Like a rock. The number of computer science graduates at Ph.D.-granting institutions reached a low of 8,021 in 2007, down from 14,185 in the 2003-2004 academic year. But it's been rising since.

The number of new undergraduate computing majors at Ph.D.-granting U.S. universities rose by more than 13.4% in the 2012-13 academic year, according to the Computing Research Association's just-released annual report on computer science programs. That was slightly lower than the increases of the previous few years, but it nonetheless represents the sixth straight year of enrollment gains. The dot-com crash of 2001 turned people away from computer science and sent enrollments falling until they bottomed out in 2007.

The number of bachelor's degrees awarded in computer science last year was up 3.7% overall from the previous year, reaching 12,503, according to the CRA; among schools that reported figures for both last year and the previous year, the increase was 9.4%. The number of computer science graduates will continue to rise: enrollments grew by nearly 30% in the 2011-12 academic year, and by 23% the year before that. The trend of enrollment increases since 2010 bodes well for a "future increase in undergraduate computing production," according to the report.

The recession that hit in 2008 sent IT unemployment soaring, but it may have done more damage to the finance sector, especially in terms of reputation. That prompted some educators at the time to predict that the recession might send math-inclined students from the world of hedge funds to computer science.

It's hard to draw a direct apples-to-apples comparison between computer science enrollments and enrollments in business-related disciplines, in part because the number of students pursuing computer science degrees is much smaller. Still, according to government data, 327,500 business bachelor's degrees were awarded in 2006-07, a figure that rose 12% to 366,800 in 2011-12. Meanwhile, the number of bachelor's degrees awarded in computer science has increased by 55%, albeit over a slightly longer period.

There were 63,873 students enrolled in computer science programs last year, compared with 56,307 in 2012. That includes all majors in computer science departments, such as computer engineering. The overall number doesn't include computer science schools that don't have Ph.D. programs. Despite the slowdown in enrollments last year, the reality may be better than the data indicates: among schools that submitted enrollment data to the CRA for its annual Taulbee Survey in two consecutive years, enrollments were up 22%. There are 266 Ph.D.-granting institutions, and 179 of those schools responded to the survey. The list of responding schools includes Harvard, Yale, Princeton, Georgia Tech and Purdue, several schools in the University of California system, including Berkeley and UC Davis, and many of the country's other major state universities.

The data on computer science graduates reflects the fact that women are still underrepresented in the tech workforce. Women accounted for just 14.2% of the recipients of bachelor's degrees in computer science in the 2012-13 academic year. While low, that figure does represent a modest increase from 11.7% in 2010-11.
Meanwhile, just 13.9% of the students enrolled in computer science programs last year were women. The number of Ph.D. degrees granted last year rose 3.2% to 1,991; 58% of them went to non-resident aliens. Artificial intelligence, networking and software engineering, in that order, were the most popular areas of specialization for recipients of doctoral degrees, followed by databases and then theory and algorithms. These five areas "have been the most popular for the past few years," the report said. The job prospects for Ph.D. grads are exceptional: their unemployment rate is currently 0.8%, compared to 0.4% last year, and only 8% of them took jobs outside North America, according to the report.

Patrick Thibodeau covers cloud computing and enterprise applications, outsourcing, government IT policies, data centers and IT workforce issues for Computerworld. Follow Patrick on Twitter at @DCgov, or subscribe to Patrick's RSS feed.
But the increasingly covert nature of attackers is not the only factor behind the global proliferation of cybercrime. Weak business authentication strategies also play a key role, as evidenced by a recently reported exposure of user passwords at a business that was carried out not through a direct attack on the site, but by exploiting the fact that customers reused the same password across different sites, according to CBR. Had businesses required customers to use stronger authentication, this might not have occurred.

The breadth and depth of present-day malware attacks are likely apparent to most people who read the news. What might be less evident is the scale of criminality behind the incidents. According to a report by the RAND Corporation, sponsored by Juniper Networks, cybercrime is as cohesive and strategic a criminal enterprise as any — and likely more profitable than most.

The Cybercrime Black Market is Perhaps Raking in More Money than Drugs

The report's assertion that "in certain respects, the [cybercrime] black market can be more profitable than the illegal drug trade" is as surprising as it is alarming. After all, one does not usually imagine hackers operating with the same force and law-eluding skill as major drug lords. And yet, through a thorough examination of the structure of the black market for cybercrime, RAND uncovered a criminal network of significant complexity and scale.

The primary difference between cybercrime and any other crime is that it does not need an operational base. Because it exists in the virtual sphere, it can and does operate around the world. The scattered nature of the market makes it highly difficult to track.

A Rigidly Hierarchical Structure That is Hard to Trace — and Harder to Breach

Despite being scattered, the report pointed out, the black market retains a chain of command that is very difficult to penetrate. At the bottom of this chain are the mules, a group that represents an easy entry point for taking down cybercriminals. That is because, unlike the hackers themselves, the mules are largely unskilled, and the work they do lies outside the tech sphere. For instance, a mule would come into play if hackers breached an ATM and needed someone to physically pick up the cash. Another example would be a person who mails out money that was acquired through a hack.

Unfortunately, even if a mule is caught, it is unlikely that authorities will be able to extract helpful information from him. That is because the top tier of the hacker enterprise — containing administrators and subject-matter experts — is very closely guarded, and mules are unlikely to have direct communication with any of these people. "Getting to the top tier and involved in high-level, sophisticated crimes still requires personal connections and a good reputation, especially for being trustworthy," the report stated. The hope of actually breaching an administrative hacking operation by means of a mule informant is therefore slim. This leaves authorities often scrambling in the dark in the wake of a breach, not really knowing where to turn.

Safeguarding Personal and Enterprise Identity to Protect Against Attack

A series of large-scale data breaches over the past year has demonstrated the ease with which identities can be stolen and taken advantage of. Unfortunately, authorities are often at a loss about where such attacks are coming from and how to prevent them.
For this reason, it is necessary for all organizations to guard identities with strong authentication measures. Doing so may not defeat cybercrime, but it can protect end users, and any involved institutions or enterprises, from feeling its impact.
Organizations Using the Internet

A former Portuguese colony, occupied by Indonesia after the Portuguese granted independence and pulled their forces out. In September of 1999, the East Timorese overwhelmingly voted for independence in a U.N.-sponsored referendum. Government- and military-supported militias immediately started killing and terrorizing the population and foreign aid workers. By late 1999 things had stabilized with help from the U.N. and Australia.

- East Timor International Support Center — http://www.easttimor.com/ (site no longer available)
- TimorNet — http://www.uc.pt/Timor/TimorNet.html (site no longer available)
- Check the list at: http://www.mathaba.net/www/timor/index.shtml

The Internet Assigned Numbers Authority (IANA) had seemed to recognize East Timor as a country all along, as it assigned the territory the .tp top-level domain. However, to quote RFC 1591 (March 1994), "The IANA is not in the business of deciding what is and what is not a country. The selection of the ISO 3166 list as a basis for country code top-level domain names was made with the knowledge that ISO has a procedure for determining which entities should be and should not be on that list."

Also see the Indonesia section.
In an effort to promote public-sector support of alternative energy technology, researchers in Australia will study noise caused by wind turbines, about which there are many unknowns, the University of Adelaide researchers said. Of particular uncertainty are the low-frequency sounds produced by the large wind turbines found around the world, Chief Investigator and Associate Professor Con Doolan said in a press release.

"This project is aimed at getting to the bottom of what is creating the noise that can cause disturbance," he said in the release. "When we know what is contributing most to that noise — exactly what's causing it — then we can stop it."

The researchers will build a small-scale wind turbine in the university's wind tunnel and will construct an anechoic chamber (a specialist acoustic test room) around it. Using laser diagnostics and arrays of microphones, the researchers will test wind turbines in the lab to recreate real-world scenarios and identify the source of the sound generated. By finding a correlation between aerodynamics and sound production, they hope to identify engineering solutions and influence public policy.

"If we can understand what's creating these sounds, then we can advise governments about wind farm regulation and policy, and make recommendations about the design of wind farms or the turbine blades to industry," Doolan said.
The International Space Station recently took a snapshot of the Korean peninsula that starkly details the night-time power consumption of North and South Korea: North Korea is almost completely dark. From NASA:

"The darkened land appears as if it were a patch of water joining the Yellow Sea to the Sea of Japan. The capital city, Pyongyang, appears like a small island, despite a population of 3.26 million (as of 2008). The light emission from Pyongyang is equivalent to the smaller towns in South Korea. Coastlines are often very apparent in night imagery, as shown by South Korea's eastern shoreline. But the coast of North Korea is difficult to detect. These differences are illustrated in per capita power consumption in the two countries, with South Korea at 10,162 kilowatt hours and North Korea at 739 kilowatt hours."

NASA said the photo is oriented toward the north, and the brightest lights come from Seoul. There are 25.6 million people in the Seoul metropolitan area, more than half of South Korea's citizens.
Ensuring IT Accessibility

Developers can ensure their work is accessible and 508-compliant by doing the following three things:

1. Using common sense

As you're developing software or a Website, the common-sense rule is that, if it's not generally easy to use, it's likely not accessible. Simple measures such as ensuring that fonts are not too small and that the design is not overly complicated will help to improve the accessibility of your application or Website.

2. Leveraging automated tools

Readily available tools can test for the absence of required elements and attributes and determine whether Websites are well-formed and will work with assistive devices. Some free automated tools are available, such as Firefox Accessibility Extensions. Jim Thatcher's online book chapter, Accessibility Checking Software, is a good resource for information on six commercially available Web accessibility testing tools (including WebKing from Parasoft and WebXM from Watchfire).

3. Creating and following checklists

Research the regulations and requirements, and then create a list of the accessibility features your software or Website needs to include. For example, the Web Accessibility Initiative (WAI) created the Web Content Accessibility Guidelines (WCAG), a series of Web-related documents that are part of a larger set of accessibility guidelines. While the WCAG are not intended to cover every aspect of each disability, they do cover broad topics and give developers and Website designers a launching pad from which to create accessible applications.
For a standalone program, it makes no difference. But in a called program it does. For example, suppose program A calls program B. If "STOP RUN" is used in program B, control will not return to program A; instead it returns to the OS. That is why STOP RUN is never used in a called program.

What's the difference between GOBACK and EXIT PROGRAM?

There is little difference between GOBACK and EXIT PROGRAM: both work the same way when coded in a called program, returning control to the caller. But when EXIT PROGRAM is coded in a standalone program, it is ignored and processing simply continues. So it is always better to code GOBACK in any kind of program (standalone or called).
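A minimal sketch of the calling pattern described above (program names PROGA and PROGB are hypothetical, and fixed-format column rules are loosened for readability):

IDENTIFICATION DIVISION.
PROGRAM-ID. PROGA.
PROCEDURE DIVISION.
    CALL 'PROGB'.
    DISPLAY 'BACK IN A'.   *> reached only because B ends with GOBACK
    STOP RUN.              *> ends the run unit, returning to the OS

IDENTIFICATION DIVISION.
PROGRAM-ID. PROGB.
PROCEDURE DIVISION.
    DISPLAY 'IN B'.
    GOBACK.                *> returns control to PROGA
   *> a STOP RUN here would skip 'BACK IN A' and end the whole job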
Wikipedia has a wonderful definition of Big Data. It's complete and full of examples, all backed up by 89 source references. If you think about it, Wikipedia is Big Data. Thanks to Wikipedia (and search engines), you can find expansive definitions and spellings for just about everything.

For all its vastness, I chuckled when I saw it cannot keep up with itself. Its own definition says it has 365 million users and is the #6 most popular web site in the world. Its own page for "most popular web sites" also ranks it #6. But the appeal for donations plastered on all its pages says it has 500 million users and is now the #5 web site. Time to update its own definitions, I would think.

Dear Wikipedia readers: We are the small non-profit that runs the #5 website in the world. We have only 175 staff but serve 500 million users, and have costs like any other top site: servers, power, programs, and staff. Wikipedia is something special. It is like a library or a public park. It is like a temple for the mind, a place we can all go to think and learn. To protect our independence, we'll never run ads. We take no government funds. We survive on donations averaging about $30. Now is the time we ask. If everyone reading this gave $3, our fundraiser would be done within an hour. If Wikipedia is useful to you, take one minute to keep it online and ad-free another year. Please help us forget fundraising and get back to Wikipedia. Thank you.

For journalists like me, Wikipedia is a godsend (I donated). I cannot explicitly attest to the overall accuracy of its 30 million articles in 287 languages, but the ones I have accessed over the years have generally been on the mark. I have heard few challenges to its accuracy, although researchers and students are discouraged from using it as a primary source. It's just too easy to use Wikipedia, just as it was too easy to do math with electronic calculators in the 1970s. We know how that worked out.

Here's how Wikipedia itself sums up what the respected science journal Nature said about its accuracy in 2005: "Wikipedia is often cited for factual inaccuracies and misrepresentations. However, a non-scientific report in the journal Nature in 2005 suggested that for some scientific articles Wikipedia came close to the level of accuracy of Encyclopædia Britannica and had a similar rate of 'serious errors.'"

I have some sympathy about how current its definitions are, given the challenge of keeping up. It need look no further than the pages about itself to understand the impossible race against time. Besides citing producers of Big Data like Walmart, the Large Hadron Collider, the human genome and Facebook, the Wikipedia Big Data page offers critiques, sizes up the market, and runs down research-related projects and architectures. It's more a white paper sans vendor bias than a definition! Indeed, Wikipedia is more encyclopedia than dictionary.

I have read the "editing" section of Wikipedia's definition of itself several times, and I am still not sure how these encyclopedic segments come together so coherently given that they are compiled by outside contributors. Consider: "In a departure from the style of traditional encyclopedias, Wikipedia is open to outside editing. This means that, with the exception of particularly sensitive and/or vandalism-prone pages that are 'protected' to some degree, the reader of an article can edit the text without needing approval, doing so with a registered account or even anonymously."

Now in its 13th year, Wikipedia is an amazing demonstration of how Big Data works for everyone.
By the way, there are 175 staffers at Wikipedia, and I bet more than a few know a thing or two about Big Data.
I see a % sign after my IPv6 address?!

Zone IDs after link-local addresses? What the hell is that? If that is what you see with "ipconfig" on a Windows machine with IPv6 enabled, this article is for you.

If you have one network card (NIC) inside your computer, everything works fine and your computer can speak IPv6 to all others on the local network. On the other hand, if you are one of those strange networking guys who run strange labs on their big PCs, you may have more NICs inserted in your machine. In that case, your PC will have two or more network interfaces, and every one of them will have the same link-local network identifier, fe80:0000:0000:0000.

If you go back to networking fundamentals, you will remember that a host (or router) with multiple interfaces cannot have two of them addressed from the same subnet. Suppose you want to ping the address fe80::5c9f:bc10:bb38:63ec and your computer has two NICs with the addresses fe80::1111:1111:a000:0001 and fe80::5555:5555:5555:1111. Out of which interface will the ping exit the computer? On both? On a random one? This is not going to work.

To resolve this issue, a zone ID is added to every NIC. This is the mysterious number after the % sign in an IPv6 link-local address. The number is basically an interface ID. In the network example here, the interfaces have zone IDs 18, 19, 20 and 21, respectively. This number distinguishes the network segments by using a numeric zone ID following a percent sign after the IP address.

The zone ID is locally significant and enables us to specify out of which interface we want to send some traffic. If you want to ping a neighbouring computer, you need to specify the neighbour's IPv6 link-local address plus the zone ID of your computer's network adapter that faces that computer.

This article has now been completely rewritten, hopefully without any big mistakes.
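The zone ID also surfaces in the sockets API: on systems whose resolver supports scoped address literals, getaddrinfo() parses the text after the % sign into the sin6_scope_id field of sockaddr_in6. A minimal POSIX sketch (the interface name "eth0" is a placeholder; on Windows the zone ID is the numeric interface index instead, e.g. %18):

#include <stdio.h>
#include <netdb.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
    struct addrinfo hints = {0}, *res;
    hints.ai_family = AF_INET6;
    hints.ai_flags  = AI_NUMERICHOST;   /* address literal, no DNS lookup */

    int rc = getaddrinfo("fe80::5c9f:bc10:bb38:63ec%eth0", NULL, &hints, &res);
    if (rc != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
        return 1;
    }

    struct sockaddr_in6 *sa = (struct sockaddr_in6 *)res->ai_addr;
    printf("zone (scope) id: %u\n", sa->sin6_scope_id);  /* eth0's index */
    freeaddrinfo(res);
    return 0;
}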
In the modern world, fiber optics are gaining more and more popularity in communication networks. They are trusted in every kind of communication nowadays; this special kind of cable is the backbone of every known network, whether it carries telecommunications or live broadcast television channels. When bulk data transfer is needed, high-end fiber communication is considered the best choice, with data transferred in the form of light pulses. That was a short introduction; now we come to fiber optic patch cables.

A fiber optic patch cable is a two-fiber cable that uses exactly the same connector type and optical fiber type as the optical fiber cabling it is connected to. Sometimes we also refer to it as a fiber optic jumper. The terms fiber optic patch cable and fiber optic jumper are often interchanged, but as it happens they are different: a patch cable is a two-fiber cable, whereas the term fiber optic jumper is usually used to describe a single-fiber cable. A fiber jumper is defined in IEEE 802.3 as an optical jumper cable assembly used for bidirectional transmission and reception of information; it can be a single-fiber cable or a multi-fiber cable. The jumper cable attached to the light source is known as the transmitter jumper, and the fiber jumper cable attached to the fiber optic power meter is known as the receiver jumper; you may also see these test fiber jumpers referred to as reference jumpers. However they are named, fiber jumpers are a critical part of your fiber optic test equipment setup.

Fiber patch cables act like joints: they are used to join two runs of optic cable in order to make a third connection. The most important consideration when choosing a patch cable is its compatibility with the original cable; choose the wrong cable and it won't work. The second consideration is the rate of data transfer. Different types of these cables have different data transfer rates, and when you join them through patch cables you need to make sure the data rate of the patch cables matches the data rate of the original cable. If it doesn't match, there will be a lag in communication, which could cause a delay or total loss of information.

There are some other advantages too. For instance, they offer a very high speed of data transfer: fiber optic patch cables are made to possess a little more speed than usual fiber cables, to complement what's needed when they are fitted to the network. Another factor that is higher in such cables is bandwidth; they offer a higher bandwidth than normal fiber optic cables. Last but not least is the security factor: these patches are made to operate very securely at any level, and it is nearly impossible to break into them.

They are the best answer for your home communication needs, whether you need a high-speed internet connection or want to connect your TV to a satellite antenna. These patch cables are best because you just need to take them, connect them to any place in the fiber network, and they will fulfil all your needs. So I hope you will consider fiber patch cables for your home communication needs after reading about so many advantages, and also because I have installed them in my own home. They are a little expensive, but that comes with quality.
The Computer Conservation Society (CCS) has commissioned a working replica of EDSAC, the world's first operational stored-program computer. The Electronic Delay Storage Automatic Calculator (EDSAC) was a general-purpose research tool at Cambridge University, which led directly to the development of the first business computer.

The EDSAC was the brainchild of Sir Maurice Wilkes, who died in November 2010. Wilkes was in charge of the Mathematical Laboratory in Cambridge after the war, and he attended a series of lectures in the US in 1946. The trip back from the US, which took four days, gave Wilkes the time to sketch out plans for the design of EDSAC.

While the Manchester Baby is considered the first computer, David Hartley, chairman of the Computer Conservation Society, described it as a proof of concept, whereas the EDSAC was "the first practical stored program computer", he said. It was used to aid scientific research at Cambridge. Hartley said, "EDSAC replaced a human being with a desk calculating machine. It was 1,500 times faster than the human."

The project is being run by Hartley, along with Kevin Murrell, who specialises in post-1945 computer history, Martin Campbell-Kelly, who created the EDSAC emulator, and Chris Burton, who rebuilt the Manchester Baby.

The first IT department

EDSAC was not only the first computer; it also heralded the concepts of the user, the IT function and reusable programs, known as subroutines. Murrell said, "It is a very important project for the museum. EDSAC is the first general purpose machine. It was built to answer the problems of researchers who would never have the expertise to program the machine themselves. There were people queuing up with their programs [to feed into the computer]." In fact, Murrell said, the concept of users did not exist before the EDSAC machine.

Programs were loaded using punched tape. A punch card operator would key in the codes for the program, which would be printed out on paper tape. The tape contained the code for the program, which was then fed into the EDSAC, where it was executed. The results would then be printed out. He said, "There is an awful lot we don't know about how it was used on a day-to-day basis. The rebuild is a fantastic opportunity to understand the engineering."

The coding itself was quite insightful, according to Murrell. The IT function at Cambridge created computer programs that could be reused in more complex programs, such as the square root function. This concept is the basis of modern programming and allows programmers to use programming libraries that contain subroutines that have already been tested and debugged, saving the considerable programming effort of "reinventing the wheel". On the EDSAC, these subroutines existed on strips of paper tape. The operators literally strung together the subroutines with the main program by feeding in these tapes and the main program tape to create a master tape with the whole program. The process mirrors how C programs are compiled today, said Murrell: a programmer "#includes" a library like "stdio.h", which provides programs with basic input/output routines; the source code is compiled and the resulting program is then executed.

The build is a substantial project. "We will need 2,500 valves of a particular type, together with chassis and power supplies."
Luckily, said Murrell, "Sir Maurice used components readily available in wartime and there are still stocks of components left." The project team will also look at alternatives, such as using transistors instead of valves, or even remanufacturing valves. As far as the chassis goes, the project has had a stroke of good luck: "There are literally two or three chassis left out of the hundreds that were used in the original." These will form the blueprint for remanufacturing the EDSAC chassis.

Five facts about EDSAC

- EDSAC was over two metres high and occupied a ground area of four metres by five metres.
- Its 3,000+ vacuum tubes, used as logic, were arranged on 12 racks.
- Mercury-filled tubes acted as memory.
- It performed 650 instructions per second.
- EDSAC ran its first program on 6 May 1949 and soon began nine years of regular service, ending in July 1958 when it was dismantled to enable the re-use of precious space. By then it had been superseded by the faster, more reliable and much larger EDSAC 2.
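Murrell's compilation analogy can be made concrete in a few lines of C: the square-root routine, which on EDSAC lived on its own strip of punched tape, is today just a tested library subroutine that the linker splices in (a trivial sketch; on most systems it is compiled with the -lm flag):

#include <stdio.h>
#include <math.h>   /* the modern "subroutine tape": tested library code */

int main(void) {
    double x = 2.0;
    /* sqrt() plays the role of EDSAC's square-root subroutine strip */
    printf("sqrt(%.1f) = %.6f\n", x, sqrt(x));
    return 0;
}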
Porting software to 64-bit compatibility can have unexpected security implications.

64-bit architecture is well and truly here, but 32-bit software is still in wide use. However, porting software to 64-bit compatibility can have unexpected security implications, even without any code changes in the programs, drivers or operating systems. This is particularly dangerous where code has already been subject to code review and assessed to be free from exploitable vulnerabilities in a 32-bit environment: it could immediately become vulnerable when compiled on a 64-bit system.

With the wide availability of x64 CPUs, many organisations are now switching to 64-bit operating systems and applications. This is driven by the increasing memory requirements of applications and servers, the decreasing cost of the new hardware and the widely available support for applications and operating systems. When code reviews are conducted of C/C++ applications that were developed on 32-bit systems and then ported to 64-bit, certain classes of security vulnerability are commonly identified. This article gives a brief overview of these types of vulnerability and what to do about them. It should be noted that these classes of vulnerability are not new, and similar issues have been found and exploited before. However, the migration to 64-bit technology regularly leaves organisations exposed to risk, particularly where there is reliance on security reviews and assurance activities performed previously on a different architecture.

On 32-bit systems, the amount of possible input to an application is naturally limited by the available address space. For example, on Microsoft Windows systems, memory allocations in user mode are usually less than 2 gigabytes in size. In reality, the space available for memory allocations on 32-bit systems will be much less, as space will be reserved for binaries, stacks and heaps. It can nevertheless exceed 2 gigabytes when the /3GB switch is used during booting, although this is not the default setting. On 64-bit systems, however, these limits are greatly increased and the allocation of much larger memory blocks may be possible, particularly with the large amounts of RAM now available on 64-bit systems.

Whilst good practice dictates that the size of any data passed to a function is checked, it is often the case that developers make assumptions about the maximum possible size of that data, and these assumptions can be based on the upper limit for a memory allocation on the platform itself. When transferred to a 64-bit system, these deviations from best practice can become exploitable if an attacker can introduce large amounts of data into the application.

While providing large amounts of data to an application may not seem a practical attack in some situations, it should be remembered that on a 20Mbit line it will take only about half an hour to send 4 gigabytes of data. As many applications will happily sit there unattended and unmonitored accepting input, this is a perfectly viable attack. Similarly, local application or kernel vulnerabilities that require large amounts of memory are even more likely to be exploited, as allocating and filling 4 gigabytes of memory takes only seconds on modern systems. There follow some examples of vulnerabilities that can occur on 64-bit systems but would not be exploitable on 32-bit systems.

Where the size of some input is obtained and then added to (for example, incremented to make space for a terminating character) and that size is represented as an unsigned integer, the integer can overflow if more data is introduced than the maximum value of that unsigned integer. On 32-bit systems, there would not be enough memory to hold 0xFFFFFFFF bytes of data along with the program code and the operating system, so the size could never be large enough to trigger the overflow. On 64-bit systems, however, this becomes a real possibility.

On 32-bit systems, the value types 'unsigned int', 'long' and 'size_t' can be used interchangeably. However, on 64-bit systems these value types are not equivalent, and in situations where they have not been used correctly, exploitable conditions can exist. A code sketch of both patterns follows.
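Both patterns condense to a few lines of C. A minimal sketch (illustrative names of my own; assumes an LP64 platform where int and unsigned int are 32 bits while size_t is 64 bits):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Pitfall 1: a 32-bit length with a +1 for the terminator.
   On 32-bit platforms input_len could never reach 0xFFFFFFFF;
   on 64-bit it can, and len + 1 then wraps to 0. */
static char *copy_with_terminator(const char *input, size_t input_len)
{
    unsigned int len = (unsigned int)input_len;   /* silent truncation */
    char *buf = malloc(len + 1);                  /* 0 bytes if len wrapped */
    if (!buf)
        return NULL;
    memcpy(buf, input, input_len);                /* overflows buf on 64-bit */
    buf[len] = '\0';
    return buf;
}

/* Pitfall 2: unsigned int and size_t treated as interchangeable.
   A 4 GB + 16 byte input truncates to 16 and passes the check. */
static int size_is_safe(size_t n)
{
    unsigned int checked = (unsigned int)n;       /* loses the high 32 bits */
    return checked <= 4096;
}

int main(void)
{
    char *s = copy_with_terminator("hello", 5);   /* benign call */
    if (s) { puts(s); free(s); }

    /* prints 1 (wrongly "safe") on LP64 systems */
    printf("0x100000010 safe? %d\n", size_is_safe(0x100000010ULL));
    return 0;
}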
Where the size of an input is obtained and then added to (e.g., incremented to make space for a terminating character) and that size is held in a 32-bit unsigned integer, the integer can overflow if more data is supplied than the maximum value that integer can represent. On 32-bit systems there would never be enough memory to hold 0xFFFFFFFF bytes of data alongside the program code and the operating system, so the size could never grow large enough to trigger the overflow; on 64-bit systems this becomes a real possibility.

Similarly, on 32-bit systems the value types 'unsigned int', 'long' and 'size_t' are all typically 32 bits wide and so tend to be used interchangeably. On 64-bit systems these types are no longer equivalent (on LP64 platforms, for example, 'long' and 'size_t' are 64 bits wide while 'unsigned int' remains 32 bits), and wherever they have been mixed incorrectly, truncation can create exploitable conditions.

As these examples show, migrating software from 32-bit to 64-bit systems can introduce new vulnerabilities, or make previously unexploitable vulnerabilities exploitable. It is therefore recommended that the migration process always include a code review focused on security, since, as we have seen, the assumptions made by programmers and relied upon in previous code reviews may no longer hold true.
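A minimal C sketch of the first pattern (the function and its names are invented for illustration): the increment is performed in a 32-bit unsigned type, so on a 64-bit system an attacker who can supply close to 4 gigabytes of input wraps the allocation size to something tiny, while the copy still uses the full length.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Vulnerable pattern: 'len' is attacker-controlled. On a 64-bit system,
 * inputs near or above UINT32_MAX bytes are practical to deliver. */
char *copy_with_terminator(const char *src, size_t len)
{
    unsigned int alloc = (unsigned int)len + 1;  /* truncates; may wrap to 0 */
    char *dst = malloc(alloc);                   /* tiny (or zero-byte) buffer */
    if (dst == NULL)
        return NULL;
    memcpy(dst, src, len);                       /* heap overflow when wrapped */
    dst[len] = '\0';
    return dst;
}

/* Safer: keep the size in size_t and reject oversized input up front. */
char *copy_with_terminator_safe(const char *src, size_t len, size_t max_len)
{
    if (len > max_len || len == SIZE_MAX)        /* also guards the +1 below */
        return NULL;
    char *dst = malloc(len + 1);
    if (dst == NULL)
        return NULL;
    memcpy(dst, src, len);
    dst[len] = '\0';
    return dst;
}
```

On a 32-bit system the vulnerable version is effectively unexploitable, since no attacker can deliver ~4 GB of input into a process; on a 64-bit system it becomes a classic heap overflow.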
An optical transceiver can best be described as a device that converts high-speed data from a cable source into an optical signal for communication over optical fiber. Optical transceivers are used to upgrade communications networks to handle broadband, to upgrade data center networks to carry traffic at higher speeds, and to implement the backbone networks for mobile communications.

For transceivers that plug into Gigabit Ethernet equipment and link to a fiber optic network, the Gigabit Interface Converter (GBIC) is the original standard, and SFP stands for small form-factor pluggable transceiver. The GBIC operates as an input and output transceiver and is generally linked to the fiber optic network through optical patch cords. GBIC transceivers are considered ideal for interconnections in Gigabit Ethernet and switching environments, and are intended for high-performance, continuous interactions requiring Gigabit Ethernet or Fibre Channel interconnections. With SFP modules, users can build connections over multimode or single-mode fiber optic ports as well as copper wiring. GBIC transceivers and Cisco SFPs give companies the opportunity to set up Fibre Channel and Gigabit Ethernet connections effortlessly within their networks; common Cisco modules include the GLC-SX-MM, GLC-T, GLC-LH-SM, GLC-ZX-SM and many more.

SFP optical transceivers are available at 155M, 622M, 1.25G, 2.125G, 4.25G, 8G and 10G rates, of which the 155M and 1.25G parts are the most widely used on the market. The GBIC, SFP, SFP+ and 1×9 form factors cover low-rate through 10G products and are fully compatible with equipment from the mainstream global vendors. 10G SFP+ technology is now mature, and demand continues to rise. The 10G optical module went through the 300-pin, XENPAK, X2 and XFP stages of development before 10G signals could finally be transmitted from a package the same size as the SFP: this is the SFP+. By virtue of its small size and low cost, the SFP+ meets the high-density requirements that equipment places on optical modules, and since 2010 it has displaced the XFP to become the mainstream choice in the 10G market.

SFP+ modules support digital diagnostics and monitoring functions, which are accessed through a two-wire serial bus and provide calibrated, absolute real-time measurements of the laser bias current, transmitted optical power, received optical power, internal transceiver temperature and supply voltage. This digital diagnostic functionality allows telecommunication and data communications companies to implement reliable performance monitoring of the optical link in an accurate and cost-effective way.

The driving force behind the optical transceiver market is the growth of Internet traffic. The market is intensely competitive, and demand for optical transceivers keeps increasing as communications markets grow in response to wider smartphone use and ever more Internet data transmission. The global optical transceiver market is projected to grow to $6.7 billion by 2019, driven by the availability of 100 Gbps devices and the vast increases in Internet data traffic. A palette of pluggable optical transceivers, including GBIC, SFP, XFP, SFP+, X2 and CFP form factors, is available at FiberStore; these accommodate a wide range of link spans. The 10 Gbps optical transceivers can be used in telecom and datacom (SONET/SDH/DWDM/Gigabit Ethernet) applications to change an electrical signal into an optical signal and vice versa.
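The diagnostics interface described above conventionally follows the SFF-8472 specification: a second two-wire address page (commonly 0xA2) exposes each real-time measurement as a big-endian 16-bit field at a fixed offset. A C sketch of the decoding, assuming an internally calibrated module and a buffer already read from that page (the read itself is platform-specific; offsets and scale factors are per SFF-8472):

```c
#include <stdint.h>

/* Decode the real-time monitor block of an SFF-8472 DDM page (bytes 96-105
 * of the 0xA2 address space), assuming internal calibration. */
struct ddm_readings {
    double temp_c;      /* module temperature, degrees Celsius   */
    double vcc_v;       /* supply voltage, volts                 */
    double bias_ma;     /* laser bias current, milliamps         */
    double tx_power_mw; /* transmitted optical power, milliwatts */
    double rx_power_mw; /* received optical power, milliwatts    */
};

static uint16_t be16(const uint8_t *p) { return (uint16_t)(p[0] << 8 | p[1]); }

void ddm_decode(const uint8_t *page_a2, struct ddm_readings *out)
{
    /* Temperature is signed, in 1/256 C units; the rest are unsigned. */
    out->temp_c      = (int16_t)be16(page_a2 + 96) / 256.0;
    out->vcc_v       = be16(page_a2 + 98)  * 100e-6;  /* 100 uV units */
    out->bias_ma     = be16(page_a2 + 100) * 0.002;   /* 2 uA units   */
    out->tx_power_mw = be16(page_a2 + 102) * 0.0001;  /* 0.1 uW units */
    out->rx_power_mw = be16(page_a2 + 104) * 0.0001;  /* 0.1 uW units */
}
```

Externally calibrated modules instead publish slope/offset coefficients elsewhere in the page, so a production monitor would check the calibration type byte before applying these fixed conversions.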
Industry Perspective is a regular Data Center Journal Q&A series that presents expert views on market trends, technologies and other issues relevant to data centers and IT. This week, Industry Perspective asks Lisa Rhodes about the relevance of energy efficiency and renewable energy sources for data center operators. Lisa is vice president of marketing and sales for Verne Global, a data center company that owns and operates a facility in Keflavik, Iceland.

Industry Perspective: How critical are energy-efficiency measures to the average data center?

Lisa Rhodes: With data centers consuming more than 1.5% of total electricity usage worldwide and data center carbon emissions set to quadruple by 2020, according to the EPA, energy efficiency is a critical issue for any data center operator. Key factors contributing to this rise include:

- Escalating demand for computing resources: there are over 2.4 billion Internet users worldwide, and that number is expected to grow by 8% for the next few years.
- Increasing amounts of energy needed for data center power and cooling: in 2012, Gartner reported that energy-related costs account for nearly 12% of overall data center expenditure, and they are the fastest-rising cost in the data center campus.
- Rising global energy costs: Ernst & Young published a survey in September that found 38% of global executives expect energy costs to rise by 15% or more in the next five years. The report also found that 42% of respondents spend at least $50 million annually on energy and another 27% spend in excess of $100 million annually.

All of these factors combined have led to a point where greening the data center is now a must-have and no longer a nice-to-have.

IP: What kinds of environmental regulations are having the greatest effect on data centers, and what might be on the horizon?

LR: In the last few years, the United Kingdom and United States have both tried to address this issue with initiatives and proposals designed to combat the carbon emissions brought on by fossil-fuel-powered data centers. Britain's CRC Energy Efficiency Scheme, formerly the Carbon Reduction Commitment, is estimated to reduce carbon emissions by over one million tons per year by 2020. The Scheme aims to encourage organizations that are responsible for 10% of the U.K.'s emissions to develop energy-management strategies that provide a better understanding of energy usage. The broader European Commission also established a Code of Conduct on Data Centres' Energy Efficiency. Designed to drive data center infrastructure efficiency from 50% or less across most European data center campuses into the 80% range, the Code of Conduct primarily focuses on voluntary compliance measures that could possibly lead to legislation down the road. In the U.S., President Obama recently made headlines with an emphasis on climate change during his second Inaugural Address. Starting this month, policy-forum meetings are getting underway to discuss how renewable energy can be a key component of economic growth in the United States. Policymakers and businesses will come together to make recommendations on tax incentives and energy policy for the U.S. renewable-energy market in 2013.

IP: Is reliance on renewable energy sources feasible for most data centers?

LR: One of the trends we are seeing across the technology industry as a whole is that data centers no longer need to be tethered to the population centers they serve.
Companies are empowered to segment their business applications and choose platforms that match the technical, financial and sustainability goals of each application. The tragedy of Hurricane Sandy highlighted that a place like New York, although close to a population center, is not always an ideal location for the critical equipment data centers use to deliver services. Companies now have the option of storing data in multiple locations, which means they can take advantage of sites where better and more-abundant renewable energy resources are available.

IP: What is the energy mix for Verne Global's Iceland data center?

LR: Verne Global runs off Iceland's power grid, which is 100% powered by renewable geothermal and hydroelectric energy resources.

IP: What can data centers in more moderate climates learn from Verne Global's data center?

LR: Verne Global's data center campus has definitely been a pioneering effort, and we have learned some key lessons that are applicable to the rest of the industry. Every data center, for example, houses applications that can be located outside of Tier 1 cities. These can be applications high in power consumption, applications with minimal network-latency needs, or even backup recovery that doesn't need to sit in a location where power costs command a premium. With cloud computing and virtualization driving so many IT business decisions today, it is important for data center managers to map out the flexibility of their applications and make smart choices about where they can be located. They may be surprised to find that nontraditional sites can be the right decision for their business needs.
3.6.6 What are MD2, MD4, and MD5?

MD2 [Kal92], MD4 [Riv91b] [Riv92b], and MD5 [Riv92c] are message-digest algorithms developed by Rivest. They are meant for digital signature applications where a large message has to be "compressed" in a secure manner before being signed with the private key. All three algorithms take a message of arbitrary length and produce a 128-bit message digest. While the structures of these algorithms are somewhat similar, the design of MD2 is quite different from that of MD4 and MD5: MD2 was optimized for 8-bit machines, whereas MD4 and MD5 were aimed at 32-bit machines. Descriptions and source code for the three algorithms can be found in Internet RFCs 1319-1321 [Kal92] [Riv92b] [Riv92c].

MD2 was developed by Rivest in 1989. The message is first padded so its length in bytes is divisible by 16. A 16-byte checksum is then appended to the message, and the hash value is computed on the resulting message. Rogier and Chauvaud have found that collisions for MD2 can be constructed if the calculation of the checksum is omitted [RC95]. This is the only cryptanalytic result known for MD2.

MD4 was developed by Rivest in 1990. The message is padded to ensure that its length in bits plus 64 is divisible by 512. A 64-bit binary representation of the original length of the message is then concatenated to the message. The message is processed in 512-bit blocks in the Damgård/Merkle iterative structure (see Question 2.1.6), and each block is processed in three distinct rounds. Attacks on versions of MD4 with either the first or the last round missing were developed very quickly by Den Boer and Bosselaers [DB92] and others. Dobbertin [Dob95] has shown how collisions for the full version of MD4 can be found in under a minute on a typical PC. In later work, Dobbertin (Fast Software Encryption, 1998) has shown that a reduced version of MD4, in which the third round of the compression function is not executed but everything else remains the same, is not one-way. Clearly, MD4 should now be considered broken.

MD5 was developed by Rivest in 1991. It is basically MD4 with "safety belts", and while it is slightly slower than MD4, it is more secure. The algorithm consists of four distinct rounds, whose design differs slightly from that of MD4; the message-digest size and padding requirements remain the same. Den Boer and Bosselaers [DB94] have found pseudo-collisions for MD5 (see Question 2.1.6). More recent work by Dobbertin has extended the techniques used so effectively in the analysis of MD4 to find collisions for the compression function of MD5 [DB96b]. While stopping short of providing collisions for the hash function in its entirety, this is clearly a significant step. For a comparison of these different techniques and their impact, the reader is referred to [Rob96].

Van Oorschot and Wiener [VW94] have considered a brute-force search for collisions (see Question 2.1.6) in hash functions, and they estimate that a collision search machine designed specifically for MD5 (costing $10 million in 1994) could find a collision for MD5 in 24 days on average. The general techniques can be applied to other hash functions.
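All three digests are 16 bytes (128 bits). A small C sketch using OpenSSL's legacy one-shot MD5 API illustrates the interface and prints the RFC 1321 test vector for "abc"; it is shown for historical interest only, since MD4 and MD5 are broken for collision resistance, and the call is deprecated in OpenSSL 3.0 though still available:

```c
#include <stdio.h>
#include <string.h>
#include <openssl/md5.h>   /* build with: cc md5_demo.c -lcrypto */

int main(void)
{
    const char *msg = "abc";
    unsigned char digest[MD5_DIGEST_LENGTH];   /* 16 bytes = 128 bits */

    MD5((const unsigned char *)msg, strlen(msg), digest);

    for (int i = 0; i < MD5_DIGEST_LENGTH; i++)
        printf("%02x", digest[i]);
    printf("\n");   /* prints 900150983cd24fb0d6963f7d28e17f72 */
    return 0;
}
```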
Authors: Jahanzeb Khan and Anis Khwaja

As you can see, we have yet another wireless review on Help Net Security. As more and more people migrate their wired networks into wire-free environments, wireless security is becoming one of the most talked-about IT topics. What is this book all about? Read on.

About the authors

Jahanzeb Khan is Principal Engineer with RSA Security, Inc., where he is responsible for the research and development of encryption, Public Key Infrastructure, and wireless LAN security standards. He is active in the 802.11b community and is a member of IEEE International. Anis Khwaja works in the IT department of a leading financial services firm. Khwaja has more than fifteen years of experience in networking and is currently involved in the deployment of 802.11b (WiFi) networks.

Inside the book

On September 11, 1940, George Stibitz used a Teletype machine at Dartmouth College in New Hampshire to transmit a problem to his Complex Number Calculator in New York and received the results of the calculation on his Teletype terminal. This transfer of data is considered the first example of a computer network. Fast forward 60 years, and computer networks are connected over the air.

The book doesn't start with an overview of WLAN basics, but with a historical view of computer networks from day one to modern Ethernet networks and the Internet. As general networking knowledge is needed to understand and set up a wireless network, the early chapters present information on different network types, standards and protocols. What follows is a similar chapter focused on wireless networks. Here the reader encounters all the WiFi and WLAN basics, including standards and operating modes.

Besides all the good things wireless networks provide, there are a number of technological and security issues that should be closely considered. Some of the possible pros and cons can be found in the following chapter, titled "Is Wireless LAN right for you?" Following the idea of explaining wireless networks with the "usual" wired networks in mind, the authors divide the part on secure wireless LANs into two chapters, each dealing with the security issues and concepts of one of these network types. The security aspects of wired networks receive 10 more pages than their wireless counterparts, which is understandable, as wired networks are much older and far more widely used than wireless ones.

After taking a look at both the basics and the security issues of wireless LANs, the authors dedicate several chapters to building and securing WLANs. Here, first-time implementers will learn how to deploy wireless networks through seven logical steps, ranging from understanding the wireless needs to product consideration and ROI (return on investment). All of the steps are demonstrated through a wireless LAN installation at the fictional Bonanza Corporation.

From the advanced user's point of view, the authors provide a chapter on the 802.1X authentication mechanism, presented through a semi-visual guide to setting up 802.1X with Microsoft Windows XP and a Cisco 350 series AP. Besides this setup, there is a nice section on using a Virtual Private Network to secure wireless communications. The last part of the book covers methods for troubleshooting and maintaining secure operations in your WLAN. In a brief overview manner, the authors give future administrators tips on the things that can go wrong.
A nice addition to this chapter is a sample security policy that can easily be modified for use in different environments. The book contains two interesting appendixes. The first contains several actual mini case studies, where readers can take a look at several different wireless deployment scenarios. All of these wire-free setups (home, small corporation, campus-wide and wireless ISP) are presented through the same template, detailing each network's background, problem, solution and final result. These examples aren't particularly in-depth, but they provide a good read on several possible WLAN installations.

If you are interested in wireless LAN technologies, you have probably noticed that a number of book publications and articles reference Orinoco hardware. The most talked-about wireless adapters, especially in connection with wardriving and similar activities, are surely the Orinoco Gold and Silver PC cards. The authors carry on the Orinoco fame with an appendix detailing the Orinoco PC card on a number of operating systems, including Windows 98/ME/2000/NT, MacOS and Linux. As the owner of an Orinoco Residential Gateway (RG 1000) access point, I was pleased to see the authors use this AP for a sample LAN setup. For those who don't know, the RG 1000 is like a clam shell: when you open it you will find a pearl in the form of the ever-useful Orinoco Silver PC (PCMCIA) card.

Information security experts Khan and Khwaja combined their WiFi knowledge to create this step-by-step guide covering all the major aspects of 802.11 networks. They cover the full circle, from initial network and product considerations, through installation and security, to troubleshooting the existing network. "Building Secure Wireless Networks with 802.11" is easy to read, informative, and deals with a number of WiFi security facts. What sets it apart from other WiFi security publications is that it is intended as an implementer's guide to building a secure wireless local area network. I should note that the book is strictly Windows-related, so, aside from the Orinoco guide, don't expect implementation methods for other operating systems. From the implementer's perspective, the book is best suited to novice and intermediate readers, because actual implementations and advanced techniques are covered in a less technical way. Don't get me wrong, there are technicalities inside the book, but not deep enough to interest advanced users already familiar with WLANs. Yet another sign that the book is aimed at novice users is that software installations start with "Insert CD", click "setup.exe" and so on.
RTP (Real-time Transport Protocol)

RTP is an application layer protocol in the TCP/IP protocol suite. Its assigned port is UDP 5004, and it belongs to the IETF "AVT" and "FEC Framework" working groups. As a standardized packet format, the Real-time Transport Protocol (RTP) is used to deliver audio, video or both over IP networks; it was originally developed by the IETF Audio/Video Transport working group. RTP's key features are end-to-end communication and transmission of data streams in real time, and with IP multicast support it can also deliver data to more than one destination. As the primary audio/video transport standard on IP networks, RTP is used together with a payload format and an associated profile. Today it is used extensively in communication and entertainment systems built around streaming media and telephony; common examples include teleconferencing applications and television services.

To carry media streams, RTP is normally paired with RTCP, because the latter is well suited to monitoring communication statistics, carrying occasional control information and supporting quality of service (QoS). RTP can also be used with the H.323 protocol. According to RFC 3550, RTP is suited to end-to-end transport of real-time data, such as interactive audio and video or simulation data, over both multicast and unicast network services. RTP does not, however, deal with resource reservation within a network, and it gives no guarantee of quality of service. That is why the RTCP control protocol, which allows data delivery to be monitored even on large multicast networks, is used alongside it. Both RTP and RTCP are designed to be independent of the underlying transport and network layers.

The version 2 RTP header consists of: V (2 bits, the RTP version number); P (1 bit, padding, required by some encryption algorithms with preset block sizes); X (1 bit, header extension); CC (4 bits, the CSRC count); M (1 bit, a marker for significant events); PT (7 bits, the payload type, which identifies the format of the RTP payload and determines its interpretation by the application); and a sequence number (16 bits), incremented by one for each RTP data packet sent. The sequence number helps the receiver detect lost packets and restore packet order; its initial value is chosen at random to make known-plaintext attacks on encrypted streams more difficult. The timestamp (32 bits) reflects the sampling instant of the first octet in the RTP packet, and the SSRC (32 bits, synchronization source) identifies the synchronization source with a randomly chosen value. The CSRC list (contributing sources, 32 bits per entry) holds from 0 to 15 entries. In short, an RTP session, identified by IP address and port, can be established for every multimedia stream and carries both RTP and RTCP.
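A sketch in C of parsing the fixed header fields listed above from a raw packet (field widths and byte order per RFC 3550; error handling kept minimal):

```c
#include <stdint.h>
#include <stddef.h>

struct rtp_header {
    unsigned version;       /* 2 bits: RTP version, 2 for RFC 3550       */
    unsigned padding;       /* 1 bit:  padding octets present at the end */
    unsigned extension;     /* 1 bit:  header extension follows          */
    unsigned csrc_count;    /* 4 bits: number of CSRC entries (0-15)     */
    unsigned marker;        /* 1 bit:  profile-defined significant event */
    unsigned payload_type;  /* 7 bits: payload format                    */
    uint16_t seq;           /* incremented by one per packet sent        */
    uint32_t timestamp;     /* sampling instant of the first octet       */
    uint32_t ssrc;          /* synchronization source identifier         */
    uint32_t csrc[15];      /* contributing sources, csrc_count entries  */
};

/* Returns bytes consumed, or -1 if the buffer is too short. */
int rtp_parse(const uint8_t *p, size_t len, struct rtp_header *h)
{
    if (len < 12)                               /* fixed header is 12 bytes */
        return -1;
    h->version      = p[0] >> 6;
    h->padding      = (p[0] >> 5) & 1;
    h->extension    = (p[0] >> 4) & 1;
    h->csrc_count   = p[0] & 0x0F;
    h->marker       = p[1] >> 7;
    h->payload_type = p[1] & 0x7F;
    h->seq          = (uint16_t)(p[2] << 8 | p[3]);
    h->timestamp    = (uint32_t)p[4] << 24 | (uint32_t)p[5] << 16 |
                      (uint32_t)p[6] << 8  | p[7];
    h->ssrc         = (uint32_t)p[8] << 24 | (uint32_t)p[9] << 16 |
                      (uint32_t)p[10] << 8 | p[11];

    size_t need = 12 + 4u * h->csrc_count;      /* optional CSRC list */
    if (len < need)
        return -1;
    for (unsigned i = 0; i < h->csrc_count; i++)
        h->csrc[i] = (uint32_t)p[12 + 4*i] << 24 | (uint32_t)p[13 + 4*i] << 16 |
                     (uint32_t)p[14 + 4*i] << 8  | p[15 + 4*i];
    return (int)need;
}
```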
The year is only half over and already seems to be particularly disaster-prone. From the devastating earthquake and tsunamis in Japan, to the multiple tornado events in the American Midwest, to the deadly floods in southern China, 2011 just seems to be one never-ending natural catastrophe. And supercomputers, the very machines relied upon to predict and mitigate these deadly events, are not escaping nature's wrath.

This week the wildfires in New Mexico led to a shutdown of two of the largest supercomputers in the world at Los Alamos National Lab. Roadrunner, the first machine to break the petaflop barrier and currently number 10 on the TOP500, and Cielo, a Cray XE6 that holds the number six position, were powered off this week. The fire has destroyed nearly 100,000 acres and is likely to become New Mexico's largest and most destructive in the state's history. According to a Computerworld report, the exact reason for the hardware shutdown was not provided. The supercomputers themselves are not in any direct danger from the fires. As of this writing, nothing was burning on LANL property, but the surrounding smoky air could compromise the cooling system, which would force the machines to be powered off. Also, the lab will be closed at least until Friday, with all nonessential personnel directed to remain off-site. That in itself would make the operation of these high-maintenance supercomputers a little dicey. Lights-out supercomputing has yet to become a reality.

Meanwhile, supercomputers in Japan are still suffering from the aftereffects of the 9.0 earthquake and subsequent tsunamis in March. When power supplies were disrupted immediately following the quake, a number of supercomputers across the country were powered off. And as we reported last week, due to the longer-term shutdown of four large power plants, the Tokyo area will have to shave energy consumption by 15 percent this summer, resulting in at least one large supercomputer (the PACS-CS machine at the University of Tsukuba) being shut off during the day.

Although catastrophic floods and fires can occur nearly anywhere, certain locations are particularly susceptible to natural disasters. It's worth noting that the majority of the top 10 supercomputers in the world live in dangerous geographies:

K computer: Kobe, Japan (earthquake zone)
Tianhe-1A: China (earthquake zone)
Jaguar: Oak Ridge, United States
Nebulae: Shenzhen, China (hurricane zone)
TSUBAME 2.0: Tokyo, Japan (earthquake zone)
Cielo: Los Alamos, United States (wildfire danger)
Pleiades: Moffett Field, United States (earthquake zone)
Hopper: Berkeley, United States (earthquake zone)
Tera-100: Bruyères-le-Châtel, France
Roadrunner: Los Alamos, United States (wildfire danger)

In general, supercomputers tend to be pretty well protected from the direct effects of disasters. Of course, it's possible an HPC data center could get washed away by a flood or leveled by a tornado or earthquake, but it's far more likely that damage to the surrounding infrastructure — power facilities, transmission lines, water systems, transportation corridors, etc. — would force the supercomputers to be shut off. As we saw in the case of Japan, the destruction doesn't even have to be local. Power and water are transported far and wide, and the loss of a critical power plant a thousand miles away can have serious consequences for megawatt-consuming hardware.
The fact is that supercomputers are high-maintenance machines, requiring lots of electricity, water, clean air, and highly skilled personnel to keep them running. And unfortunately, the most elite machines are becoming even more demanding as they grow ever larger and more complex. Power interruption is the biggest risk. The new top super, the K computer in Japan, draws 10 megawatts of electricity, and most of the top 30 systems are in the multi-megawatt range. The goal for future exaflop-level machines is 20 megawatts, but many people think that number will be two to ten times too low for the first such systems.

The irony, of course, is that these same machines are being employed to help predict and mitigate the effects of natural disasters. Climate modeling, weather forecasting, hurricane tracking, earthquake prediction, and disaster management/response are the bread-and-butter applications for many of these supercomputers. The hope is that these systems will become so proficient at modeling these events that they will be able to predict natural disasters far in advance and avoid their worst effects. That would save not only their masters, but the machines themselves as well.
Network Time Protocol (NTP) provides time synchronization for all devices on a network that have the NTP service available and configured. The NTP service usually listens on UDP port 123. It most often distributes UTC (Coordinated Universal Time), including well-planned leap second adjustments, but no other information, such as the time zone, is carried with it. This means your devices can synchronize their clocks with NTP, but you must first make sure the time zone is configured on each device so that it can display local time.

With the help of Marzullo's algorithm, NTP is able to tolerate the effects of variable latency. Over the internet, NTP can generally keep time to within tens of milliseconds, and on LANs (local area networks) an accuracy of around one millisecond is achievable. NTP's 64-bit timestamps consist of a 32-bit seconds part and a 32-bit fractional-second part, giving a time scale that rolls over every 2^32 seconds.

To perform clock synchronization, an NTP client and a remote server must calculate the RTD (round-trip delay) as well as the offset. The round-trip delay is computed as δ = (t3 − t0) − (t2 − t1), where t0 is the client's transmission time of the request packet, t1 is the server's reception time of the request, t2 is the server's transmission time of the response, and t3 is the client's reception time of the response; (t3 − t0) is thus the total time elapsed on the client side. NTP synchronization is only exact when the inbound and outbound routes between client and server have symmetrical nominal transmission delay. Where no such symmetry exists, a systematic synchronization bias arises from the difference between the forward and backward travel times.

Three NTP architectures are in use over the internet:

- Flat peer structure
- Hierarchical structure
- Star structure

In the hierarchical structure, NTP uses a semi-layered system of clock-source levels, each level being called a stratum, with the uppermost stratum numbered 0. The routing hierarchy is derived from the NTP hierarchy: core routers have a client/server connection to outside time sources, internal time servers have a client/server relationship with the core routers, and so on. In the star structure, all routers have a client/server association with a few central time servers. A "peer", in NTP terms, is an instance of the NTP protocol running on a remote processor reachable over a network path from the neighboring node. As a point of history, the protocol was designed by David L. Mills of the University of Delaware.
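A small C sketch of the client-side arithmetic, assuming the four timestamps have already been captured and converted to seconds; the delay formula is the one given above, and the offset formula is its standard NTP companion:

```c
/* t0: client sends request; t1: server receives it;
 * t2: server sends reply;   t3: client receives it. */
struct ntp_sample {
    double delay;   /* round-trip delay, seconds          */
    double offset;  /* estimated client clock error, sec. */
};

struct ntp_sample ntp_compute(double t0, double t1, double t2, double t3)
{
    struct ntp_sample s;
    s.delay  = (t3 - t0) - (t2 - t1);          /* time spent on the wire  */
    s.offset = ((t1 - t0) + (t2 - t3)) / 2.0;  /* assumes symmetric paths */
    return s;
}
```

The division by two is exactly where the symmetry assumption discussed above enters: any difference between the forward and return path delays shows up, halved, as a systematic bias in the computed offset.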
Who among us doesn't connect to public WiFi in the airport or at a coffee shop? One of the conveniences of the modern world is being able to connect wherever you are. But using secure HTTP is especially important when using wireless in a public place.

When you connect to a server using HTTPS, the "s" stands for secure. More specifically, your HTTP request is using Secure Sockets Layer (SSL). SSL is a protocol used to secure communications between a client and server. The protocol employs encryption to keep eavesdroppers from "hearing" your conversation. It also keeps a man-in-the-middle (MITM) hacker from hijacking your conversation. A hacker who perpetrates a MITM attack can feed you false information and gather information from you that you assume is protected. We will come back to the risks of not using a secure connection later. First, let's discuss how SSL works.

You take it for granted that when you select a shortcut or start typing "Facebook", you connect to Facebook quickly and securely. But a lot needs to happen during this quick process. First your browser resolves the website's name to an IP address using the Domain Name System (DNS). Once the name is resolved, you will often be redirected to a secure connection, even if you typed http://.

Now the interesting part begins. When the server replies to the HTTP request, it replies first with a server certificate. This certificate contains information important to setting up the encryption strength and type of encryption, but most importantly it contains a usage type and a certificate authority (a.k.a. issuing authority). In this case the type is "server certificate" and the issuing authority is DigiCert. Since DigiCert is trusted by the major browsers (Explorer, Chrome and Firefox), there is already a trusted root certificate in your browser that verifies that the certificate presented by Facebook can also be trusted. Once the certificate is verified, keys can be exchanged via the SSL tunnel, and the client and server build an encrypted tunnel using the public and private keys. You are now connected securely to https://facebook.com.

Even if you trust the security that a secure SSL connection offers via HTTPS, there is still a chance you could be duped, though it is quite small. However, if you are using plain HTTP at a hotspot, your information travels unencrypted across the air. By their very nature, hotspots and guest networks need to be open (unencrypted), because if the pre-shared key is exposed to everyone, anyone who possesses it can successfully decrypt all the traffic. So watch what information you share at the local coffee shop or hotel. When you use unsecured connections (HTTP), do not share personal information. Be careful what media you share, and take care before sending a password or social security number.

Now let's say you are at the coffee shop and try to connect to any of the more popular banking institutions with online banking. If you are not sure who you are connecting to, and the splash page where you accept the terms of use is not an SSL connection, there is a chance you will be compromised. Let's say I am in the diner next door, or in the parking lot with a laptop running Unix. I can broadcast an SSID and issue IP address information and a DNS server with a free DHCP server running on that same laptop.
I can poison your DNS and direct you to a bogus IP address, where a web page will imitate any number of banking institutions. When you enter your credentials, I collect them and you are compromised. Yes, it is pretty scary. Even less far-fetched: the coffee shop has an open network, and whatever data you send wirelessly (if SSL is not used) I can capture with a packet analyzer and decode to reveal your data. So enjoy the convenience of pervasive public WiFi, but proceed with caution!
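The browser behavior described above (chain validation plus hostname matching) has to be requested explicitly in many client toolkits. A condensed C sketch with OpenSSL 1.1.0 or later, shown only as an illustration: `sockfd` is assumed to be an already-connected TCP socket, and error reporting is trimmed for brevity.

```c
#include <openssl/ssl.h>

/* Returns 1 only if the TLS handshake succeeds AND the server certificate
 * chains to a trusted root and matches the expected hostname. */
int connect_verified(int sockfd, const char *hostname)
{
    SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
    if (ctx == NULL)
        return 0;
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);  /* fail on bad chains  */
    SSL_CTX_set_default_verify_paths(ctx);           /* system trust store  */

    SSL *ssl = SSL_new(ctx);
    SSL_set_fd(ssl, sockfd);
    SSL_set_tlsext_host_name(ssl, hostname);         /* SNI for the server  */
    SSL_set1_host(ssl, hostname);                    /* hostname checking   */

    int ok = SSL_connect(ssl) == 1 &&
             SSL_get_verify_result(ssl) == X509_V_OK;

    SSL_free(ssl);
    SSL_CTX_free(ctx);
    return ok;
}
```

Skipping either the verify mode or the hostname check is precisely what makes the rogue-hotspot attack above succeed even against TLS-capable clients.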
Air Quality Along the Wasatch Front

The Wasatch Front has a serious winter environmental issue called inversion. We live in a valley with mountains all around, just like living in a bowl. Normally cool air sits above and warm air below, but in the winter months these atmospheric conditions become inverted: a dense layer of cold air is trapped under a layer of warm air. The warm air acts like a lid, trapping pollution on the valley floor. When it snows, the valley floor reflects the sun rather than warming, which sustains the inversion, and the longer the inversion lasts, the worse the problems become. Typically the valley gets fog and freezing rain, along with fine particulate pollution (PM2.5). PM2.5 enters the atmosphere as soot from roads or tailpipe emissions.

So what is PM2.5? PM stands for particulate matter. According to the web site of the Breathe Utah organization, "Particulate matter is the term used for a mixture of solid particles and liquid droplets found in the air. PM pollution is made up of soot (from diesel and coal burning), dust, and vehicle emissions." These fine particulates can pass through the nose and throat, lodge deep in the lungs, and pass on to the heart. Young children and the elderly are at the highest risk. Animals are at risk too: Dr. Scott W. Leiter, a veterinarian at Country View Animal in American Fork, says, "The inversion, I think over time, can definitely affect their lungs and everything else." Cats and small dogs have a higher risk of allergies, even asthma, while larger dogs such as huskies, malamutes and American Eskimo dogs are bred for the cold.

Air pollution has been a problem along the Wasatch Front since the 1870s. At the turn of the century, there were more than 32 different smelters in the valley. The Murray smokestacks were the largest in the valley from 1869 to 1949; at one time they served the biggest lead smelter in the world, processing hundreds to thousands of tons of ore per day. The smelter was a big boost to the local economy, but farmers complained that its pollution was ruining their crops. In "A History of the Inversion: A Foe That Grows Strong," Ryan D. Curtis states, "In 1906, more than 400 farmers filed suit against the American Smelting and Refining Company (the owner of the Murray smokestacks) and four other companies. In November of that year, U.S. Federal District Judge John A. Marshall ruled in favor of the farmers." As a result of the lawsuit, many smelters closed or relocated. The Murray operation tried to reduce pollution by building its stacks higher; by 1918 the smokestacks reached 455 feet. Still, air quality in the valley continued to worsen, with the burning of coal now a major factor. In the 1920s, with Salt Lake City's population exploding, the University of Utah and the U.S. Bureau of Mines partnered with the city and conducted the first air pollution survey. In 1921 the city adopted an ordinance to curb air pollution.
In 1967 the first air pollution law was passed, called the Air Conservation Act. The Air Conservation Council it created was comprised of nine members, each given different duties to come up with ideas to improve air quality, but after several years the air over the Wasatch Front was still bad. And now, especially in the winter when the inversion hits, the air quality seems to get worse. On February 12, 2016, Fox 13 News reported: "It's the worst inversion I've seen in the middle of February," said Erik Grossman of the Department of Atmospheric Sciences at the University of Utah. Grossman has been studying Utah inversions for the last 10 years, and he says the worst part of the inversion is the effect it has on the people who live here.

Many non-profit organizations have jumped in and are trying to make the air better to breathe. One such group is called Breathe Utah. It is made up of professionals with experience in the scientific and medical fields, along with citizens from communities along the Wasatch Front, all working together to find solutions to Utah's air quality. Their web site (www.breathutah.org) is full of useful information, from education about air quality to ways a person can help fix it. The organization even runs school programs from pre-K through 12th grade, and since 2010 it has taught over 7,000 children about air pollution: where it comes from, how it affects our bodies and what we can do about it. But it is not only up to organizations to help with our air quality; each individual needs to decide what he or she can do to help.

Fox13now.com, "Inversion conditions in Utah worse than ever; area hospitals see increase in patients," by Robert Boyd, February 12, 2016.
www.utahbusiness.com, Giving Guide 2015.
http://www.ci.slc.ut.us/winter, "Winter Inversions: What Are They and What We Can All Do To Help Air Quality in the Wasatch Front."

I am glad that this is the topic I decided to do my signature piece on. I have lived in Utah most of my life; from 1959 to 1976 I lived in Price, Utah. I don't think we had bad air quality there. Even though Carbon County had several coal mines, a natural gas line was run to Price, and people in Carbon County were encouraged to convert to gas, since coal was the major heating fuel. When we moved to Taylorsville, in my early twenties, I didn't care about the air quality in the valley. It took having a family and moving out to Magna before I even noticed the bad air. I had a job in Taylorsville, took the bus to work every day, and was out in the air year-round; I had a hard time with wintertime allergies, and now I realize what caused them. I now have a good grasp on inversions and PM2.5, and through my research I have found ways that I personally can help with the air quality in Utah. I don't have a car, so I ride public transportation. More people need to realize that the major problem in this valley, in winter, is our vehicles. My eyes have been opened to this problem, and I will try to help my family by educating them about what I have found, so they can see what they can do.
There's no doubt that in this age of information technology, the Internet is becoming an important part of our daily life. As Internet users, most of us are very familiar with the modem. With the advent of FTTH programs, a new term, "fiber optic modem (FOM)," has come into view. In fact, going by its working principle and applications, this device is more accurately called a "serial to fiber converter." So how much do you know about this device, and do you know how to choose the best one? This article introduces the serial to fiber converter and gives some tips on choosing the best one.

Introduction of Serial To Fiber Converter

A serial to fiber converter, also called a fiber optic modem, is a device used in fiber optic networks for sending and receiving serial interface protocol data. It provides electrical-to-optical conversion of electronic communication and data signals for transmission over tactical fiber optic cable assemblies, and it simultaneously receives incoming optical signals and converts them back to the original electronic signal, allowing full-duplex transmission. Together with tactical fiber optic cables, it provides a rugged, secure and easily deployable optical link. The serial to fiber converter is available in both single-channel and multi-channel configurations.

Serial Communication Interface Types of Serial To Fiber Converter

A serial to fiber converter provides an RS-232, RS-485 or RS-422 serial to fiber optic link. The following sections explain these three serial communication interfaces.

RS-232, formally EIA RS-232, is a standard for serial transmission of data that defines the signals connecting a DTE (data terminal equipment), such as a computer terminal, and a DCE (data circuit-terminating equipment, originally defined as data communication equipment), such as a modem. The standard defines the electrical characteristics and timing of signals, the meaning of signals, and the physical size and pinout of connectors. The RS-232 interface is arguably the best-known connector on a personal computer, mostly used as the interface to modems. However, RS-232 only allows for one transmitter and one receiver on each line, and it uses a full-duplex transmission method. Some RS-232 boards sold by National Instruments support baud rates up to 1 Mbit/s, but most devices are limited to 115.2 kbit/s.

RS-422, also called TIA/EIA-422, is a serial communications standard that uses fewer signal lines than RS-232 while adding differential signaling, which can carry data at rates as high as 10 Mbit/s or over cables as long as 1,500 meters. Some systems interconnect directly using RS-422 signals, or RS-422 converters may be used to extend the range of RS-232 connections. The standard only defines signal levels; other properties of a serial interface, such as electrical connectors and pin wiring, are set by other standards. RS-422 sends each signal over a pair of wires to increase the maximum baud rate and cable length, and it is specified for multi-drop applications where one transmitter is connected to, and transmits on, a bus of up to 10 receivers.

RS-485, also known as ANSI/TIA/EIA-485, TIA/EIA-485, EIA-485 or TIA-485-A, is a superset of RS-422 and expands on its capabilities.
This standard can be used effectively over long distances and in electrically noisy environments. Multiple receivers may be connected to such a network in a linear, multi-drop configuration, which smartly addresses the multi-drop limitation of RS-422. RS-485 allows up to 32 devices to communicate through the same data line and offers data transmission speeds of 35 Mbit/s up to 10 m and 100 kbit/s at 1,200 m. Any of the slave devices on an RS-485 bus can communicate with any of the other 32 slave devices without going through a master device. Since RS-422 is a subset of RS-485, all RS-422 devices may be controlled by RS-485. Both protocols have multi-drop capability, but RS-485 allows up to 32 devices while RS-422 has a limit of 10.

An overview comparison of the three serial interfaces:

| | RS-232 | RS-422 | RS-485 |
| Mode of operation | Single-ended | Differential | Differential |
| Drivers and receivers on one line (one driver active at a time on RS-485) | 1 driver, 1 receiver | 1 driver, 10 receivers | 32 drivers, 32 receivers |
| Maximum cable length | 50 ft (2,500 pF) | 4,000 ft | 4,000 ft |
| Maximum data rate (40 ft to 4,000 ft for RS-422/RS-485) | 160 kbit/s (can be up to 1 Mbit/s) | 10 Mbit/s | 10 Mbit/s |

An RS-232/RS-422/RS-485 fiber optic modem is the natural choice for connecting an RTU to a host or SCADA controller over multimode optical fiber. The RS-485 fiber converter can extend the serial transmission distance up to 2 km (multimode fiber) or up to 20 km (single-mode fiber). Resistant to the effects of lightning strikes, power surges and other electromagnetic interference, the RS-485 fiber optic modem provides a reliable data network. The RS-232/RS-422/RS-485 fiber optic modem also incorporates a hardware method for automatically detecting the serial signal baud rate (Auto Baud Rate Detection), an extremely convenient feature for the user: even if the device baud rate is changed, the signal will still pass through the converter without any problem.

Fiber Optic Connector Types of The Serial To Fiber Converter

Besides the serial interface, the other end of a serial to fiber converter is an optical interface. Fiber optic cables connect to the device through fiber optic connectors, usually available in ST, FC or SC types. The direct advantage of a serial to fiber converter is that it allows users to replace existing coaxial cable communication links with lightweight fiber optic cable, and fiber's own advantages, such as light weight, long reach and immunity to electromagnetic interference, all matter for defense mobility and rapid deployment requirements.

Tips On How To Choose The Best Serial To Fiber Converter

Serial to fiber converters are ideal when working with large amounts of data, because fiber optics allow data to be transferred quickly and efficiently. They are available in single-mode and multimode models, so it is important to choose the option that best fits your needs. Choosing the best fiber optic modem depends on a few factors, including availability, usability and cost.

Fiberstore's Solution Of Serial To Fiber Converter

Fiberstore's serial to fiber converters are available in various form factors depending upon the protocol selected, such as RS-232, RS-485 or RS-422 to fiber. Our serial to fiber converter has higher bandwidth and greater electromagnetic immunity than wire-based modems. Together with multimode or single-mode fiber, the device transmits data by converting electrical signals to light.
It provides transmission distances up to 2 km (multimode) or up to 20 km/40 km/60 km (single-mode). For more information about this product, visit our website or contact us by e-mail at email@example.com.
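The Auto Baud Rate Detection described above happens inside the converter; the host computer still has to configure its own serial port to match the attached RS-232 equipment. A minimal POSIX C sketch for a Linux host (the device path and the 9600/8N1 settings are illustrative assumptions, not converter requirements):

```c
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

/* Open and configure a serial port at 9600 baud, 8N1, no flow control.
 * Returns the file descriptor, or -1 on error. */
int open_serial(const char *path)
{
    int fd = open(path, O_RDWR | O_NOCTTY);
    if (fd < 0)
        return -1;

    struct termios tio;
    if (tcgetattr(fd, &tio) != 0) {
        close(fd);
        return -1;
    }
    cfmakeraw(&tio);                      /* raw bytes, no line editing    */
    cfsetispeed(&tio, B9600);
    cfsetospeed(&tio, B9600);
    tio.c_cflag &= ~(PARENB | CSTOPB);    /* no parity, 1 stop bit         */
    tio.c_cflag |= CS8 | CLOCAL | CREAD;  /* 8 data bits, ignore modem ctl */

    if (tcsetattr(fd, TCSANOW, &tio) != 0) {
        close(fd);
        return -1;
    }
    return fd;   /* e.g., int fd = open_serial("/dev/ttyUSB0"); */
}
```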
As more and more business is done using the World Wide Web, web sites themselves have become increasingly attractive targets for cybercriminals. What makes a web site such a lucrative target is not only that there are so many sites to attack, but the fact that an overwhelming majority of all web sites can be easily exploited through some of the most common vulnerabilities. A three-year study by WhiteHat Security that assessed vulnerability data from 1,031 different web sites found exactly this pattern of widespread, easily exploited weaknesses.

In the early days of the World Wide Web, hackers would deface web sites as a sign of protest against a corporate or political ideology, or test their hacking skills using defacement as a way to gain notoriety amongst their peers. But as the Web has grown, and more business has come to rely on web technologies to function, attacks against web sites have become more complex and sophisticated for one reason: money. In light of this, web application security has never been more critical to business. Attackers, no longer driven by notoriety and ideology, have focused on techniques that let them profit from their illegal activities. Exploited sites allow the theft of credit card data, financial information, identities, intellectual property, and anything else cybercriminals can get their hands on. Funded by criminal organizations, attackers now rely on large botnets that can rent for as little as $150 for 2,000 machines. In the hands of these cybercriminals, the zombie machines seek out vulnerable web sites; once such sites are identified, the attacker turns the botnet towards launching coordinated, distributed attacks against them, exploiting web applications, web servers, FTP servers, and any other possible point of entry.

There are many different ways in which attackers are able to compromise a web site; among the most commonly exploited vulnerability classes are SQL injection, cross-site scripting and remote file inclusion. With the proliferation of out-of-the-box web applications, it has never been easier to build a web site quickly. Unfortunately, these quick solutions also make things easier for attackers: without proper training and knowledge, many of these sites are left with multiple vulnerabilities.

In addition to a compromised web site exposing sensitive data, there are other risks associated with web site security. Denial-of-service attacks are intended to disrupt a web site's ability to serve pages to its visitors, usually by overloading the server with requests; businesses that rely on their web site for normal operations can see a tremendous drop in revenue as a result. One of the most damaging things that can happen to a web site is to have it flagged as malicious. According to Stopbadware.org, many sites do not even realize that they serve malicious pages until it is too late, and sites flagged as malicious lose customers and visitors as a result. Web sites that are compromised can also give the attacker access to a company's internal network: through attacks like remote file includes, an attacker can reach protected files that may contain authentication information used on other network resources.

dotDefender enables companies to address the challenges facing their web site in a straightforward and cost-effective manner by utilizing a Security as a Service solution.
dotDefender offers comprehensive protection against SQL injection, cross-site scripting and the other threats your web site faces every day. The reason dotDefender can offer such a comprehensive solution to web application security needs lies in its architecture: built as plug-and-play software providing optimal out-of-the-box protection, dotDefender creates a security layer in front of the application to detect and block application-level attacks in incoming web traffic that could be used to compromise the web server, steal sensitive information, or disrupt web services.
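Of the attack classes mentioned above, SQL injection is also the most directly preventable inside application code, with or without a web application firewall in front. A minimal C sketch with SQLite (the table and column names are hypothetical): user input is bound as a parameter rather than spliced into the SQL text.

```c
#include <sqlite3.h>

/* Look up a user row without string-splicing the attacker-controllable
 * 'username' into the SQL text. The '?' placeholder is bound instead,
 * so input like "alice' OR '1'='1" stays plain data, not SQL. */
int user_exists(sqlite3 *db, const char *username)
{
    static const char *sql = "SELECT 1 FROM users WHERE name = ?;";
    sqlite3_stmt *stmt = NULL;
    int found = 0;

    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK)
        return -1;                        /* query failed to compile */
    sqlite3_bind_text(stmt, 1, username, -1, SQLITE_TRANSIENT);
    if (sqlite3_step(stmt) == SQLITE_ROW)
        found = 1;                        /* a matching row exists   */
    sqlite3_finalize(stmt);
    return found;
}
```

The same prepare-and-bind discipline applies to any database API; it removes the whole injection class rather than trying to filter individual malicious inputs.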
The recently shelved SOPA (Stop Online Piracy Act) and PIPA (PROTECT-IP Act) bills that were up for consideration in the U.S. Congress have once more brought to the forefront a touchy subject, particularly with regard to the Internet: intellectual property (IP). With IT becoming such a critical underpinning of the U.S. and world economies, a frank reconsideration of the meaning and role of IP is long overdue.

Re-evaluating Assumptions About Intellectual Property

One missing item on the agenda of discussing IP legislation like SOPA and PIPA revolves around the meaning, scope and legal status of the term intellectual property. This article is by no means intended to resolve the issue, but merely to point out some considerations that suggest that the predominant conception of IP may well be flawed. Stopping draconian bills like SOPA and PIPA was necessary for reasons having nothing to do with IP, but in anticipation of the next round of legislation (and it will arrive sooner or later), an honest discussion of intellectual property is desperately needed.

IP laws are ostensibly intended to protect the nonmaterial goods produced by individuals and companies. In the context of IT and data centers, such goods encompass software, innovative design practices or devices, and product/company names and logos (which also applies beyond these industries). By prohibiting others from simply copying, by whatever means, these protected items, those who developed them are enabled to earn a return on their invested labor and capital. But like so many concepts that sound good on paper, IP can also be abused. Examples of copyrights and patents (legal recognitions of IP) range from the understandable to the downright absurd. For instance, it's easier to believe that a musician has the right to control distribution of a recording he or she made than it is to believe that a company has the right to patent human genes ("How human genes become patented") — which, incidentally, that company did not create. Similarly, some companies have gained patents on seeds, imposing legal restrictions on farmers' ability to collect and use seeds from the next generation ("Saving Seeds Subjects Farmers to Suits Over Patent"). But where is the line to be drawn between a common-sense use of IP law and a patently (no pun intended) absurd use?

Complicating the situation is the digital nature of information: music, software, images, text and so on. To computers, these are nothing but long binary numbers (010011101011100010011...). Can a company or individual own the rights to a binary number? But what if two programs use the same binary number for two different things, one to play back a song, one to produce an image? (This is an unlikely occurrence, but it's conceivable.) And how much of the number is actually owned? For example, if one or two bits are reversed, is it the same number for IP purposes? And even if we look at the actual content rather than the underlying digital numbers (the notes of music, the shapes and colors of an image, the words of a text, and so on), how much difference is enough difference to objectively avoid legal jeopardy in regard to infringement of IP rights? Are IP rights applicable when no commercial benefit is gained? For example, the Girl Scouts were asked to pay to be able to sing certain tunes around the campfire ("Ascap Asks Royalties From Girl Scouts, and Regrets It").
Again, the extent of IP rights is far from clear: common sense would tend to find more favor in protecting a musician’s recording (say, in MP3 format) than in preventing some Girl Scouts from singing a hit tune among themselves.

SOPA and PIPA Highlight Need for Frank Discussion

The SOPA and PIPA bills were ostensibly intended to protect IP rights on the Internet, but the recent shutdown of Megaupload proves that these bills weren’t really needed to enable enforcement of IP rights on the Internet. (For a simple discussion of the real problems with SOPA and PIPA, see the Khan Academy’s lucid presentation.) What these bills do illustrate is an overblown reaction to IP infringement on the Internet. And similar bills will continue to come up in Congress until one of them passes, unless a clearer understanding of what constitutes IP and IP rights is developed.

Although the fact that IP laws are violated regularly by a large number of Internet users doesn’t mean that those laws are unwarranted, it does raise questions about whether the laws somehow miss the reality of the digital situation. In some sense, the question really does come down to whether an artist, musician, programmer or other individual can own a number—or, more to the point, whether he or she can control what others do with that number.

The Economics of IP

Intellectual property is an attempt to extend the concepts that apply in the realm of physical possessions to the realm of concepts, ideas and other immaterial things. When you take someone’s car, you’re taking a one-of-a-kind object (there’s only one of that exact car in existence)—the violation in this case is tangible, and the stolen item is irreplaceable (in the sense that there’s only one “that car”). But what about a software program in digital format? Innumerable copies can be created in a manner that has no effect on the physical ownership of the original by the programmer, company or whoever holds it. Thus, from the perspective of the owner strictly in terms of physical/digital possessions, nothing has changed.

Of course, the counterargument is that uncontrolled duplication of the program has an economic effect: it essentially eliminates any monetary value of the program (or whatever the item is—by the laws of economics, an infinite supply means the price must fall to zero). The owner could then state that although the program wasn’t stolen, its value was. But granting that actions that reduce the monetary value of an object are no less than theft opens a can of worms that effectively leads to regulation of all economic activity. For example, say two programmers write two different programs that do exactly the same thing (but, to avoid IP considerations, they do it in two entirely different ways). Assume the value of these two programs is thus equivalent. But if one programmer offers his version for sale at half the price of the other—killing the sales of the more expensive version—is that programmer, in effect, “stealing value” from the other?

This small example illustrates the kind of economic and philosophical morass that a discussion of IP can fall into. This is not, however, to say that IP has absolutely no place in the law or common morality—nor is it to say that IP has a definite place in the same. 
This is simply to note that an unquestioning allegiance to the prevailing notions of IP (particularly when purveyed by large corporations with huge financial stakes in the discussion) can lead to absurdities, like companies owning the rights to your genes. What we need, therefore, is a healthy debate on the topic of IP. This debate shouldn’t be limited to laws like SOPA and PIPA, but should focus on what truly constitutes IP and whether the law has a role. The debate need not be a revolutionary exercise in tearing down an established dogma, but should be a means for both sides to clarify their positions and, one would hope, reach a broader consensus. At that point, any necessary laws can be passed to protect both rights holders and everyone else.
The following are highlights of GAO-12-961T, a testimony before the Subcommittee on Oversight of Government Management, the Federal Workforce, and the District of Columbia, Committee on Homeland Security and Governmental Affairs, U.S. Senate.

What GAO Found

Technological developments since the Privacy Act became law in 1974 have changed the way information is organized and shared among organizations and individuals. Such advances have rendered some of the provisions of the Privacy Act and the E-Government Act of 2002 inadequate to fully protect all personally identifiable information collected, used, and maintained by the federal government. For example, GAO has reported on challenges in protecting the privacy of personal information relative to agencies’ use of Web 2.0 and data-mining technologies. While laws and guidance set minimum requirements for agencies, they may not protect personal information in all circumstances in which it is collected and used throughout the government and may not fully adhere to key privacy principles. GAO has identified issues in three major areas:

• Applying privacy protections consistently to all federal collection and use of personal information. The Privacy Act’s protections apply to personal information only when it is considered part of a “system of records” as defined by the act. However, agencies routinely access such information in ways that may not fall under this definition.

• Ensuring that use of personally identifiable information is limited to a stated purpose. Current law and guidance impose only modest requirements for describing the purposes for collecting personal information and how it will be used. This could allow unnecessarily broad uses of the information.

• Establishing effective mechanisms for informing the public about privacy protections. Agencies are required to provide notices in the Federal Register of information collected, categories of individuals about whom information is collected, and the intended use of the information, among other things. However, concerns have been raised about whether this is an effective mechanism for informing the public.

The potential for data breaches at federal agencies also poses a serious risk to the privacy of individuals’ personal information. OMB has specified actions agencies should take to prevent and respond to such breaches. In addition, GAO has previously reported that agencies can take steps that include:

• assessing the privacy implications of a planned information system or data collection prior to implementation;

• ensuring the implementation of a robust information security program; and

• limiting the collection of personal information, the time it is retained, and who has access to it, as well as implementing encryption.

However, GAO and inspectors general have continued to report on vulnerabilities in security controls over agency systems and weaknesses in their information security programs, potentially resulting in the compromise of personal information. These risks are illustrated by recent security incidents involving individuals’ personal information. Federal agencies reported 13,017 such incidents in 2010 and 15,560 in 2011, an increase of 19 percent.

Why GAO Did This Study

The federal government collects and uses personal information on individuals in increasingly sophisticated ways, and its reliance on information technology (IT) to collect, store, and transmit this information has also grown. 
While this enables federal agencies to carry out many of the government’s critical functions, concerns have been raised that the existing laws for protecting individuals’ personal information may no longer be sufficient given current practices. Moreover, vulnerabilities arising from agencies’ increased dependence on IT can result in the compromise of sensitive personal information through inappropriate use, modification, or disclosure.

GAO was asked to provide a statement describing (1) the impact of recent technology developments on existing laws for privacy protection in the federal government and (2) actions agencies can take to protect against and respond to breaches involving personal information. In preparing this statement, GAO relied on previous work in these areas as well as a review of more recent reports on security vulnerabilities.

What GAO Recommends

GAO previously suggested that Congress consider amending applicable privacy laws to address identified issues. GAO has also made numerous recommendations to agencies over the last several years to address weaknesses in policies and procedures related to privacy and to strengthen their information security programs. The full GAO report and testimony can be downloaded from GAO's website.
Instruct by asking questions, not by lecturing. Answer questions with a question. At first this sounds like a simple corporate learning strategy; however, as you try to implement the Socratic style, you may find it quite daunting. Following the ABCs below will help you master a technique that can aid in the monumental development of the learners you instruct.

First, let’s understand why the Socratic method is effective, especially in corporate training and development. Many thoughts swirl around in the mind, each with a different origin and pathway. Some thoughts are programmed, like memorizing an organization’s mission statement or core values. Other thoughts originate from observational learning, such as shadowing another employee. Most employee learning comes from experience and the process of performing a workflow repeatedly. However, habits formed this way can be challenging for trainers to change later in an employee’s development. In the digital era, corporate trainers should prompt and encourage critical thinking every step of the way to minimize risk and maximize excellence.

Our thoughts result from complex neurological growth and development throughout our personal life experiences. The most powerful lessons are the ones that come from connecting thoughts with one’s own neurologic fuel. The Socratic teaching style facilitates this kind of deep learning and understanding. Your job is to facilitate learning by asking key questions that allow learners to arrive at the discovery point on their own.

To illustrate, imagine a connect-the-dots drawing activity. At first glance, there is no recognition of the picture that the collective dots will represent. Once you start connecting the dots, an image begins to take shape. Halfway through the activity there may be some guesses as to what the dots are forming. Once all the dots are connected, the image is clear. It’s the “a-ha" or "eureka" moment. Develop learners and help them gain critical-thinking tools by facilitating learning through questioning.

Here are the ABCs for beginning a Socratic teaching style:

Assure deep understanding of the concept that you present. Studying and reviewing an already known topic drives greater understanding. It eases the next steps, like primer paint on a wall.

Be in the mind of the learner. What is it that you want them to figure out on their own? Write it down and be clear on what thoughts you want them to derive without lecture.

Create a question or a series of questions to ask. The question(s) should prompt ongoing thought. The brain scans billions of bits of information in search of answers. Once the right bits fuse together, learners will reach the discovery phase on their own.

The Socratic technique is effective, but it does not come naturally to many. Practice once a week by taking one concept from your curriculum and applying the ABCs. With committed practice, the method becomes more fluid.
Measuring and monitoring patient vital signs is especially important when a patient is sedated, such as during an operation, in intensive care, in recovery, or while critically ill. A central computer is usually located at the nurse's station, but healthcare facilities are finding an increasing need to monitor from other areas in the hospital or medical facility. Additionally, reliable reporting, 24/7 uptime, and the sharing of highly detailed images electronically are some of the challenges for healthcare providers. New infrastructure needs to integrate current technology while also future-proofing scalable systems to keep costs under control.

Extend EKG signals to a remote station for a healthcare provider to monitor. Another option for signal extension is to use a remote desktop system with physical and virtual computers that can deliver video and data signals over a LAN/WAN.

Real-time access for multiple users

Transmit high-definition medical images long distances without dropping a pixel. Give multiple users access to video and other data sources very quickly.

Use the LAN or Ethernet to share high-definition images. Using the LAN improves transmission distances and keeps demands on bandwidth low.

Integrate a multiviewer KVM system to monitor up to four video sources on one screen.

Use locking cables to ensure continual monitoring and care. Locking patch cables prevent networks from dropping signals through accidental or malicious disconnects.
Flash: A program that lets users create animation for the Web.

Blog: A portmanteau combining Web and log. It is a user-generated Web site on which entries often are journal-style and provide news or the writer's commentary on a topic.

Blogosphere: The term that refers to the social network of all blogs on the Internet.

Facebook: A social networking site that initially was limited to college students but was extended to the general public in September 2006. As of July, Facebook had 47 million users.

Mashup: A Web site or application that combines content from more than one source into an integrated experience.

MySpace: A general-interest, social-networking site. As of September 2007 it had more than 200 million global users.

Podcast: A digital file that is distributed over the Internet via syndication feeds. It is designed for playback on MP3 players.

Social Networks: Internet applications that help connect friends, business partners or other individuals using a variety of tools. Examples of online social networks are MySpace, Facebook, Friendster and LinkedIn.

Virtual World: A computer-based simulated environment intended for its users to inhabit and interact via avatars.

Web 2.0: Term coined by O'Reilly Media in 2004 to describe the Internet applications that arose amid the ashes of the dot-com collapse. Particular focus has been given to user-created content, lightweight technology, service-based access and shared-revenue models.

Wiki: A type of Web site that allows visitors to easily add, edit and remove content. The term wiki also can refer to the collaborative software itself.

Sources: Gartner, O'Reilly Media, Wikipedia, TowerGroup
NYC Saves Money by Helping Smaller Communities with Flood Control

In a unique relationship with towns in New York City’s watershed, the city provides access to otherwise unattainable disaster-prevention expertise, mainly in the area of flood control, in addition to millions of dollars in construction aid to fund solutions to flooding problems. This came in handy recently for towns in New York’s Catskill Mountains.

One town that benefits from the relationship is Phoenicia, NY, which suffered flooding when Hurricane Irene passed through the area, followed closely by Tropical Storm Lee. Together, the two storm systems brought considerable flooding to the main street area and prompted action by town officials. That action included the removal of over 20,000 cubic yards of sediment from an area encompassing a quarter-mile of river, followed by the installation of a V-shaped rock wall upstream from the Main Street bridge. Designed with the help of hydraulic modeling, the wall prevents the formation of sand bars and protects against erosion.

In essence, some good emerged from the disastrous flooding caused by Irene and Lee, which exposed gaps in the disaster-preparedness plans of small towns. Now, with a decisive plan in place and steps taken to prevent future flooding, future scenarios will be easier to manage, as the work so far has paid off in the protection of key roads. According to the New York State Energy Research and Development Authority’s 600-page ClimAID report, released in November, this is something other towns should be preparing for as well: global climate change, especially in the New York area, will bring more frequent floods. Those towns will need ongoing efforts to deal with this new threat, and they must build disaster education and response into their everyday procedures in preparation for such future events.

Because these small communities in the Catskill region are in New York City’s watershed, the New York City Department of Environmental Protection (DEP) provides funding for stream management. The DEP funds work in these small communities to prevent floods that wash clay, silt, and other unwanted debris into the reservoirs. By doing so, the city avoids spending billions of dollars on a water filtration plant.

For more information about New York City’s watershed program, visit: http://online.wsj.com/article/AP6bd16a7ce7cd49808542306f7715a4c5.html
The Cardiovascular System

Overview

Students will understand the division of the CV system into the pulmonary circulation and the systemic circulation and the unique needs of these two circulatory systems. Students should be able to describe the path of the blood through the heart. Students will understand the similarities and differences between cardiac muscle and skeletal muscle and how these differences relate to differences in the function of these two muscle types. Students will be able to explain the differences between a contractile muscle cell and a pacemaker cell action potential. Students will understand the control of cardiac output by control of heart rate and stroke volume, and understand how these two variables are controlled. Students will be able to define the waves of an electrocardiogram and relate them to the electrical and mechanical changes in the heart.

Key Terms & Concepts

Cardiovascular system: a circulatory system comprising a heart, blood vessels, and blood.

Capillaries: the microscopic vessels where blood exchanges material with the interstitial fluid.

Arteries: blood vessels that carry blood away from the heart.

Veins: blood vessels that return blood to the heart.

Septum: the central wall that divides the heart into right and left halves.

Atria: receive blood returning to the heart from the blood vessels.

Ventricles: pump blood out into the blood vessels.

***The right side of the heart receives blood from the tissues and sends it to the lungs for oxygenation. The left side of the heart receives newly oxygenated blood from the lungs and pumps it to tissues throughout the body.

Pulmonary circulation: the blood vessels that go from the right ventricle to the lungs and back to the left atrium are known collectively as the pulmonary circulation.

Aorta: the main artery in the human body, originating from the left ventricle of the heart and extending down to the abdomen, where it splits into two smaller arteries (the common iliac arteries). The aorta distributes oxygenated blood to all parts of the body through the systemic circulation.

Systemic circulation: the blood vessels that carry blood from the left side of the heart to the tissues and back to the right side of the heart are collectively known as the systemic circulation.

Vena cava (superior and inferior): the veins from the upper part of the body join to form the superior vena cava. Those from the lower part of the body form the inferior vena cava. The two venae cavae empty into the right atrium. 
Resistance: the tendency of the cardiovascular system to oppose blood flow.

Vasoconstriction: a decrease in blood vessel diameter.

Vasodilation: an increase in blood vessel diameter.

Pericardium: the tough membranous sac in which the heart is encased.

Myocardium: the cardiac muscle that makes up most of the heart itself.

Atrioventricular valves: between the atria and ventricles.

Semilunar valves: between the ventricles and the arteries.

Pacemaker potential: in the pacemaking cells of the heart (e.g., the sinoatrial node), the pacemaker potential (also called the pacemaker current) is the slow, positive increase in voltage across the cell's membrane (the membrane potential) that occurs between the end of one action potential and the beginning of the next.

If channels: when the cell membrane potential is -60 mV, If channels that are permeable to both K+ and Na+ open. These channels are called If channels because they allow current (I) to flow and because of their unusual properties.

Sinoatrial node: depolarization begins in the sinoatrial node (SA node), autorhythmic cells in the right atrium that serve as the main pacemaker of the heart.

Atrioventricular node (AV node): a group of autorhythmic cells near the floor of the right atrium.

Purkinje fibers: specialized conducting cells of the ventricles that transmit electrical signals very rapidly down the atrioventricular bundle (AV bundle), also called the bundle of His, in the ventricular septum.

Electrocardiogram: shows the summed electrical activity generated by all cells of the heart.

P wave: corresponds to depolarization of the atria.

QRS complex: represents the progressive wave of ventricular depolarization.

T wave: represents the repolarization of the ventricles.

***Atrial repolarization is not represented by a specific wave but is incorporated into the QRS complex.

Systole: the period during which the cardiac muscle contracts.

Diastole: the period during which the cardiac muscle relaxes.

Stroke volume: the amount of blood pumped by one ventricle during a contraction.

Cardiac output: the volume of blood pumped by one ventricle in a given period of time. Because all blood that leaves the heart flows through the tissues, cardiac output is an indicator of total blood flow through the body.

Heart rate: the number of heartbeats occurring within a specific length of time.

Frank-Starling Law of the Heart: stroke volume is proportional to EDV. As additional blood enters the heart, the heart contracts more forcefully and ejects more blood. Within physical limits, the heart pumps all the blood that returns to it.

Venous return: the rate of blood flow back to the heart. It normally limits cardiac output. Superposition of the cardiac function curve and the venous return curve is used in one hemodynamic model.

End-diastolic volume: the volume of blood in the right and/or left ventricle at the end of filling (diastole), or the amount of blood in the ventricles just before systole.

End-systolic volume: the volume of blood in a ventricle at the end of contraction (systole) and the beginning of filling (diastole). ESV is the lowest volume of blood in the ventricle at any point in the cardiac cycle.

Silverthorn: Chapter 14, pp. 436-476

Outline the path the blood takes through the heart from its arrival in the right atrium until its ejection from the left ventricle, including the action of the heart valves. 
List 3 similarities and 3 differences between contractile cardiac and skeletal muscle, and one similarity and one difference with smooth muscle, and relate these differences to differences in the function of these muscle types.

Using the numbered steps, compare the events shown in EC coupling in skeletal muscle and smooth muscle. Smooth and cardiac muscle are the same except where indicated. (1) Multi-unit smooth muscle and skeletal muscle require neurotransmitters to initiate the action potential. (2) No significant Ca2+ entry in skeletal muscle. (3) No CICR in skeletal muscle. (4) Ca2+ leaves the SR in all types. (5) Calcium signal in all types. (6)-(7) Smooth muscle lacks troponin; skeletal muscle is similar to cardiac. (8) Same in all types. (9) NCX lacking in skeletal muscle. (10) Same in all types.

Outline the path of excitation-contraction coupling in cardiac muscle.

Identify the different steps involved in the action potential of a cardiac contractile cell.

What is the lowest voltage of the unstable membrane potential for myocardial autorhythmic cells, and how does this relate to the depolarization process? What controls the speed of pacemaker depolarization? Autorhythmic cells have unstable membrane potentials called pacemaker potentials. A: The pacemaker potential gradually becomes less negative until it reaches threshold, triggering an action potential. B: Ion movements during an action and pacemaker potential. C: State of various ion channels. The depolarization rate of the pacemaker potential is sped up by an increase in Ca2+ influx and an increase in Na+ influx.

Trace the steps of the electrical signal for cardiac contraction beginning at the SA node. Why does the contraction push blood up from the bottom of the ventricle?

Briefly map the events of an electrocardiogram (P wave, QRS complex and T wave) onto their corresponding electrical events.

Define systolic versus diastolic blood pressure. The top number is the maximum pressure your heart exerts while beating (systolic pressure), and the bottom number is the amount of pressure in your arteries between beats (diastolic pressure). The numeric difference between your systolic and diastolic blood pressure is called your pulse pressure.

Define stroke volume and cardiac output.

Stroke volume: the amount of blood ejected by the left ventricle in one contraction. Although stroke volume can refer to either side of the heart, it is most often associated with the left side. It is measured in mL/beat and generally has a normal value of about 1 cc/kg.

Cardiac output: the amount of blood the heart pumps through the circulatory system in a minute. The stroke volume and the heart rate together determine the cardiac output.

How does the autonomic nervous system impact heart rate? The sympathetic and parasympathetic branches of the autonomic division influence heart rate through antagonistic control. Parasympathetic activity slows heart rate, while sympathetic activity speeds it up.

What is the role of venous return in regulating stroke volume? What are 3 factors that influence venous return? Venous return is the amount of blood that enters the heart from the venous circulation. The three factors that affect venous return are:

Contraction or compression of veins returning blood to the heart (the skeletal muscle pump)

Pressure changes in the abdomen and thorax during breathing (the respiratory pump)

Sympathetic innervation of veins 
***According to the Frank-Starling law, stroke volume increases as end-diastolic volume increases.

Fall 2016
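As a worked example of these definitions (using typical resting textbook values, not figures from this guide): suppose EDV = 135 mL, ESV = 65 mL, and heart rate = 72 beats/min. Then:

SV = EDV - ESV = 135 mL - 65 mL = 70 mL
CO = HR x SV = 72 beats/min x 70 mL/beat = 5,040 mL/min, or about 5 L/min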
It might not be much to look at, but a new computer at the University of Idaho (UI) can run simulations most computers across the nation cannot physically handle. UI faculty have been experimenting with the supercomputer since the fall and have started to integrate its abilities into their research this semester.

The computer, nicknamed "Big-STEM," provides researchers with an enhanced amount of memory to support large-scale simulations for the science, technology, engineering and mathematical divisions. "It is a very rare type of machine, especially for a school like ours," said Jim Alves-Foss, director of the Center for Secure and Dependable Systems at the university. "It is primarily used for simulating and modeling."

The machine, which is about the size of an average computer tower, has eight 10-core processors and 4 terabytes of memory - a figure expected to double by summer 2014. The first part of the machine was funded by a $300,000 grant from the National Science Foundation. They have now received another $240,000 grant from the Murdock Charitable Trust and are hoping to match funds with the Idaho Global Entrepreneurial Mission to add another 4 terabytes of memory - giving the machine 4,000 times the memory of the average home computer. "It is unique because every processor can access all of the memory, which makes it tremendously faster than anything we have had before," Alves-Foss said.

Mathematics professor Lyudmyla Barannyk has started using the machine to test the efficiency and accuracy of math algorithms. In the past, she has had access to systems at the Idaho National Laboratory, but those computers did not have enough memory to support her research. "It is more powerful than anything we know of," Barannyk said. "It opens up huge possibilities we couldn't even think of before."

Before the arrival of the machine, researchers had to simplify and downsize their models in order to run them on existing systems. Often they had to back up their work to disks and run the models in different sequences. Because of the massive amount of memory, the Big-STEM allows the different simulations and models to be run simultaneously. "We have some projects that would take weeks to run that are now down to hours," Alves-Foss said.

Several faculty members have already put the computer to work; it requires a constant cooling fan to keep from overheating. Alves-Foss said a physicist is currently working on modeling complex proteins and their interactions with chemicals in the human body to be used in the design of targeted drugs. The mechanical engineering department is working on off-shore wind turbines on waves: the computer can simulate the complex motion of the waves and model the wind to help researchers understand building in that specific type of environment.

Researchers access the machine from their own computers, using the software and programs already installed on their devices to run by remote access on the supercomputer. They schedule use of the computer based on the amount of memory needed to run their project, which allows multiple faculty members to use the machine at the same time.

Alves-Foss said that, aside from the supercomputing centers at select colleges around the nation, he does not know of any other computer like the Big-STEM in the Northwest, and there are few others in the country. The UI does not currently plan to acquire more supercomputers, but it has not ruled it out for the future. 
Alves-Foss said they want to monitor the Big-STEM before adding more machines. Ray Anderson, information technology resource manager for research in the College of Engineering, called the machine "turn-key high-performance computing," meaning that researchers can take the computer right out of the box and begin using it because it does not require any specialized software. "It allows researchers to start doing the research instead of spending time figuring out how to use the software," Anderson said. "This is the next step in the process - simplification."

©2014 the Moscow-Pullman Daily News (Moscow, Idaho)
Cloud computing has the potential to save large companies a total of $12.3 billion annually on energy costs and reduce annual carbon emissions by 85.7 million metric tons by 2020, according to a recent study examining the environmental impact of cloud computing published by the Carbon Disclosure Project.

The study, "Cloud Computing: The IT Solution for the 21st Century," revealed that those dramatic energy savings and carbon emission reductions come as cloud computing spending by large U.S. enterprises grows from 10 percent to 69 percent of IT budgets. The study was conducted by independent analyst research firm Verdantix and was sponsored by AT&T. It found that companies plan to accelerate their adoption of cloud computing from 10 percent to 69 percent of their IT spending by 2020.

Eleven large global companies with $1 billion or more in annual revenue were profiled in the study, including Boeing, Citigroup, Dell, Deutsche Bank and Juniper Networks. All participants have been using cloud services for at least two years. Many of them reported cost savings as their primary driver to adopt the cloud and anticipated that cloud computing would reduce their costs by 40 percent to 50 percent. "We are experiencing significant reuse, and hence carbon reduction, in our internal private cloud environment," Paul Stemmler, Citigroup managing director, engineering and integration, Citi Global Operations & Technology, said in the study.

The data gathered from the participants was used to formulate two distinct cloud computing scenarios in which a food and beverage company with annual revenues of $10 billion, 60,000 employees and operations in 30 countries can reduce energy costs and cut carbon emissions.

The study found that when that food and beverage firm transitions its human resources application from dedicated IT to the public cloud, it can reduce carbon emissions by 30,000 metric tons over five years, the equivalent of the annual emissions from 5,900 passenger vehicles. The lifetime cost of implementing and operating the public cloud solution for five years is $12.3 million, compared to $24.6 million to upgrade and operate a dedicated IT solution over the same time frame. Under that scenario, the firm achieves a net present value of $10.1 million over five years, with payback in under a year.

If that same firm moves its HR application from on-premise to a private cloud, it cuts carbon emissions by 25,000 metric tons over five years, the equivalent of the annual emissions from 4,900 passenger cars. In the private cloud scenario, the net present value is $4.4 million over five years, with payback coming in year two, the study found.
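To make the five-year cost comparison above concrete, here is a minimal sketch of a net-present-value calculation of the kind underlying those figures. The cash flows and discount rate are hypothetical placeholders, not numbers from the study:

# Net present value of a stream of end-of-year savings, discounted to today.
def npv(rate, cash_flows):
    return sum(cf / (1 + rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

# Assumed annual savings from moving an HR application to the public cloud
# (dedicated-IT cost minus cloud cost), and an assumed 8% discount rate.
annual_savings = [2.5e6] * 5
print(f"NPV of savings over five years: ${npv(0.08, annual_savings):,.0f}")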
If using radar to catch speeding motorists can be described as throwing a net over a wide area in hopes of catching something, then lidar is a rifle shot, pinpointing the intended quarry. But both have their places in law enforcement, and neither can replace the other. Radar has been used for decades with more than adequate results, and lidar has come onto the scene in recent years as a viable companion to radar.

Radar -- short for radio detection and ranging -- sprays a web of high-frequency radio waves in a cone shape, finds an object, and gauges its speed. Radar uses electromagnetic waves, or radio waves, to locate moving or fixed objects. A radar beam used for tracking the speed of motor vehicles is typically 12 feet wide and 100 feet long. It uses the Doppler principle, which measures frequency change. The radar transmits a microwave frequency that bounces off the vehicle and returns to the initiator. The vehicle's speed is calculated by measuring the difference between the frequency that was transmitted and the frequency that returned. The radar frequency, or beam, is conical in shape and reaches outward until it is reflected, refracted or absorbed. The range of the beam can be controlled by the operator. Radar disperses its beam and clocks any vehicle that enters that beam. A benefit of radar is that it can be used in a moving vehicle, whereas a lidar operator must be stationary.

Much like the technology used by surveyors, lidar -- short for light detection and ranging -- shoots a laser at a target to measure its distance and speed. The laser beam is about 1 foot to 3 feet in diameter, and with its reach of approximately 1,000 feet, lidar has a longer range than radar. Another advantage of lidar over radar is that it lets police target a specific vehicle.

Closing in on Tailgaters

Lidar's ability to measure the distance between moving vehicles is a relatively new feature of the technology that police increasingly use to bust tailgaters. The officer sets the gun to measure the distance between himself and the center of a traffic lane. When two cars pass by, the gun tracks the speed of both cars and calculates the distance between them. The Colorado State Patrol uses this function in heavy traffic to target aggressive drivers as well as speedsters.

"Lidar works extremely well in heavy traffic conditions," said Sgt. Kevin Ratzell. "The laser beam allows the officer to individually pick out a violator's vehicle even while in a group of cars."

The Colorado State Patrol also uses lidar to take accurate measurements at accident scenes by measuring skid marks, reference points and so forth. "This is great stuff that radar does not have the capability of doing," Ratzell said.

The Arizona Department of Public Safety recently purchased nine lidar units and uses them primarily to bust speeding motorists. But a side benefit, and one that the department sought when it made the purchase, was the feature that measures the distance between cars. "That was one of the benefits we were looking for," said Tom Mason, public information officer for the Department of Public Safety. "Lidars are like a blender; they come with different functions within them. You can get upgraded versions. We elected to purchase them with that extra [tailgating] feature on it. People don't think [tailgating] is dangerous. They don't understand why it's illegal, and we've definitely gotten the message out by using those instruments. It's been a really good tool." 
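The arithmetic behind both measurements is simple. Below is an illustrative sketch of the two underlying formulas: the two-way Doppler approximation for radar and time-of-flight ranging for lidar. The frequencies and timings are made-up example values, not the specifications of any particular enforcement unit:

C = 299_792_458.0  # speed of light in m/s

def radar_speed(f_transmitted_hz, doppler_shift_hz):
    # Two-way Doppler approximation: v = c * delta_f / (2 * f)
    return C * doppler_shift_hz / (2.0 * f_transmitted_hz)

def lidar_distance(round_trip_seconds):
    # Time-of-flight ranging: d = c * t / 2 (the pulse travels out and back)
    return C * round_trip_seconds / 2.0

# A 35.5 GHz radar seeing a 6.6 kHz shift reads roughly 28 m/s (about 62 mph).
print(radar_speed(35.5e9, 6.6e3))
# A laser pulse returning after 2 microseconds puts the target about 300 m away;
# lidar derives speed from the change in distance across successive pulses.
print(lidar_distance(2e-6))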
About two years ago, the Newark, Del., Police Department had grant money to spend and considered either lidar units or radar units. "The decision was to give lidar a chance," said Master Cpl. Curtis Davis. "It wasn't a new technology, but it was technology that was just hitting the mainstream at that time."

Pros and Cons

Newark bought two units then later added five more. Lidar has its place in Newark, but will not replace radar, Davis said. "In rural environments, open roadways and limited-access roadways, lidar is absolutely fabulous. The range is incredible, and you can zero in on a specific car so that you're not getting the slower vehicle." The advantage, Davis said, is that if a group of cars is approaching and one is clearly going faster than the others, the officer can target that car. With radar, any of the cars that come into the stream can trigger a response. "Laser is different from radar because you're used to throwing it out there, and whoever is going too fast gets stopped," Davis said. "At least with laser you can get the most flagrant offenders."

But Newark, primarily an urban area, has found that there are also disadvantages to lidar. "In an urban setting it's absolutely horrible, because if a telephone pole gets into the way of your laser -- between you and the car -- or a branch or a sign or anything that interrupts the stream, you have nothing," Davis said. "Radar will go around that sort of thing."

Capt. Lisa Solomon of the Paso Robles, Calif., Police Department said her department found both radar and lidar to be valuable. "We use lidar on a daily basis. The primary benefit is the small bandwidth of the beam as opposed to radar," she said. "This makes it the best choice for speed-enforcement tools on congested roadways. Our motor officers use it most frequently where traffic is heavy. Lidar is a handheld device that has no moving mode. For patrol officers who have other beat responsibilities, radar usually works better because it is always on and ready as they are driving around.

"If the officer spots a possible violation, he can consult the dash-mounted radar and get instant results regarding speed of a vehicle coming toward him while moving," Solomon continued. "This is really the key difference and the reason a municipal law enforcement agency would want both."

You've Come a Long Way

Lidar has come a long way since the Colorado State Police first toyed with it in the early 1990s. The units then were clumsy and costly, even more expensive than the $4,000 cost of today's unit -- radar units sell for about half the price. New lidar units are much smaller and easier to handle, and have been likened to a pair of binoculars. Lidar providers are also beginning to produce more options, including digital photo evidence. When the lidar captures a speed violation, it also records a digital picture for evidence. The image shows the vehicle, its speed, the lidar target and a time stamp.

Even with the new options, experts say lidar won't replace radar, but it will be a handy companion that fills a niche, as it does for Ratzell and the Colorado State Patrol. "We feel lidar will help our efforts to control speed, following too closely, and with the safety of officers at an accident scene," he said.
A converter, also called a transceiver, is a device that combines a transmitter and a receiver sharing common circuitry or a single housing. When no circuitry is common between the transmit and receive functions, the device is a transmitter-receiver. The term originated in the early 1920s. Technically, transceivers must combine a significant amount of the transmitter and receiver handling circuitry.

Fiber media converters, also known as fiber transceivers or Ethernet media converters, are simple networking devices that make it possible to connect two dissimilar media types, such as twisted-pair copper (Cat 5 or Cat 6) cabling and fiber optic cabling. Put plainly, they receive data signals sent via one medium, convert the signals, and then transmit them onto another. Fiber optic media converters can convert the signals sent over copper cable to signals that run on fiber cable. They are copper-to-fiber or fiber-to-fiber conversion devices, and they are important in interconnecting fiber optic cabling-based systems with existing copper-based, structured cabling systems. Fiber Ethernet media converters support a variety of communication protocols, including Ethernet, Fast Ethernet and Gigabit Ethernet.

There are single-mode and multi-mode converters. Single-mode converters come in dual-fiber and single-fiber types; in the single-fiber type, one fiber cable functions as both the transmitting medium and the receiving medium. Multi-mode converters come only in dual-fiber types. Single-fiber media converters are also called WDM fiber optic converters.

A fiber media converter can connect different local area network (LAN) media, modifying duplex and speed settings. For example, switching media converters can connect legacy 10BASE-T network segments to more recent 100BASE-TX or 100BASE-FX Fast Ethernet infrastructure. For another, existing half-duplex hubs can be connected to 100BASE-TX Fast Ethernet network segments over 100BASE-FX fiber. When expanding the reach of the LAN to span multiple locations, fiber transceivers are useful in connecting multiple LANs to form one large campus area network that spans a wide geographic area.

Our fiber media converters are designed to meet the needs of massive fiber network deployment and are able to extend a legacy copper-based Ethernet network via fiber optic cable to a maximum distance of up to 100 km. They are fully compliant with IEEE 802.3u standards and support bi-directional transmission of 10/100/1000M Ethernet data over one multi-mode or single-mode fiber. We offer compact, cost-effective, low-dissipation, highly reliable and stable fiber media converters that can be used in standalone applications or in rack-mounted applications, where multiple media converters can be inserted into a rack-mount chassis (up to 16 units), allowing all the converters to be powered by a single internal power supply.
Assumptions are untested beliefs or predictions. We use them in building many models because we are projecting or anticipating results. We have to test assumptions through "what if" testing or sensitivity analysis before accepting the results of the model. DSS analysts and managers need to make assumptions about the time and risk dimensions of a situation.

Model-driven DSS can be designed assuming either static or dynamic analysis. Making either assumption about changes in a decision situation has advantages and disadvantages.

Static analysis is based on a single "snapshot" of a situation. Everything occurs in a single interval, which can be of short or long duration. A decision about whether a company should make or buy a product can be considered static in nature. A quarterly or annual income statement is static. During a static analysis it is assumed that there is stability in the decision situation.

Dynamic analysis is used for situations that change over time. A simple example would be a five-year profit projection, where the input data, such as costs, prices, and quantities, change from year to year. Dynamic models are also time dependent. For example, in determining how many cash registers should be open in a supermarket, it is necessary to consider the time of day. This time dependence occurs because in most supermarkets the number of people arriving at the market changes at different hours of the day.

Dynamic models are important because they show trends and patterns over time. They can also be used to calculate averages per period or moving averages, and to prepare comparative analyses. A comparative analysis might examine profit this quarter versus profit in the same quarter of last year. Dynamic analysis can provide an understanding of the changes occurring within a business enterprise. The analyses may identify possible solutions to specific business challenges and may facilitate the development of business plans, strategies and tactics.

DSS analysts and managers also must examine whether it is appropriate to assume certainty, uncertainty, or risk in a decision situation. When we build models, each of these types of situations needs to be considered and an appropriate assumption needs to be made. The assumptions of DSS analysts and managers limit or constrain the types of models that can be used to build a DSS for the situation. Most of the rest of this chapter discusses various types of models.
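As a concrete illustration of the "what if" testing described above, the following minimal sketch varies a single assumption in a simple profit projection and observes the effect on the model's output. The model, names, and figures are hypothetical:

def projected_profit(units_sold, price, unit_cost, fixed_costs):
    # A deliberately simple model: revenue minus variable and fixed costs.
    return units_sold * price - units_sold * unit_cost - fixed_costs

# Sensitivity analysis over the units-sold assumption
# (pessimistic / baseline / optimistic scenarios).
for units in (8_000, 10_000, 12_000):
    profit = projected_profit(units, price=25.0, unit_cost=14.0, fixed_costs=60_000)
    print(f"units={units:>6,}: profit=${profit:,.0f}")

If a small change in one assumption swings the projected profit from positive to negative, that assumption deserves scrutiny before the model's results are accepted.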
What You'll Learn - Customize slide layouts, create and use multiple themes, add graphics to backgrounds, and share custom themes between presentations - Use copied, imported, linked, and embedded data from other presentations and programs to create presentations - Use charts and tables to present data graphically - Use graphics and animation effects to enhance presentations, and increase the impact of text and graphics in a presentation - Add notes and annotations to slides; and use PowerPoint features to rehearse, package, and prepare slide shows for presentation Who Needs To Attend This course is intended for students who have a foundational working knowledge of Microsoft PowerPoint 2013, who wish to take advantage of the application's higher-level usability, security, collaboration, and distribution functionality.
September 22 is World Car Free Day, a global event which “promotes improvement of mass transit, cycling and walking, and the development of communities where jobs are closer to home and where shopping is within walking distance,” according to a 2009 article in the Washington Post. And while it’s still a relatively novel idea on this side of the Atlantic, it’s already a big deal in Europe; according to a New York Times article that appeared the same year, “1,667 cities across the continent designate at least one day this week as car-free,” a move that’s coupled with a push to reduce pollution and increase mass-transit use.

So how can we bring some of this enthusiasm for sustainable living, community engagement, and a healthy work-life balance to the United States, without having to fundamentally rearrange our society? We’re going to take a look at why World Car Free Day is important in the first place, and how we can honor the day while making only minor changes to our lifestyles.

First, let’s examine the environmental impact of automobile use. According to the Union of Concerned Scientists, over a third of the carbon monoxide and nitrogen oxide pollutants found in our atmosphere are produced by on-road vehicles. Not only does a world-wide week of reduced automotive travel reduce some of the environmental strain caused by these greenhouse gases, it also serves as a reminder for us to consider the ways our choices impact, even in a small way, the world around us.

But World Car Free Day isn’t just about helping the environment – it’s also about reducing the negative effects of car culture on our personal lives. According to the US Census Bureau, the average US commute time is 25.4 minutes – that’s 50.8 minutes round-trip, or nearly an hour every day spent in traffic. As more and more companies begin offering flexible work schedules and videoconferencing-enabled telecommute opportunities, more of their employees will be able to enjoy the benefits of a reduced commute: more time spent at home with their families, less travel-related stress, and the resulting improved health. In 2010, Reuters reported on a study that found that employees with flexible work schedules had lower blood pressure and heart rates than their cubicle-dwelling peers, and reported better sleep quality and less fatigue as well – and that’s really just the beginning.

Thanks to video conferencing, it’s easier than ever to enjoy the benefits of World Car Free Day. So celebrate the holiday this Monday – skip the commute by telecommuting to work. Not only will you save time and money, but you’ll be helping the environment and improving your well-being in the process. Talk about a win-win! If you don’t already have video conferencing, download Lifesize Cloud for free today!
Although the business world is profit oriented, profits are not the only way, or even the best way, to measure business performance. There are many other elements to consider that may be even more important than profit. Profit is a short-sighted gauge of success; what is more important is sustainability.

Sustainability is the characteristic of being able to exist indefinitely. This includes employees, the availability of raw materials, machinery and other value-adding elements. Rather than maintaining unsustainable processes in order to increase the profit margin, a company would be better off optimizing efficiency.

Efficiency is the ratio of output to input. Input involves all of the resources that are exhausted in order to perform a business process. Output is measured by the quantity of products or the number of job orders for a particular service.

In order for business processes to become more efficient, the manager must have the ability to effect change. This is not only a characteristic of the manager; it reflects on the entire workforce. The ability to effect change can mean the difference between one batch of defective products and an entire shipment of them. The manager must be aware of the risks involved in the changes that he puts into effect. Making changes to a business process always has pros and cons, and the assessment of risks is part of being a manager. Risk is another element to consider when measuring business performance.

Lastly, customer satisfaction and corporate goals are two key performance indicators. These may seem like abstract elements, but they largely impact a business in ways that are difficult to quantify, like reputation and employee morale.
THE KILL CHAIN is a concept borrowed from the military. It describes the phases involved in an attack. It came into use in the commercial sector in 2011, when Lockheed Martin coined the phrase "cyber kill chain" to describe the phases involved in any advanced targeted attack on computer networks. These are: reconnaissance, weaponisation, delivery, exploitation, installation, command and control, and actions on objectives.

Every phase of the kill chain provides an opportunity to disrupt attacker activity using a combination of people, processes and technology. The earlier an attacker can be disrupted, the easier and quicker it is for an organisation to mitigate the threat and prevent serious interruption to its operations, as well as to avoid the consequences and costs of a full-blown assault. Any organisation, whatever its size or line of business, could be the target of an advanced attack. This document describes what options are available for disrupting attackers at each stage of the kill chain.
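A rough way to operationalise the idea is to keep an explicit map from each phase to the controls that could disrupt it. The pairings in this Python sketch are illustrative assumptions, not recommendations taken from the document itself:

```python
# Illustrative kill-chain phase -> example disruption opportunities.
# The pairings are this sketch's assumptions, not the document's guidance.
KILL_CHAIN = {
    "reconnaissance": "watch for scanning, scraping and phishing reconnaissance",
    "weaponisation": "threat intelligence on exploit kits and malware families",
    "delivery": "email and web filtering, user awareness training",
    "exploitation": "prompt patching, hardened configurations",
    "installation": "application whitelisting, endpoint protection",
    "command and control": "egress filtering, DNS and proxy monitoring",
    "actions on objectives": "network segmentation, data-loss prevention",
}

for phase, controls in KILL_CHAIN.items():
    print(f"{phase:>22}: {controls}")
```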
What are webservices?

Broadly speaking, "Web Services" are programs that let one computer system talk to another computer system over the internet. For example, you might want to enter customer details into an office computer system and have those customer details checked against an online 'address checking' website or something similar. (A small client sketch appears after the RPG notes below.)

Today, Web Services are self-contained, self-described component applications that can be published, located, and invoked across the Web. Web Services provide a standard means of interoperating between different software applications running on a variety of platforms. eXtensible Markup Language (XML) provides the extensibility and language neutrality that is the key to standards-based interoperability of Web Services. They perform functions that can include anything from simple query responses to complex business processes. Once a Web Service is deployed, other applications can discover and invoke it. At present, Web Services require human interaction for identification and implementation.

A web service has a list of methods and procedures that can be used by any application, irrespective of the programming language, OS or hardware used to develop it. Any type of application can access the functionality provided by the web service, and such functionality is called web methods or web APIs.

"How do I check the status of the System 21 (JBA/GEAC/INFOR) background jobs from an RPG program?"

I was asked to hack together a quick little routine to check the status of the System21 background jobs for some specific web-service calls that were talking to an IBM i system running INFOR's System21 ERP. Hmmm... that is an awful lot of words in one sentence! Let's try it again. I am trying to say this: "a web-service call to JBA SYSTEM21 needs to know if a background job is running before it does something". In this case the background job it's checking for is called "WH_CONFDSP", the warehousing confirm dispatch job (or post shipment, as Americans call it).

So, we know that System21 ERP does some things interactively and some things in background (aka "batch") mode. We can easily view the background job statuses using the System21 menus: we can either go to menu /L1S (or L1SUS if American) in both the green screen and Infor Client, or simply call the display program from the command line.

The wonderful IBM RPGLE programming language: what are Chain(N) and Chain(NE)?

For file I/O requests in RPG (i.e. Chain, Read, Reade, Setll, etc.) we can add some options using:

(N) - No Lock
(E) - Error Handling
(NE) - No Lock and Error Handling

Writing code in RPG, the CHAIN operation code is used to go and get a specific ROW (or record) of data from a file. It returns the first entry that matches the KEY being used. In this blog let's look at what happens when the file (CustomerMasterFile) is defined in our program as an UPDATE file.

So, if we wanted to retrieve the customer information from a Customer Master file (assuming it is keyed by the company name and the customer id number) we might code it simply like this:

```
// fetch the first record matching the key; on an UPDATE file the record is locked
CHAIN (Company : CustomerNumber) CustomerMasterFile;
```

which is basically the same as:

```
// position the file cursor at the key, then read the first matching record
SETLL (Company : CustomerNumber) CustomerMasterFile;
READE (Company : CustomerNumber) CustomerMasterFile;
```
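Tying the web-services explanation to the System21 job check above, here is a minimal Python sketch of what such a client call might look like. The endpoint URL, the path and the JSON response shape are hypothetical illustrations, not INFOR's actual API:

```python
import requests

# Hypothetical web service on the IBM i that reports whether a System21
# background job is active. The URL and response shape are assumptions.
BASE_URL = "https://ibmi.example.com/system21/jobs"

def job_is_active(job_name: str) -> bool:
    """Ask the (hypothetical) job-status service about one background job."""
    resp = requests.get(f"{BASE_URL}/{job_name}", timeout=10)
    resp.raise_for_status()
    return resp.json().get("status") == "ACTIVE"

if __name__ == "__main__":
    if job_is_active("WH_CONFDSP"):
        print("Confirm dispatch is running - safe to post the shipment.")
    else:
        print("WH_CONFDSP is not active - hold the web-service call.")
```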
I've been a fan of talk radio for many years. I enjoy listening to all kinds of talk radio, ranging from political to theatrical, from religious to sports. I regularly drive long road trips, and listening to an engaging conversation on the radio keeps my gray matter buzzing. Even when I disagree with someone's point of view, it's always interesting to hear other people's opinions. Mister Iain Lee is an English radio presenter who excels at this art form ;) Sometimes he makes me laugh out loud, sometimes I disagree with him and find myself yelling at the radio, and at other times I've got home and just sat in the car for 20 minutes to listen to the end of the show. For the last few months I've been listening to Iain Lee's podcasts from BBC Three Counties Radio... So, I was surprised to see his name plastered all over Twitter yesterday and found out that the BBC has sacked him (or maybe he quit; let's just say they have parted ways) after he was forced to apologize for an interview he hosted. What was his grand mistake? He asked someone with bigoted views on homosexuality if they realised their views were bigoted!

I've been working on an interesting project focused on taking some old RPG code and re-factoring it to make it more efficient. Fascinating work for a client that is focused on doubling its IBM i throughput and reducing the CPU load of all its old programs. This has frequently made me choose between writing a single line of %BIF'd-up code that looks slick and minimalist, or writing three lines of code that are more readable and arguably (marginally) less efficient. /me remembers the AS400 "SETON vs MOVEL *ON" arguments of yesteryear with a fond smile... I think I've finally found my official position on this: I prefer code readability over specialized (aka clever!) techniques. I would rather write code that is a little more verbose and well commented than do the same thing in a cryptic or obfuscated manner. Yes, I'm guilty of waffling in my comments and sometimes using variable names that make me smile. Remember, software undergoes beta testing just before it's released. Beta is Latin for "the program still doesn't work".

Eldest son has started a new high school class: Engineering. I'm excited for him because Engineering Science was one of the classes that I really enjoyed. Back in the day, we used slide rules, sketched on huge sheets of paper with hard pencils, and used sin/cos/tan to calculate angles and load-bearing values. Things were different back then, but the basic principles were the same! Computers were something that I only ever saw on my weekly dose of Star Trek, Space 1999 or Blakes7. Nowadays, the students will be learning how to use Autocad 2014 on their laptops, tapping away and creating perfect technical drawings in the space of a few minutes. *sigh* Kids nowadays - /insert rant about how hard it was working down the mines when I was a kid :) But I digress... So, Nate inherited my two-year-old IBM Lenovo Thinkpad Yoga 13s. It's a solid laptop, touchscreen, and perfect for him to carry to and from school and to play with Autocad on. Or so we thought.

Jerry is a friendly chap, an accomplished mechanic with great prices and swift work! Shortly after we relocated to Charleston, South Carolina, my stepson's Jeep Wrangler had its front brakes catch on fire -- well, maybe not a fire, but a serious smokescreen for sure! He was driving along in front of me and suddenly the front driver's side started smoking heavily, and I could see the Jeep juddering from side to side as he tried to slow it down. So, we swapped cars and I limped it home and got on Craigslist to find a cheap, local mobile mechanic who could come to our house and have a look at the thing...

A couple of years ago, when I first moved to GMAIL, I wrote a blog (how to display images in Google mail).
Things have changed massively since then, not least being the introduction of email .sigs in GMAIL itself. But nowadays I get so much email fluff that I have just decided to split my email into two accounts: GoogleMail for personal stuff and Office365 for business. So I now have two separate email sigs...

Personal email -- FACEBOOK image in gmail signature. Now, with a basic knowledge of HTML you can edit the source code to include any extra bits you wish. I have a much more boring business email signature, so when sending mails from my *work* account I simply use images pulled from the business server @ projex.com
The word "interoperable" entered the popular vocabulary on Sept. 12, 2001. The awful day before revealed the shocking inability of New York City police, firefighters and other emergency personnel to talk to each other in times of crisis.

Things have changed since then. San Diego, Calif., for example, is well on the way to crafting a communications system that reaches across multiple emergency agencies in multiple jurisdictions. "It means that we can talk to each other irrespective of the crisis, which means that we can save time and save lives," said Jill Olen, deputy chief operating officer for Public Safety and Homeland Security in San Diego.

The U.S. Department of Homeland Security (DHS) recently recognized San Diego's work, naming the San Diego region one of the top four major urban areas in terms of disaster interoperability. City and county managers say careful technology planning has been the key to their success. Officials from diverse cities, towns and unincorporated areas began laying the groundwork for this objective 10 years ago, according to Sue Levine, regional interoperable communications project manager at the San Diego State University Research Foundation. "The people responsible for the area's response networks have been working cooperatively for a long time to provide the best level of interoperability," Levine said.

In May 1998, that planning led to the creation of a regional communications system uniting approximately 200 agencies on the same technology platform by implementing the same technology tools across a broad range of government entities. "It shows that we take regionalization very seriously," said Ron Lane, director of San Diego County's Office of Emergency Services.

The emphasis was put on technological consistency rather than complexity. Throughout the region, virtually all emergency services communicate using 800 MHz Motorola equipment. Municipalities share a Motorola SmartZone wide-area trunked communication system. The system has been programmed to include "talk groups" of police, fire and other users, creating a seamless communications network among all participating jurisdictions. "When any agency joins the system they purchase compatible radios so that they can utilize the towers and the infrastructure," Lane said. "This way, a dispatch agency can call any radio on the frequency."

Ironically, the only jurisdiction not directly plugged into this shared space is San Diego itself. "We have patches and Band-Aids on our systems," Olen said. "We have made it work, but we need to all be on a single system." The problem stems from an untimely purchase: the city bought its 800 MHz radios four years ahead of the county, and while they are compatible on the street, officer to officer, county dispatchers can't reach police officers on their beat in the city. The Band-Aid has been to route calls through a city dispatch hub, which lets county dispatchers deliver messages to city emergency workers.

Planners needed a robust system, given the enormity of their task. San Diego County is roughly the size of Connecticut, with 65 separate fire districts just within the county, not counting the city or unincorporated areas. "It could potentially be hundreds of agencies, depending on the severity of the incident," Olen said of the response to a crisis. To ensure connections across the system, planners have looked beyond the radios themselves.
In the wake of wildfires in 2003, for instance, the county spent $20 million upgrading its towers and other communications infrastructure. "One of the things we learned during that fire was that our system was quickly overwhelmed, especially when some of our repeaters and towers went down in the fire," Lane said. "We have focused on building redundancy and adding additional infrastructure to achieve that redundancy around the region."

The effort appears to be paying off. In summer 2006, a fire broke out in the Cleveland National Forest, the southernmost national forest in California. Even with 1,000 firefighters on the scene, the county heard virtually no complaints about communications issues. By coincidence, DHS representatives were in town that same week to assess the communications situation.

Looking ahead, regional planners said they hope to maintain communications integrity as the area moves toward Project 25 (P25), a new suite of standards for digital radio communications that aims to help jurisdictions and agencies talk to one another more effectively. The region will need to move approximately 29,000 radios to P25 before the new standard comes into play in 2012. "Between now and then," Lane said, "we need to come up with a road map and a way forward."

As many government technologists know, technology is not the biggest hurdle. Radio replacements will cost $100,000, and while a potential $1 billion nationwide grant from the DHS may support that effort, the budget is still going to be tight. "There certainly is going to be a challenge in funding an upgrade of this magnitude," Lane said.
Table of contents:
I. Find the process and try terminating it (plus alternative steps for finding and terminating the process)
II. Locate the malicious file and try deleting it
III. Using Pocket KillBox for removal of difficult malware

Each program is a collection of files. To start the program you launch an executable file that runs the entire program or some of its components. When you launch an executable, part of its code is loaded into the computer's memory. This running code is the process; it allows the system to run the corresponding program. Put simply, every running program is represented by its main process (or task). If no such process exists, the application is not running at the moment.

Parasites are programs too, and they also have processes. However, unlike regular software, their processes run without the user's knowledge. You cannot terminate a parasite like a common application by simply closing its window. That's why you have to learn how to kill malicious processes.

Each program consists of files. Even spyware, a virus or another parasite has its own files, and removing a parasite often means deleting all of its files. However, some files cannot be easily erased. You cannot delete a file while it is in use by an active application. Furthermore, some files are "invisible". Imagine the situation: your anti-spyware program keeps detecting a parasite, and you know where its files reside. You open the corresponding folder, but see nothing in there! The parasite continues performing malicious actions and its files remain in that "empty" directory. You wonder how this happens? Files really can be "invisible". However, it is not some exceptional feature of the files themselves; the operating system simply hides them from you. Such OS behavior can be a result of recent malware activity. Fortunately, there are several ways to make your system display such files, and thus allow you to delete them.

This guide describes manual process-termination methods, which can be applied to all modern Windows operating system versions. The following instructions also explain how to find a file, make it visible (in case it is hidden) and completely remove it from the system. This information is fully applicable to folders (directories) as well.

1. Start Windows Task Manager
Use the following key combination: press CTRL+ALT+DEL or CTRL+SHIFT+ESC. This will open the Windows Task Manager. If that didn't work, try another way: press the Start button and click on the Run... option. This will start the Run tool. Type in taskmgr and press OK. This should start the Windows Task Manager.
Image 1. Start the Task Manager

2. Find and terminate the process
Within the Windows Task Manager click on the Processes tab (it is in the red box). This will bring up the complete list of all active tasks. Find the process by name; names are in the first column from the left. Click on the Image Name button (designated by the blue box) to sort tasks in alphabetical order, then scroll the list to find the required process. Select it with your mouse or keyboard and click on the End Process button (in the green box). This will kill the process.
Image 2. Terminate the process

Let's assume you know the file name, or at least a part of it. In that case run the Windows default search tool: Start > Search > For Files and Folders. Type the file name or part of it into the search field and specify the search location. For better results select "Look in: Local Hard Drives" or "Look in: My Computer". Now start searching. The file should appear in the search results.
Image 6. Search for the file
If you have no idea how to spell the filename, but you know where it could possibly be, then you should try finding the file manually. Most parasites attempt to hide their tracks, so you will have to enable the display of hidden and system-protected files.

Open Windows Explorer. Click on the Tools menu and select Folder Options.
Image 7. Make hidden files visible

Choose the View tab. In the Advanced Settings list find the option Show hidden files and folders (on Image 8 it is designated by the red box) and select it. Then remove the checkmark next to the line Hide protected operating system files (Recommended) (in the blue box).
Image 8. Change view settings

Some files may still be invisible. To see them, launch the Command Prompt: press the Start button and then select Run. This should open the Run dialog. Type in cmd and press Enter or click on the OK button.
Image 9. Open the Command Prompt

Type dir /A name_of_the_folder into the console. This will list all the files that reside in that folder; hidden files will also be displayed.
Image 10. View folder content

Simply delete the file using Windows Explorer or any other program that you use to browse the file system. Don't forget to empty the Recycle Bin. If an error message appears saying that the file is in use and cannot be removed, try terminating the associated process and then delete the file. To do this, open the Windows Task Manager (press CTRL+ALT+DEL or CTRL+SHIFT+ESC), then in the Processes tab select the corresponding process and click on the End Process button. However, some processes will start again immediately after you terminate them. In that case you have to reboot your system into Windows Safe Mode (this tutorial article explains how to do this). In this mode many system services are disabled and programs do not run automatically on startup, so practically any file can be easily removed.

The malicious file can also be deleted from the Command Prompt. Open the Command Prompt and navigate to the folder where the harmful file is by issuing the command cd name_of_the_folder. Then invoke the command del name_of_the_file. To delete a whole folder, use another command: rmdir /S name_of_the_folder.
Image 11. Delete the folder from the Command Prompt

Sometimes malicious files cannot be deleted normally, even after entering Safe Mode. Sophisticated parasites use integrated rootkits and special techniques in order to lock their files and prevent them from being deleted. Usually, such files run processes that cannot be terminated by the Task Manager. In such cases specially designed third-party tools should be used. One of them is Pocket KillBox, a tiny but priceless utility designed for terminating harmful processes and deleting malicious files and folders containing malware. If the above steps did not help you to delete a parasite file or kill its process, please do the following.

1. Launch the tool
There is no need to install the tool. Pocket KillBox comes as a single executable file. Just unpack it (if you downloaded Pocket KillBox as an archive) and run the downloaded file. This will launch the utility.

2. Delete the file
Type in the full path of the file you want to delete, as shown on Image 12. Make sure that the Standard File Kill option is selected (it is designated by the blue box). Then click on the Delete File button (designated by the green box).
Image 12. Delete the file with KillBox
As parasites become more complex and sophisticated, there is always a possibility that even Pocket KillBox or a similar powerful tool may fail to remove certain files. In that case it is highly recommended to repeat the removal procedure in Windows Safe Mode (this tutorial explains how to restart your system into it). If the file cannot be deleted in Safe Mode either, repeat the removal once again, but this time select the Delete on Reboot option instead of Standard File Kill, then restart your computer. Pocket KillBox will attempt to delete the file on the next system startup.

If the process or file is still present, if you do not know how to follow the steps above, if you are not sure why you have to do certain tasks, or if the above guide is too difficult for you, feel free to try our recommended automatic spyware removers. You can also ask for help in our free spyware removal forum.
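The same find-and-terminate workflow can also be scripted. Below is a minimal sketch using the third-party psutil library; the library choice and the target name are this example's assumptions, and it simply mirrors the manual steps above: find a process by image name, terminate it, then delete its file.

```python
import os
import psutil  # third-party: pip install psutil

TARGET = "malware.exe"  # hypothetical process/file name

for proc in psutil.process_iter(["name", "exe"]):
    if proc.info["name"] == TARGET:
        path = proc.info["exe"]     # remember where the file lives
        proc.terminate()            # polite request first
        try:
            proc.wait(timeout=5)
        except psutil.TimeoutExpired:
            proc.kill()             # force-kill if it ignores us
        # With the process gone, the file is no longer "in use".
        if path and os.path.exists(path):
            os.remove(path)
```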
With cyber attacks becoming the norm, it is more important than ever to undertake regular vulnerability scans and penetration testing to identify vulnerabilities and verify on a regular basis that cyber controls are working.

Geraint Williams, Senior Consultant at IT Governance, explains: "Vulnerability scanning examines the exposed assets (network, server, applications) for vulnerabilities. The downside of a vulnerability scan is that false positives are frequently reported. False positives may be a sign that an existing control is not fully effective, i.e. sanitizing of application input and output, especially on web applications.

"Penetration testing looks at vulnerabilities and will try to exploit them. The testing is often stopped when the objective is achieved, i.e. when access to a network has been gained; this means there can be other exploitable vulnerabilities that are not tested."

Organizations need to conduct regular testing of their systems for the following key reasons:
- To determine the weaknesses in the infrastructure (hardware), applications (software) and people, in order to develop controls
- To ensure controls have been implemented and are effective, which provides assurance to information security and senior management
- To test applications that are often the avenues of attack (applications are built by people, who can make mistakes despite best practices in software development)
- To discover new bugs in existing software (patches and updates can fix existing vulnerabilities, but they can also introduce new ones).

Geraint adds: "If people are attacked through social engineering, this bypasses the stronger perimeter controls and exposes less protected internal assets. The worst situation is to have an exploitable vulnerability within infrastructure, applications or people that you are not aware of, as the attackers will be probing your assets even if you are not. Breaches, unless publicized by the attackers, can go undetected for months."

Vulnerability scanning and penetration testing can also test an organization's ability to detect intrusions and breaches. Organizations need to scan the externally available infrastructure and applications to protect against external threats. They also need to scan internally to protect against insider threats and compromised individuals. Internal testing needs to include the controls between different security zones (DMZ, cardholder data environment, SCADA environment, etc.) to ensure these are correctly configured.

Pen testing should be conducted regularly, to detect recently discovered, previously unknown vulnerabilities. The minimum frequency depends on the type of testing being conducted and the target of the test. Testing should be at least annual, and perhaps monthly for internal vulnerability scanning of workstations; standards such as the PCI DSS recommend intervals for various scan types. Pen testing should also be undertaken after deployment of new infrastructure and applications, as well as after major changes to infrastructure and applications (e.g. changes to firewall rules, updating of firmware, patches and upgrades to software).
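Commercial scanners automate this at scale, but the first, network-facing step is easy to illustrate. Here is a minimal sketch of a TCP connect scan in Python; the host and port list are placeholders, it only reports whether ports accept connections (a small fraction of what a real vulnerability scanner checks), and it should only ever be pointed at systems you are authorized to test.

```python
import socket

HOST = "192.0.2.10"                  # placeholder address (TEST-NET-1)
PORTS = [21, 22, 25, 80, 443, 3389]  # a few commonly exposed services

# A TCP "connect" scan: a port that completes the handshake is open.
for port in PORTS:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(1.0)
    try:
        is_open = sock.connect_ex((HOST, port)) == 0
    finally:
        sock.close()
    print(f"{HOST}:{port} {'open' if is_open else 'closed/filtered'}")
```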
A fiber optic transceiver is used only for photoelectric (optical-to-electrical) signal conversion: it converts between fiber broadband and twisted-pair equipment at 10/100/1000M. A protocol converter, by contrast, is a physical-layer device used to convert one protocol to another.

Most protocol converters are essentially layer-2 devices. A commonly encountered example is a RAD converter that turns a 2M E1 line into a V.35 data line connected to a router; there are also converters that turn 2M lines into twisted-pair Ethernet, so that 2M communication lines can provide remote LAN access and expansion. Maintenance of this equipment is generally undemanding; it rarely fails as long as it does not burn out.

An optical multiplexer is divided into an optical transmitter and an optical receiver. The optical transmitter receives the electrical signal and converts it into an optical signal for transmission over the optical fiber; the optical receiver does the reverse.

A network video server is dedicated network transmission equipment that accepts analog audio and video signals directly from cameras and monitor heads, then digitally compresses and encodes them (MPEG-4 is the popular choice). Video servers generally come with an Ethernet RJ-45 interface and a fiber FC interface, plus a SCSI interface for an external hard drive used for front-end storage, which makes them excellent equipment for remote network monitoring systems.

The role of the digital optical multiplexer is to digitize the image, voice and data signals to be transmitted, multiplex these digital signals, convert the multiple low-speed digital signals into one high-speed signal, and then convert this signal into an optical signal. At the receiving end, the optical signal is restored into an electrical signal, the high-speed signal is demultiplexed, and finally the data signals are restored into image, voice and data signals.

Fiber optic transceivers and optical multiplexers are similar in that both perform photoelectric conversion. They differ in that the fiber optic transceiver primarily passes network traffic: it performs only photoelectric conversion, without changing the encoding or applying any other processing, and it is built for Ethernet networks, running the 802.3 protocol and used only for point-to-point connections. The optical multiplexer, in addition to photoelectric conversion, also multiplexes and demultiplexes the data signals; at its core it transmits video, 485/422 serial data, audio, on/off (switching) signals and network traffic, and an optical multiplexer usually provides multiple pairs of E1 lines.

Fiber transceivers find applications in areas such as banking, education and networking, while SDH and PDH optical multiplexers are mainly used by telecom operators to provide point-to-point data circuits. The video optical multiplexer is mainly used in security monitoring, distance education, video conferencing and other fields with time-sensitive video transmission requirements; at the same time it can transmit control, switching, voice and Ethernet signals to meet the needs of multi-service applications, which is why it is sometimes also referred to as an integrated optical multiplexer.
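The multiplexing step described above, combining several low-speed channels into one high-speed signal, can be sketched in a few lines of Python. This toy example is an illustration only; real E1/SDH framing adds clocking, framing bits and error handling that this sketch ignores.

```python
# Toy time-division multiplexing: interleave N low-rate channels into one
# high-rate stream, then demultiplex back. Real E1/SDH framing is far richer.
channels = [
    [10, 11, 12],   # e.g. digitized video samples
    [20, 21, 22],   # e.g. audio samples
    [30, 31, 32],   # e.g. data bytes
]

# Multiplex: one "frame" carries one sample from every channel, in order.
muxed = [sample for frame in zip(*channels) for sample in frame]
print(muxed)  # [10, 20, 30, 11, 21, 31, 12, 22, 32]

# Demultiplex: every len(channels)-th sample belongs to the same channel.
n = len(channels)
demuxed = [muxed[i::n] for i in range(n)]
assert demuxed == channels
```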
Multi-Step Authentication and Why You Should Use It

Authentication is one of the essential components of security. It is one part of the concept known as authentication, authorization, and accounting (AAA). Authentication is the process of claiming an identity and then proving that you are that claimed identity. Authorization is the mechanism to control what you can access or do. Accounting is the recording of events into a log in order to review the activities against the rules and policies and to detect violations or confirm compliance. All three of these should be addressed when constructing a system in order to have a reasonable foundation for reliable security.

As users of online sites and services, we have no control over the security policies and technologies implemented on those sites and services. At best, we may be offered a few authentication options. If any authentication mechanisms are available in addition to a standard password, you need to take full advantage of them. When a site or service offers authentication options, those options usually include certificate authentication and OAuth single sign-on.

Certificate Authentication

Certificate authentication is the process of verifying identity using a digital certificate. A digital certificate is produced by a certificate authority (CA) using asymmetric public key cryptography. The digital certificate itself is the subject's public key signed (i.e., encrypted) with the CA's private key. A digital certificate is a form of trusted third-party authentication. Its most common use is by servers (i.e., web sites) on the Internet. A web site with a digital certificate is the first party. The second party is the visiting end-user. The third party is the CA that issued the certificate to the web site. If the end-user already knows and trusts the CA, then the end-user can trust in the identity of the web site by validating the digital certificate.

Unfortunately, most end-users do not have a digital certificate. And even if users obtained a digital certificate from a public and respected certificate authority, most online sites and services are not configured to accept client-side certificates. When it becomes common or standard for servers to accept client-side certificates, this will be the most secure authentication option. Until then, you will likely have to use one of the other options (assuming one of them is offered or supported by a particular site).

OAuth Single Sign-On

OAuth is a type of single sign-on solution that is gaining popularity online. Single sign-on is the concept of authentication in which a single logon event can be used to allow access into a collection of systems. This differs from traditional authentication, where each system requires its own unique and local authentication. Single sign-on has been a standard element in company networks for decades. There have been many attempts to duplicate this concept on the Internet, but only now, with the adoption of OAuth, is it actually becoming a reality.

OAuth is a way to share or borrow the authentication from one site to grant access to another site. Let's call the first site a primary site. The primary site must support OAuth and allow its authentication to be shared by other, secondary sites. Secondary sites must also support OAuth and then select which primary sites' authentications they will accept. The way OAuth works is:

1. You visit a secondary site and click on an offering to use a primary site's authentication to access the secondary site.
2. This takes you to the primary site.
If you do not have a current active session with the primary site, you are prompted to authenticate to the primary site.
3. With an active session to the primary site, you are prompted to confirm or accept the secondary site's request to link to your account on the primary site.
4. Clicking to confirm this returns you to the secondary site, where you now have access to that site.

Once OAuth has been confirmed on a secondary site, all future visits to that site will automatically log you in as long as you have a current active session with the primary site. The three most common or popular sites used as primaries are Facebook, Twitter, and Google, but there are dozens of other potential primary sites as well, including Amazon, Dropbox, Evernote, Flickr, LinkedIn, Microsoft, Netflix, PayPal, Tumblr, and Yahoo. Plus, there are numerous sites supporting OAuth to function as secondary sites.

OAuth is a huge convenience for users, as it reduces the number of unique logon credential sets that you must keep track of. However, it is not necessarily a good security option. If the primary site's authentication is a basic password only, then when your account is compromised on the primary site, the intruder automatically gains access to all the linked secondary sites as well. By the way, the primary site will maintain a list of secondary sites that have been linked. This list is for your convenience when you want to disconnect an OAuth link, but an intruder can use it to follow your links to those secondary sites.

ONLY use OAuth to link sites back to a primary site if you have configured multi-factor or multi-step authentication on the primary site. Otherwise, you would be better served setting a long and complicated password for each site and putting up with the hassle of managing multiple difficult credential sets (see my whitepaper Ten Steps to Better, Stronger Passwords for guidance on this).
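Here is a minimal sketch of that flow from a secondary site's point of view, in Python. It follows the general OAuth 2.0 authorization-code pattern; the endpoint URLs, client credentials and values are placeholders, not any real provider's API.

```python
import secrets
from urllib.parse import urlencode

import requests

# Placeholder provider endpoints and client registration (assumptions)
AUTHORIZE_URL = "https://primary.example.com/oauth/authorize"
TOKEN_URL     = "https://primary.example.com/oauth/token"
CLIENT_ID     = "secondary-site-client-id"
CLIENT_SECRET = "secondary-site-client-secret"
REDIRECT_URI  = "https://secondary.example.com/oauth/callback"

# Steps 1-2: send the user to the primary site to authenticate and consent.
state = secrets.token_urlsafe(16)   # anti-CSRF value, checked on return
login_url = AUTHORIZE_URL + "?" + urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "state": state,
})
print("Redirect the user's browser to:", login_url)

# Steps 3-4: the primary site redirects back with a one-time code,
# which the secondary site exchanges for an access token.
def exchange_code(code: str) -> dict:
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()   # typically contains access_token, expires_in, ...
```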
Egnatia Motorway in Greece is one of the largest road construction projects in Europe and creates a vitally important link to the rest of Europe in one direction and Asia in the other. The operator wanted to develop a wireless traffic surveillance project along the nine kilometers of the motorway that run from the Metsovo interchange to the Peristeri interchange. Despite being only nine kilometers in length, this route has the most difficult and varied terrain on the motorway, hence the need for surveillance cameras.

A comprehensive study was undertaken to determine the best approach to the project, and it revealed a number of unique factors for consideration. The complexity of the terrain meant that a more traditional point-to-multipoint topology would be impossible. Instead, the wireless solution provider who produced the study recommended a serial (cascading) scheme with multiple point-to-point junctions. IP-based surveillance cameras would deliver the necessary functionality for the project, including pan/tilt/zoom functionality, high-definition images and warnings of irregular "events" such as a traffic accident.

The network topology meant that load would gradually increase because of the cameras, peaking at the final point. There were 13 junctions that required installation along the length of the highway, and six of them were not visible because of tunnels and bends in the Egnatia Motorway. The cameras needed to be spaced relatively far apart, approximately one kilometer between each one. Another key requirement was that the cameras and the wireless equipment needed to stand up to the adverse weather conditions which prevail in the area, such as continuous rainfall for most of the year, very low temperatures, strong winds and high snowfall during the winter. It was also recommended that the system output should not vary noticeably when large vehicles pass, especially between tunnels.

In order to satisfy the complex mix of requirements dictated by the survey, the project's contractor recommended a combination of Axis 213 PTZ network cameras and Proxim's Tsunami MP.11 5054-R. The Axis network cameras met a number of requirements dictated by the environmental conditions, including pan/tilt/zoom functionality, the ability to operate under all light conditions and a vandal-resistant, outdoor-proof housing. Crucially, the cameras can send alerts about unusual traffic events to motorway staff. To ensure high-quality video transmissions, the project contractor insisted that the cameras initially support the M-JPEG video format and later MPEG-4. Since the Axis Camera Station supports high-quality recordings in both M-JPEG and MPEG-4 formats, this requirement was easily met. The built-in 26x optical zoom, auto-focus lens, along with 12x digital zoom, ensured that the cameras could be placed one kilometer apart as required.

The combined solution delivered excellent performance and met Egnatia Motorway's complex mix of requirements. Following the successful deployment of the project, the region's control center now has full command over the network cameras, and the combination of high bandwidth and low latency of the wireless network ensures that high-quality images are constantly available to motorway staff. Staff members are also alerted to unusual traffic events so they can respond accordingly. The system is completely secure and can be managed remotely.
An Internet inventor, or perhaps dreamer, named Dave Hakkens has a pretty cool idea about how phones should work. According to Hakkens, phones should be created a la carte, with each component plugging into a master board. This would allow individual parts to be replaced as they wear out or need upgrading, without replacing the entire phone. It would also allow individual users to design their own components. His idea is called a Phoneblok.

Hakkens isn't an engineer, and doesn't pretend to be able to build such a phone. But he's gotten almost a million people to support his idea through the Thunderclap website, which lets people socially "back" projects they might buy. Then, if a project gains a critical mass of interest, that evidence can be shown to funding companies as proof of concept to get things built. No money is involved with Thunderclap, so it requires less of an endorsement than Kickstarter, where backers have to pony up money.

For feds, a Phoneblok would be a perfect BYOD phone. Those who needed a bigger screen or a faster processor could configure one exactly how they wanted. And those who aren't allowed to have a camera in their workplaces could simply leave that block blank or fill it with something more useful, like a bigger hard drive or a more powerful speaker.

Some have commented on Hakkens' YouTube page that Phonebloks would be impossible to build, as components couldn't share a common interface due to wildly different power and data interfaces. And given current technology, that's probably true. But I think the point of the Thunderclap program is to show that this cool idea could be the next big thing. For those who think that Phonebloks might be impossible, I submit the humble Raspberry Pi for consideration. This little $25 computer came from nowhere a few years ago, and now people are using it as the brain power for everything from electric cars to weather monitoring stations to filming wildlife with infrared cameras at night.

My guess is that if given some type of platform to work with, the world would come up with lots of uses for a Phoneblok, and even individual components for it. So what do you think? Could Phonebloks be the next big thing, or are they simply the unrequited longings of a technology dreamer?

Posted by John Breeden II on Oct 22, 2013 at 12:40 PM

In many of the old war movies I watch, mission planners crowd around a sand table in a smoke-filled room to map out an operation. But it's not just the military that uses this collaborative strategy. Many government agencies that deal with emergencies and rapidly changing situations find that sand table collaboration leads to better, more informed decisions.

A high-tech descendant of the sand table is a GIS information table, like the TouchTable, that can display very detailed maps with vital information overlaid on top as needed. But it also required collaborators to be in the same room. When dealing with a situation like the Boston Marathon bombing, such a gathering might not be possible for some time after the event. And real-time information from people in the field might not become part of the decision process until it's too late to make a difference.

For agencies that need the power of a TouchTable, but also the speed of real-time collaboration, the company is coming out with a product that runs on tablets, PCs and mobile devices, and is even changing the company name to reflect this new focus. It's now called TouchShare.
The move to a mobile product specifically aimed at government agencies was an easy one to make, according to TouchShare officials. "Collaboration is one of the most important things you can do in an emergency," said Bob Pette, CEO of TouchShare. "Anytime you have multiple smart people working together, you will come up with a better decision than if one or two people were making those calls alone."

The new TouchShare software runs on PCs and Windows tablets right now, and it is being optimized for the iOS platform with a projected January release date. After that, Pette said, the platform will be rolled out to Android devices as well. The main program can be installed in the cloud, with seat licenses shared among users. The software currently runs on Amazon Web Services but is in the process of getting certified to run within specific government cloud services. For agencies that really want to lock down their security, the software can be set to run on a standard server behind an agency firewall.

Once deployed, it will allow mobile users not only to collaborate with others as if they were standing around one of the big TouchTables, but also to add information, photos, videos or personal assessments into the pool of data that the group is considering. Doing something on the tablet interface, such as circling an important landmark, is also instantly shared with the rest of the group.

Mobile TouchShare costs $4,000 to install on a server or in the cloud, and that comes with 100 client licenses. Pette said he expects the iOS version will cost about the same amount, and it will also be offered for an additional fee to existing systems. It offers an effective way for agencies ranging from the military to emergency responders to collaborate on important events.

Although I think something is lost with the removal of the actual sand, given that most mobile electronics are made of silicon, I suppose the sand comes along for the ride, just in a much more useful format.

Posted by John Breeden II on Oct 10, 2013 at 10:51 AM
This is done to protect government data that might be sitting on the device in the event that it’s lost or stolen. But a shutdown scenario probably wasn’t considered. The wipe command and timer is housed on the phone, but can be modified by administrators back at the office, assuming they are considered essential employees and can work to make the changes during the shutdown. However, a powered-down phone still won't get that update and would still wipe itself out after the connection time has expired. "Most of the data that will be lost is probably just a copy of what is on the Exchange server, but not always," Ait said. "It's something that users may not have thought about, which could become a problem if the shutdown lasts for a while." Ait said the easiest way to avoid getting a phone wiped out would simply be to turn it on from time to time during the shutdown so that it can connect back to the host network and do its check-in. Also, if administrators have suspended the check-in requirement or lengthened it, the phone would then be updated with the new profile. Whether this breaks the no-work rule is an open question, but it’s something that mobile users and their administrators probably need to consider. Posted by John Breeden II on Oct 02, 2013 at 2:13 PM1 comments
A PDF stream object is a sequence of bytes. There is a virtually unlimited number of ways to represent the same byte sequence. After Names and Strings obfuscation, let's take a look at streams.

A PDF stream object is composed of a dictionary (<< >>), the keyword stream, a sequence of bytes and the keyword endstream. All streams must be indirect objects. Consider an example (the first sketch at the end of this post): a stream that is indirect object 5, version 0. The stream dictionary must have a /Length entry to document the length of the (encoded) byte sequence. The stream and endstream keywords are terminated with the EOL character(s). In this example, the byte sequence is a set of instructions for the PDF reader to render the string Hello World with a given font at a precise position. It's precisely 42 bytes long.

In this example the byte sequence is represented literally, but it's possible (and usual) to encode the byte sequence. This is done with a stream filter. A stream filter specifies how the sequence of bytes has to be decoded. Take the same example, but with an ASCII85 encoding (the second sketch below): the /Filter entry instructs the PDF reader how to decode the byte sequence (/ASCII85Decode). Notice the change of the length value.

There are many encoding schemes (ASCII filters and decompression filters): /ASCIIHexDecode, /ASCII85Decode, /LZWDecode, /FlateDecode, /RunLengthDecode, /CCITTFaxDecode, /JBIG2Decode, /DCTDecode, /JPXDecode and /Crypt. This list is not so long, so why do I claim an almost limitless number of ways to encode a stream? I have 2 reasons:

- Many filters, like /FlateDecode, take parameters (in this case, the compression level), which influence the encoding too
- Filters can be cascaded, meaning that the stream has to be decoded by more than one filter

Here is our example again, where the stream is encoded twice, first with ASCII85 and then with plain HEX (the third sketch below; I know, this is rather pointless, but it yields simple and readable examples).

Cascading filters also inspired me to create a couple of test PDF documents. For example, I've created a 2642-byte PDF document that contains a 1 GB stream (a ZIP bomb of sorts). Some PDF readers will choke on this document.
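The three variants referenced above can be sketched as follows. These are illustrative reconstructions: the exact operator string, the /Length values and the elided encoded payloads are assumptions, not the original bytes.

```
% Sketch 1: a literal (unencoded) stream
5 0 obj
<< /Length 42 >>
stream
BT /F1 24 Tf 100 700 Td (Hello World) Tj ET
endstream
endobj

% Sketch 2: the same bytes, ASCII85-encoded ("~>" is the ASCII85 end marker)
5 0 obj
<< /Length 54 /Filter /ASCII85Decode >>
stream
<ASCII85-encoded bytes>~>
endstream
endobj

% Sketch 3: cascaded filters; decoding applies the array left to right,
% so the hex layer (applied last when encoding) is removed first
5 0 obj
<< /Length 110 /Filter [ /ASCIIHexDecode /ASCII85Decode ] >>
stream
<hex encoding of the ASCII85-encoded bytes>>
endstream
endobj
```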
The hype that surrounds the Internet of Things (IoT) has ballooned to unprecedented levels; pundits were already pointing it out last year, and yet the fervor has only grown stronger. Beneath the surface, however, the questions to ask are, "Is all the attention being showered upon the IoT justified?" and, "How large do we make our bet that the IoT will meet its prophesied impact, and do so sustainably?"

To answer these questions confidently, we first need to understand what the IoT is. Even this is not without its challenges. If you were to ask an industry stakeholder, a technology analyst, an associated media professional, and a bystander on the street what the Internet of Things is, you'd likely get four different definitions. Some might say it is "any natural or man-made object that can be assigned an IP address and imbued with the ability to transfer data over a network." Others may tell you it is "a network of physical objects embedded with electronics, software, sensors and connectivity to enable them to achieve greater value and service by exchanging data with the manufacturer, operator and other connected devices." Still others might take it further to say it is a "phenomenon where anything that can be connected will be connected."

The differences in understanding are subtle, typically measured in the degrees by which people believe the IoT has taken over, or will take over, the technology landscape. What tends to get lost in the discussion is the IoT's fundamental building block, machine-to-machine communication, or M2M. Most people may not be able to tell you where M2M fits into the picture, or even what it is. Would they simply deem it synonymous with the IoT? Some likely still do believe that M2M and the IoT are the same; but in fact, they are not. Yes, the waters do get murky as one starts to dissect all the features that differentiate these two monikers and what makes each unique, but let us attempt here to set the record straight.

The wheel – a befitting metaphor

It's argued that the most significant invention of all time was the wheel. Simple, elegant, yet with game-changing utility. In keeping, we'll use a wheel to explain how M2M and the IoT diverge, and come together to accomplish more.

It starts in the middle

Every wheel starts with a hub. The hub harnesses energy from the axle and delivers it outward. It acts as a connection for the entire machine. In the IoT/M2M discussion, connectivity is similarly critical to the entire operation. Without a means for devices to talk to one another, there is no IoT. And while there are a growing number of useful connectivity options, from cellular to WiFi, satellite, Bluetooth, Zigbee and LPWA, the central tenet remains constant: connectivity is what enables an interconnected world of "things" to become real. It is the hub.

We now arrive at the spokes, each of which aligns in an outward flow to tie the hub to the rim. The spokes provide both structural strength and a means to transfer energy to the road and propel the vehicle along. This is where we find M2M in our analogy. Individual M2M applications each act like a wheel's spokes, delivering connectivity outward to provide value to the marketplace. M2M asserts itself most visibly in specific, vertically focused markets such as automotive, healthcare, energy, utilities, environmental and industrial monitoring, and physical security.
These touchpoints of value combine to give structure and shape to the overall evolution of the IoT and drive it forward, just as a wheel's spokes do for the machine.

Connecting the dots

There's a lot that goes on before a wheel's rim finds its place in the equation, but indeed this is where the benefits of the machine get realized as a whole. We will therefore liken the IoT to the rim. Not only does it tie the spokes to the tire, where the real work gets done, but it also ties the spokes to one another. In terms of our analogy, the IoT is the place where individual M2M applications can begin to interact among themselves and create even greater value.

Which brings us, full circle (see what I did there?), to the questions posed at the beginning: "Is all the attention being showered upon the IoT justified?" and, "How large do we make our bet that the IoT will meet its prophesied impact, and do so sustainably?"

Indeed, the full promise of the IoT is still coming into focus within the overall "connected" ecosystem, but the sheer number of ways it can be envisioned to make everyday lives run more easily, smoothly and productively would tend to support the hype. Look at the soon-to-be-released Apple HomeKit, for example, which brings together thermostats, lighting, power usage monitoring and ingress/egress tracking in a single connected platform for the home. This is a product that is rightly classified as an expression of the IoT. But at its base, it is a series of many M2M applications bound together, where individual connected "machines" (light dimmers, door and window sensors, on/off switches and thermostats) communicate with the HomeKit server, which then, in turn, communicates with your iOS phone or watch. Hub, to spoke, to rim.

Moreover, as we've just explored, there are myriad new connected health devices coming online, wearable and otherwise, that are changing the game for healthcare delivery and a person's ability to control his or her own wellness. Again, these are all contingent on a sensor or data collection device communicating to a server, and then to a medical professional, and back to a personal smartphone.

What begins to emerge, then, is that the IoT is not only about devices communicating with each other, but also about being able to bridge the gaps between specific verticals. As we see it, this is the path to new value (e.g., security systems working with HVAC in ways that benefit a building owner or occupant). Developing synchronicity here will open doors to tremendous new possibilities, and it's where IoT utility can really rise to astronomical levels.

For these reasons, we are certainly on board with many of the connected device forecasts thrown about in the media. But, as with the wheel, all is dependent on making wise use of the foundational components: connectivity (the hub) and M2M (the spokes). And that is precisely where we at KORE will make our mark on this quickly growing landscape, guiding developers and practitioners through that matrix.
ICANN stipulates that all domains must be connected to a registrar, and all applications for domain names must be submitted through a registrar. Today there are hundreds of thousands of Web sites registered. The process is simple and not very costly. However, spammers can easily register domains, and it is often hard for registrars to distinguish between spammers and legitimate organizations and Web site developers. Spammers often rotate domains in their spam messages, as they feel this tactic allows them to circumvent some antispam filters that depend on pattern matching to block the spam message.

On average, approximately 90 percent of all spam messages today contain some kind of URL. A recent analysis conducted by Symantec showed that over the last 7 days, 68 percent of all URLs in spam messages had a .com TLD, 18 percent had a .cn ccTLD (the country-code TLD reserved for China) and 5 percent had a .net TLD. (.ru is the ccTLD for Russia and .de is the ccTLD for Germany.) Spammers often rotate between TLDs to try to evade antispam filters.

Directories are often used to arrange or display certain files, and Symantec found that while 71 percent of URLs in spam messages had no directory, 2.4 percent had more than six directories. As with subdomains, scammers often use many directories to create URLs that look like legitimate URLs.

Source: Symantec's "The State of Spam" monthly report, January 2009.
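The kind of TLD breakdown quoted above is straightforward to reproduce over any collection of URLs. Here is a minimal sketch in Python; the sample list is a placeholder for URLs harvested from spam messages, and the last-label heuristic deliberately ignores multi-part suffixes such as .co.uk.

```python
from collections import Counter
from urllib.parse import urlparse

# Placeholder sample; in practice these would be extracted from spam bodies.
urls = [
    "http://example.com/promo",
    "http://cheap-meds.cn/a/b/c",
    "http://win-big.net/x",
    "http://example.com/offer",
]

def tld(url: str) -> str:
    host = urlparse(url).hostname or ""
    return host.rsplit(".", 1)[-1]  # naive: keeps only the last label

counts = Counter(tld(u) for u in urls)
total = sum(counts.values())
for name, n in counts.most_common():
    print(f".{name}: {100 * n / total:.0f}% of sampled URLs")
```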
Extremely high frequency (EHF) is the highest band of radio waves, operating at a frequency range of 30 GHz-300 GHz. The radio waves in this band have wavelengths in the range of 10 mm to 1 mm; therefore, the waves in this band are called millimeter waves (mmW). In wireless communications, frequency is a major factor in determining the feasibility of the technology. Millimeter wave technology operates in an unregulated bandwidth that is available worldwide, with better efficiency than traditional wireless LAN frequencies such as 2.4 GHz or 5 GHz. The mmW technology has many applications in imaging, telecommunications, home networking, satellite communication, and construction & manufacturing, among others, due to its unique features.

The value of the U.K. millimeter wave technology market is estimated to reach $13 million by the end of 2014 and is expected to reach $78 million by 2018, at a CAGR of 59.9%. The U.K. is one of the sizeable markets in Europe, with many key players across the value chain already established in the region, and it has become a manufacturing hub for millimeter wave technology.

This report also looks into the whole value chain of the market. It focuses on the parent markets and the sub-markets of this industry, thus identifying the total potential market that can be tapped by millimeter wave technology. The report, based on an extensive research study of the market and the related frequency ranges, frequency band licenses, and product industries, is aimed at identifying the entire market, specifically the mmW products and mmW components in all applications excluding consumer electronics. The report covers the overall market and sub-segment markets through extensively detailed classifications, in terms of both revenue and shipments. The market segmentation detailed in the report is given below:

By Products:
· MM scanners and imaging systems (active and passive), MM RADAR and satellite communication systems, perimeter and surveillance RADAR, application-specific RADAR, satellite systems, MM telecommunication equipment, mobile back-haul equipment, sub-segments such as small-cell and macro-cell, Pico-cell and Femtocell, enterprise, and other networking equipment

By Application Areas:
· Mobile and telecommunication, consumer & commercial, healthcare, industrial, automotive & transportation, military, defense & aerospace, and other emerging and next-generation applications

By Frequency Bands:
· 8 GHz to 43 GHz frequency millimeter wave – sub-segments – 23 GHz-38 GHz Band, 38 GHz-43 GHz Band; 43 GHz to 80 GHz frequency millimeter wave – sub-segments – 57 GHz-64 GHz Band, 71 GHz-76 GHz Band; 80 GHz-300 GHz frequency millimeter wave – 81 GHz-86 GHz Band, 92 GHz-95 GHz Band

By Licensing Nature:
· Fully-licensed, light-licensed, and unlicensed frequency bands

Along with the market data, customize the MMM assessments to meet your company’s specific needs.
Customize to get a comprehensive summary of the industry standards and a deep-dive analysis of the following parameters:

· In-depth trend analysis of products in a competitive scenario
· Product matrix, which gives a detailed comparison of the product portfolio of each company
· Product matrix, which gives a detailed comparison of the product portfolio for the mmW market along with the various applications they are used for
· A comprehensive coverage of regulations followed in the market

Data from mmW Firms
· Fast turn-around analysis of firms’ responses to market events and trends
· Various firms’ opinions about different products and applications from different companies
· Qualitative inputs on macro-economic indicators and mergers & acquisitions
· In-depth analysis of the market for different frequency ranges and band licenses in the mmW market

Shipment/Volume Data
· Tracking the value of components shipped annually
· Tracking the quantitative inputs of mmW applications

Trend Analysis of Applications
· Application matrix, which gives a detailed comparison of the application portfolio of each company
· Application matrix, which gives a detailed comparison of the application portfolio for the mmW market along with the various products they are used for
· Application matrix, which gives a detailed comparison of the application portfolio for the mmW market along with frequency range and band licenses

Frequency Range & Band Licenses Analysis
· Frequency range matrix, which gives a detailed comparison of the frequency range portfolio and frequency band licenses on the basis of the different products they are used in

Please visit http://www.micromarketmonitor.com/custom-research-services.html to specify your custom research requirements.
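Two of the quantitative claims above are easy to sanity-check with a few lines of code. The sketch below (illustrative helper functions, not part of the report) converts band-edge frequencies to wavelengths via λ = c/f and computes a compound annual growth rate from two market values:

```python
# Illustrative sanity checks on the figures above; not part of the report.
SPEED_OF_LIGHT = 3.0e8  # meters per second

def wavelength_mm(freq_ghz: float) -> float:
    """Wavelength in millimeters for a frequency in GHz (lambda = c / f)."""
    return SPEED_OF_LIGHT / (freq_ghz * 1e9) * 1e3

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over `years` years."""
    return (end / start) ** (1.0 / years) - 1.0

print(wavelength_mm(30.0))   # 10.0 -> 30 GHz corresponds to a 10 mm wave
print(wavelength_mm(300.0))  # 1.0  -> 300 GHz corresponds to a 1 mm wave
# $13M (2014) -> $78M (2018) over 4 years gives ~56.5%; the report's
# quoted 59.9% CAGR implies a slightly different compounding window.
print(f"{cagr(13.0, 78.0, 4):.1%}")
```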
Don’t let privilege creep be the downfall of a project to secure your company’s IT systems.

What is Privilege Creep?

Despite the work Microsoft has done to make Windows easier to run with standard user access, some Windows features and legacy applications still require administrative privileges. When users experience an issue, the first step that the helpdesk often takes is to grant administrative privileges, to rule out a lack of access rights as the cause. Even when the problem turns out not to be caused by standard user permissions, administrative privileges are often deliberately left in place so that the user doesn’t keep calling the helpdesk, or they are simply forgotten and never removed. This phenomenon of moving from standard user privileges to administrative rights is referred to by system administrators as privilege creep.

What are the Reasons for Privilege Creep?

The motives for granting users administrative privileges vary from one environment to another; some of the most common are:
- The ability to connect hardware, such as printers and scanners
- To install new programs or update software
- Problems associated with legacy applications
- Access to Windows tasks or features that require administrative privileges
- Addressing support issues when users take notebooks out of the office
- Pressure from end users on helpdesk staff

The Consequences of Privilege Creep

While administrative privileges will both solve some issues in the short term and appease users, problems will occur further down the line as a direct result of granting these rights. When users are given administrative-level access to Windows without the assistance of a 3rd-party privilege management solution, the elevated rights cannot be rationally limited to just one task or application. Administrator rights give users, or malicious processes running in the security context of the user, the opportunity to compromise the system and any data processed on it. Moreover, users can circumvent other controls, such as Group Policy settings, allowing changes to critical system configuration that might render PCs unstable or insecure.

Logging in to Windows with administrative privileges also significantly increases the risk that vulnerabilities in applications or the operating system can be exploited, and it reduces the overall reliability and stability of Windows, leading to a higher total cost of ownership. In managed environments where organizations distribute customized Windows images, any configuration work undertaken as part of that process can easily be reversed by administrative users, including the ability to install unlicensed software. It is important to maintain a stable and known configuration so that IT can provide adequate support, comply with regulatory mandates, and secure desktops, giving users a consistent and dependable computing experience.

Avoiding Privilege Creep

- Understand how privileges are used across your network before admin rights are removed: in the long run, you’ll experience fewer problems with users logging tickets to the helpdesk because they are unable to run applications or complete tasks that require privileged access. (A minimal audit sketch appears at the end of this post.)
- Use a third-party privilege management solution to allow granular privilege control. Microsoft’s built-in tools do not allow IT to elevate privileges for end users in a way that enables them to carry out the tasks necessary while still providing a secure computing experience.
- Don’t forget the notebook users.
They’re harder for IT to support because remote access to their machines is available less often. Do you have the provision to grant your remote users temporary access to a given task or application, without granting full administrative privileges?

Be Secure and Flexible

Whilst some users may be able to support themselves when given administrative privileges, this increases the potential for damage and also prevents organizations from meeting the requirements of industry standards or regulatory compliance mandates, demonstrating poor governance. As such, it is always prudent to remove administrative privileges and implement a 3rd-party solution that gives standard users the flexibility to perform the tasks required for their everyday duties.
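To make the first recommendation above concrete, here is a minimal audit sketch that lists who currently sits in the local Administrators group, the raw material for spotting privilege creep. It assumes Python on a Windows endpoint and shells out to the built-in net localgroup command; the parsing is based on that command’s typical output layout, and a real deployment would collect this from every machine and diff the results over time.

```python
# Minimal audit sketch: list members of the local Administrators group.
# Assumes Python on Windows; parsing is based on the typical layout of
# `net localgroup` output (header, dashed separator, members, footer).
import subprocess

def local_admins():
    out = subprocess.run(
        ["net", "localgroup", "Administrators"],
        capture_output=True, text=True, check=True,
    ).stdout
    lines = out.splitlines()
    # Member names appear after the dashed separator line and before the
    # "The command completed successfully." footer.
    start = next(i for i, line in enumerate(lines) if line.startswith("---")) + 1
    return [
        line.strip()
        for line in lines[start:]
        if line.strip() and not line.startswith("The command")
    ]

if __name__ == "__main__":
    for member in local_admins():
        print(member)
```

Run periodically and compared against a baseline, a list like this quickly shows which accounts have quietly accumulated administrative rights.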
Ethernet was developed by Xerox Corporation's Palo Alto Research Center (PARC) in the 1970s. Ethernet was the technological basis for the IEEE 802.3 specification, which was initially released in 1980. Shortly thereafter, Digital Equipment Corporation, Intel Corporation, and Xerox Corporation jointly developed and released an Ethernet specification (Version 2.0) that is substantially compatible with IEEE 802.3. Together, Ethernet and IEEE 802.3 currently maintain the greatest market share of any local-area network (LAN) protocol. Today, the term Ethernet is often used to refer to all carrier sense multiple access collision detect (CSMA/CD) LANs that generally conform to Ethernet specifications, including IEEE 802.3.

When it was developed, Ethernet was designed to fill the middle ground between long-distance, low-speed networks and specialized, computer-room networks carrying data at high speeds for very limited distances. Ethernet is well suited to applications in which a local communication medium must carry sporadic, occasionally heavy traffic at high peak data rates.

Ethernet and IEEE 802.3

Ethernet and IEEE 802.3 specify similar technologies. Both are CSMA/CD LANs. Stations on a CSMA/CD LAN can access the network at any time. Before sending data, CSMA/CD stations "listen" to the network to see if it is already in use. If it is, the station wanting to transmit waits. If the network is not in use, the station transmits. A collision occurs when two stations listen for network traffic, "hear" none, and transmit simultaneously. In this case, both transmissions are damaged, and the stations must retransmit at some later time. Back-off algorithms determine when the colliding stations retransmit. CSMA/CD stations can detect collisions, so they know when they must retransmit.

This access method is used by traditional Ethernet and IEEE 802.3 functions in half-duplex mode. (When Ethernet is operated in full-duplex mode, CSMA/CD is not used.) This means that only one station can transmit at a time over the shared Ethernet. This access method was conceived to offer shared and fair access to multiple network stations/devices: it arbitrates how stations attached to the network can access the shared channel, allowing stations to listen before transmitting and to recover if signals collide. The recovery time interval is called a slot time and is based on the round-trip time that it takes to send a 64-byte frame across the maximum length of an Ethernet LAN attached by repeaters. Another name for this shared LAN is a collision domain.

For half-duplex operation, the mode on which traditional Ethernet is based, the size of your collision domain can be limited by the physical limitations of the cabling utilized. Table 4-1 lists the collision domains for 10/100/1000 Mbps; the limitations of the cable itself can create even smaller boundaries. Because the 64-byte slot time is consistent across 10/100/1000 transmission speeds, it severely limits the ability of 1000BaseX to operate in a network with a diameter of more than 20 meters. To overcome this obstacle, carrier extension bits are added beyond the Ethernet frame itself to extend the time the transmission occupies the wire. This expands the network diameter to 100 meters per segment, like 100BaseT.
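The slot-time arithmetic behind these diameter limits is easy to check. The following sketch is a rough illustration only (real collision-domain diameters also depend on cable type and repeater delays); it computes how long the 512-bit slot time lasts at each speed:

```python
# Duration of the Ethernet slot time (a 64-byte frame = 512 bit times)
# at each link speed. Rough illustration: real collision-domain limits
# also depend on cable type and repeater delays.
SLOT_BITS = 64 * 8  # 512 bit times

for name, mbps in [("10BaseT", 10), ("100BaseT", 100), ("1000BaseX", 1000)]:
    slot_us = SLOT_BITS / (mbps * 1e6) * 1e6  # slot duration in microseconds
    print(f"{name}: slot time = {slot_us:5.2f} us")

# 10BaseT:   slot time = 51.20 us
# 100BaseT:  slot time =  5.12 us
# 1000BaseX: slot time =  0.51 us -> too brief to police a usefully large
#            collision domain, hence carrier extension at gigabit speeds
```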
For this system to work, everyone must abide by the same rules. For CSMA/CD, the rules are as follows:

1. Listen—Stations listen for signals on the wire. If a signal is detected (carrier sense), stations should not attempt to transmit a frame. If a station "hears" another signal on the wire while transmitting the first 64 bytes of a frame, it should recognize that its frame has collided with another.

2. Collision detect—If a station detects a collision, it must back off from sending the frame using the truncated back-off algorithm. The back-off algorithm counts the number of collisions, if any, to determine how long a station must wait to retransmit the frame, and it backs off further each time a collision is detected. The goal of this method is to give the system a way to estimate how many stations are trying to transmit simultaneously and then guess when it should be safe to try again. The truncated back-off algorithm adjusts its timers based on the value of 2^n, where n is the number of collisions encountered during transmission of the frame. The result is a guess at how many stations may be on the shared channel, and it is used as a range, counting from zero, for the number of slot times to wait. The algorithm randomly selects a value from this range, as shown in Table 4-2.

Table 4-2 Back-off Algorithm

Collisions (n)   Range (0 to 2^n - 1)   Action
1                0-1                    Stations either try to retransmit immediately or wait for one slot time.
2                0-3                    Stations randomly wait zero, one, two, or three slot times to retransmit.
3                0-7                    Stations randomly wait from zero to seven slot times.
...                                     ...you get the point.

Depending on the number of collisions, a station could potentially wait a while before retransmitting. The algorithm's collision counter stops incrementing at 10, where the penalty wait time is selected from a range of 0 to 1023 slot times before retransmission. This is pretty bad, but the algorithm will keep attempting to retransmit the frame for up to 16 collisions. Then it just gives up, and a higher-layer network protocol such as TCP/IP will attempt to retransmit the packet; this is an indication that you have some serious errors. When a station successfully sends a frame, the collision counter (penalty) is cleared for that frame, and the station no longer must wait out the back-off time. (Interface statistics are not cleared; just the timer is.) Stations with the fewest collisions can access the wire more quickly because they do not have to wait.
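The truncated binary exponential back-off described above is compact enough to express directly in code. The sketch below follows the rules in Table 4-2 (cap the exponent at 10, give up after 16 collisions); it is a simplified model of the algorithm, not driver code:

```python
# Truncated binary exponential back-off, following the rules above:
# wait a random number of slot times in [0, 2^n - 1], cap n at 10,
# and give up after 16 collisions. A simplified model, not driver code.
import random

SLOT_TIME_US = 51.2   # slot time at 10 Mbps, in microseconds
MAX_EXPONENT = 10     # the collision counter stops incrementing at 10
MAX_ATTEMPTS = 16     # after 16 collisions the frame is dropped

def backoff_slots(collisions: int) -> int:
    """Random wait, in slot times, after the given number of collisions."""
    return random.randint(0, 2 ** min(collisions, MAX_EXPONENT) - 1)

def send_frame(transmit) -> bool:
    """Attempt delivery; `transmit()` returns True when no collision occurs."""
    for collision_count in range(1, MAX_ATTEMPTS + 1):
        if transmit():
            return True  # success: the penalty for this frame is cleared
        wait_us = backoff_slots(collision_count) * SLOT_TIME_US
        print(f"collision {collision_count}: backing off {wait_us:.1f} us")
    return False  # give up; a higher layer such as TCP/IP must retransmit

# Example run: the channel collides twice, then the frame gets through.
outcomes = iter([False, False, True])
print("delivered:", send_frame(lambda: next(outcomes)))
```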
Both Ethernet and IEEE 802.3 LANs are broadcast networks. In other words, all stations see all frames, regardless of whether they represent an intended destination. Each station must examine received frames to determine whether the station is a destination. If it is a destination, the frame is passed to a higher protocol layer for appropriate processing.

Differences between Ethernet and IEEE 802.3 standards are subtle. Ethernet provides services corresponding to Layers 1 and 2 of the OSI reference model, whereas IEEE 802.3 specifies the physical layer (Layer 1) and the channel-access portion of the link layer (Layer 2), but does not define a logical link control protocol. Both Ethernet and IEEE 802.3 are implemented in hardware. Typically, the physical manifestation of these protocols is either an interface card in a host computer or circuitry on a primary circuit board within a host computer.

Now, having said all that regarding the regular operation of traditional Ethernet and 802.3, we must discuss where the two separate in features and functionality. The IEEE 802.3 standard was based on traditional Ethernet, but improvements have been made to the current standard, because what we have discussed so far will not scale in today's demanding service provider and enterprise networks.

Full-Duplex Operation 10/100/1000

Everything you've read so far dealt with half-duplex operation (CSMA/CD, back-off timers, and so on). Full-duplex mode allows stations to transmit and receive data simultaneously, making more efficient use of the available bandwidth by allowing open access to the medium. However, this mode of operation can function only with Ethernet switching hubs or via Ethernet crossover cables between interfaces capable of full-duplex Ethernet, because full-duplex mode expects links to be point-to-point. There are also no collisions in full-duplex mode, so CSMA/CD is not needed.

Autonegotiation allows Ethernet devices to automatically configure their interfaces for operation. If the network interfaces support different speeds or different modes of operation, they will attempt to settle on a common denominator that both support. A plain repeater cannot support multiple speeds; it knows only how to regenerate signals. Smart hubs employ multiple repeaters and a switch plane internally to allow stations that support different speeds to communicate. The negotiation is performed only when the system initially connects to the hub. If slower systems are attached to the same smart hub, then faster systems have to be manually configured for 10-Mbps operation.

To make sure that your connection is operating properly, IEEE 802.3 Ethernet employs normal link pulses (NLPs), which are used for verifying link integrity in a 10BaseT system. This signaling gives you the link indication when you attach to the hub and is performed between two directly connected link interfaces (hub-to-station or station-to-station). NLPs are helpful in determining that a link has been established between devices, but they are not a good indicator that your cabling is free of problems. An extension of NLPs is fast link pulses. These do not perform link tests, but instead are employed in the autonegotiation process to advertise a device's capabilities.

Autonegotiation on 1000BaseX networks works at only 1000 Mbps, so the only feature "negotiated" is full- or half-duplex operation. There may be new vendor implementations on the market that can autonegotiate speeds from 10 Mbps to 1000BaseX, but at this time they are not widely deployed. A backup alternative, called parallel detection, works for 10/100 speeds if autonegotiation is disabled or unsupported. It is basically a fallback mechanism that springs into action when autonegotiation fails: the interface capable of autonegotiation configures itself for bare-bones 10-Mbps half-duplex operation.

IEEE 802.3 specifies several different physical layers, whereas Ethernet defines only one. Each IEEE 802.3 physical layer protocol has a name that summarizes its characteristics. The coded components of an IEEE 802.3 physical layer name are shown in Figure 4-1.

Figure 4-1 IEEE 802.3 Physical Layer Name Components

A summary of Ethernet Version 2 and IEEE 802.3 characteristics appears in Tables 4-3 and 4-4. There are other 100Base implementations, but they are not widely deployed for various reasons. One particular case in point is 100BaseT4. This system uses four pairs of copper wire and can be used on voice- and data-grade cable. 10/100BaseT systems perform well on Category 5 data-grade cable and use only two pairs of copper wire.
Ethernet is most similar to IEEE 802.3 10Base5. Both of these protocols specify a bus topology network with a connecting cable between the end stations and the actual network medium. In the case of Ethernet, that cable is called a transceiver cable. The transceiver cable connects to a transceiver device attached to the physical network medium. The IEEE 802.3 configuration is much the same, except that the connecting cable is referred to as an attachment unit interface (AUI), and the transceiver is called a media attachment unit (MAU). In both cases, the connecting cable attaches to an interface board (or interface circuitry) within the end station.

Ethernet and IEEE 802.3 frame formats are shown in Figure 4-2.

Figure 4-2 Ethernet and IEEE 802.3 Frame Formats

Both Ethernet and IEEE 802.3 frames begin with an alternating pattern of ones and zeros called a preamble. The preamble tells receiving stations that a frame is coming. The byte before the destination address in both an Ethernet and an IEEE 802.3 frame is a start-of-frame (SOF) delimiter. This byte ends with 2 consecutive 1 bits, which serve to synchronize the frame-reception portions of all stations on the LAN.

Immediately following the preamble in both Ethernet and IEEE 802.3 LANs are the destination and source address fields. Both Ethernet and IEEE 802.3 addresses are 6 bytes long. Addresses are contained in hardware on the Ethernet and IEEE 802.3 interface cards. The first 3 bytes of an address are specified by the IEEE on a vendor-dependent basis, and the last 3 bytes are specified by the Ethernet or IEEE 802.3 vendor. The source address is always a unicast (single-node) address, whereas the destination address may be unicast, multicast (group), or broadcast (all nodes).

In Ethernet frames, the 2-byte field following the source address is a type field. This field specifies the upper-layer protocol to receive the data after Ethernet processing is complete. In IEEE 802.3 frames, the 2-byte field following the source address is a length field, which indicates the number of bytes of data that follow this field and precede the frame check sequence (FCS) field.

Following the type/length field is the actual data contained in the frame. After physical layer and link layer processing is complete, this data will eventually be sent to an upper-layer protocol. In the case of Ethernet, the upper-layer protocol is identified in the type field. In the case of IEEE 802.3, the upper-layer protocol must be defined within the data portion of the frame, if at all. If data in the frame is insufficient to fill the frame to its minimum 64-byte size, padding bytes are inserted to ensure at least a 64-byte frame.

In 802.3, the data field carries a payload header in addition to the payload itself. This header serves the logical link control sublayer of the OSI model and is completely independent of the MAC sublayer and physical layer below it. This header, functionally known as 802.2 encapsulation, contains destination service access point (DSAP) and source service access point (SSAP) information. It notifies higher protocols what type of payload is actually riding in the frame, functioning like the "type" field in traditional Ethernet, and is used by upper-layer network protocols such as IPX. Network software developed to support the TCP/IP networking suite uses the type field to determine protocol type in an Ethernet frame. The type field and the LLC header are not replacements for each other, but they serve to offer backward compatibility between network protocol implementations without rewriting the entire Ethernet frame.

After the data field is a 4-byte frame check sequence (FCS) field containing a cyclic redundancy check (CRC) value. The CRC is created by the sending device and is recalculated by the receiving device to check for damage that might have occurred to the frame in transit.
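The distinction between the Ethernet type field and the IEEE 802.3 length field is easy to see in code. The sketch below is a simplified classifier, not a full frame decoder; it relies on the standard convention (not spelled out above) that values of 0x0600 (1536) or greater in that 2-byte field denote an EtherType, while values of 1500 or less denote an 802.3 length:

```python
# Parse the 14-byte header and tell Ethernet II from raw IEEE 802.3.
# Simplified classifier: no VLAN tags, no LLC parsing, no FCS check.
import struct

def classify_frame(frame: bytes) -> dict:
    dst, src, type_or_len = struct.unpack("!6s6sH", frame[:14])
    return {
        "dst": dst.hex(":"),
        "src": src.hex(":"),
        # >= 0x0600 (1536) is an EtherType; <= 1500 is an 802.3 length.
        "kind": "Ethernet II" if type_or_len >= 0x0600 else "IEEE 802.3",
        "type_or_length": hex(type_or_len),
    }

# Example: an IPv4 packet in an Ethernet II frame (EtherType 0x0800).
header = bytes.fromhex("ffffffffffff" "00000c004369" "0800")
print(classify_frame(header + b"\x00" * 50))
```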
Table 4-5 provides troubleshooting procedures for common Ethernet media problems.

When you're troubleshooting Ethernet media in a Cisco router environment, the show interfaces ethernet command provides several key fields of information that can assist with isolating problems. The following section provides a detailed description of the show interfaces ethernet command and the information that it provides.

show interfaces ethernet

Use the show interfaces ethernet privileged exec command to display information about an Ethernet interface on the router:

•show interfaces ethernet unit [accounting]
•show interfaces ethernet [slot | port] [accounting] (for the Cisco 7200 series and Cisco 7500)
•show interfaces ethernet [type slot | port-adapter | port] (for ports on VIP cards in the Cisco 7500 series routers)

unit—This must match a port number on the selected interface.
accounting—(Optional) This displays the number of packets of each protocol type that have been sent through the interface.
slot—Refer to the appropriate hardware manual for slot and port information.
port—Refer to the appropriate hardware manual for slot and port information.
port-adapter—Refer to the appropriate hardware manual for information about port adapter compatibility.

This command first appeared in Cisco IOS Release 10.0. If you do not provide values for the argument unit (or slot and port on the Cisco 7200 series, or slot and port-adapter on the Cisco 7500 series), the command displays statistics for all network interfaces. The optional keyword accounting displays the number of packets of each protocol type that have been sent through the interface.

The following is sample output from the show interfaces command for the Ethernet 0 interface:

Router# show interfaces ethernet 0
Ethernet 0 is up, line protocol is up
Hardware is MCI Ethernet, address is aa00.0400.0134 (via 0000.0c00.4369)
Internet address is 220.127.116.11, subnet mask is 255.255.255.0
MTU 1500 bytes, BW 10000 Kbit, DLY 1000 usec, rely 255/255, load 1/255
Encapsulation ARPA, loopback not set, keepalive set (10 sec)
ARP type: ARPA, PROBE, ARP Timeout 4:00:00
Last input 0:00:00, output 0:00:00, output hang never
Output queue 0/40, 0 drops; input queue 0/75, 2 drops
Five minute input rate 61000 bits/sec, 4 packets/sec
Five minute output rate 1000 bits/sec, 2 packets/sec
2295197 packets input, 305539992 bytes, 0 no buffer
Received 1925500 broadcasts, 0 runts, 0 giants
3 input errors, 3 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
0 input packets with dribble condition detected
3594664 packets output, 436549843 bytes, 0 underruns
8 output errors, 1790 collisions, 10 interface resets, 0 restarts

Table 4-6 presents show interfaces ethernet field descriptions.
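One common use of these counters is to gauge whether collisions on a half-duplex segment are excessive. The sketch below computes the collision rate from the output-side counters in the sample above; the 1% threshold is a widely used rule of thumb, not an official Cisco figure:

```python
# Gauge collision health from `show interfaces ethernet` counters.
# The 1% threshold is a common rule of thumb, not an official Cisco figure.
def collision_rate(collisions: int, packets_output: int) -> float:
    return collisions / packets_output if packets_output else 0.0

# Counters taken from the sample output above.
rate = collision_rate(collisions=1790, packets_output=3594664)
print(f"collision rate: {rate:.3%}")  # ~0.050%
if rate > 0.01:
    print("more than 1% of output packets collided: investigate the segment")
else:
    print("collision rate looks healthy for a shared half-duplex segment")
```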