Security Strategies in Windows Platforms and Applications
Learn the new risks, threats, and vulnerabilities associated with the Microsoft Windows operating system.
This course focuses on new risks, threats, and vulnerabilities associated with the Microsoft Windows operating system. Particular emphasis is placed on Windows XP, Vista, and 7 on the desktop, and on Windows Server 2003 and 2008. The course highlights how to use tools and techniques to decrease risks arising from vulnerabilities in Microsoft Windows operating systems and applications. The course also covers Microsoft Windows OS hardening, application security, and incident management in greater depth.
In addition to premium instructional content from Jones & Bartlett Learning's comprehensive Information Systems Security and Assurance (ISSA) curriculum, this course provides access to a customized "virtual sandbox" learning environment that aggregates an unparalleled spectrum of cybersecurity applications. Providing instant, unscheduled access to labs from the convenience of a web-browser, this course allows you to practice "white hat" hacking on a real IT infrastructure—these are not simulations. Winner of the "Security Training and Educational Programs" top prize at the prestigious 2013 Global Excellence Awards by Info Security Products Guide, the industry's leading information security research and advisory guide, these labs provide valuable exposure to complex, real world challenges and over 200 hours of training exercises on how hackers and perpetrators use these applications and tools.
This course covers content within the following industry certification exams:
- Certified Information Systems Security Professional (CISSP) - five content domains covered
- Security + - four content domains covered
- System Security Certified Practitioner (SSCP) - six content domains covered
- National Institute of Standards and Technology (NIST) - seven content domains covered
- 8570.01 - four content domains covered
The United Arab Emirates is the second largest country in the Gulf Cooperation Council. The sum total of the country’s water resources falls short of its ever-growing requirements. More than 90% of its groundwater is saline, and only 2 freshwater aquifers remain. The infrequent and scarce rains that do occur are captured by dams that prevent flash flood water from running into the sea. The country receives 78 mm/year of rainfall and loses much of its rain and surface water to evaporation. Total surface runoff is estimated at 150 million m3, and there are no perennial streams. This leaves brackish groundwater and seawater as the only sources of water that can be accessed, through desalination. As of 2013, the average per capita consumption of water stood at 500 liters, 82% above the global average, putting further stress on the scarce resource. A total of XX million imperial gallons is derived from desalination for usage in the country. As of 2015, the desalination market in UAE was worth USD X.X billion. The market size is expected to grow at a CAGR of XX.XX%.
The depleting natural precipitation and ground-water levels and increasing population are the major drivers of the sector in the region. A continued effort at increasing diversification of government income from hydrocarbons is another factor that has led to an increase in construction projects, industries, manufacturing plants, etc., leading to more demand for fresh water. Moreover, the government is supporting and encouraging the establishment of desalination plants to meet the nation’s demands.
Restraints and Challenges
The biggest challenge of desalination is the cost. As per one study, the cost of desalinated water per cubic meter was USD 1.04, 0.95, and 0.82 for MSF, MED, and RO respectively, assuming a fuel cost of USD 1.5/GJ. Moreover, energy accounts for approximately three-fourths of the supply cost of desalination. Transportation cost is also added to the overall cost, making desalination a very costly process. Desalination also has a negative environmental impact, with the treatment of brackish water leading to pollution of freshwater resources and soil. Discharge of salt on coastal or marine ecosystems also has a negative impact.
Moves to diversify the economy away from its dependence on hydrocarbons have reduced oil's share of the country's GDP to a mere 25%. However, a corresponding increase in the number of industries, businesses, and the ensuing population has put sizeable stress on the available water resources, implying strong future growth in the desalination industry in the UAE. Historically, all desalination plants have used oil, leading to high levels of greenhouse gases: a third of all greenhouse gases emitted by the UAE come from desalination plants. This has led to an increased focus on renewable energy alternatives for the production of fresh water through desalination, opening up opportunities for desalination in the renewable energy sector.
About the Market
PESTLE Analysis (Overview): Macro market factors pertinent to this region
Market Definition: Main as well as associated/ancillary components constituting the market
Key Findings of the Study: Top headlines about market trends & numbers
Healthy VoIP Nets - Part II - Network Management Architectures
Our previous tutorial began our series on VoIP network management, and looked at the five functional areas of network management that are defined by the International Organization for Standardization (ISO) as part of their research into the seven-layer Open Systems Interconnection (OSI) model. In review, these five areas are: fault management, accounting management, configuration management, performance management and security management.
But let's take a step back for a moment, and assume that voice communications is only one element of your enterprise communications responsibilities. You are likely dealing with some type of centralized or distributed computing system, local and wide area data networks (LANs and WANs), Internet access, and possibly a video conferencing network. You may also have deployed integrated applications, such as call centers or unified messaging, which depend on a mix of voice and data elements. So when we consider these five network management areas, we need to discuss them in the context of the integrated enterprise network, not merely a singular function.
If we look at this enterprise challenge from a historical perspective, the network management business of a decade or two ago was dominated by two industries: the mainframe computer vendors and the telecommunications providers.
You may have heard of IBM's NetView, Digital Equipment Corporation's Enterprise Management Architecture (EMA), and AT&T's Unified Network Management Architecture (UNMA), which fit the mold of a centralized management system that would allow input from distributed elements such as a minicomputer (in DEC's case) or a PBX (in AT&T's case).
But as networking architectures became more distributed, network management systems had to evolve as well. Instead of a centralized system, where all system performance information was associated with a large system (such as a mainframe or a PBX), distributed models based upon client/server networking were developed.
With this shift toward distributed computing, the 1990s also brought about the development of two different architectures and protocols for distributed systems management.
First, the ITU-T furthered the ISO's network management efforts, publishing a network management framework defined in ITU-T document X.700 (see http://www.itu.int/rec/T-REC-X.700-199209-I/en).
Also defined was a network management protocol that would facilitate communication between the managed elements and the management console, called the Common Management Information Protocol, or CMIP, which is defined in other X.700-series recommendations (of which there are many).
Management of telecommunications networks in particular is addressed in an architecture called the Telecommunications Management Network, or TMN, which is defined in the M.3000-series of recommendations, and specifically detailed in M.3010 (see http://www.itu.int/rec/T-REC-M.3010-200002-I/en). This architecture provides a framework for connecting dissimilar systems and networks, and allows managed elements from different manufacturers to be incorporated into a single management system. This architecture has been most popular with the telcos, likely owing to their allegiance to the ITU-T, and its emphasis on telephony-related research and standards.
The 1990s also brought about the Internet surge, and with that came a friendly rivalry between some of the incumbent standards bodies (such as the ISO and ITU-T) that many considered slow and bureaucratic, and the Internet Engineering Task Force (IETF), that was more likely to be on the cutting edge and therefore quicker to respond to new innovations.
Also at that time, the networking community was patiently waiting for computing and communications vendors to embrace the seven-layer OSI model in their products, and after a few years, grew impatient. That environment gave rise to the development of the IETF's own network management architecture, called the Internet Network Management Framework, and an accompanying protocol, the Simple Network Management Protocol, or SNMP, first documented in RFC 1157 (see ftp://ftp.rfc-editor.org/in-notes/rfc1157.txt). This network management system embeds simple agents inside networking devices, such as IP PBXs, gateways, and servers, which report operational status and exception conditions to the network manager, which provides oversight for the enterprise or a portion of that enterprise. Communication between agents and manager is handled by the SNMP.
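As a concrete illustration of that agent/manager exchange, here is a minimal SNMP GET sketch using the open-source pysnmp library for Python. The device address, community string, and polled object are placeholder assumptions rather than values from any particular network.

```python
# Minimal SNMP GET using pysnmp (hypothetical device address and community string).
# The manager polls an agent embedded in a device (e.g., an IP PBX or gateway)
# for its system description (SNMPv2-MIB::sysDescr.0).
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity, getCmd,
)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData('public', mpModel=1),           # SNMPv2c community (assumed)
        UdpTransportTarget(('192.0.2.10', 161)),      # placeholder device address
        ContextData(),
        ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)),
    )
)

if error_indication:
    print('Poll failed:', error_indication)
else:
    for name, value in var_binds:
        print(f'{name.prettyPrint()} = {value.prettyPrint()}')
```

In a real deployment the manager would poll many such objects on a schedule and raise alerts on exception conditions; this sketch only shows the basic request/response pattern.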
But no matter how large your enterprise, and the breadth of existing network management systems that you have deployed to support mainframe, WAN, or LAN environments, it's not likely that those existing systems can adequately manage a VoIP networking infrastructure. And the reason is pretty simple: traditional network management focuses on the systems, making sure that disk storage is adequate, the CPU is not over-utilized, a WAN link has sufficient capacity, or the number of collisions on the Ethernet LAN is not excessive. By contrast, VoIP networks must focus on the real-time conditions of the end users. In other words, performance management, which must be measured in real time, becomes a crucial factor.
Our next tutorial will consider this real time performance challenge in more detail, examining the end user perceptions of voice quality that make the management of a VoIP environment more challenging than that of traditional LANs or WANs.
Copyright Acknowledgement: © 2007 DigiNet Corporation ®, All Rights Reserved
Mark A. Miller, P.E., is President of DigiNet Corporation®, a Denver-based consulting engineering firm. He is the author of many books on networking technologies, including Voice over IP Technologies, and Internet Technologies Handbook, both published by John Wiley & Sons.
The word "robot" entered English from Czech. The 1920 play Rossum's Universal Robots featured artificial lifeforms designed for manual labor who eventually rose up and exterminated their human creators. A common trope, to be sure, and here's another one: at the end, a pair of robots have learned to love one another, and the last human gives them his blessing, declaring them the new Adam and Eve.
Humans have had an ambivalent relationship with robots real and fictional ever since. The key to easing that discomfort has been to build bots we can relate to -- if not as people, then at least as something with a personality. This slideshow will detail some attempts that met with varying degrees of success.
In Routing Tables part one, we covered the basic purpose of a routing table and how an end device or intermediate device, such as a router or multilayer switch, can route based on its table. We also viewed different routing tables and how they are used to find a given destination network. In this blog, we will cover the purpose of routing metrics, administrative distance, and static routing, and how they appear in the routing table shown by the SHOW IP ROUTE command.
As a quick review, I mentioned that routers will always route between directly connected interfaces. A routing protocol isn’t needed to forward packets between these subnets.
This example shows that a router has multiple connected interfaces that it routes between automatically. The codes listed in the table specify how the route was learned; C is for directly connected. (Remember, as long as the interface is up and you have applied a subnet to that interface, the router will route between directly connected interfaces.) This is also applicable to Inter-VLAN routing, as discussed last month. As shown in the example below, the router can route between its sub-interfaces that are associated with specific VLANs for router-on-a-stick.
Complementary to the last example is the routing table that is produced by that configuration. You will notice that the sub-interfaces are listed as directly connected.
Next, routers need routing protocols to route between subnets that aren't directly connected. Examples include Routing Information Protocol (RIP) and Open Shortest Path First (OSPF). Within these protocols, each algorithm uses a ranking structure to prefer one path over another. This ranking is known as a metric and varies by the protocol in use (for example, hop count for RIP and cost for OSPF), but they all share a common rule: the lowest metric is always best. In other words, if a router calculates two or more paths to a given destination, the path with the lowest metric is the one added to the routing table.
In example 4, there are two routes that were learned by RIP (as specified with the code entry of "R"). The routing entries contain "[120/1]" and "[120/10]". The 1 and the 10 are the metrics and represent hop counts of 1 and 10 respectively for the router to forward packets to the destination networks of 126.96.36.199/24 and 172.16.1.0/24.
While the path with the lowest metric is preferred, sometimes a router may have more than one path to a given destination.
In the example above, from router Denver's perspective, there are two possible routes to the network 192.168.1.0/24. If the only routing protocol running between all routers were RIP, then the best path would be selected based on the least number of hops. That path would be <DALLAS-LANGLEY> with a hop count (metric) of 2 instead of <SAINT PAUL-GRAND RAPIDS-LANGLEY> with a hop count of 3. However, if a more sophisticated routing protocol such as OSPF were running between all routers, then the best path would be chosen based on cost rather than hop count. (Cost is a metric with an inverse relationship to bandwidth: the higher the bandwidth, the lower the cost, and therefore the better the path.) With OSPF, the best path would be <SAINT PAUL-GRAND RAPIDS-LANGLEY> instead of <DALLAS-LANGLEY> because the bandwidth is higher (the cost is lower) over the upper route than over the lower route.
Sometimes there may be a need to run two routing protocols at the same time, for example when routers from different vendors or legacy routers are interconnected with newer routers. If router DENVER learned a route from RIP (<DALLAS-LANGLEY> as the best path) and a route from OSPF (<SAINT PAUL-GRAND RAPIDS-LANGLEY> as the best path), it would have to decide which routing protocol is more trustworthy. This is accomplished with administrative distance. Administrative distance is the tie breaker when routes to the same destination are learned from different routing protocols; DENVER needs to trust only one of them, RIP or OSPF. All routing protocols have default administrative distance assignments (the numbers are technically arbitrary, although Cisco assigns its own protocols a better administrative distance than most others). The lower the administrative distance, the better the believability. RIP's administrative distance is 120 while OSPF's is 110, so all routes learned from OSPF are preferred over those learned from RIP. Common Cisco defaults include directly connected (0), static (1), EIGRP (90), OSPF (110), and RIP (120). The administrative distance is displayed directly before the metric of a given protocol (for example, in [120/1] the 120 is the administrative distance for RIP). A minimal sketch of this selection logic appears below.
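The following is an illustrative Python sketch (not router code) of the two-step choice a router makes when installing a route: prefer the lowest administrative distance across protocols, then the lowest metric within a protocol. The OSPF cost shown is a made-up value for demonstration.

```python
# Toy route selection: lowest administrative distance wins; ties are broken by
# the lowest metric. Administrative distances are the Cisco defaults cited in
# the text (RIP 120, OSPF 110); the metrics are hypothetical example values.
candidate_routes = [
    {"dest": "192.168.1.0/24", "protocol": "RIP",  "ad": 120, "metric": 2,
     "via": "DALLAS-LANGLEY"},
    {"dest": "192.168.1.0/24", "protocol": "OSPF", "ad": 110, "metric": 30,
     "via": "SAINT PAUL-GRAND RAPIDS-LANGLEY"},
]

def best_route(routes):
    # The entry that sorts first on (administrative distance, metric) is the
    # one installed in the routing table.
    return min(routes, key=lambda r: (r["ad"], r["metric"]))

chosen = best_route(candidate_routes)
print(f'Install {chosen["dest"]} via {chosen["via"]} '
      f'[{chosen["ad"]}/{chosen["metric"]}] learned from {chosen["protocol"]}')
```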
Lastly, static routes are administratively configured routes. Rather than using large amounts of memory to build tables like most dynamic routing protocols, static routes are simple: each one specifies a next hop or an exit interface that the router must use to reach a given destination.
As shown above, two static routes were created and appear in the routing table. One entry has a next-hop address of 10.1.1.2, and the other points out of interface FastEthernet 0/0. In the second case the router will ARP to find the destination MAC address and obtain the next-hop information at Layer 2 (which in this case is also router 10.1.1.2).
This concludes Routing Tables part two. I discussed multiple views of the routing table and how to interpret its key features, including the code table, metrics, administrative distance, and how static routes appear in the output of the SHOW IP ROUTE command.
The term “digital native”, coined by Marc Prensky in 2001, refers to post-millennial children (3rd generation) who have never been without technology. Prensky defines digital natives as those born into an innate “new culture”. “Digital immigrants”, in contrast, are old-world settlers, who previously lived in the analogue age and immigrated to the digital world. A good number of teachers in the US fall into the digital immigrant category, while all of their students from pre-K through college are now digital natives.
Because young people are practically born with an iPad in their crib, it would be a logical assumption that students know more about technology than their teachers. On average, children are 12.1 when they receive their first mobile device. Chances are they’ve been borrowing their parents’ tablet for years before that, though. A 2013 study by Common Sense Media found that 75% of children under 8 years old had used a handheld digital device on a regular basis, and 38% of babies under 2 years old had used tablets. (What?!)
So, if kids are using technology from birth and driving their own devices in middle school, they must have a huge tech knowledge advantage over their teachers, right? Well…studies show that the most popular uses for devices with kids of all ages are games and social media. Although post-millennials have more access, knowledge, and use of technology in general than their “old” teachers and parents, their knowledge lies in usage for entertainment, not learning.
Teachers know more about technology for education
Even though the digital native generation has technology ingrained into their culture, it is teachers who know the most about utilizing that technology for educational value, says Shiang-Kwei Wang of the New York Institute of Technology. In fact, the study concluded that if it weren’t for the coaxing of teachers, most students would never use their devices for more than listening to music and messaging their friends.
For this study Wang investigated the tech skills of 24 science teachers and 1,078 middle school students from 18 different schools in two states. It was found that although students had a rich digital life outside of school, most were not familiar with common tools designed to make information production and sharing easier. Their teachers, regardless of age or technological skill level, were quite savvy with digital resources for problem solving, learning, and researching.
Teaching the use of digital resources for knowledge
While the studies previously mentioned conclude that teachers, although digital immigrants, know more about utilizing technology to gain knowledge and resources, there is still a divide between the current usage and teaching of digital resources and their potential opportunities. In order to bridge that gap between technology usage inside and outside the classrooms, teachers must learn how to use technology for problem solving and creative thinking.
“School-related tasks usually require students to use technology limited to researching information and writing papers. Rarely do teachers provide opportunities to allow students to use technology to solve problems, enhance productivity, or develop creativity.” (phys.org)
According to a study by Tom VanderArk & Carri Schneider, digital learning promotes a deeper learning experience, which is why teachers need to facilitate use of technology for this purpose.
Digital Learning Resources:
ST Math – teaches math visually, promoting conceptual understanding
Phet – science simulations and game-based history course
Edmodo – social media-like platform teaches collaboration
Managing technology while teaching
Impero knows that managing devices in classrooms is the first order of business when incorporating digital learning into lessons. If devices aren’t managed, teaching critical thinking using iPads isn’t going to work! For more information about classroom technology management contact us.
Why some people have all the luck?
Why do some people get all the luck while others never get the breaks they deserve?
A psychologist, Professor Richard Wiseman from the University of Hertfordshire, says he has discovered the answer. 10 years ago, he set out to examine luck. He wanted to know why some people are always in the right place at the right time, while others consistently experience ill fortune. He placed advertisements in national newspapers asking for people who felt consistently lucky or unlucky to contact him. Hundreds of extraordinary men and women volunteered for his research and over the years, he has interviewed them, monitored their lives and had them take part in experiments.
The results revealed that although these people have almost no insight into the causes of their luck, their thoughts and behavior are responsible for much of their good and bad fortune. Take the case of seemingly chance opportunities. Lucky people consistently encounter such opportunities, whereas unlucky people do not.
He carried out a simple experiment to discover whether this was due to differences in their ability to spot such opportunities. He gave both lucky and unlucky people a newspaper, and asked them to look through it and tell him how many photographs were inside. He had secretly placed a large message halfway through the newspaper saying: "Tell the experimenter you have seen this and win $50."
This message took up half of the page and was written in type that was more than two inches high. It was staring everyone straight in the face, but the unlucky people tended to miss it and the lucky people tended to spot it. Unlucky people are generally more tense than lucky people, and this anxiety disrupts their ability to notice the unexpected. As a result, they miss opportunities because they are too focused on looking for something else. They go to parties intent on finding their perfect partner and so miss opportunities to make good friends. They look through newspapers determined to find certain types of job advertisements and miss other types of jobs.
Lucky people are more relaxed and open, and therefore see what is there rather than just what they are looking for. His research eventually revealed that lucky people generate good fortune via four principles. They are skilled at creating and noticing chance opportunities, make lucky decisions by listening to their intuition, create self-fulfilling prophecies via positive expectations and adopt a resilient attitude that transforms bad luck into good.
Towards the end of the work, Professor Wiseman wondered whether these principles could be used to create good luck.
He asked a group of volunteers to spend a month carrying out exercises designed to help them think and behave like a lucky person. Dramatic results! These exercises helped them spot chance opportunities, listen to their intuition, expect to be lucky, and be more resilient to bad luck.
One month later, the volunteers returned and described what had happened. The results were dramatic. 80% of people were now happier, more satisfied with their lives and, perhaps most important of all, luckier. The lucky people had become even luckier and the unlucky had become lucky.
Finally, he had found the elusive "luck factor". Here are Professor Wiseman's 4 top tips for becoming lucky:
- Listen to your gut instincts - they are normally right.
- Be open to new experiences and breaking your normal routine.
- Spend a few moments each day remembering things that went well.
- Visualize yourself being lucky before an important meeting or telephone call. Have a Lucky day and work for it. The happiest people in the world are not those who have no problems, but those who learn to live with things that are less than perfect.
Talking on a cell phone while driving doesn't increase the risk of an accident, according to new research that looked at real-world accidents and cell-phone calls by drivers in the U.S. from 2002 to 2005.
"Using a cell phone while driving may be distracting, but does not lead to higher crash risk in the setting we examined," said Saurabh Bhargava, an assistant professor at Carnegie Mellon University in Pittsburgh, and one of the two researchers in the study.
The study, published in the August issue of American Economic Journal: Economic Policy was described in a report Thursday from Carnegie Mellon in Futurity, an online publication that brings research from leading universities to the public's attention. (Access to the full 33-page study article, "Driving under the (Cellular) Influence" in the economic journal costs $9.50 for 24 hours' access.)
Bhargava did the research with Vikram Pathania, a fellow in the London School of Economics and Political Science. The researchers only focused on talking on a cell phone, not texting or Internet browsing, which have been highly popular in recent years. Pathania said it is possible that texting and browsing could pose a real hazard.
The study used the cell-phone calling patterns of a single, unnamed wireless carrier to track an increase in call volume of 7% at 9 p.m. on weekdays when most carriers were offering free calls during the 2002 to 2005 period. Drivers were identified as those whose cell phone calls were routed through multiple cellular towers.
The researchers also compared crash rates before and after 9 p.m., looking at about 8 million crashes in nine states and all the fatal crashes nationwide.
The researchers found that the increase in cell phone usage had no effect on crash rates. The highest odds of a crash while using a cell phone, as determined in the new study, were significantly lower than those found by two researchers in 1997, who equated cell phone use by drivers to illegal levels of alcohol use.
Bhargava explained the study's results saying that drivers may compensate for cell-phone use distractions by deciding to make or continue a call later or driving more carefully during a call. If drivers really do compensate for such distractions, then it makes sense for state lawmakers to penalize drivers for cell phone use as a secondary, rather than a primary, offense, he said. A secondary offense means a driver would have to be stopped first for a primary offense, such as speeding.
Many studies of cell phone usage have focused on distractions in laboratory or field tests, but haven't used real world data, Bhargava noted.
The National Safety Council has urged states to pass laws making cell phone usage of any kind while driving a primary offense. The council also advocates for a ban on using a cell phone for texting, talking, browsing or any other purpose while driving.
The NSC believes talking on cell phones while driving leads to 20% of all crashes, while texting causes 4%. There were about 6 million car crashes in 2012 in the U.S., and 3.7 million of those resulted in significant injury or death. Most of the focus by state legislatures is on texting, with 41 states having some form of law restricting texting while driving.
The CTIA, which represents the wireless industry and carriers, said it doesn't oppose total government bans on using wireless devices while behind the wheel, but said such decisions should be left to the public and lawmakers in their respective communities.
This article, Cell-phone talking while driving doesn't lead to higher crash risk, research says , was originally published at Computerworld.com.
Matt Hamblen covers mobile and wireless, smartphones and other handhelds, and wireless networking for Computerworld. Follow Matt on Twitter at @matthamblen or subscribe to Matt's RSS feed. His email address is email@example.com.
Read more about mobile/wireless in Computerworld's Mobile/Wireless Topic Center.
This story, "Cell-phone talking while driving doesn't lead to higher crash risk, research says" was originally published by Computerworld.
by Joab Jackson, IDG News Service
University of Texas at Austin researchers have developed a smart traffic intersection that can manage the flow of autonomous vehicles. The intersections of the future will not rely on stoplights or stop signs. Instead, when cars are driven by software, they could be managed by virtual traffic controllers, which stay in close contact with the automobiles as they approach the intersection, said Peter Stone, a professor of computer science at The University of Texas at Austin.
At least from a certain perspective, it is unusual that humans still drive their own vehicles, Stone explained. Other forms of mechanized transportation, such as airplanes, boats and railroad locomotives, have autopilot capabilities.
Using computers and sensor systems, autonomous vehicles could handle much of the routine navigation that humans now do. Autonomous cars could get their occupants to their destinations more quickly and be safer, advocates say. Such self-driving technology is still nascent, though organizations such as the U.S. Defense Advanced Research Projects Agency (DARPA) and Google are aggressively funding research in the area. And at least one state, Nevada, has already approved the use of autonomous vehicles on its roads.
Professor Peter Stone says intersections of the future will manage vehicles with virtual traffic controllers, which stay in close contact with the automobiles as they approach the intersection.
The researchers created a demonstration system for managing autonomous vehicle traffic in road intersections. Each intersection has a computer that coordinates all of the traffic in the most efficient way possible, and each car has a software agent that communicates with upcoming intersections. In the prototype system, a self-driving car takes instructions from the intersection manager as it approaches the intersection, waiting for other cars to go through the intersection before it passes through.
The system can be modified to work with both autonomous and human-driven cars. The researchers say it also could ensure that approaching emergency response vehicles can get through as quickly as possible, or participate in citywide traffic-shaping efforts, which could help reduce congestion overall. “We can prove that as long as all the cars follow the protocol we defined, then there will not be accidents,” Stone says. Report.
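As a rough illustration only (not the researchers' actual protocol), the sketch below shows the core idea of a reservation-style intersection manager: a vehicle's software agent requests a time window for its path through the intersection, and the manager grants it only if it does not overlap an already-granted window on a conflicting path. All names and timing values are hypothetical.

```python
# Toy sketch of reservation-based intersection management. This is an
# illustration of the general idea, not the UT Austin system.
class IntersectionManager:
    def __init__(self):
        self.reservations = []  # list of (start, end, path) tuples already granted

    def request(self, start, end, path, conflicts):
        for (r_start, r_end, r_path) in self.reservations:
            overlaps = start < r_end and r_start < end
            if overlaps and r_path in conflicts.get(path, set()):
                return False  # vehicle must slow down and request a later slot
        self.reservations.append((start, end, path))
        return True

# Paths that cross each other (hypothetical intersection layout).
conflicting_paths = {"northbound": {"eastbound"}, "eastbound": {"northbound"}}

manager = IntersectionManager()
print(manager.request(10.0, 12.0, "northbound", conflicting_paths))  # True: granted
print(manager.request(11.0, 13.0, "eastbound", conflicting_paths))   # False: conflict
print(manager.request(12.5, 14.0, "eastbound", conflicting_paths))   # True: later slot
```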
DCL: There’s a lot of sophisticated event processing going on in such systems. Would be interesting to know more about this aspect of the work.
Ashutosh Saxena bought an Xbox to play computer games at home, but discovered that the Kinect motion-detection technology it includes provides a rich tool for his robotics lab where he's trying to create robots that learn what humans are up to and try to help out.
Saxena, a professor of computer science at Cornell University, says the Xbox One announced last week will boost the realm of activities his robots can figure out because its HD camera will be able to detect more subtle human motions, such as hand gestures.
“Modeling hand is extremely hard,” he says. “Hands move in thousands of ways.”
With the current Kinect, his models consider hands as a single data point so they can’t analyze finger motions, for example, he says. The camera’s resolution is just about good enough to identify a coffee mug but not cell phones and fingers, he says.
Kinect’s 3D imaging is far superior and less expensive than the 2D technology he used before. That was good for categorizing a scene or detecting an object, but not for analyzing motion, he says.
With input from Kinect sensors his algorithms can determine what a person is doing given a range of activities and then perform appropriate predetermined tasks.
For example the motion sensors in combination with algorithms running on Saxena’s Ubuntu Linux server could identify a person preparing breakfast cereal and retrieve milk from the refrigerator. Or it could anticipate that the person will need a spoon and get one or ask if the person wants it to get one.
“It seems trivial for an able-bodied person,” Saxena says, “but for people with medical conditions it’s actually a big problem.”
Similarly, it could anticipate when a person’s mug is empty and refill it. [Video]
Robots in his lab can identify about 120 activities such as eating, drinking, moving things around, cleaning objects, stacking objects, taking medicine – regular daily activities, he says.
Attached to telepresence systems, learning robots could carry around cameras at remote locations so a participant could control where the camera goes but the robot itself would keep it from bumping into objects and people. It could also anticipate where the interesting action in a scene is going and follow it, he says.
Attached to a room-vacuuming robot a sensor could figure out what is going on in a room – such as viewing television – and have the robot delay cleaning the room or move on to another one.
Assembly line robots could be made to work more closely with humans. Now robots generally perform repetitive tasks and are separated from people. Learning robots could sense what the people are doing and help or at least stay out of the way.
Assistant robots could help at nursing facilities, determining if patients have taken medications and dispensing them.
He says some of these applications could be ready for commercial use within five years.
The difficult part will be writing software that can analyze human activity, identify specific tasks that are being performed by people, anticipate what they are likely to do next, and figure out what the robot can do that’s useful, Saxena says.
Tim Greene covers Microsoft and unified communications for Network World and writes the Mostly Microsoft blog. Reach him at email@example.com and follow him on Twitter@Tim_Greene.
Balance sheets contain valuable information for assessing a company's financial health. IT professionals should pay close attention to financial statements and use this guide to gain valuable insights. This is the second note in a three-part series on financial analysis. Other notes include, “Income Statements: The Crystal Ball of Vendor Viability” and “Financially Impact the Bottom Line.”
The Balance Sheet
The balance sheet lists what a company owns and owes at a specific point in time. This statement is different than the income statement, which measures profitability over a specific period of time. The balance sheet shows the financial strength of a company in three major categories: Assets, Liabilities, and Equity. Assets are equal to Liabilities plus Equity.
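As a quick, hypothetical worked example of that relationship (all figures invented for illustration):

```python
# Hypothetical vendor balance sheet figures (in millions) illustrating the
# accounting equation: Assets = Liabilities + Equity.
assets = {"cash": 120, "receivables": 80, "property_and_equipment": 300}
liabilities = {"accounts_payable": 60, "long_term_debt": 240}

total_assets = sum(assets.values())            # 500
total_liabilities = sum(liabilities.values())  # 300
equity = total_assets - total_liabilities      # 200

assert total_assets == total_liabilities + equity
print(f"Assets {total_assets} = Liabilities {total_liabilities} + Equity {equity}")
```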
Bell Laboratories is the research and development subsidiary of Alcatel-Lucent. Bell Laboratories operates its headquarters in Murray Hill, New Jersey, United States, and has research and development facilities throughout the world. The historic laboratory originated in the late 19th century as the Volta Laboratory and Bureau created by Alexander Graham Bell. Bell Labs was also at one time a division of the American Telephone & Telegraph Company, half-owned through its Western Electric manufacturing subsidiary. Researchers working at Bell Labs are credited with the development of radio astronomy, the transistor, the laser, the charge-coupled device, information theory, the UNIX operating system, the C programming language, S programming language and the C++ programming language. Eight Nobel Prizes have been awarded for work completed at Bell Laboratories. On May 20, 2014, Bell Labs announced the Bell Labs Prize, a competition for innovators to offer proposals in information and communications technologies, with cash awards of up to $100,000 for the grand prize. Wikipedia.
Lu L.,Georgia Institute of Technology |
Li G.Y.,Georgia Institute of Technology |
Swindlehurst A.L.,University of California at Irvine |
Ashikhmin A.,Bell Laboratories |
Zhang R.,National University of Singapore
IEEE Journal on Selected Topics in Signal Processing | Year: 2014
Massive multiple-input multiple-output (MIMO) wireless communications refers to the idea of equipping cellular base stations (BSs) with a very large number of antennas, and has been shown to potentially allow for orders of magnitude improvement in spectral and energy efficiency using relatively simple (linear) processing. In this paper, we present a comprehensive overview of state-of-the-art research on the topic, which has recently attracted considerable attention. We begin with an information theoretic analysis to illustrate the conjectured advantages of massive MIMO, and then we address implementation issues related to channel estimation, detection and precoding schemes. We particularly focus on the potential impact of pilot contamination caused by the use of non-orthogonal pilot sequences by users in adjacent cells. We also analyze the energy efficiency achieved by massive MIMO systems, and demonstrate how the degrees of freedom provided by massive MIMO systems enable efficient single-carrier transmission. Finally, the challenges and opportunities associated with implementing massive MIMO in future wireless communications systems are discussed. © 2014 IEEE. Source
Shang X.,Bell Laboratories |
Poor H.V.,Princeton University
IEEE Transactions on Information Theory | Year: 2012
An interference channel is said to have strong interference if a certain pair of mutual information inequalities are satisfied for all input distributions. These inequalities assure that the capacity of the interference channel with strong interference is achieved by jointly decoding the signal and the interference. This definition of strong interference applies to discrete memoryless, scalar and vector Gaussian interference channels. However, there exist vector Gaussian interference channels that may not satisfy the strong interference condition but for which the capacity can still be achieved by jointly decoding the signal and the interference. This kind of interference is called generally strong interference. Sufficient conditions for a vector Gaussian interference channel to have generally strong interference are derived. The sum-rate capacity and the boundary points of the capacity region are also determined. © 2012 IEEE. Source
Gettys J.,Bell Laboratories
IEEE Internet Computing | Year: 2011
Bufferbloat is the existence of excessively large (bloated) buffers in systems, particularly network communication systems. Systems suffering from bufferbloat will have bad latency under load under some or all circumstances, depending on if and where the bottleneck in the communication's path exists. Bufferbloat encourages network congestion; it destroys congestion avoidance in transport protocols such as HTTP, TCP, Bittorrent, and so on. Network congestion-avoidance algorithms depend on timely packet drops or ECN; bloated buffers invalidate this design presumption. Without active queue management, these bloated buffers will fill, and stay full. Bufferbloat is an endemic disease in today's Internet. © 2011 IEEE. Source
Niesen U.,Alcatel - Lucent |
Gupta P.,Bell Laboratories |
Shah D.,Massachusetts Institute of Technology
IEEE Transactions on Information Theory | Year: 2010
We consider the question of determining the scaling of the n 2-dimensional balanced unicast and the n2n-dimensional balanced multicast capacity regions of a wireless network with n nodes placed uniformly at random in a square region of area n and communicating over Gaussian fading channels. We identify this scaling of both the balanced unicast and multicast capacity regions in terms of Θ(n), out of 2n total possible, cuts. These cuts only depend on the geometry of the locations of the source nodes and their destination nodes and the traffic demands between them, and thus can be readily evaluated. Our results are constructive and provide optimal (in the scaling sense) communication schemes. © 2010 IEEE. Source
Agency: NSF | Branch: Continuing grant | Program: | Phase: RES IN NETWORKING TECH & SYS | Award Amount: 35.09K | Year: 2016
Software defined radio (SDR) is emerging as a key technology to satisfy rapidly increasing data rate demands on the nations mobile wireless networks while ensuring coexistence with other spectrum users. When SDRs are in the hands and pockets of average people, it will be easy for a selfish user to alter his device to transmit and receive data on unauthorized spectrum, or ignore priority rules, making the network less reliable for many other users. Further, malware could cause an SDR to exhibit illegal spectrum use without the users awareness. The FCC has an enforcement bureau which detects interference via complaints and extensive manual investigation. The mechanisms used currently for locating spectrum offenders are time consuming, human-intensive, and expensive. A violators illegal spectrum use can be too temporary or too mobile to be detected and located using existing processes. This project envisions a future where a crowdsourced and networked fleet of spectrum sensors deployed in homes, community and office buildings, on vehicles, and in cell phones will detect, identify, and locate illegal use of the spectrum across a wide areas and frequency bands. This project will investigate and test new privacy-preserving crowdsourcing methods to detect and locate spectrum offenders. New tools to quickly find offenders will discourage users from illegal SDR activity, and enable recovery from spectrum-offending malware. In short, these tools will ensure the efficient, reliable, and fair use of the spectrum for network operators, government and scientific purposes, and wireless users. New course materials and demonstrations for use in public outreach will be developed on the topics of wireless communications, dynamic spectrum access, data mining, network security, and crowdsourcing.
There are several challenges the project will address in the development of methods and tools to find spectrum offenders. First, the project will enable localization of offenders via crowdsourced spectrum measurements that do not decode the transmitted data and thus preserve users' data and identity privacy. Second, the crowd-sourced sensing strategy will implicitly adapt to the density of traffic and explicitly adapt to focus on suspicious activity. Next, the sensing strategy will stay within an energy budget, and have incentive models to encourage participation, yet have sufficient spatial and temporal coverage to provide high statistical confidence in detecting illegal activity. Finally, the developed methods will be evaluated using both simulation and extensive experiments, to quantify performance and provide a rich public data set for other researchers.
Rootkit malware has not been viewed as a formidable security threat for quite some time — the malware reached its peak levels in early 2011. Since then, rootkits have been on the decline. During the fourth quarter of 2013, McAfee Labs researchers found the rate of rootkits had fallen below the amount present in 2008. McAfee Labs has long credited 64-bit processors with the prevention of rootkit attacks, making the operating system kernel more difficult to attack. However, the first quarter of 2014 was hit with a spike in rootkit malware.
The stealthy nature of rootkit malware is what makes their resurgence dangerous. Once a rootkit gains access to a system, it is able to remain undetected while it steals information for an extended period. The longer it is unnoticed, the greater are the chances for attackers to steal and destroy data on both corporate and individual scales.
The main culprit in early 2014 was a single 32-bit family attack, which is a possible anomaly. Newer and smarter forms of this malware have learned how to circumvent the 64-bit systems, hijack digital certificates, exploit kernel vulnerabilities, digitally sign malware, and attack built-in-security systems. McAfee Labs believes these methods will result in a resurgence of rootkit-based attacks.
The drastic decline in rootkit samples is depicted in the chart below. As Windows adopted the 64-bit platform, the microprocessor and OS design brought heightened security thanks to digital signature checking and kernel patch protection.
Sample counts declined along with rootkit techniques used to gain kernel access. Efforts to access the kernel or install malicious device drivers were blocked with the increased protection of the 64-bit systems. The heightened security subsequently spiked the cost of building and deploying rootkits on the protected platforms.
From Roadblocks to Speed Bumps
Security measures and increased rootkit costs aside, attackers seem to have finally found ways to gain kernel-level access of 64-bit systems. The most recent malicious rootkit to penetrate the kernel, Uroburos, remained undetected for three years. By exploiting a known vulnerability in an old VirtualBox kernel, Uroburos was able to load its unsigned malware and override PatchGuard — a protection within 64-bit Windows meant to thwart attackers.
Stolen private keys also offer attackers access to 64-bit systems. Valid digital signatures also assist in circumventing security measures. McAfee Labs has seen a rise in all types of malicious binaries just like these with digital signatures. The McAfee Labs team examined the past two years of data to find out how many 64-bit rootkits have used stolen digital certificates and discovered:
While 64-bit processors and 64-bit Windows have implemented new security measures to safeguard against rootkits, it’s important to realize that no security is completely bulletproof. A more comprehensive security system that integrates hardware, software, network, and endpoint protection is the best rootkit defense.
When we talk about fiber optic cable, you will find two different types used to transmit information. The first type is multimode cable and the second is single mode cable. Both use light energy to carry data at high speed, but they differ in how the light travels through the glass fiber. In this article, you will get enough information on single mode fiber, along with some of the advantages delivered by this cable.
Single-mode cable is a single strand of glass fiber (cables usually contain two strands, one for each direction) with a core diameter of 8.3 to 10 microns, and it has only one mode of transmission; single mode fiber with a 9-micron core supports Gigabit Ethernet data transfer over distances of up to 10 kilometers. Single-mode, having a relatively smaller core diameter than multi-mode, carries higher bandwidth than multi-mode, but requires a light source with a narrow spectral width. Although single-mode fiber costs more than multi-mode, it gives you a higher transmission rate and up to 50 times more distance. The small core virtually eliminates any distortions that could result from overlapping pulses, providing the least signal interruption and the highest transmission speeds of any fiber optic type. You will find several different types of single mode fibers, such as cutoff shifted fiber, dispersion shifted fiber, low water peak fiber, non-zero dispersion shifted fiber, and others.
The best advantage of this particular cable is the greater bandwidth capacity it delivers. Therefore, many people prefer this cable over multimode to support their systems. The main purpose of using fiber optic cable in an Internet or communication system is to transmit bits of data from the sender to the receiver with fewer errors. You will find that the narrow core of this fiber limits the dispersion of light, which is usually called the multi-path effect. Therefore, the bandwidth capacity of the cable can be increased significantly.
Another advantage of this cable is its ability to be used over longer distances. Therefore, this cable is usually used to establish wide area networks (WANs), metropolitan area networks (MANs), and campus networks. In addition, it supports transmission distances up to 50 times greater than multimode fiber, so SMF is usually used for long-distance data transmission. On single mode fiber, the light is usually 1300 nm for shorter distances and 1500 nm for longer distances. The light enters the core along a single, parallel path, whereas multimode lets light enter from many angles and directions. The single-entry mode offered by SMF also limits the dispersion of light, so it reduces wasted data and increases data transmission speeds. In addition, the single mode cable is immune to external noise such as electromagnetic interference (EMI) and radio frequency interference (RFI).
Those are some of the advantages of single mode fiber cable for your communication or network system. You can certainly add this cable to meet your needs.
To support your system, you can choose a high-quality connector for better data transmission. There are several fiber optic connector types that you can find on the market.
A wireless Access Point (AP) is a device that allows wireless devices to connect to the internet using Wi-Fi. With the remarkable increase in the number of wireless devices, the number of APs has also increased drastically to serve the Wi-Fi needs of these devices. We have APs at homes, offices, airports, and public hotspots. Any clue about the AP a device connected to could be an important piece of information for law enforcement or examiners. When a device connects to an AP, it leaves evidence behind. This article is geared towards analyzing a file on the iPhone that contains vital information about the Wi-Fi APs to which the device has connected.
To perform this experiment we used:
- A jail-broken iPhone with iOS version 6.1.3
- A Windows 7 machine
- A plist editor installed on Windows PC
- WinSCP to access iPhone files
Step 1 – Collecting Data
In order to better understand different keys of the file we performed a simple experiment of connecting the iPhone device to multiple access points at different locations including a home, university, and a public hotspot to collect variant data.
Step 2 – Extracting the required file
The file ‘com.apple.wifi.plist’ contains significant information about wireless APs and is located in /private/var/preferences/SystemConfiguration/. As the name indicates, it is a plist file that can be opened with any plist viewer. Following are the steps to extract the file from iPhone.
1- Install Cydia Package SSH
2- Note the IP address of the iPhone
3- Use WinSCP to SSH into the iPhone using the IP address found in step 2
4- Go to /private/var/preferences/SystemConfiguration/ and copy the file com.apple.wifi.plist to the Windows machine
Note that the following research has been done on iOS v6.1.3. The name, location, content and/or type of the file might be different in other versions of iOS.
Step 3 – Parsing com.apple.wifi.plist file
This is the main step, which explains how to read and understand the plist file extracted from the iPhone. Below is a sample com.apple.wifi.plist file populated after performing the experiment. Some vital keys highlighted below include SSID, security mode, last auto joined time, last joined time and BSSID (the MAC address of the router). The iPhone connects to a home network named JAZZ with WPA2 Personal security.
The LastAutoJoined key is updated when:
1) Device automatically connects to the network, without user interaction.
The time is in UTC format and as the name indicates, it only saves recent or last auto joined date/time.
The LastJoined key is updated when:
1) The user joins the network by entering the password and connects to it. This could happen when the user joins a network for the first time or rejoins the network after the device forgets the saved network settings.
2) User scans the network and taps on the SSID he/she wants to connect to (with password saved already).
The time is in UTC format and as the name indicates, it only saves recent or last joined date/time.
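Once the file has been copied off the device, these keys can also be pulled out programmatically rather than with a plist viewer alone. The short Python sketch below is one way to do it; the top-level array name ("List of known networks"), the key names, and their capitalization are assumptions based on this experiment and commonly reported iOS 6-era layouts, so they may differ on other iOS versions.

import plistlib

# Keys of interest as discussed above; adjust for the iOS version at hand.
KEYS_OF_INTEREST = ["SSID_STR", "BSSID", "lastAutoJoined", "lastJoined",
                    "SecurityMode", "Captive Network"]

with open("com.apple.wifi.plist", "rb") as f:
    data = plistlib.load(f)

# Known networks are assumed to live under a single top-level array.
for network in data.get("List of known networks", []):
    print("--- network entry ---")
    for key in KEYS_OF_INTEREST:
        if key in network:  # BSSID may be missing (see below)
            print(key, "=", network[key])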
Types of Wi-Fi Connection
The next important thing is to understand the types of Wi-Fi connection based on the authentication method.
– Password method
– Certificate method
We noticed that the certificate method is mostly used at public Wi-Fi hotspots, while at homes and colleges/universities the password method is implemented. But this is just a limited observation and of course it can vary from place to place.
We were lucky to find both types of network easily and below is presented the comparison.
We connect to a home network, JAZZ, that just needs a passkey set on the router to authorize the user. Below is a screenshot of a sample plist that logs the device connecting with password authentication (PEAP).
Where did BSSID go???
Did you notice that there is no BSSID key in the above capture?
Yes, this is a gotcha. Here is what happened: the BSSID key comes with the LastAutoJoined key. If there is no LastAutoJoined key in the file, you won't find the BSSID field. Yes, this is weird, but that's how it works according to our research.
Let’s reproduce the result. In order to remove the LastAutoJoined key from the plist, do the ‘Forget this Network’ on the iPhone (shown below).
In this example, iPhone connects to the network SSID JAZZ (we entered the password this time). Notice how the file gets updated with no BSSID and LastAutoJoined keys (Figure 3).
Next, we turned the Wi-Fi off and back on and let the device connect automatically. This updates the file at the same time and adds not only the lastAutoJoined key but also BSSID. Refer to screenshot shown in Figure 4.
In addition to the password or passkey method, many of us have also experienced certificate-type network authentication. Usually at places like coffee shops, hotels and airports we allow the certificate to be authorized for free or pay for it. Our next experiment was at a free public hotspot, Starbucks. This time we noticed a certificate-type authentication connection. It is interesting to note the variance in some keys when the authentication method of the network changes.
One of the differences noticed in this case was authentication method key ‘is WPA’ with value ‘0’ and it also had an additional key ‘Captive Network’. Next, we disconnected and reconnected to see if the auto join works here and found that it worked. Probably, this auto join is possible till the certificate expires.
Another important piece of information examiner might want to check is wireless security mode.
In WPA personal no server has to be involved and a passphrase can be set on the router or AP that can be used by every user. On the other hand, in WPA Enterprise, a RADIUS server is involved for authentication that contains unique username and password for each user.
This time we hooked the iPhone to University of Central Florida (UCF) network. The below sample plist clearly shows the difference. One can easily guess the mode of Wi-Fi by looking at it.
The value of the ‘SecurityMode’ key is WPA-Enterprise. It also gives the username under EAPClientConfiguration in the Enterprise profile (Figure 6)
Compare it with WPA-Personal mode, where there is no such Enterprise profile key, and the SecurityMode value is ‘WPA2 Personal’ (Figure 1)
The outcome of this experiment implies that
- com.apple.wifi.plist contains info of ONLY last connection (LastAutoJoined or LastJoined) to the unique AP.
- MAC address (BSSID) of the AP can be found through this file if the network was auto joined. The BSSID key is missing if there is no LastAutoJoined key.
- In the certificate method, as in the password method, the BSSID field can be a good piece of information for determining the possible physical network device.
- In-depth knowledge of the keys and their values can be helpful in understanding the type of Wi-Fi authentication configuration.
- One can determine if the network was WPA personal or WPA enterprise type by looking at this plist file.
- In case of WPA Enterprise, username might be extracted that can be helpful in further investigation. | <urn:uuid:789d7bcf-f5c5-46c2-a4af-1e2ac8ea88b2> | CC-MAIN-2017-04 | https://articles.forensicfocus.com/2013/09/03/from-iphone-to-access-point/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00250-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.914912 | 1,569 | 2.53125 | 3 |
The fact that Purdue was able to build its supercomputer by lunchtime Tuesday is pretty incredible. But the network equipment used is pretty interesting as well.
Purdue bills the Coates Cluster as “the first internationally ranked academic supercomputer that is wired solely by superfast ten-gigabit network connections.” It is using those 10 Gigabit Ethernet connections as a “unified wire,” where all communications into and out of each node – including storage, networking, clustering, management and boot traffic - happen on a single 10G Ethernet connection.
Chelsio notes that each node, therefore, has only the 10G Ethernet and the power cord connecting to it, (what, no Power over Ethernet?) which no doubt helped the university get its supercomputer together in such a hurry.
Each of the 1,280 AMD-based HP dual quad-core compute nodes was paired with a Chelsio Communications iWARP RDMA adapter. The technology used in the adapters can offload a lot of the communications processing so that the CPU doesn’t have to do it. Chelsio says the performance is comparable to that of InfiniBand.
The adapters were connected to Cisco Nexus switches. Matrix Integration provided the nodes and Verizon Business Network Services provided the integration services for the switches.
The next ranking of supercomputers worldwide happens in the fall, and Purdue expects the Coates supercomputer to make the top 50 with its 90 teraflops of performance. Purdue notes that the supercomputer it built last year was initially ranked 105th in the world, but has since fallen to 196th, which gives you an idea of how quickly the state of the art is advancing.
Purdue expects to use the new supercomputer for research into climate modeling and weather forecasting. | <urn:uuid:915c2abe-53bb-4b87-b334-6cebfad628f2> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2260514/lan-wan/10-gigabit-ethernet-and-iwarp-at-heart-of-new-supercomputer.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281746.82/warc/CC-MAIN-20170116095121-00368-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955632 | 367 | 2.65625 | 3 |
When selecting a network camera for day or night surveillance, there are several elements impacting image quality that are important to understand. This guide is intended to give a basic overview of those elements, to give an understanding of how lighting affects the image, and of the factors that need to be taken into consideration for creating favorable lighting in dark environments.
Light is fundamental to network video. It is light reflected from the scene being viewed that allows images to be visible both to the human eye and to the camera. So the performance of any network video system depends not only on the camera and lens, but also on the quantity, quality, and distribution of available light.
Light is energy in the form of electromagnetic radiation. The light's wavelength (or frequency) determines the color and type of light. Only a very narrow range of wavelengths is visible to the human eye, i.e. from approximately 400nm (violet) to 700nm (red). However, network video cameras can detect light outside the range of the human eye, allowing them to be used not only with white light, but also with Near Infrared light (715-950nm) for night surveillance.
The behaviour of light varies according to the material or surface it strikes, where it is either reflected, diffused, absorbed or (more commonly) subjected to a mixture of these effects. Most surfaces reflect some element of light. Generally, the paler the surface, the more light it reflects. Black surfaces absorb visible light, while white surfaces reflect almost all visible light. Infrared is not always reflected in the same way as visible light. | <urn:uuid:44fc0084-4204-41ff-a3a6-5dbd972a73b7> | CC-MAIN-2017-04 | http://www.bsminfo.com/doc/lighting-design-guide-white-paper-0001 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00276-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.924788 | 324 | 3.578125 | 4 |
Black Box Explains...UARTs and PCI buses
Universal Asynchronous Receiver/Transmitters (UARTs) are designed to convert sync data from a PC bus to an async format that external I/O devices such as printers or modems use. UARTs insert or remove start bits, stop bits, and parity bits in the data stream as needed by the attached PC or peripheral. They can provide maximum throughput to your high-performance peripherals without slowing down your CPU.
In the early years of PCs and single-application operating systems, UARTs interfaced directly between the CPU bus and external RS-232 I/O devices. Early UARTs did not contain any type of buffer because PCs only performed one task at a time and both PCs and peripherals were slow.
With the advent of faster PCs, higher-speed modems, and multitasking operating systems, buffering (RAM or memory) was added so that UARTs could handle more data. The first buffered UART was the 16550 UART, which incorporates a 16-byte FIFO (First In First Out) buffer and can support sustained data-transfer rates up to 115.2 kbps.
The 16650 UART features a 32-byte FIFO and can handle sustained baud rates of 460.8 kbps. Burst data rates of up to 921.6 kbps have even been achieved in laboratory tests.
The 16750 UART has a 64-byte FIFO. It also features sustained baud rates of 460.8 kbps but delivers better performance because of its larger buffer.
Used in newer PCI cards, the 16850 UART has a 128-byte FIFO buffer for each port. It features sustained baud rates of 460.8 kbps.
The Peripheral Component Interconnect (PCI®) Bus enhances both speed and throughput. PCI Local Bus is a high-performance bus that provides a processor-independent data path between the CPU and high-speed peripherals. PCI is a robust interconnect interface designed specifically to accommodate multiple high-performance peripherals for graphics, full-motion video, SCSI, and LANs.
A Universal PCI (uPCI) card has connectors that work with both a newer 3.3-V power supply and motherboard and with older 5-V versions.
December 2015 marks an historic change in US federal laws on education. The Every Student Succeeds Act (ESSA), a rewrite of the Elementary and Secondary Education Act that replaces No Child Left Behind, was signed into law by President Barack Obama. With this new act, federal involvement in education is scaled back, giving states more control over testing and accountability.
In addition to changes in testing and accountability, under this new law, funding decisions for schools have shifted to state and districts. An article in US News & World Report states that some US states will have “the opportunity to combine and distribute federal, state and local funds through one formula that allocates resources based on student needs.” This “weighted funding” will empower school leaders “to make the best choices for their community. With the autonomy to design programs based on individual contexts, school leaders can best devise instructional and enrichment programs that support their students when school budgets are driven by student need rather than funding for specific programs.”
ESSA means changes in funding for schools
Within ESSA, many programs are laid out with budgets for each. In contrast to No Child Left Behind, several programs have been lumped together, giving educational leaders the ability to allocate funds in the best manner for their student population. Within this program, TITLE IV lays out funding for technology programs and training. See below:
TITLE IV: 21ST CENTURY SCHOOLS
PART A: STUDENT SUPPORT AND ACADEMIC ENHANCEMENT GRANTS
FY 2015 Appropriation: NA
AUTHORIZED LEVELS:
FY 2017: $1,650,000
FY 2018: $1,600,000
FY 2019: $1,600,000
FY 2020: $1,600,000
Subpart 2—Internet Safety
For a full breakdown of the ESSA budget, see the Committee for Education Funding handout here.
ESSA means technology opportunity for students & teachers
With the changes in funding, it appears that school leaders will have more opportunities to train teachers in new technologies, invest in technology innovations, and fill the gaps in technology accessibility for students. Enthusiasm for this opportunity was recently voiced by Intel’s K-12 Education Strategist: “Intel is pleased to see a renewed emphasis on integrating technology into the classroom through the new Title IV’s emphasis on supporting devices, Internet applications, on-line learning, and technology professional development for America’s classroom educators.” and “With ESSA becoming the law of the land, local school authorities will decide how best to bridge the technology and connectivity gaps and provide many more US students and teachers the digital and collaborative tools they need to compete in the global marketplace.”
With these new laws in place, 2016 is sure to bring new innovations in education and new opportunities for students across the United States. Impero Software is happy to be a part of these new changes.
Impero Education Pro software provides classroom and network management solutions for all digital devices in schools. Call us at 877.883.4370 or email us at firstname.lastname@example.org today for more information.
Photo courtesy NEAToday | <urn:uuid:8fa1f5e7-4a49-4c97-884d-00a9614e358a> | CC-MAIN-2017-04 | https://www.imperosoftware.com/every-student-succeeds-act-essa-means-more-technology-opportunities-for-schools/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00304-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93656 | 658 | 3.28125 | 3 |
A Brief Introduction To The SQL Procedures Language
September 27, 2016 Ted Holt
The SQL Procedures Language, or SQL PL, is a proprietary procedural language that IBM designed to work with the DB2 family of database management systems. I believe that it’s a good idea for anyone who works with DB2 to learn SQL PL. If you know RPG, CL, or COBOL, you’ll find it easy to learn.
SQL PL is available for all the DB2s. Knowledge of SQL PL that you acquire by working with DB2 for i applies in large part to the mainframe and LUW (Linux-Unix-Windows) versions. You can use SQL PL to create stored procedures, functions, and triggers. You can also use it to build dynamic compound statements, which you can store in source physical file members and the IFS and run using the Run SQL Statements (RUNSQLSTM) command.
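As a taste of what that looks like, here is a small dynamic compound statement of the kind you could drop into a source member and run with RUNSQLSTM. The table and column names (SalesOrderHeaders, OrderTotal, ReviewFlag) are invented for illustration only.

begin atomic
   declare v_Count int;

   -- count unusually large orders
   set v_Count = (select count(*)
                    from SalesOrderHeaders
                   where OrderTotal > 100000);

   -- flag them for review if any were found
   if v_Count > 0 then
      update SalesOrderHeaders
         set ReviewFlag = 'Y'
       where OrderTotal > 100000;
   end if;
end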
So, what’s SQL PL like? It’s like RPG in some ways. For example:
And it’s also not like RPG in some ways. For example:
The basic building block of SQL PL is the compound statement. Let me tell you a few things about compound statements. Then I’ll show you an example.
That’s enough facts for now. Let’s see an example.
create trigger ValidateOrder
   no cascade before insert on SalesOrderHeaders
   referencing new row as n
   for each row mode db2sql
begin atomic
   declare v_Status dec(1);
   declare v_Parent dec(5);

   -- check the customer for credit hold
   select Status, parent
     into v_Status, v_Parent
     from Customers
    where AccountNumber = n.CustomerID;

   if v_Status <> 0 then
      signal sqlstate '85510'
         set Message_text = 'Customer is on credit hold';
   end if;

   -- check the parent for credit hold
   select Status
     into v_Status
     from Customers
    where AccountNumber = v_Parent;

   if v_Status <> 0 then
      signal sqlstate '85511'
         set Message_text = 'Parent customer is on credit hold';
   end if;
end
This purpose of this trigger is to prevent sales to customers who are on credit hold.
The compound statement begins with BEGIN ATOMIC. This means that the entire compound statement is to be treated as a whole. If there were multiple database changes under commitment control, and one of them failed, all changes would be rolled back. In this case, a non-atomic statement would probably work just as well.
This compound statement declares two variables to contain the STATUS and PARENT fields from the customer master table. I prefix variable names with V_ to distinguish them from database columns (fields). One feature of SQL PL that I like is that I can mix variables and column names as required. There’s no need to prefix variables with a colon, as RPG and COBOL require me to do.
I didn’t come up with the idea of using the V_ prefix. That came from the book DB2 SQL Procedural Language for Linux, UNIX, and Windows, by Yip et al.
The first SELECT checks the customer status. A non-zero status means that a customer is on credit hold.
If the customer is not on credit hold, the second SELECT checks the parent company (if there is one) to see if the parent is on credit hold.
The trigger indicates a credit hold status by sending an error to the caller. SQL state 85510 means that the customer is on credit hold. SQL state 85511 means that the parent company is on credit hold. The following shows the error I got when I tried to create an order for a customer using green-screen SQL.
Diagnostic message SQL0723 SQL trigger VALIDATEORDER in MYLIB failed with SQLCODE -438 SQLSTATE 85510. An error has occurred in a triggered SQL statement in trigger VALIDATEORDER in schema MYLIB. The SQLCODE is -438, the SQLSTATE is 85510, and the message is Customer is on credit hold.
That’s the brief introduction. The rest is details. I hope to say more about SQL PL in upcoming issues of Four Hundred Guru.
By the way, if the word proprietary scares you, consider that Transact-SQL, also called T-SQL (Microsoft and Sybase), and PL/SQL (Oracle) are also proprietary, and that doesn’t stop people from using them every day.
Ted Holt welcomes your comments and questions. Email him through the IT Jungle Contacts page. | <urn:uuid:3e5bb3d9-7479-4abb-8b98-c24b35e41375> | CC-MAIN-2017-04 | https://www.itjungle.com/2016/09/27/fhg092716-story02/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00212-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.900333 | 960 | 3.625 | 4 |
Biometrics Demystified: What You Need To Know
From fingerprints and retina scans to DNA and gesture recognition, the technology is advancing while costs are declining. Here's what you need to know.
Rising threat levels, increasing interconnectivity of systems, and the growing volume and value of data held by computers connected to the Internet have data owners re-evaluating access control methods. They need to do more than just check that authorized users have the correct login information; they also want to ensure that those people are actually the rightful owners of the login information they're using. Biometrics is the only way to do this.
With biometric authentication, every individual is unique. Most people are familiar with techniques such as fingerprint and facial recognition, which grant access based on physiological characteristics, but certain behavioral characteristics, such as typing rhythm, gait, and voice, also can be used.
User names and password combinations can be guessed or easily obtained by imposters. Tokens can be lost, forgotten, and stolen. But criminals can't guess fingerprints, and users can't forget or misplace their fingerprints. Physical attributes can't be faked the way ID cards can. And once a person has authenticated himself using biometrics, he can be tied directly to any actions he performs. This isn't the case with other forms of authentication.
Biometric systems also have low administrative overhead. No more password resets. No more redistributing and renewing tokens, and no more revoking and replacing lost or stolen tokens. Most network operating systems allow for the easy integration of biometric authentication to replace and supplement passwords.
How Biometrics Works
Many people are under the misconception that biometric authentication involves direct comparison of the biometric trait--comparing an actual image of a fingerprint with stored fingerprints. What actually happens is that the device capturing the image creates a numerical value to represent the fingerprint--a digital hash of distinct characteristics. This value is sent to the authentication server for comparison with stored values.
With facial recognition, the camera captures an image of the face and extracts relevant characteristics, such as the distance between the eyes, width of the nose, shape of the cheekbones, and length of the jawline. These values are used to create a template.
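To make the template idea concrete, here is a deliberately simplified sketch of the comparison step. The feature values and the matching threshold below are invented for illustration; real systems use far richer feature sets and statistically tuned thresholds.

# Toy sketch of template-based matching: the stored template and the
# freshly captured sample are numeric feature vectors (e.g. normalized
# distances between facial landmarks), never raw images.
def match(template, sample, threshold=0.05):
    # Mean absolute difference across features; smaller means more alike.
    diffs = [abs(t - s) for t, s in zip(template, sample)]
    score = sum(diffs) / len(diffs)
    return score <= threshold

enrolled = [0.42, 0.31, 0.57, 0.28]   # stored at enrollment
captured = [0.41, 0.33, 0.56, 0.29]   # extracted at login
print("access granted" if match(enrolled, captured) else "access denied")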
Michael Cobb is founder and managing director of CobWeb Applications, a consulting firm that helps companies secure their IT infrastructures. Write to us at [email protected]. | <urn:uuid:ea5d878b-c8d6-4c75-8baf-165e6e85be03> | CC-MAIN-2017-04 | http://www.darkreading.com/risk-management/biometrics-demystified-what-you-need-to-know/d/d-id/1101403?piddl_msgorder=asc | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00516-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92032 | 517 | 2.796875 | 3 |
Big data is making a huge difference to a number of companies and industries, allowing professionals to handle data more effectively than ever before and gather fascinating new insights.
As the number of devices capable of connecting online increases, it is vital that companies find ways of interpreting data that can help to improve their products and services. Sectors such as marketing, retail and insurance have already reaped the benefits of big data.
However, one area that is perhaps understated is how the technology could help to improve research into diseases, particularly cancer. US Vice-President Joe Biden recently met with healthcare specialists in Utah and explained how a better approach to sharing information will be necessary in order for new treatments to be realised.
How has big data helped to treat cancer?
Speaking to The Spectrum, Mr Biden emphasised how big data is helping to trace genetic and environmental factors that influence the disease, giving practitioners new knowledge that can be used to enhance their understanding.
Mary Beckerle, Huntsman Institute CEO, told the news provider: “Half of those folks who succumb to cancer succumb to a cancer that could have been prevented. I think there's a really important emphasis to treat cancers that could have been prevented.”
Mr Biden has been visiting hundreds of doctors in a bid to improve federal engagement on curing cancer. He visited Duke University earlier this month and is set to make an appearance at the University of California San Francisco today (February 29th).
The work is part of the White House’s cancer “moonshot” initiative, which aims to improve the level of progress towards curing cancer. As well as providing $1 billion (£721 million) towards research, the government is hoping to generate new ideas for cancer treatment specialists across the country.
President Barack Obama is asking Congress for $755 million for cancer research in the coming budget, which would be on top of the $195 million already given the green light by officials last year.
What developments could occur in the coming years?
Big data is still growing in the market and many industries should be able to improve how they manage information and increase the quality of their products and services.
In recent years, devices such as laptops, tablets and mobiles have all become key technologies for businesses as employees adopt more flexible ways of working and, with big data now introduced by many companies, there is more information for managers to decipher than ever before.
As analytics technologies improve, companies can easily identify new ways of making money and producing better products and services. Big data allows organisations to easily gain insights from complex statistics and ensures that information is being used as effectively as possible.
For causes such as cancer research, where thousands of documents have been created for organisations to look at, big data makes it easier than ever before to collate and understand information. Without it, sorting through large numbers of files and trying to extract insights from them can be a painstaking process. | <urn:uuid:72cc0264-e8d3-4467-874c-70886140c1a3> | CC-MAIN-2017-04 | http://kognitio.com/joe-biden-big-data-could-assist-cancer-research/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00424-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.961978 | 595 | 2.671875 | 3 |
Usage and load characteristics are gauges for interpreting how much work an online platform performs, and how well it performs under stress.
Understanding how your infrastructure performs isn’t as simple as assessing your car’s odometer to measure distance traveled, or its speedometer to measure the maximum speed. Usage and load characteristics provide insight into the performance of the platform in real-world use cases, like analyzing that metaphorical car’s journey from point A to point B.
Understanding what kind of usage and load capacity your service concurrently supports, as well as changes to those factors in the future, is a vital part of providing an excellent user experience. Cloud computing and scaling services are a great asset because you have almost unlimited server resources to handle traffic spikes and growth; however, your service may suffer if you’re not configured to use it.
On the other hand, you don’t want to be paying for more power than you need. Use your interpretation of usage and load characteristics to know your limits, check up on the user experience, and evaluate poor performance issues.
Usage: How much are you utilizing?
Usage characteristics are a practical way to measure how much server power you need to run your web platform.
Your usage characteristics are going to break down into CPU, memory, storage, and network load statistics which can be measured over time or by time increments. The usage data sheds light on how much information your platform is moving to end users, as well as when it moves.
Usage can also tell you how many users are accessing your service at a specific time and compare that against usage statistics to see how hard they are pushing the system. An example usage characteristic would be your web application moving 100GB of data within a month.
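One rough way to measure a characteristic like that is simply to total the bytes your service sends over a period. The sketch below assumes an access log in common log format, where the last field of each line is the response size in bytes; the filename is a placeholder.

# Rough sketch: total data transferred according to a web access log.
total_bytes = 0
with open("access.log") as log:
    for line in log:
        size = line.rsplit(" ", 1)[-1].strip()
        if size.isdigit():          # skip "-" entries and malformed lines
            total_bytes += int(size)

print(f"Transferred this period: {total_bytes / 1e9:.1f} GB")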
Load: Can you take the heat?
Load characteristics can tell you how well your platform performs depending on how many end users are accessing the service concurrently, as well as the maximum amount of work the service can handle before it starts to experience performance problems. Whereas usage testing identifies how much information moves, load testing examines how efficiently the service moves that information.
Load testing, whether performed during development or on a live, fully functioning application, is like test-driving the user experience to make sure everything runs smoothly on a larger scale. Apica provides testing tools to handle smaller-scale cloud services with up to 10,000 concurrent users, as well as Enterprise solutions that support up to 2 million simultaneous users.
Using load testing analytics, you can identify capacity shortcomings and single out bottleneck points where the platform can be improved. Load testing gauges how well a platform holds up in terms of service capacity, long-term high use endurance conditions, and demand spikes. It’s great for identifying problems with latency as well—something usage data doesn’t provide any insight into. An example load characteristic is the latency between users when a typical number is simultaneously using the service.
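A full-featured load-testing platform does far more, but the core idea can be illustrated in a few lines of Python: spin up concurrent workers, hit the service repeatedly, and look at the latency distribution. The URL, worker count, and request count below are placeholders, and a real test would add ramp-up, think time, and much higher concurrency.

# Minimal latency probe under concurrency: N workers repeatedly hit the
# service and record response times.
import time
import statistics
import concurrent.futures
import urllib.request

URL, WORKERS, REQUESTS_EACH = "https://example.com/", 20, 10

def worker(_):
    timings = []
    for _ in range(REQUESTS_EACH):
        start = time.time()
        urllib.request.urlopen(URL, timeout=10).read()
        timings.append(time.time() - start)
    return timings

with concurrent.futures.ThreadPoolExecutor(max_workers=WORKERS) as pool:
    all_timings = [t for ts in pool.map(worker, range(WORKERS)) for t in ts]

print(f"median {statistics.median(all_timings):.3f}s, "
      f"worst {max(all_timings):.3f}s over {len(all_timings)} requests")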
Combining the two for hosting capacity and programming efficiency analysis
Looking at your web service’s usage and load characteristics helps answer the question of whether your platform needs to make programming efficiency improvements and adjust hosting resources.
If your service passes the test with little headroom, it’s an indication that future growth will disrupt service quality. The performance data helps businesses avoid being victims of their own success. Unpredictable load and rapid use expansion can cause the service to falter if the hosting services are not prepared.
For example, when Pinterest first launched, they used a gated account approval method at first for gradually allowing new users to access the service. This prevented them from overloading the application and creating a poor user experience.
Take advantage of the information that usage and load characteristics provide, adjusting service capabilities to address problems with real-world service use.
What is the technology adoption curve? Is it relevant to DSS?
by Daniel J. Power
The technology adoption curve (TAC) is a theory about how individuals and organizations behave in implementing innovative technologies. A quick examination of the framework shows some similarities to the product life cycle curve discussed in business marketing courses. The theory is however more sophisticated than a life cycle or a diffusion model. The underlying model of technology adoption identifies 5 types of adopters of technology with very different interests and buying characteristics. The companies and individuals that are first to adopt a new technology are called innovators. The second type is known as the early adopters. The third type is called early majority, then the late majority adopters and, finally, the laggards.
Technology refers to products including software that are based on scientific knowledge. As scientific discoveries are made innovators often apply the new scientific findings to create useful products. Adoption of new innovative technologies seems to occur following a pattern. The technology adoption curve pattern is presented as a traditional bell-shaped curve with exponential growth in the beginning phase of adoption and a slowdown in adoptions occurring during the late adoption phase. When a new technology is introduced, it is usually hard to find, expensive and imperfect (even flawed). Over time, the new technology's availability increases, cost decreases and features improve to the point where a many people can benefit from adopting the technology. The technology diffuses and spreads to general use and application.
Adoption occurs in phases and adopters in each phase have similar characteristics. In the initial phase innovators are technically oriented users and “visionary”. In the final phase laggards are practical and conservative. The early adopters are seeking a competitive advantage. Productivity issues and conformity influence the early and late majority adopters. Some technology innovations reach a “dead end” early in the adoption cycle. These immature or premature innovations "flame out". The technologies that change industries and even society are the “killer applications” like the VisiCalc spreadsheet.
In summary, Innovators are enthusiasts who adopt a new technology for its own sake, with no clear purpose in mind. Early Adopters have the vision to adopt an emerging technology and apply it to an opportunity that is important to them. Early Majority adopters are pragmatists who do not like to take the risks of pioneering, but are ready to see the advantages of tested technologies. They are the beginning of a mass market for the new technology. Late Majority adopters are also pragmatists and this group represents about one-third of available customers. This group dislikes “discontinuous innovations” and believes in tradition rather than progress. The late majority buy high-technology products reluctantly and do not expect to like them. Traditionalists (or laggards) don't really like technology. This group performs a “reality testing” service for the rest of us by pointing out the discrepancies between the day-to-day reality of a technology product and the often exaggerated claims made for it.
The technology adoption curve (TAC) model is relevant to understanding the adoption of various decision support technologies. Model-driven DSS are probably at the late majority stage, but Web technologies have reinvigorated that type of decision support and changed its adoption curve. Data warehousing and analytical processing are probably still in the hands of the early majority. Customer Relationship Management (CRM) may be at a dead end. Communications-driven DSS are being adopted quickly. Knowledge-driven DSS are probably still in the early adoption stage. Document-driven DSS are evolving with the Web technologies. In 2015, analytics were extending and expanding the statistical and quantitative technologies used for decision support. Some decision support technologies have however been dead ends and disappointments.
Don Norman is often credited with first explaining the technology adoption curve model. See an example of applying the curve to microprocessor technology at startribune.com/digage/curve.htm. Gordon Moore, co-founder of Intel, also helped popularize the technology adoption curve.
Moore's view of technology adoption prescribes that a company can not expect to target a mass market directly with a technology innovation. Rather, the company must first target the early adopters. So what do you think? Does the technology adoption curve hold for innovative decision support applications? Will data visualization tools follow this curve? What about OLAP or data warehouses? Simulation models and optimization? Or Web-based DSS?
The literature on technology adoption and diffusion is large and varied. The phenomenon is part of daily life and seems to be increasingly widespread and rapid. The pervasiveness of the phenomenon and the implications for entrepreneurs and society encourage research and nuanced discussion. Trends suggest technology innovation will continue in many areas including decision support and analytics.
Carr, V. H., "Technology Adoption and Diffusion," at URL http://www.au.af.mil/au/awc/awcgate/innovation/adoptiondiffusion.htm
Moore, G. A., Crossing the Chasm, HarperBusiness, New York, 1991.
Rogers, E.M. (1995). Diffusion of innovations (4th ed.). New York: The Free Press.
The above response is from Power, D., What is the technology adoption curve? Is it relevant to DSS? DSS News, Vol. 2, No. 13, June 17, 2001, updated February 8, 2015.
Last update: 2015-02-08 06:56
Author: Daniel Power
Passwords are the most common way to authenticate access. At home, on the job, and on social networks, applications require passwords. As the number of systems you log into increases, you assume greater risk. This is true for one reason: most passwords are weak, and in all likelihood, you use them. With enterprise security, users are a vulnerability. They can also become a deterrent.
In reusing passwords, you put yourself and an organization at risk. You make an enterprise security breach easier. You increase your likelihood of identity theft. Since 2011, SplashData publishes the year’s 25 most commonly breached passwords. From the report, everyone would benefit from a strong password manager. At work and in our personal lives, our passwords are weak.
SplashData reports the most popular passwords breached in 2015 were:
- qwerty (Check your keyboard.)
- 1qaz2wsx (Check your keyboard)
- qwertyuiop (Second keyboard row)
- passw0rd (using zero)
Hard to believe?
How many on the list do you use?
How to Make Strong Passwords
Although not always possible, randomly generated passwords are the securest. For IT, your enterprise password manager should enforce strong policies. When creating passwords, make them hard to guess, yet easy to remember. They should be difficult to hack without much effort on your part.
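For example, a cryptographically strong random password can be produced with a few lines of Python using the standard-library secrets module; the length and character set here are just reasonable defaults, not a prescription.

# Sketch: a randomly generated 16-character password built from
# cryptographically strong randomness (secrets, not random).
import secrets
import string

alphabet = string.ascii_letters + string.digits + string.punctuation
password = "".join(secrets.choice(alphabet) for _ in range(16))
print(password)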
To make strong passwords, you must understand human behavior. A study at Linköping University in Sweden found that 62% of users reuse passwords, and 28% reported they never change their passwords. These behaviors reveal what cyber thieves count on. Most people reuse passwords and many never change them. One password can give a lot of access.
As unbelievable as this sounds, many passwords are simply guessed. Relating user behavior, the study found:
- 4.7% use the password password
- 8.5% use password or 123456
- 9.8% use password, 123456 or 12345678
- 14% use a password from the top 10 passwords
- 40% use a password from the top 100 passwords
- 79% use a password from the top 500 passwords
- 91% use a password from the top 10,000 passwords
Smart guessing is often the first automated cyber strike. Guessing attacks target accounts using short and simplistic passwords. Smart guessing is an efficient use of a hacker's time. During brute force attacks, checks against the top 10,000 passwords open 91% of accounts. For 8-character passwords, attacks take around 26 minutes.
About 70% of passwords contain dictionary words. Dictionary attacks are a variation of smart guessing. These attacks apply multi-language dictionaries to smart guessing. Hacker dictionaries contain words, names, inflections, phrases, abbreviations and hyphenations. Dictionary attacks try all combinations of words up to a certain length.
Passwords that combine dictionary words and random characters require hybrid attacks. These tools combine dictionary attacks with random characters. Hybrid password attacks take longer, so these passwords are often the last to be exposed.
Strong Passwords Best Practices
For every organization, security starts with a strong password policy. Strong passwords never include names, phone numbers, or places. They do not contain proper nouns, dictionary words, or repeated characters. They don’t follow patterns or keyboard paths. They never reference birthdays, anniversaries, old addresses, or life events. They do not add single digits to words or spell backwards.
Secure passwords are never reused. They are easy to remember so they’re not written down. They are more than eight characters— the longer the better. They randomly place upper and lower case letters. They include punctuation and special characters when possible. They never reference sports, religion, love or popular culture past and present.
For strong passwords, use phrases rather than words. Do not capitalize to separate words and ideas. Write something about yourself only you know. Pick things transparent to anyone social engineering an attack. Then, apply a little creativity and deviate from norms.
Learn the Top 10 Password Management Best Practices for successful implementations from industry experts. Use this guide to sidestep the challenges that typically derail enterprise password management projects and prevent strong passwords.
The FCC created a mess back in 1997 when it allocated spectrum in the 2305-2360 MHz band, partly for the Satellite Digital Audio Radio Service (SDARS) and partly for the Wireless Communications Service (WCS). The SDARS is known commercially as Sirius XM, and the WCS is known commercially as … well, it isn’t known as anything because there is no commercial service.
But the FCC’s National Broadband Plan is the agency’s goal above all other goals. As a result, the headline announcing a new FCC decision trumpets: “FCC unleashes 25 MHz of spectrum for mobile broadband use – provides greater certainty for satellite radio and mobile broadband licensees.” I’m skeptical that the new rules will provide enough confidence to encourage investment in this particular 25 MHz of spectrum.
The basic problem is that the WCS allocation is adjacent to the 2320-2345 MHz SDARS frequency band. But SDARS satellite receivers must be very sensitive because they receive very low-power signals from satellites, and thus they are susceptible to interference from signals on adjacent bands. To protect them from interference, the FCC in 1997 imposed very tight technical rules on the WCS, particularly out-of-band emission limits.
Licenses to provide SDARS within the U.S. were awarded by auction in early April 1997. The two winners of the auction – XM and Sirius – were each assigned 12.5 megahertz of spectrum for their exclusive use on a primary basis. XM and Sirius launched their satellites and began commercial operations in 2001 and 2002, respectively. On Aug. 5, 2008, the FCC approved the merger of XM and Sirius. And as of March 31, Sirius XM had nearly 19 million subscribers in the contiguous United States.
The FCC auctioned WCS licenses in April 1997. Although the Commission permitted WCS licensees to provide both fixed and mobile services, it adopted different power limits for these two classes of service. For WCS fixed operations in the 2305-2320 and 2345-2360 MHz bands, the Commission adopted a peak power limit of 2 kW EIRP. For WCS mobile stations, the Commission adopted a peak power limit of 20 W EIRP. In addition, very stringent out-of-band emission limits were imposed in portions of the WCS band closest to the SDARS band – in fact, so stringent that they made mobile WCS devices impractical.
According to the FCC, Horizon Wi-Com, AT&T, Comcast, nTelos and NextWave Broadband collectively hold virtually all of the 2305-2320/2345-2360 MHz WCS licenses within the U.S.
Both Sirius XM and the WCS companies did a separate series of interference tests during 2009, some of which were observed by FCC staff. Differences in test setups included the power levels, duty factor or duty cycle of the WCS signal, WCS signal strength as a result of propagation losses, WCS antenna heights and positions (outside and inside the test vehicle), etc. Not surprisingly, Sirius XM and the WCS companies reached differing conclusions about power limits and out-of-band emission limits. Nonetheless, based on staff observations, the FCC felt that it could adopt new technical rules based on the test results.
The FCC has now adopted a set of complex new rules for WCS and SDARS. Most important are the new rules for WCS mobile and portable devices, which might be located in close proximity to those sensitive SDARS receivers. The WCS mobile devices will not be permitted to operate in the 2.5 MHz portions of the WCS band closest to the SDARS band, and in other portions they must operate with a reduced power limit of 250 mW average EIRP. In addition, the FCC imposed duty cycle limits and required the use of automatic transmitter power control. But it also relaxed the very tight out-of-band emission limits. In addition, a variety of complex technical rule changes were adopted for fixed WCS stations. New technical rules were also adopted for SDARS terrestrial repeaters, including power limits. In addition, blanket licensing of repeaters will be permitted, instead of a separate license for each transmitter site, but Sirius XM must notify affected WCS operators in advance to coordinate site locations.
The FCC has recognized that strict rules that eliminate all interference to SDARS will kill WCS as a viable service, so its goal is more modest – to eliminate interference that “repeatedly disrupts or seriously degrades service.” The FCC believes the modified WCS mobile and portable devices’ operating power and out-of-band limits will prevent interference to SDARS operations, except in rare instances (e.g., WCS mobile device in close proximity to SDARS receiver, high degree of mutual coupling between WCS and SDARS antennas, lack of obstructions between WCS transmitter and SDARS receiver, WCS mobile device transmitting channel immediately adjacent to SDARS receiving channel). But if more than occasional intermittent interference is suffered by SDARS listeners, the WCS operators are required to cure the problem.
Does that compromise give you the confidence to invest in WCS? “Unleashing” this 25 MHz of WCS spectrum lets the FCC put a check mark against one of the specific goals of its broadband plan, but the mark next to “Turn WCS into a viable service” remains a question mark. | <urn:uuid:287f2c47-be27-46a7-9f53-1495a9e86105> | CC-MAIN-2017-04 | https://www.cedmagazine.com/article/2010/06/capital-currents-sdars%E2%80%93wcs-mess | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285001.96/warc/CC-MAIN-20170116095125-00386-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.945952 | 1,090 | 2.65625 | 3 |
Mobility serving as cloud enabler
The rapid rise of mobile devices, which are becoming prominent in light of the ability to constantly connect to the web from a variety of locations, is helping to fuel cloud computing growth. As the two technologies grow in similar trajectories, technologies that help them work well together, such as business process management software, can offer critical services to businesses.
According to a recent CloudTweaks report, the cloud’s history goes back as far as the 1960s, with the technology rising largely due to network improvements.
A brief history of the cloud
The core idea of cloud computing was formally defined sometime in the early 1990s, but the processes that make the cloud workable date back as far as the 1960s. At the time, personal computers were not a viable option for most businesses. Instead, companies would invest in a single mainframe that would be connected to dumb terminals – screens that display data and enable interaction, the news source explained.
Over time, this model grew to companies using servers and other systems to attach to thin client PCs, a model that offered considerable potential in the 1980s and ’90s. However, the need to use a physical data cord to attach each thin client to the data center systems limited this model’s growth, the report said.
Eventually, data center service providers evolved to the point that they began using small internal networks to cluster specific resources together, and used a cloud symbol in diagrams to show where the divide between vendor and user control lay. This model led to the core idea of cloud computing, though the model is similar to the mainframe system used in the early days of computing. According to the news source, this rise of the cloud has been made possible, largely, by improved networking capabilities that allow constant access to the web, making cloud data accessible enough to use the solution as a major IT model.
Why this history matters
Cloud computing and networking options are clearly intertwined. Without connectivity, users are unable to access the cloud. In a business era increasingly dependent on mobile devices, the constant web access is vaulting the cloud into a significant place. As a result, BPM software and similar tools that integrate the mobile and cloud channels to optimize functionality play an essential role in creating operational benefits from technological innovation.
Vice President of Product Marketing | <urn:uuid:334ecc35-4463-4439-bb14-414916d0700b> | CC-MAIN-2017-04 | http://www.appian.com/blog/enterprise-mobility/mobility-serving-as-cloud-enabler | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00113-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951076 | 474 | 2.515625 | 3 |
Defending the Enterprise
An attack on the Border Gateway Protocol (BGP) could create a black hole on the Internet. The Department of Homeland Security stated that a few subverted servers recently enabled an attack on some of the Internet Domain Name System (DNS) root servers and threatened to disrupt service for many users. The risk from cyber-attacks from hackers is rising. The hackers of today are highly determined, patient, adaptive and well-funded.
The threat to businesses has never been greater, and gaps in the infrastructure provide opportunities for malicious, relentless, 24x7x365 attacks from professional hackers worldwide. This applies to all industries and sectors.
Businesses worldwide lose $3 billion yearly in productivity due to the need to test, clean and deploy patches to computer systems.
These are examples of blended attacks that are based on threats from viruses and worms. A virus attaches itself to an executable file, while a worm spreads through memory and disk space. Information Week reports that a successful virus strike costs individual businesses from $100,000 to $1 million a year in cleanup and related costs.
Threats today do have a real and immediate impact on business revenue and costs. Each year, the Computer Security Institute and the San Francisco Federal Bureau of Investigation’s (FBI) Computer Intrusion Squad conduct and publish the “Computer Crime and Security Survey.” The trends established in the 1990s continue:
- 90 percent of respondents (primarily large corporations and government agencies) detected computer security breaches within the past 12 months.
- 80 percent acknowledged financial losses due to computer breaches.
- 44 percent (223 respondents) were willing and/or able to quantify their financial losses. These 223 respondents reported $455,848,000 in financial losses.
- As in previous years, the most serious financial losses occurred through theft of proprietary information (26 respondents reported $170,827,000) and financial fraud (25 respondents reported $115,753,000).
- For the fifth year in a row, more respondents (74 percent) cited their Internet connection as a frequent point of attack than cited their internal systems as a frequent point of attack (33 percent).
- 34 percent reported the intrusions to law enforcement. (In 1996, only 16 percent acknowledged reporting intrusions to law enforcement.)
Respondents detected a wide range of attacks and abuses. The following is a small sample of attacks and abuses:
- 40 percent detected system penetration from the outside.
- 40 percent detected denial-of-service (DoS) attacks.
- 78 percent detected employee abuse of Internet access privileges (for example, downloading pornography or pirated software or inappropriate use of e-mail systems).
- 85 percent of respondents detected computer viruses.
For the fourth year, CSI asked some questions about electronic commerce over the Internet. Here are some of the results:
- 98 percent of respondents have Web sites.
- 52 percent conduct electronic commerce on their sites.
- 38 percent suffered unauthorized access or misuse on their Web sites within the past 12 months. 21 percent said they didn’t know if there had been unauthorized access or misuse.
- 25 percent of those acknowledging attacks reported from two to five incidents. 39 percent reported 10 or more incidents.
- 70 percent of those attacked reported vandalism (only 64 percent in 2000).
- 55 percent reported denial of service (60 percent in 2000).
- 12 percent reported theft of transaction information.
Understanding Types of Attacks
Systems that exist on a network may be subject to specific types of attacks. There are several types of attacks that businesses are vulnerable to. These include:
- Denial of Service (DoS) or Distributed Denial of Service (DDoS).
- Insider attacks.
- Malicious software, such as viruses, worms, Trojan horses and backdoor programs.
For example, in a masquerade (also referred to as “spoofing”), one entity pretends to be a different entity. An entity can be a user, a process or a node on the network. A masquerade is typically used with other forms of an active attack such as replay and modification of messages. (A message is a packet or multiple packets on the network.)
Hacking and attacking are rising significantly, with the profile of the attacker changing as a consequence of better funding and easier access to tools and resources.
A replay occurs when a message, or part of a message, is repeated to produce an unauthorized effect.
Modification of a message occurs when the content of a data transmission is altered without detection and results in an unauthorized effect.
Denial of service occurs when an entity fails to perform its proper function or acts in a way that prevents other entities from performing their proper functions. This type of attack may involve suppressing traffic or generating extra traffic. The attack might also disrupt the operation of a network, especially if the network has relay entities that make routing decisions based on status reports received from other relay entities.
Insider attacks occur when legitimate users of a system behave in unintended or unauthorized ways. Most known computer crimes involve insider attacks that compromise the security of a system. The techniques that might be used for outsider attacks include wiretapping, intercepting emissions, masquerading as authorized users of the system and bypassing authentication or access-control mechanisms.
Malicious software refers to viruses, worms, Trojan horses and backdoor programs. Malicious software either performs negative behaviors or is used by attackers to further their goals of attacking enterprise networks and systems.
The threats are real. To confront these threats, businesses must protect their infrastructure and critical systems and networks by deploying appropriate safeguards. These will typically be a combination of administrative, physical and technical safeguards.
Administrative safeguards are administrative actions, policies and procedures to manage the selection, development, implementation and maintenance of security measures to protect enterprise information and to manage the conduct of the organization’s workforce in relation to the protection of all sensitive information.
Physical safeguards are physical measures, policies and procedures to protect the organization’s vital systems and related buildings and equipment from natural and environmental hazards and unauthorized intrusion.
Technical safeguards refer to the technology and the policies and procedures for its use that protect and control access to information, systems and transactions.
The key components of a secure infrastructure include technologies in two key areas of security: defense and trust.
Examples of defense-based security technologies include:
- Firewall systems.
- Intrusion Detection Systems (IDS) and malicious software detection (a toy detection sketch follows this list).
- Secure Virtual Private Networks (VPNs).
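As an illustration of the kind of pattern an intrusion detection system looks for, the Python sketch below counts failed logins per source address in a made-up log format and flags sources that exceed a threshold. The log lines, field layout and threshold are assumptions invented for the example, not the behavior of any particular product.

```python
from collections import Counter

# Hypothetical log lines in the form "<timestamp> <source-ip> <event>".
log_lines = [
    "2013-01-15T10:00:01 203.0.113.7 LOGIN_FAILED",
    "2013-01-15T10:00:02 203.0.113.7 LOGIN_FAILED",
    "2013-01-15T10:00:03 198.51.100.2 LOGIN_OK",
    "2013-01-15T10:00:04 203.0.113.7 LOGIN_FAILED",
]

THRESHOLD = 3  # assumed number of failures that warrants an alert

failures = Counter(
    line.split()[1] for line in log_lines if line.split()[2] == "LOGIN_FAILED"
)

for source, count in failures.items():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed logins from {source}")
```

A real IDS correlates far richer signals (signatures, protocol anomalies, traffic baselines), but the underlying idea of matching traffic or logs against known-bad patterns is the same.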
Examples of security technologies that enable trust include:
- Encryption, for example, Public Key Infrastructure (PKI); a minimal sketch follows this list.
- Strong authentication, such as biometrics, authentication tokens and smart cards.
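To make the encryption entry concrete, here is a minimal sketch of public-key encryption using the third-party Python `cryptography` package (version 3.1 or later is assumed). The message is invented, and a real PKI layers certificates, key management and revocation on top of this primitive.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Generate an RSA key pair; the public key may be shared freely.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Anyone holding the public key can encrypt...
ciphertext = public_key.encrypt(b"quarterly results - confidential", oaep)

# ...but only the private-key holder can decrypt.
assert private_key.decrypt(ciphertext, oaep) == b"quarterly results - confidential"
```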
Finally, security policies and procedures provide the blueprint required to identify the security architecture to defend vital business assets and information. Documentation and updating of all critical assets and policies are vital in order to maintain the security of the enterprise.
Security is only as strong as the weakest link, and all gaps in the business infrastructure are opportunities for malicious attacks.
Whatever happened to Super Wi-Fi? It's back!
- By Greg Crowe
- Feb 05, 2013
Back in 2010, the Federal Communications Commission announced it would be taking steps to free up the ranges of frequencies between broadcast TV channels, called “white spaces,” for other uses. One of those uses was supposed to be what was called “Super Wi-Fi,” which could provide robust, long-range wireless communications and could be used, for instance, by cities or counties for municipal broadband networks.
Even though the term “Wi-Fi” in this context is incorrect — the radio technology is different, and it isn’t endorsed by the Wi-Fi Alliance — the name stuck, and we still refer to it as “Super Wi-Fi.”
New Hanover County and the city of Wilmington in North Carolina last month announced the launch of the nation’s first Super Wi-Fi network covering several city parks. The county already had been using unused TV white spaces for wireless communications in areas where it was not practical to use fiber, such as for sensors monitoring water quality in wetlands and as traffic monitors.
So Super Wi-Fi could be showing its potential. But what is it?
Super Wi-Fi frequencies are much lower than even the 2.4 GHz band used for some current wireless networking. On the U.S. Frequency Allocations chart, TV channels sit in a couple of places: VHF channels 2-6 sit between 54 and 88 MHz, channels 7-13 are at 174 to 216 MHz and the UHF channels (21-61) go from 470 to 763 MHz.
As these waves have a much lower frequency, their signals can go much farther than traditional Wi-Fi and penetrate obstacles such as concrete walls more easily. Access points would have a range of several miles, with upload speeds of 6 megabits/sec and download speeds of 20 megabits/sec, about in line with 4G LTE.
The only disadvantage is that the bandwidth of a wireless channel in these ranges would be much smaller than the higher-frequency Wi-Fi. Still, these frequencies could be ideal for making a nationwide wireless network.
So, what has held it up? Well, for one thing, testing. Also, since this is the first time in more than 20 years that the FCC will have allocated a section of spectrum for unlicensed use, the commission had to make sure all of its legal ducks were in a row. And the last time it freed up spectrum, it was for low-power wireless devices, such as baby monitors, microphones and garage door openers. This time it is for broadcasts that would go for miles.
Also, because Super Wi-Fi would use unlicensed spectrum, telecommunications companies initially balked at the idea of “free Wi-Fi.” But although the airwaves would be free, services would not be, and companies now seem to be looking at it as a fast lane to innovation. (Although Hanover County officials say they don’t plan to charge people for using its Super Wi-Fi network, they are contracting with local companies to build it out.)
Now that the trials are largely complete, and one county has a working example, the stage is set for commercial manufacturers to jump in. Just last week many of the industry’s experts attended the Super Wi-Fi Summit in Miami, Fla., where they discussed the recent developments in white space allocation, and how it might affect their companies as well as the face of the Internet. They must have made good progress, because they are having another summit in August.
So fairly soon, we may see products rolling out that take advantage of this new spectrum allocation. It might even pave the way (figuratively) for a network of driverless cars, which have been in the demo stage for a while now, but were lacking a network to run on.
Greg Crowe is a former GCN staff writer who covered mobile technology. | <urn:uuid:48836f01-6e3a-492d-b3e5-36620aa0a3f8> | CC-MAIN-2017-04 | https://gcn.com/articles/2013/02/05/future-super-wifi.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00323-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.975556 | 825 | 2.828125 | 3 |
Ethernet Basics (e) - Flash
- Course Length:
- 1 hour of eLearning
NOTE: While you can purchase this course on any device, currently you can only run the course on your desktop or laptop.
As the communications industry transitions to wireless and wireline converged networks to support voice, video, data and mobile services over IP networks, a solid understanding of Ethernet and its role in networking is essential. Ethernet is native to IP and has been adopted in various forms by the communications industry. A solid foundation in IP and Ethernet has become a basic job requirement in the industry. Starting with a brief history, the course provides a focused, basic-level introduction to the fundamentals of Ethernet technology. It is a modular introductory course covering only Ethernet basics, offered as part of the overall eLearning IP fundamentals curriculum.
This course is intended for those seeking a basic level introduction to Ethernet technology.
After completing this course, the student will be able to:
• Define Ethernet
• Summarize the key variations of the Ethernet family of standards
• Discuss Ethernet addressing and Frame Structure
• Discuss Ethernet services offered by Carriers
1. Ethernet Defined
2. Ethernet Standards
3. Ethernet Addressing and Frame Structure (illustrated in the sketch after this outline)
4. Carrier Ethernet | <urn:uuid:1dd3fe56-62c2-48a1-bb3e-498f6271f624> | CC-MAIN-2017-04 | https://www.awardsolutions.com/portal/elearning/ethernet-basics-e-flash?destination=elearning-courses | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00049-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.887407 | 254 | 3.359375 | 3 |
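As a small illustration of the addressing and frame-structure module above, the Python sketch below unpacks the destination MAC address, source MAC address and EtherType from the first 14 bytes of an Ethernet II frame. The sample bytes are fabricated for the example.

```python
import struct

# A fabricated Ethernet II header followed by the start of an IPv4 payload.
frame = bytes.fromhex(
    "ffffffffffff"  # destination MAC (broadcast)
    "001122334455"  # source MAC
    "0800"          # EtherType 0x0800 = IPv4
    "45000054"      # first payload bytes (truncated)
)

dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])

def mac(addr: bytes) -> str:
    return ":".join(f"{b:02x}" for b in addr)

print("destination:", mac(dst))            # ff:ff:ff:ff:ff:ff
print("source:     ", mac(src))            # 00:11:22:33:44:55
print(f"ethertype:   0x{ethertype:04x}")   # 0x0800 -> IPv4
```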
For Seattle residents, rain - and lots of it - is a fact of life. But they'd never seen a month quite like November 2006. With 15.59 inches of rain - including snowfall and hail - it set the record for wettest month, according to the National Oceanic and Atmospheric Administration (NOAA) National Climatic Data Center. It was the most rain the Emerald City had ever seen in a one-month span, in 115 years of record keeping.
If that weren't enough, mid-December brought supercharged winds of 60 to 90 mph that cut power to about 1 million people, some of whom lived in the dark for prolonged periods.
"It wasn't just for a couple of hours, a couple of days," said Eric Holdeman, former director of the King County, Wash., Office of Emergency Management. "There were folks without power for 10 days in isolated areas, or even longer than that."
That same month, drought plagued parts of Minnesota, Wyoming, Nebraska, Texas and Oklahoma; thunderstorms and tornadoes whipped through the South; a cyclone lashed the Eastern coastline from South Carolina to Virginia; and the earliest snowfall on record fell on Charleston, S.C., and Savannah, Ga., according to the National Climatic Data Center.
Worldwide patterns show an increase in heavy precipitation and intense droughts caused by a warmer atmosphere, increases in water vapor and a rising sea-surface temperature - all results of global warming.
Holdeman, now a principal at ICF International's Emergency Management and Homeland Security team, cites last winter's unusually hazardous weather events as anecdotal evidence that our weather reality is shifting.
"Whatever the cause is, the weather is changing," Holdeman said. "There's been any number of extreme weather events happening."
Scientists may not agree on some of the possible effects of global warming, but most do agree that it's happening, said Gabriel Vecchi, research scientist at the NOAA Geophysical Fluid Dynamics Laboratory in Princeton, N.J.
According to a February report by the Intergovernmental Panel on Climate Change (IPCC), the nation is already seeing warming effects in the Western mountains and melting of the snow pack; with increased winter flooding and summer warming; through pests and wildfires plaguing forest environments; with the intensifying of heat waves; and in hurricanes pounding coastal cities.
Unfortunately any changes related to the planet's increased temperature will be magnified in developing countries, where resources won't be available to delay or minimize effects. But in richer nations, like the United States, where the resources are forthcoming, it's time to adapt and plan for changes we might see, or are seeing now.
The most egregious global warming effects will occur on global warming's frontlines - at the poles, where there's damage to ecosystems and thawing of glaciers and ice sheets, and on small islands, where beach erosion and storm surges are expected to further deteriorate coastlines, according to the IPCC.
Though most scientists agree that global warming is happening, the question of how exactly it will manifest remains. Many believe, however, that warming oceans may be contributing to more devastating hurricane seasons.
The 2004-2005 period was one of the most active 24-month spans ever witnessed in the Atlantic basin, setting records for the number of hurricanes and tying the 1950-1951 record for most major hurricanes with 13.
But hurricanes don't just endanger lives; they also threaten people's livelihoods, businesses and homes, and cities' economies. And because tropical storms tend to hit the United States in its sweet spot - expensive and growing coastal stretches from Texas to Maine - they represent one of the country's gravest storm challenges.
Hurricanes that hit the Gulf Coast region during the 2004 and 2005 storm seasons produced seven of the 13 costliest hurricanes to hit the United States since 1900 (after adjusting for inflation), according to an April 2007 report by the National Hurricane Center in Miami.
According to the NOAA, Hurricane Katrina cost approximately $60 billion in insurance losses to the Gulf Coast region - almost triple the $21 billion in insurance losses from Hurricane Andrew, the second costliest hurricane, which struck south Florida in 1992.
This year's hurricane season, from June 1 to Nov. 30, already looks grim. Experts at the NOAA Climate Prediction Center project a 75 percent chance the season will be above normal. They predict a strong La Nina - which favors more Atlantic hurricanes, while El Nino favors fewer hurricanes - will cause three to five major hurricanes.
Also a factor is a phenomenon called "the tropical multidecadal signal" - the notion that two or three decades of lessened storm activity are followed by two or three decades of increased activity. The period since 1995 has brought conditions favorable to more hurricanes.
Yet despite signs of a rough hurricane season ahead, a surprising phenomenon is occurring: People are increasingly moving to the Atlantic coast. Census Bureau data shows that in 1950, 10.2 million people were threatened by Atlantic hurricanes; today more than 34.9 million are threatened, according to USA Today.
"The areas along the United States Gulf and Atlantic coasts where most of this country's hurricane-related fatalities have occurred are also experiencing the country's most significant growth in population," the National Hurricane Center report confirmed.
But since coastal communities won't stop corralling newcomers, the report concluded that communities themselves should take action.
Jim O'Brien, professor emeritus of meteorology and oceanography at Florida State University, said emergency managers and policymakers should address the hurricane issue by enforcing stricter building codes, readdressing evacuation strategies and educating people about the imminent problem.
However, more drastic action must be taken to stop people's risky behavior, according to Kerry Emanuel, an atmospheric scientist at the Massachusetts Institute of Technology in Cambridge.
The coastal migration is made possible, he said, through an unwise mix of state and federal policies, like government regulation of property and flood insurance (which covers storm surges), and federal disaster relief given to flooded regions. While such policies help people in the short term, Emanuel explained, they also enable the risky behavior to continue.
Scientists have long feared America's vulnerability to hurricanes because its shores are lined with some of the nation's wealthiest residents. Emanuel, in conjunction with nine scientists, released a July 2006 statement about the U.S. hurricane problem: "We are optimistic that continued research will eventually resolve much of the current debate over the effect of climate change on hurricanes. But the more urgent problem of our lemming-like march to the sea requires immediate and sustained attention."
Paul Milelli, director of public safety for Palm Beach County, Fla., contends that global warming's effects may inherently force people to change their ways.
"If we start having to build homes to meet a 200 mph wind, the cost would probably stifle some growth," he said, "and then [there's] the fear factor of people moving in."
Because the county uses an all-hazards approach, emergency planning won't change much with global warming in the equation, he said.
"The economy is just going to be affected tremendously, and that, to me, is going to be the biggest concern. Because we can prepare our people for a hurricane, whether it's a Category 1 or a Category 5, and how we prepare the people really doesn't change - except that as the categories get higher, we start asking people to make their plans earlier and earlier."
For a statewide evacuation, Floridians would have to begin leaving days before the hurricane hit - a logistic impracticality.
"It's bigger than me. It's bigger than what I can plan for as
a planner of the county," said Milelli, whose 31-year emergency management career ends in January when he plans to retire in Wisconsin - far away from hurricanes.
To help combat storm destruction, the Gulf Regional Planning Commission in Mississippi focuses on hurricane preparation as well as planning and redevelopment.
"We're certainly well aware of the dramatic impacts of climate change and also the need for looking outside of our localized area when we're starting to talk about the impacts of climate change," said Elaine Wilkinson, the commission's executive director.
The commission is working to build bridges that withstand high winds (similar to the effects of an earthquake), and building up seawalls to match the roadbed.
After Hurricane Katrina, the commission took an extra year to engineer its long-range transportation to plan for major storms. Transportation planning is important to ensure safe evacuation, she said.
Wilkinson was also involved in a U.S. government study on how global warming could affect the nation's coastal transportation systems. The study, which just released its first phase for scientific review, concluded that with climate change, the sea level is rising and the land is sinking, according to a National Public Radio news report.
Listening to scientists provided a good opportunity for Wilkinson, who said scientists must share global warming findings with people who can effect change.
"We need to find a way to bring the scientific data into the planning process," Wilkinson said. "That's something that'll challenge us. But we're very much in need of information to make some good decisions."
Ask the Question
Working with science, King County integrated global warming policies into its government.
In October 2005, the county sponsored a conference to understand Washington's climate changes in the coming 20, 50 and 100 years, and identify approaches to adapt to climate change predictions.
The Climate Impacts Group (CIG), along with King County, developed conference materials, including Pacific Northwest climate change scenarios. CIG, which is funded by the University of Washington's Center for Science in the Earth System in Seattle and by NOAA, explores climate science with an eye to the public interest in the region. The group is one of eight NOAA teams that assess regional climate change in the United States.
From the conference, the CIG and King County established a relationship and jointly wrote Adapting to Global Warming - a Guidebook, to be released this November following a peer review process.
As a resource for regional leaders, the guidebook outlines King County's global warming approach, addressing its water supply, wastewater and floodplain management, agriculture, forestry and biodiversity. The county approved an aggressive levee improvement plan and adopted a climate plan in February that includes a two-page outline for the King County Office of Emergency Management to revise its strategies given projected climate changes.
In the guidebook, the CIG tells how scientists can communicate climate change information to emergency managers and policy leaders. But government officials are also responsible for opening the dialog.
Elizabeth Willmott, global warming coordinator for King County, stepped into her position upon its creation in January 2007, and works to coordinate projects, ideas and information related to the county's climate change mitigation and preparedness plans.
"What we suggest simply," Willmott said, "is that regional leaders ask the climate question, 'How is climate change going to affect my region?'"
Just asking, she said, can plant the issue in people's minds.
Though weather seems to be telling us something about how climate change will impact our future, there's uncertainty in many circles about what to do to prepare and how to mitigate its consequences.
ICF's Holdeman said we must focus on finding global warming's regional effects and work to lessen them now.
"We end up being so reactive as a society, and certainly the United States is," he said. "We don't address issues - like Social Security or Medicaid. Everybody knows it's a problem, but we're not going to do anything about it until it's staring us in the face, and there's a trillion dollar deficit."
It's up to emergency managers, he said, to spread the word and ensure global warming consequences are known.
"For emergency managers themselves," Holdeman said, "if we're not talking about it generally and trying to educate elected officials about it and the hazards, then you're counting on them to stumble on it as an issue." | <urn:uuid:5ae27884-097a-480b-a5d8-dee9bb4f5667> | CC-MAIN-2017-04 | http://www.govtech.com/public-safety/99377239.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00167-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954827 | 2,454 | 2.984375 | 3 |
Astronomers around the world are breathlessly watching comet ISON -- a relic from when our solar system was formed -- head toward the sun, where it might break up in a stunning light show.
Comet ISON has been on a journey that may have taken millions of years to get from the edge of the solar system to where it's closing in on our sun. And by studying ISON, scientists hope to gain clues to the ancient formation of the solar system and its planets.
Comet ISON shines in this five-minute exposure taken at NASA's Marshall Space Flight Center earlier this month. Comet ISON is heading toward a close encounter with the sun on Nov. 28. (Image: NASA)
"The reason we study comet ISON to begin with is it's a relic," Carey Lisse, a senior research scientist withJohns Hopkins Applied Physics Laboratory, said during a NASA news conference today. "It's a dinosaur bone of solar system formation. You need comets in order to build the planets. This comet has been in a deep freeze half way to the next star for the last four and a half billion years. It's just been coming in over the last few millions years and possibly even started around the dawn of man."
The comet, which is smaller than normal at about three-quarters of a mile across, is expected to come within a million miles of the sun's surface.
Since ISON is a loosely packed formation of ice and dust, there's an approximately 70% chance it won't survive traveling so close to the sun's blazing hot surface. The comet is expected to get relatively close to the sun on Thursday, which could offer spectacular images for NASA's telescopes and spacecraft to capture.
"It's going from a deep freeze to the furnace of the sun and we're going to watch it bake and boil," said Lisse. "We want to actually see the light coming from that evaporation.... It's like experiments you did in high school, in which you could see blue green for copper or red for iron. We're going to do the same thing for our comet dust."
Jim Green, director of NASA's planetary science division, said ISON is likely on its first orbital trip close to the sun.
"It's a special comet," he added. "It's probably the first time it's come in from a very long distance away - right at the edge of what our sun's gravity can hang on to. It may have taken millions of years to get to this location."
Green noted that ISON has quickly become the most observed comet in history.
Along with the Hubble Space Telescope, five others also have been used to track and study the comet. Even the Mars Reconnaissance Orbiter has turned its imagers on ISON as it flew past.
Karl Battams, an astrophysicist with the Naval Research Laboratory, noted that this is a critical time for ISON.
"When it's closest to the sun, it's experiencing the most intense solar radiation and gravitational forces," he explained. "There are a lot of things that could happen to this comet. Will it fall apart? Will it not fall apart? Will it fade away? We need to see what it does and when it does it and why it behaved the way it did."
NASA's scientists said some believe that the comet already is breaking apart, casting out large chunks of itself. But Lisse thinks that while some pieces may have been cast off, the comet is still holding together.
If it does break up, pieces will be scattered toward the sun where they'll flare and burn up.
"If it's just burping and bubbling and stays a coherent body, it will be heated and stressed by solar gravity," said Lisse. "Some think it could survive and be fine and come back out again."
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed.
Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center.
This story, "NASA calls comet ISON a time capsule from the solar system's birth" was originally published by Computerworld. | <urn:uuid:9a0b443c-9838-45f2-98ba-fc8fa95d8b8a> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2172251/data-center/nasa-calls-comet-ison-a-time-capsule-from-the-solar-system--39-s-birth.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00075-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958466 | 932 | 3.84375 | 4 |
With the wide application of fiber optic systems, the optical light source plays an increasingly important part in them. We know that a basic optical fiber system consists of a transmitter, an optical fiber and a receiver. The fiber optic light source, an important component of the transmitter, is modulated by a suitable drive circuit in accordance with the signals to be transmitted. An optical light source is also needed for fiber optic network testing, to measure the optical loss in the cable plant. Light sources are offered in a variety of types including LED, halogen and laser. Among these, LED and laser light sources are two types of semiconductor light sources. The following article will discuss some differences between laser and LED light sources.
Basically, both kinds of light source must be able to turn on and off millions to billions of times per second while projecting a near-microscopic beam of light into an optical fiber. To carry optical signals, they must be switched on and off rapidly and accurately enough to properly transmit the signals.
A general difference between the two is that the LED, short for light-emitting diode, is the standard light source, while laser light sources such as gas lasers are mainly used in special cases. Lasers are more powerful and operate at faster speeds than LEDs, and they can also transmit light farther with fewer errors. Lasers are also much more expensive than LEDs.
LED fiber optic light sources are made of materials that determine the wavelengths of light that are emitted. A basic LED light source is a semiconductor diode with a p region and an n region. When the LED is forward biased, current flows through the LED, and the junction where the p and n regions meet emits random photons. LEDs emitting in the window of 820 to 870 nm are usually gallium aluminum arsenide (GaAlAs). A laser is also a semiconductor diode with a p and an n region like an LED, but it provides stimulated emission rather than the simple spontaneous emission of LEDs. The main difference between an LED and a laser is that the laser has an optical cavity, which is required for lasing. The cavity is formed by cleaving the opposite ends of the chip to form highly parallel, reflective, mirror-like finishes.
The VCSEL, short for vertical-cavity surface-emitting laser, is a popular laser source for high-speed networking; it consists of two oppositely doped Distributed Bragg Reflectors (DBRs) separated by a cavity layer. It combines high bandwidth with low cost and is an ideal choice for gigabit networking. The idea for a vertically emitting laser originated between 1975 and 1977, to satisfy the planarization constraints of integrated photonics given the microelectronic technology available at the time. Nowadays, apart from its application in optical fiber data transmission, it is also widely used in applications such as analog broadband signal transmission, absorption spectroscopy (TDLAS), laser printers, computer mice, biological tissue analysis and chip-scale atomic clocks.
Different wavelengths travel through a fiber at different velocities as a result of material dispersion. What one should always keep in mind is that neither lasers nor LEDs emit a single wavelength, but rather a range of wavelengths known as the spectral width of the source. A fiber optic light source usually works together with a fiber optic power meter. In operation, the source collimates a beam of light and aims it right down the center of the narrow single-mode core, where it propagates in essentially a single mode. For more questions about fiber optic test equipment, such as visual fault locators, optical power meters and OTDR testers, please go to Fiberstore.
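As a worked illustration of the loss measurement mentioned above, the Python sketch below converts a reference reading from the light source and a far-end reading from the power meter into insertion loss in decibels; the power values are invented for the example.

```python
import math

def loss_db(reference_mw: float, measured_mw: float) -> float:
    """Insertion loss in dB between a reference power and a measured power (in mW)."""
    return 10 * math.log10(reference_mw / measured_mw)

# Invented readings: 1.0 mW launched into the link, 0.25 mW received at the far end.
print(f"Link loss: {loss_db(1.0, 0.25):.2f} dB")  # about 6.02 dB
```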
There are a few facts though: China dominates the counterfeit world; digital reproduction technology is making counterfeit movies and music recordings commonplace and the counterfeit industry hurts the overall US economy. Those are but a few of the results of a look by the US Government Accountability Office at what the theft of intellectual property means to the US.
Critics have long said the US needs to do something to put a crimp in the over $200 billion counterfeit and pirated goods industry with better enforcement and increased penalties for violations.
Some of the more telling facts from the GAO report:
- According to Customs and Border Protection data from 2004 through 2009, China accounted for about 77% of the aggregate value of goods seized in the United States. Hong Kong, India, and Taiwan followed China, accounting for 7, 2, and 1% of the seized value, respectively. CBP data indicate certain concentrations of counterfeit production among these countries: in 2009, about 58 % of the seized goods from China were footwear and handbags; 69% of the seized goods from Hong Kong were consumer electronics and watch parts; 91% of the seized goods from India were pharmaceuticals and perfume; and 85% of seized goods from Taiwan were computers and consumer electronics.
- Digital products can be reproduced at very low cost, and have the potential for immediate delivery through the Internet across virtually unlimited geographic markets. Digital piracy impacts most the music, motion picture, television, publishing, and software industries. Piracy of these products over the Internet can occur through methods including peer-to-peer networks, streaming sites, and one-click hosting services. There is no government agency that systematically collects or tracks data on the extent of digital copyright piracy.
- According to a recent Commerce department report, counterfeit electronics parts have infiltrated U.S. defense and industrial supply chains and almost 40 % of companies and organizations-including the Department of Defense-surveyed for the report have encountered counterfeit electronics.
- Commerce reported that the infiltration of counterfeit parts into the supply chain was exacerbated by weaknesses in inventory management, procurement procedures, and inspection protocols, among other factors. The Federal Aviation Administration (FAA) tracks and posts notifications of incidents of counterfeit or improperly maintained parts entering airline industry supply chains through its Suspected Unapproved Parts Program in an effort to improve flight safety. The FAA program has identified instances of counterfeit aviation parts, as well as fake data plates and history cards to make old parts look new. FAA's program highlights the risks that counterfeit parts pose to the safety of commercial aircraft.
Counterfeit pharmaceuticals may contain toxic ingredients, or the correct ingredients in incorrect quantities, or other mislabeling. These products can be ineffective in treating ailments or may lead to adverse reactions, drug resistance, or even death. The World Health Organization estimates that as much as 10% of medicines sold worldwide are believed to be counterfeit.
- Counterfeit automotive products may be substandard. A representative of a US automotive parts supplier told the GAO that it tested a supply of counterfeit timing belts that did not meet industry safety standards and could potentially impair the safety of vehicles.
Counterfeit or pirated software may threaten consumers' computer security. The illegitimate software, for example, may contain malicious programming code that could interfere with a computer's operations or violate users' privacy.
Looking to address these kinds of problems, the Department of Justice in February set up a task force it says will focus exclusively on battling US and international intellectual property crimes.
The Task Force will focus on bolstering efforts to combat intellectual property crimes through close coordination with state and local law enforcement partners as well as international counterparts, the DoJ stated. It will also monitor and coordinate overall intellectual property enforcement efforts at the DoJ, with an increased focus on international IP enforcement, including the links between IP crime and international organized crime. The Task Force will also develop policies to address what the DoJ called the evolving technological and legal landscape of this area of law enforcement.
As part of its mission, the Task Force will work closely with and make recommendations to the recently established Office of the Intellectual Property Enforcement Coordinator, which reports to the Executive Office of the President and is supposed to develop an overarching US strategic plan on intellectual property.
Part of the problem with IP enforcement is that even within the US the sheer amount of agencies involved makes it difficult. For example, overseas personnel from the Departments of Commerce, Health and Human Services, Homeland Security, Justice and State, and from the Office of the United States Trade Representative and the United States Agency for International Development all are involved in intellectual property efforts, the GAO has noted.
The new task force is represented by a variety of agencies as well, such as the US Attorney General, the Deputy Attorney General, and the Associate Attorney General; the Criminal Division; the Civil Division; the Antitrust Division; the Office of Legal Policy; the Office of Justice Programs; the Attorney General's Advisory Committee; the Executive Office for U.S. Attorneys and the FBI.
Follow Michael Cooney on Twitter: nwwlayer8
Healthcare is one of the fastest growing industries today - thanks to increased consciousness about healthy living and the ease of access to excellent medical care. With such an unprecedented surge in demand for medical care, healthcare institutions must deploy strong, robust IT systems across their settings in order to manage efficiently. In fact, today the IT department is as common as a radiology department in most hospitals. Hospitals rely on IT systems and computer networks to manage the entire patient treatment cycle - from admission to discharge - to the extent that they have come to view the IT department as a value-enhancer rather than the cost centre it was once considered to be.
A question of Life
When it comes to healthcare, computer networks can mean the difference between life and death. With more and more hospitals joining the electronic bandwagon, patient medical histories are stored everywhere as electronic medical records. The availability of reliable and accurate data as and when required determines the quality of the treatment extended to the patient.
The Healthcare Institution of today
Healthcare institutions come in all sizes - from the basic outpatient-only treatment centre down the road, to the large university medical centre, to the very large community healthcare centre. What is common to them all is the strong IT system each has in place and the even stricter federal laws each is governed by - only that in the case of the large and very large centres the law is more pronounced, and the fallout from not complying could mean damaging ramifications. IT spending by healthcare institutions today is like never before. This is mainly on account of the need to manage the health-related information of numerous patients and their medical histories. Another reason for IT proliferation is the interest in improving the treatment given. Also, with telemedicine catching up in a big way thanks to information and communication technologies, medical treatment becomes possible without the constraints of time and distance.
The flip side to this overarching reliance on IT is that even the slightest glitch in the IT systems could adversely impact operations. Add to it federal laws like HIPAA that demand intense scrutiny of IT system security and patient data integrity, and the job of the IT system/network manager becomes all the more challenging.
A case in Point
Consider the case of a large community medical system that has 5,000 employees and 60 distinct business units. To achieve high levels of service delivery and efficiency, the medical center deploys a sophisticated Healthcare Information System (HIS) that spans its entire campus and automates the whole process flow.
This HIS has the ability to store electronic medical records of patients and facilitate quick reference to a patient's health status by authorised (privileged) physicians. Apart from this, it also has a strong Picture Archiving and Communication System (PACS) to electronically store patient image records. To support access to the HIS and PACS from anywhere, the medical center has a high-bandwidth network across its campus. This in turn facilitates Voice-over-IP communications and wireless Internet access from anywhere on the campus.
Any network disruption or outage could spell disaster for the medical system. Hence it needs a monitoring tool that helps it take proactive action and troubleshoot fast in case of any network outages.
The NetFlow Analyzer as a tool to monitor the network
Cisco NetFlow technology makes it possible to unravel vital information about your network's health with no additional investment. ManageEngine NetFlow Analyzer, by harnessing the NetFlow data exported from Cisco routers and switches, analyses and reports on vital parameters such as who the top talkers are, which applications are consuming the most bandwidth, and whether there have been any network attacks or unscrupulous access attempts.
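As a toy illustration of the kind of "top talkers" report described above, the Python sketch below totals bytes per source address over a handful of made-up flow records. The record format and numbers are assumptions for the example, not NetFlow Analyzer's actual data model.

```python
from collections import defaultdict

# Made-up flow records: (source address, destination address, bytes transferred).
flows = [
    ("10.1.1.5",  "10.2.0.9",  1_200_000),
    ("10.1.1.5",  "10.2.0.12",   800_000),
    ("10.1.3.20", "10.2.0.9",    350_000),
    ("10.1.7.8",  "10.2.0.30",    90_000),
]

bytes_by_source = defaultdict(int)
for src, _dst, nbytes in flows:
    bytes_by_source[src] += nbytes

# Report the top talkers, largest first.
for src, total in sorted(bytes_by_source.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{src:<12} {total / 1_000_000:.2f} MB")
```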
Armed with this information and with provisions to be proactively alerted upon any threshold violations, you can stay relaxed when it comes to your healthcare network. You bet! | <urn:uuid:8d47f439-3cc4-42c4-853b-51b6283b89f5> | CC-MAIN-2017-04 | https://www.manageengine.com/products/netflow/healthcare-network-bandwidth-monitoring-traffic-analyzis.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00195-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946592 | 798 | 2.75 | 3 |
http://www.theregister.co.uk/content/55/26049.html By John Leyden Posted: 04/07/2002 at 16:22 GMT

The Web is more vulnerable to attack now than at any time previously. That's the stark conclusion of Netcraft's latest monthly survey of Web servers, which expresses concern over the emergence of serious vulnerabilities in both Microsoft's IIS and Apache Web servers over the last month. These vulnerabilities create a situation where a majority of Internet sites are likely to be accessible to remote exploit, believes Netcraft, which is normally associated with alarmist predictions.

On June 11, Microsoft released a trio of advisories, the most serious of which referred to an HTR buffer overflow that could be used to remotely compromise machines running Microsoft-IIS. Although Netcraft cannot explicitly test for the vulnerability without prior permission from the sites, around half of the Microsoft-IIS sites on the Internet have the vulnerable HTR functionality enabled, making it likely that many will be vulnerable to attack.

Days later it was reported that many versions of the Apache Web server were vulnerable to a buffer overflow because of a flaw in the Web server's "Chunked Encoding" mechanism. If exploited, the flaw could lead to a remote system compromise, and exploits are already known to have been developed for Windows, FreeBSD and OpenBSD. There is an active debate on whether exploits are possible for Linux and Solaris.

Netcraft reports that Apache administrators have reacted quite quickly to the problem: within a week of first publication, well over 6 million sites had been upgraded to Apache/1.3.26, which addresses the problem. That still leaves around 14 million potentially vulnerable Apache sites, however.

Netcraft's report says: "With over half of the Internet's web servers potentially vulnerable, conditions are ripe for an epidemic of attacks against both Microsoft-IIS and Apache based sites, and the first worm, targeting sites running Apache on FreeBSD, has been spotted this weekend." Security watchers monitoring this worm believe its spread has been modest.

Aside from this welcome result, Netcraft notes (quite surprisingly) that worms can have positive effects. It says they draw administrators' attention to vulnerable servers, and - once patched - a server is usually no longer available as a platform for more insidious activity. Last year, immediately prior to the Code Red worm, Netcraft was finding that around one in six ecommerce sites running Microsoft-IIS taking a security test from Netcraft for the first time had already been successfully compromised, and had a backdoor giving an external attacker control over the machine. "The clear up from Code Red had the positive effect of flushing the majority of these backdoors out of the Internet," it notes.
(Originally published in Small Business Computing, March 11, 2009)
Technology insiders tend to throw around technical terms and business jargon, assuming people outside the industry understand what it all means. By its nature, technology vocabulary is often confusing and complicated, and insiders often add to the confusion by over-complicating things. To help add a sense of clarity to the confusion, each month, Laurie McCabe, a partner at Hurwitz & Associates (a business consulting firm), will pick a technology term, explain what it means in plain English, and then discuss why it may be important to you. Laurie kicks off her new column with a look at cloud computing.
What Is Cloud Computing?
Cloud computing is a computing model that lets you access software, server and storage resources over the Internet, in a self-service manner. Instead of having to buy, install, maintain and manage these resources on your own computer or device, you access and use them through a Web browser. Sometimes you might need to download a small piece of client code (i.e., software you install on your PC), but we’d still categorize that as cloud computing, because, for the most part, the real horsepower is supplied from the cloud.
At this point, many of you may be asking, I still don’t get why they call this “cloud” computing — why not Internet computing? The answer is that techies have long used cloud icons to represent the data centers, technologies, infrastructure and services that comprise the Internet — and the metaphor has stuck.
You can perform just about any computing task in the cloud. It’s likely that you already use several cloud solutions. For example, software-as-a-service (SaaS) or on demand business applications, such as salesforce.com, Intuit QuickBooks Online or Citrix GoToMeeting are cloud applications; you access them from your Web browser, but the software, processing power and storage reside in the cloud.
Free Web services — such as Google Gmail or Microsoft Hotmail, or FaceBook and Twitter, for that matter — are also examples of cloud computing. Likewise, if you use online backup solutions, you’re storing your files in the cloud. And many managed services providers supply services such as network and security monitoring over the Internet. Another example is Amazon.com, which sells access to CPU cycles and storage as a service of its cloud infrastructure.
Why Should You Care?
Most small businesses simply don’t have the time, expertise or money necessary to buy, deploy and manage the computing infrastructure needed to run these solutions on their own. Cloud computing shields you from these complexities. As a user, you see only the self-service interface to the computing resources you need. And, you can expand or shrink services as your needs change.
Instead of laying out capital to buy hardware and software, you rent what you need, usually either on a subscription basis, or on a utility pay-as-you go model. Many cloud computing vendors offer free services. Some, like Google and Yahoo, monetize free offerings through ad revenues. Other vendors make money by selling optional, integrated fee-based services alongside the freebies — a model that is gaining momentum. A few to check out in this category include the following:
* SmartRecruiters (www.smartrecruiters.com), which offers a free applicant tracking system for recruiters in small and medium businesses.
* FreshBooks (www.freshbooks.com), which has free invoicing, expense reporting and time tracking solutions for freelancers and small businesses.
* Demandbase’s (www.demandbase.com) freebie service is Demandbase Stream, which provides information about who’s visiting your Web site, search terms they use, and pages they looked at.
What to Consider
Behind the scenes, cloud computing vendors have to do a lot of work to manage all of the infrastructure, technology and people that make this possible. To provide services easily, flexibly and profitably to thousands or even millions of users, they invest heavily in hardware, virtualization technologies, networking infrastructure and automation capabilities (any one of which would need its own article to fully explain).
There are thousands of cloud computing vendors and solutions out there. But they are not all created equal — and neither are your needs in any given solution area. Think about how critical a particular function is to your business. What would happen if you couldn’t access data or use the application for a period of time? For instance, a small business needs higher service levels for an accounting solution than a freelancer requires for expense tracking. Before moving beyond a trial service, consider your needs for reliability, security, performance and support — and then look at how well a vendor can meet them.
Cloud computing providers should provide details about how they protect data and ensure regulatory compliance, and they should explain their policies to provide you with your data if you decide to terminate the service or if they go out of business. If you pay for a service, you should also get a service level agreement (SLA) from the cloud vendor. The SLA documents service requirements, supplies ongoing metrics to ensure these requirements are met, and provides remuneration should the vendor fail to deliver on the agreed metrics.
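To make the SLA discussion concrete, here is a small worked example (with invented uptime levels) converting an SLA's promised availability into the maximum downtime it allows per month:

```python
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

# Invented availability levels a vendor's SLA might promise.
for availability in (0.99, 0.999, 0.9999):
    allowed_downtime = MINUTES_PER_MONTH * (1 - availability)
    print(f"{availability:.2%} uptime -> up to {allowed_downtime:.0f} minutes of downtime per month")
```

Whether 43 minutes or 4 minutes of monthly downtime is acceptable depends on how critical the function is to your business, which is exactly the judgment the preceding paragraphs ask you to make.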
As a fundamental practice in protecting sensitive data, encryption has long been a focal point in cyber security development and implementation. However, in light of recent news on government surveillance efforts from agencies, as well as cyber espionage attempts by foreign governments, data encryption has been getting a lot of attention.
When effectively applied, strong encryption algorithms are trusted to keep prying eyes off of meaningful data. However, companies and government agencies continue to struggle to ensure their data is being adequately protected. Furthermore, cloud providers wishing to do business with the federal government often find themselves unable to offer the assurance that their encryption methods are up to the task.
There are a number of roadblocks to reliable encryption, but these challenges are some of the most common. And fortunately, each one has a viable solution.
1. Choosing the right configuration for encryption. According to Johns Hopkins cryptography researcher Matthew Green, many organizations rely on SSL to encrypt sensitive data. And although this protocol is effective, its keys are comparatively small and vulnerable to interception. Green suggests that certain configurations for SSL, such as DHE and ECDHE, can more effectively protect against successful decryption than the RSA configuration (a quick way to check what a server actually negotiates is sketched after this list).
2. Covering the full lifecycle. For complete protection against surveillance, data needs to be encrypted not just during transfer, but also when it’s at rest and when it’s accessed by applications. Successfully managing encryption across the full lifecycle can mean rewriting software, planning cross-jurisdiction governance and adding processes. Organizations handling the full lifecycle of their data need to invest in methods to ensure encryption at all stages.
3. Key management. The high volume and wide distribution that organizations manage results in the generation of a large number of keys, which need effective management. In addition to wide-ranging governance considerations, this calls for consistent training to ensure privacy at all times.
4. Encryption in the cloud. Organizations employing cloud services for data management have additional challenges in applying encryption since the cloud service provider holds the key to that encrypted data. Solutions for using encryption in the cloud exist. However, getting everything right in the implementation of these solutions is best left to a cloud security expert.
5. FedRAMP. Cloud service providers looking to work with federal government agencies have a particular interest in meeting encryption standards, which by FedRAMP regulations means FIPS 140-2 validation. Since the inception of FedRAMP, this standard has been a major hurdle for many CSPs. However, specialists in the area of FedRAMP compliance have been successful in helping many CSPs gain certification.
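Relating to item 1 above, the hedged Python sketch below uses the standard library's `ssl` module to report which protocol version and cipher suite a server actually negotiates, so an operator can confirm that a forward-secret key exchange such as ECDHE is in use. The host name is only a placeholder.

```python
import socket
import ssl

HOST = "example.com"  # placeholder; substitute a server you operate

context = ssl.create_default_context()

with socket.create_connection((HOST, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        cipher_name, _protocol, bits = tls.cipher()
        print("negotiated protocol:", tls.version())
        print("negotiated cipher:  ", cipher_name, f"({bits}-bit)")
        # ECDHE/DHE in the cipher name indicates an ephemeral (forward-secret) key exchange.
        print("ephemeral key exchange:", "ECDHE" in cipher_name or "DHE" in cipher_name)
```

Note that this is only a spot check of one connection; a full assessment would enumerate every suite the server is willing to accept.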
For cloud services providers, organizations entrusting their data to the cloud, or organizations managing their own end-to-end encryption, Lunarline helps effectively implement encryption across the entire lifecycle. For more information about our products and services, visit Lunarline.com or contact us today. | <urn:uuid:ebd28a53-f073-48f2-912f-fea9e4235fa1> | CC-MAIN-2017-04 | https://lunarline.com/blog/2015/09/enterprise-encryption-roadblocks-opportunities/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00407-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948866 | 585 | 2.59375 | 3 |
What are Certificate Revocation Lists (CRLs)?
A certificate revocation list (CRL) is a list of certificates that have been revoked before their scheduled expiration date. There are several reasons why a certificate might need to be revoked and placed on a CRL. For instance, the key specified in the certificate might have been compromised or the user specified in the certificate may no longer have authority to use the key. For example, suppose the user name associated with a key is "Alice Avery, Vice President, Argo Corp." If Alice were fired, her company would not want her to be able to sign messages with that key, and therefore the company would place the certificate on a CRL.
When verifying a signature, one examines the relevant CRL to make sure the signer's certificate has not been revoked. Whether it is worth the time to perform this check depends on the importance of the signed document. A CRL is maintained by a CA, and it provides information about revoked certificates that were issued by that CA. CRLs only list current certificates, since expired certificates should not be accepted in any case: when a revoked certificate's expiration date occurs, that certificate can be removed from the CRL.
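As a hedged sketch of that revocation check, the Python below uses the third-party `cryptography` package (version 3.1 or later assumed) to load a PEM-encoded CRL and test whether a particular certificate's serial number appears on it. The file name and serial number are placeholders, and a production verifier would also validate the CRL's signature, issuer and freshness.

```python
from cryptography import x509

# Placeholder inputs: a CRL published by the CA and the serial number
# read from the signer's certificate.
with open("ca_crl.pem", "rb") as fh:
    crl = x509.load_pem_x509_crl(fh.read())

signer_serial = 0x1A2B3C4D  # placeholder serial number

revoked_serials = {entry.serial_number for entry in crl}
if signer_serial in revoked_serials:
    print("Certificate has been revoked - reject the signature.")
else:
    print("Certificate is not listed on this CRL.")
```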
CRLs are usually distributed in one of two ways. In the ``pull'' model, verifiers download the CRL from the CA, as needed. In the ``push'' model, the CA sends the CRL to the verifiers at regular intervals. Some systems use a hybrid approach where the CRL is pushed to several intermediate repositories from which the verifiers may retrieve it as needed.
Although CRLs are maintained in a distributed manner, there may be central repositories for CRLs, such as network sites containing the latest CRLs from many organizations. An institution like a bank might want an in-house CRL repository to make CRL searches on every transaction feasible. The original CRL proposals often required a list, per issuer, of all revoked certificates; new certificate revocation methods (for example, in X.509 version 3; see Question 5.3.2) are more flexible.
Just like professionals, more and more students are taking their own devices with them. BYOD, a trend in the business world, is also gaining popularity in schools.
With limited budgets, many elementary and high schools around the world are allowing student devices in the classroom. For them, it’s a means to help ensure access to computing devices for as many students as possible.
Proponents of the trend say it helps increase classroom collaboration and participation and help students better prepare for the future. Not all schools are on board, however. Concerns about technology being a distraction in the classroom and devices highlighting economic differences between students are also very real.
On college and university campuses, BYOD has already been established for years. For higher education, students using their own devices is the expectation rather than the exception, as the majority of students’ coursework is conducted digitally. Teamwork is also highly emphasized in a college setting, and students use their own devices to collaborate together on shared projects.
Whether schools have bring-your-own-device policies yet or not, one thing is for certain: Devices are the future, and students will only acquire more of them. According to an F-Secure survey, 60% of children under 12 already own at least one mobile device with Internet access. For teenagers and college students, the numbers are decidedly higher.
BYOD in schools has undeniably gotten a push forward from the cloud. With cloud-based applications that work across devices and platforms, there’s no need to load specific software onto every student’s computer. Students can access the application from any computer or device with Internet access. | <urn:uuid:fea6d22f-9c67-4608-bd8e-42b808211fb3> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2014/08/26/more-students-bringing-mobile-devices-to-class/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00067-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.970487 | 333 | 3.0625 | 3 |
Energy consumption accounts for a large portion of datacenter operating budgets. The added environmental impact compounded with capital costs has motivated operators to increase the efficiency and lower the cost of computing. Last year, Facebook spun up the Open Compute Project, with the goal of improving datacenter designs to meet those goals.
So far, the project has produced a number of components including a server chassis, a battery cabinet and two x86 motherboards. All of the designs are freely available to download from their website.
To rate efficiency, datacenters calculate their power usage effectiveness (PUE) ratio, which compares the total power entering the facility with the energy actually used by the computing equipment. One estimate puts the average datacenter PUE at 1.8. Larger Internet players have used some unique solutions to bring that number closer to the 1.0 mark, with Google currently averaging a 1.14 PUE.
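In other words, PUE is simply total facility energy divided by the energy delivered to the IT equipment. A trivial Python illustration follows; the numbers are invented to match the figures quoted in this article.

def pue(total_facility_kwh, it_equipment_kwh):
    # PUE = total facility energy / IT equipment energy; 1.0 is the ideal.
    return total_facility_kwh / it_equipment_kwh

print(pue(1800, 1000))   # 1.8, roughly the industry average cited above
print(pue(1070, 1000))   # 1.07, the Prineville figure at full load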
Prineville, Oregon is home to one of the world’s most efficient datacenters. Built by Facebook, the center utilizes designs from the Open Compute Project as well as innovative LED lighting and gray water facilities to receive a 1.07 PUE rating at full load.
Ken Pratchett, manager of the Prineville data center, discussed the cost and efficiency of the Open Compute servers: "These machines are 38 percent more efficient than another machine that you could find on the open market," he said. "In fact, they cost 24 percent less to create."
The combination of green technologies implemented at the Oregon datacenter earned it a Gold Certification from the US Green Building Council’s Leadership in Energy and Environmental Design (LEED). Compared to other datacenters built to code, it consumed 52 percent less energy and 72 percent less water for occupant use. It also recycles captured water for landscape irrigation.
Next month, the Open Compute Project will hold a summit in San Antonio. The two-day event will include the following workshops:
- Mechanical design and modular power distribution
- Defining open systems management for the enterprise
- New initiatives; building for different geographies
- New hardware and building for the 100 year standard
- Pushing the limits of connectivity, software and hardware modularity
The consumer-driven demand for computer technology has spurred the need for more and larger datacenters, resulting in increased energy consumption and higher operational costs. Examples like the one in Prineville not only lead to innovation in the field of green datacenter technology, but also provide a framework for cost-effective operations as well. | <urn:uuid:315025cb-e880-4df4-9cd7-74e44196bade> | CC-MAIN-2017-04 | https://www.hpcwire.com/2012/04/26/facebook_showcases_green_datacenter/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00067-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.926628 | 530 | 2.71875 | 3 |
To a large extent, our fears define us. Our earliest bipedal ancestors probably mostly had fears about ingestion -- either being eaten or not having enough to eat. The literature and art of the Victorians, as Julie Wosk notes in Breaking Frame: Technology and the Visual Arts in the Nineteenth Century, reflected a popular psychosis fixated on fears of being blown up by misengineered technology, being accelerated half out of one's mind on a train or being maimed when a train went off the tracks.
The fears of the information age, however, are different. Many of us tremble at a looming Malthusian wall of ignorance. Thomas Malthus was an 18th century Anglican curate, demographer and political scientist who observed that the population was growing much faster than the food supply and predicted an end state of famine and social unrest.
There are many neo-Malthusians in the "big data" ecosystem who fear that the volume of information that must be known is growing far faster than organizations' capacity to know. Failure to embrace new technologies and new information management practices is predicted to doom us to a form of cognitive starvation.
This story, "Managing the fears that define the information age" was originally published by Computerworld. | <urn:uuid:b9a41207-216a-48de-9657-e39bd3a70371> | CC-MAIN-2017-04 | http://www.itworld.com/article/2734204/it-management/managing-the-fears-that-define-the-information-age.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.60/warc/CC-MAIN-20170116095120-00553-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957511 | 260 | 2.84375 | 3 |
Cyber-attacks in the healthcare environment are on the rise, with recent research suggesting that critical healthcare systems could be vulnerable to attack.
In general, the healthcare industry is proving lucrative for cybercriminals because medical data can be used in multiple ways, for example fraud or identity theft. This personal data often contains information regarding a patient's medical history, which could be used in targeted spear-phishing attacks.
Dangerous attacks – what are the risks?
Cybercriminals have found medical data to be far more lucrative than credit card fraud or other online scams. This is because medical information contains everything from a patient's medical history to their medical prescriptions, and hackers are able to access this data via network-connected medical devices, now standard in hi-tech hospitals. This is opening up new possibilities for attackers to breach a hospital or a pharmaceutical company's perimeter defences. If a device is connected to the internet and left vulnerable to attack, an attacker could remotely connect to it and use it as a gateway for attacking the rest of the network.
The danger is that, because most of these devices are not on segregated networks and are directly connected to other medical computers or life-critical medical hardware, attackers could make their way to servers or databases housing sensitive and confidential patient records. Furthermore, whilst accessing medical data is a serious concern, there's also the risk of tampering with medical equipment that's keeping patients properly medicated. In this case, it is likely that future cyber-attacks could lead to the loss of human life.
The healthcare security spend – how much is enough?
Despite increasing attacks on healthcare organisations, 10 per cent or less of IT spend is put towards security, leading many recent reports to suggest that healthcare organisations are not taking the security of patients seriously. However, while 10 per cent may seem small, healthcare organisations usually have large budgets, which means this could represent a lot more than what a small or medium-sized company would allocate towards security.
What is more of a concern is that, while organisations continue to put pressure on healthcare companies to secure patient data, 87 per cent of healthcare organisations are still leaving data at risk. Until now, these organisations have focused on investing in quality services, medication and personnel; protecting a patient's medical data should be met with the same level of interest and investment. With the growing number of implantable and internet-connected medical devices, medical organisations need to account for the fact that, in a cybercriminal's control, such devices could also be used to end life, not only protect it.
Securing the data: Keeping the attackers at bay
The majority of healthcare organisations have repeatedly been shown to neglect basic security practices, such as disabling concurrent logins across multiple devices, enforcing strong authentication, and isolating critical devices and the servers that store medical data from direct internet connections. Organisations must start by fixing these shortcomings.
Furthermore, healthcare companies should implement security policies, and invest in Intrusion Detection Systems, access control lists and even regular pen-testing drills for identifying network, software and procedural issues.
Going forward, it is vital for companies to invest in training personnel to correctly identify security threats, as they’re usually the ones most prone to social engineering techniques or spear-phishing attacks. Healthcare professionals that handle medical equipment should be trained and instructed on best security practices and medical devices security, as they could be directly responsible for a potential security breach or patient-related issues caused by mishandling such hardware or software. | <urn:uuid:cf8b487f-e77f-4b28-87a3-ab6da3bc299e> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2016/06/23/hackers-targeting-healthcare-industry/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00095-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948773 | 716 | 3.28125 | 3 |
What You'll Learn
- Distinguish Korn Shell and Bash Shell specific features
- Use utilities such as sed and awk to manipulate data
- Understand system shell scripts, such as /etc/shutdown
- Write useful shell scripts to aid system administration
- AIX Basics (AU131) or (AU130) or (Q1323) or
- Linux Basics and Installation (LX020) or (QLX02) or
- Linux Basics and Installation (QLXA2) or
- Linux Basics and Installation - Lite (QLXL3)
- Understand the programming fundamentals of variables and flow control concepts, such as repetition and decision, or
- Have working knowledge of UNIX or Linux, including the use of the vi editor, manipulating files and directories, basic variables, piping and redirection, and the find and grep commands.
Who Needs To Attend
This intermediate course is for experienced system administrators, programmers, application developers, and end users. | <urn:uuid:86a106d0-e050-4e12-9fe4-9571618ce76d> | CC-MAIN-2017-04 | https://www.globalknowledge.com/ca-en/course/117791/korn-and-bash-shell-programming/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00003-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.810638 | 202 | 2.625 | 3 |
Here's a simple VBScript that uses the same speech method to hear how speech-enabled programs pronounce words. This is useful to determine how these programs will pronounce proper names.

sText = InputBox("Enter the text you want the computer to say.", "Text to Speech")
sText = Trim(sText)
If sText <> "" Then
    ' Speak whatever text was entered above
    Set sapi = CreateObject("sapi.spvoice")
    sapi.Speak sText
End If

To accomplish the same thing in PowerShell, use the following:

$Voice = New-Object -com SAPI.SpVoice
$Voice.Speak( "Keith Johnson" )

For example, if you enter my name as it's spelled (Jeff Guillet) you will hear how speech-enabled applications mispronounce my name. In the case of Exchange UM directory lookups, this is also how Exchange expects callers to pronounce my name to find a match. If you enter the phonetic spelling of my name (Jeff GheeA) you will hear it pronounced correctly.
By testing different phonetic spellings using these scripts, you can determine what to use for the msDS-PhoneticDisplayName attribute in Active Directory. | <urn:uuid:7a19fcd0-d180-478d-8ecb-96928c069864> | CC-MAIN-2017-04 | http://www.expta.com/2010/12/testing-speech-grammars.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00425-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.821477 | 241 | 2.546875 | 3 |
The World Wide Web Consortium (W3C) has completed a set of technical specifications that define how scripting programs interact with web pages. The development marks an important step toward interoperability on the web and is a sign of the web's growing maturity, according to an industry analyst.
The W3C recommended its Document Object Model (DOM) Level 3 Core and DOM Load and Save specifications. A recommendation means the consortium considers a specification stable and ready for use. DOM Levels 1 and 2 have already been recommended.
Together, the DOM specifications define the APIs (application programming interfaces) that programs use to access, manipulate and manage HTML and XML (Extensible Markup Language) documents. Their completion makes possible "more sophisticated and powerful combinations of scripting languages and XML documents and data, including the critical web services applications space", the W3C said.
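As a rough illustration of what these standardized APIs look like in practice, the short sketch below uses Python's built-in xml.dom.minidom module, which implements the W3C DOM interfaces; the XML snippet is invented for the example.

from xml.dom.minidom import parseString

doc = parseString("<catalog><item>Widget</item></catalog>")

# Read nodes through the standard DOM interfaces.
for item in doc.getElementsByTagName("item"):
    print(item.firstChild.data)

# Create and append a new element, then serialize the document.
new_item = doc.createElement("item")
new_item.appendChild(doc.createTextNode("Gadget"))
doc.documentElement.appendChild(new_item)
print(doc.toxml())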
The completion of the DOM specifications is a sign that the web has come of age, said Illuminata analyst Jonathan Eunice. In the 1990s, the development of competing web browsers from Microsoft and the Netscape led to incompatibilities in the way software programs interact with HTML and XML documents.
"Standardising the DOM solves one of the longest-standing and ugliest chapters of practical non-interoperability the web has seen. The Microsoft and Netscape/Mozilla camps built hugely incompatible implementations of how programs work with HTML and XML documents," he said.
"Today's W3C standardisation helps put the commonality back and reunify the web, the way it should be."
Some programmers will continue using browser-specific extensions they are familiar with, Eunice noted, and the web's incompatibilities will not disappear overnight. But the completed standards mean developers now have a way to write compatible code and to fix incompatible code already in use.
The W3C's work on the DOM specifications began in 1997 and involved more than 20 organisations, including IBM, Macromedia, Sun Microsystems, Microsoft, the National Institute of Standards and Technology (NIST) and the Object Management Group.
The DOM test suites have been updated to include the new specifications and developers can start using them immediately. More information is available at http://www.w3.org/DOM/
James Niccolai writes for IDG News Service | <urn:uuid:b0ae1118-2453-4ab9-a6c9-cce22f0cb6d5> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/2240055538/W3C-completes-web-scripting-specs | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00057-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.875309 | 499 | 2.75 | 3 |
THE WORLD OF WIRELESS applications and technologies, and its alphabet soup of acronyms, can be a confusing place. The bets your company makes on wireless technology will likely depend on where you work; they could also depend on how many different technologies your customers require you to support. Here they are.
3G (third generation) An industry term used to describe the next, still-to-come generation of wireless applications. It represents a move from circuit-switched communications (where a device user has to dial in to a network) to broadband, high-speed, packet-based wireless networks (which are always "on"). The first generation of wireless communications relied on analog technology (see Analog), followed by digital wireless communications. The third generation expands the digital premise by bringing high-speed connections and increasing reliability.
802.11 A family of wireless specifications developed by a working group of The Institute of Electrical and Electronics Engineers. These specifications are used to manage packet traffic over a network and ensure that packets do not collide--which could result in loss of data--while traveling from their point of origin to their destination (that is, from device to device).
AMPS (advanced mobile phone service) A term used for analog technologies, the first generation of wireless technologies.
Analog Radio signals that are converted into a format that allows them to carry data. While cellular phones and other wireless devices still use analog in geographic areas where there is little or no coverage by digital networks, analog will eventually give way to faster digital networks, analysts say.
Bandwidth The size of a network "pipe" or channel for communications in wired networks. In wireless, it refers to the range of available frequencies that can carry a signal.
BlackBerry Two-way wireless device, made by Waterloo, Ontario-based Research in Motion, that allows users to check e-mail and voice mail (translated into text), as well as page other users via a wireless network service. Also known as a RIM device, it has a miniature qwerty keyboard for users to type their messages. It uses the SMS protocol (see SMS). BlackBerry users must subscribe to a wireless service that allows for data transmission.
Bluetooth A short-range wireless specification that allows for radio connections between devices within a 30-foot range of each other. The name comes from 10th-century Danish King Harald Blåtand (Bluetooth), who unified Denmark and Norway.
CDMA (code division multiple access) U.S. wireless carriers, such as Sprint PCS and Verizon, use CDMA to allocate bandwidth for users of digital wireless devices. CDMA distinguishes between multiple transmissions carried simultaneously on a single wireless signal. It carries the transmissions on that signal, freeing network room for the wireless carrier and providing interference-free calls for the user. Several versions of the standard are still under development. CDMA promises to open up network capacity for wireless carriers and improve the quality of wireless messages and users’ access to the wireless airwaves. It’s an alternative to GSM, which is popular in Europe and Asia (see GSM).
CDPD (cellular digital packet data) Telecommunications companies can use CDPD to transfer data on unused cellular networks to users. If one section, or "cell," of the network is overtaxed, CDPD automatically allows for the reallocation of resources.
Cellular Technology that sends analog or digital transmissions from transmitters that have areas of coverage called cells. As a user of a cellular phone moves between transmitters from one cell to another, the user’s call travels from transmitter to transmitter uninterrupted.
Circuit switched Used by wireless carriers, this method lets a user connect to a network or the Internet by dialing in, such as with a traditional phone line. It’s a dial-in Internet service provider for wireless device users. Circuit-switched connections can be slow and unreliable compared with packet-switched networks, but for now circuit-switched networks are the primary method of Internet and network access for wireless users in the United States (see Packet-switched network).
Dual-band mobile phone Phones that support both analog and digital technologies by picking up analog signals when digital signals fade. Most mobile phones are not dual-band.
EDGE (enhanced data GSM environment) A faster version of the GSM standard. It is faster than GSM because it can carry messages using broadband networks that employ more bandwidth than standard GSM networks (see GSM).
FDMA (frequency division multiple access) An analog standard that lets multiple users access a group of radio frequency bands and eliminates interference of message traffic.
Frequency hopping spread spectrum A method by which a carrier spreads out packets of information (voice or data) over different frequencies. For example, a phone call is carried on several different frequencies so that when one frequency is lost another picks up the call without breaking the connection.
GPRS (general packet radio service) A technology that sends packets of data across a wireless network at speeds of up to 114Kbps. It is a step up from the circuit-switched method; wireless users do not have to dial in to networks to download information. With GPRS, wireless devices are always on--they can receive and send information without dial-ins. GPRS is designed to work with GSM.
GSM (global system for mobile communications) A standard for how data is coded and transferred through the wireless spectrum. The European wireless standard also used in Asia, GSM is an alternative to CDMA. GSM digitizes and compresses data and sends it down a channel with two other streams of user data. The standard is based on time division multiple access (see TDMA).
I-Mode A wildly popular service in Japan for transferring packet-based data to handheld devices. I-Mode is based on a compact version of HTML and does not use WAP (see WAP), setting it apart from other widely used transmission methods. I-Mode’s creator, NTT DoCoMo of Tokyo, agreed in November 2000 to pay $9.8 billion to buy 16 percent of AT&T Wireless. Since then, AT&T Wireless has talked about bringing I-Mode to the United States by the end of 2001--a daunting prospect that requires the rebuilding of U.S. wireless networks, analysts say. DoCoMo is developing a version of I-Mode that supports the WAP standard.
Integrated Digital Enhanced Network (iDEN) A technology that allows users to access phone calls, two-way radio transmissions, paging and data transmissions from one wireless device. Developed by Motorola, iDEN is based on TDMA. Services based on the technology are available in North America (offered by Nextel), South America and parts of Asia (see TDMA).
Kbps (kilobits per second) A measurement of bandwidth in the United States.
Packet A chunk of data that is sent over a network, whether it’s the Internet or wireless network. Packet data is the basis for packet-switched networks, which are under development in the United States as a faster, more reliable method of transferring wireless data than a circuit-switched network. Packet-switched networks eliminate the need to dial in to send or receive information because they are "always on," transferring data without the need to dial. The packets that hold data depend on the size of the data involved; "chunks" are broken down into an efficient size for routing. Each of these packets has a separate number and carries the Internet address for which it is destined.
Packet-switched network Networks that transfer packets of data (see Packet).
PCS (personal communications services) An alternative to cellular, PCS works like cellular technology because it sends calls from transmitter to transmitter as a caller moves. But PCS uses its own network, not a cellular network, and offers fewer "blind spots"--areas in which access to calls is not available--than cellular. PCS transmitters are generally closer together than their cellular counterparts.
PDA (personal digital assistant) Mobile, handheld devices--such as the Palm series and Handspring Visors--that give users access to text-based information. Users can synchronize their PDAs with a PC or network; some models support wireless communication to retrieve and send e-mail and get information from the Web.
Radio frequency devices These devices use radio frequencies to transmit data. One typical use: a bar code scanner gathers information about products in stock or ready for shipment in a warehouse or distribution center and sends them to a database or ERP system.
Satellite phone Phones that connect callers via satellite. The idea behind a satellite phone is to give users a worldwide alternative to sometimes unreliable digital and analog connections. So far, such services have proven very costly and have appealed to few users aside from, for example, the crews at deep-sea oil rigs with phones configured to connect to a satellite service.
Smart phone A combination of a mobile phone and a PDA, smart phones allow users to converse as well as perform tasks, such as accessing the Internet wirelessly and storing contacts in databases. Smart phones have a PDA-like screen. As smart phone technology matures, some analysts expect these devices to prevail among wireless users. A PDA equipped with an Internet connection could be considered a smart phone. Ericsson, Nokia and Motorola also make smart phones.
SMS (short messaging service) A service through which users can send text-based messages from one device to another (see BlackBerry). The message--up to 160 characters--appears on the screen of the receiving device. SMS works with GSM networks.
TDMA (time division multiple access) This protocol allows large numbers of users to access one radio frequency by allocating time slots for use to multiple voice or data calls. TDMA breaks down data transmission, such as a phone conversation, into fragments and transmits each fragment in a short burst, assigning each fragment a time slot. With a cell phone, the caller would not detect this fragmentation. Whereas CDMA (which is used more frequently in the United States) breaks down calls on a signal by codes, TDMA breaks them down by time. The result in both cases: increased network capacity for the wireless carrier and a lack of interference for the caller. TDMA works with GSM and digital cellular services.
WAP (wireless application protocol) WAP is a set of protocols that lets users of mobile phones and other digital wireless devices access Internet content, check voice mail and e-mail, receive text of faxes and conduct transactions. WAP works with multiple standards, including CDMA and GSM. Not all mobile devices support WAP, but IDC (a sister company to CIO’s publisher, CXO Media) projects that more than 1.3 billion wireless Internet users will have WAP-capable devices in their hands by 2004.
WASP (wireless application service provider) These vendors provide hosted wireless applications so that companies will not have to build their own sophisticated wireless infrastructures. Vendors include Etrieve and Wireless Knowledge.
WCDMA (wideband CDMA) A third-generation wireless technology under development that allows for high-speed, high-quality data transmission. Derived from CDMA, WCDMA digitizes and transmits wireless data over a broad range of frequencies. It requires more bandwidth than CDMA but offers faster transmission because it optimizes the use of multiple wireless signals--not just one, as with CDMA.
Wireless LAN It uses radio frequency technology to transmit network messages through the air for relatively short distances, like across an office building or college campus. A wireless LAN can serve as a replacement for or extension to a wired LAN.
Wireless spectrum A band of frequencies where wireless signals travel carrying voice and data information. Wireless carriers are bidding at Federal Communications Commission auctions on slivers of airwaves through which they will ultimately be able to send third-generation communications. The auctions, which began in December 2000 in the United States and already occurred in several European nations, will give providers access to new pieces of the spectrum that will allow them to move to third-generation services. More auctions relevant to 3G communications are on tap (see 3G).
WISP (wireless Internet service provider) A vendor that specializes in providing wireless Internet access.
Z! Because we promised you a list from A to Z. | <urn:uuid:601990f4-5af5-44ba-8e90-78d867bc25b3> | CC-MAIN-2017-04 | http://www.cio.com/article/2441728/mobile/glossary--how-to-speak-wireless.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00085-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.916019 | 2,572 | 2.890625 | 3 |
Agency: Cordis | Branch: FP7 | Program: CP-FP | Phase: ENV.2010.1.1.5-1 | Award Amount: 4.82M | Year: 2011
Increases of atmospheric CO2 and associated decreases in seawater pH and carbonate ion concentration this century and beyond are likely to have wide impacts on marine ecosystems including those of the Mediterranean Sea. Consequences of this process, ocean acidification, threaten the health of the Mediterranean, adding to other anthropogenic pressures, including those from climate change. Yet in comparison to other areas of the world ocean, there has been no concerted effort to study Mediterranean acidification, which is fundamental to the social and economic conditions of more than 130 million people living along its coastlines and another 175 million who visit the region each year. The MedSeA project addresses ecologic and economic impacts from the combined influences of anthropogenic acidification and warming, while accounting for the unique characteristics of this key region. MedSeA will forecast chemical, climatic, ecological-biological, and socio-economical changes of the Mediterranean driven by increases in CO2 and other greenhouse gases, while focusing on the combined impacts of acidification and warming on marine shell and skeletal building, productivity, and food webs. We will use an interdisciplinary approach involving biologists, earth scientists, and economists, through observations, experiments, and modelling. These experts will provide science-based projections of Mediterranean acidification under the influence of climate change as well as associated economic impacts. Projections will be based on new observations of chemical conditions as well as new observational and experimental data on the responses of key organisms and ecosystems to acidification and warming, which will be fed into existing ocean models that have been improved to account for the Mediterraneans fine-scale features. These scientific advances will allow us to provide the best advice to policymakers who must develop regional strategies for adaptation and mitigation.
Agency: Cordis | Branch: FP7 | Program: CP | Phase: ENV.2012.6.2-3 | Award Amount: 12.05M | Year: 2012
The objectives are to: (i) improve our understanding of human activities' impacts (cumulative, synergistic, antagonistic) and of variations due to climate change on marine biodiversity, using long-term series (pelagic and benthic); this objective will identify the barriers and bottlenecks (socio-economic and legislative) that prevent GES from being achieved; (ii) test the indicators proposed by the EC, and develop new ones for assessment at species, habitats and ecosystems level, for the status classification of marine waters, integrating the indicators into a unified assessment of the biodiversity and the cost-effective implementation of the indicators (i.e. by defining monitoring and assessment strategies); this objective will allow for adaptive management including (a) strategies & measures, (b) the role of industry and relevant stakeholders (including non-EU countries), and (c) an economic assessment of the consequences of the management practices proposed, and it will build on the extensive work carried out by the Regional Seas Conventions (RSC) and the Water Framework Directive, in which most of the partners have been involved; (iii) develop/test/validate innovative integrative modelling tools to further strengthen our understanding of ecosystem and biodiversity changes (space & time); such tools can be used by statutory bodies, SMEs and marine research institutes to monitor biodiversity, applying both empirical and automatic data acquisition. This objective will demonstrate the utility of innovative monitoring systems capable of efficiently providing data on a range of parameters (including those from non-EU countries), used as indicators of GES, and for the integration of the information into a unique assessment. The consortium has 23 partners, including 4 SMEs (close to 17% of the requested budget) and 2 non-EU partners (Ukraine & Saudi Arabia). Moreover, an Advisory Board (RSC & international scientists) has been designed to ensure a good relationship with stakeholders.
Agency: Cordis | Branch: FP7 | Program: CP | Phase: ENERGY.2009.3.2.2 | Award Amount: 4.08M | Year: 2010
The Biowalk4Biofuels Project aims to develop an alternative and innovative system for biowaste energy recovery and use of GHG emissions to produce biofuels, using macroalgae as a catalyser, in a multidisciplinary approach. The objectives of the project are: production of a cost-efficient biogas without using cereal crops; optimisation of the production of biogas per amount of biowaste and CO2 used, with low land use for plant facilities; and increasing and optimising the types of biowastes that can be utilised for biogas production. To achieve the outlined objectives, research activities are to be carried out on the selection of adequate macroalgae species that can reach high output biomass yields and high carbohydrate content. Pre-cultivation of protoplasts will make it possible to obtain easily available biomass for feeding the open floating cultivation ponds within shorter periods, thanks to the rapid proliferation of germplasm, shortening the life-cycle of macroalgae. In addition, the relationship between the growth and energy potential of selected species and the amounts/characteristics of GHG emissions and biowaste introduced in the cultivation medium is to be studied. After fermenting the algal biomass and other biowastes, the cycle is closed by producing biogas to be used for electricity and heat generation and as a transport fuel. A high quality biogas is expected, hence a purification step will precede delivery of the final product. Furthermore, organic residues output from the methanation biodigester are to be used as fertilizer after solid/liquid separation. The liquid fraction of the digestate will be treated in a biological oxidation system. A portion of the unseparated outlet effluent from the oxidation system (solids/liquid) will be fed to the macroalgae cultivation (instead of enrichment with chemical N-P-K fertilizers). Meanwhile, the other portion will be reused as feed for the AD plant section. This process solution will permit the biodigester to be fed with several critical biowastes, transforming them into a resource. The expected impact is to produce a cost-efficient, low energy-intensive, purified biogas, and to reduce negative environmental impacts from industry (GHG emissions) and biowaste. The multidisciplinary approach aims to reduce GHG emissions and process biowaste while producing energy, and to enable future replication in other locations.
Agency: Cordis | Branch: FP7 | Program: CP-IP-SICA | Phase: OCEAN.2011-4 | Award Amount: 11.32M | Year: 2012
Environmental policies focus on protecting habitats valuable for their biodiversity, as well as producing energy in cleaner ways. The establishment of Marine Protected Area (MPA) networks and installing Offshore Wind Farms (OWF) are important ways to achieve these goals. The protection and management of marine biodiversity has focused on placing MPAs in areas important for biodiversity. This has proved successful within the MPAs, but had little impact beyond their boundaries. In the highly populated Mediterranean and the Black Seas, bordered by many range states, the declaration of extensive MPAs is unlikely at present, so limiting the bearing of protection. The establishment of MPAs networks can cope with this obstacle but, to be effective, such networks must be based on solid scientific knowledge and properly managed (not merely paper parks). OWF, meanwhile, must be placed where the winds are suitable for producing power, but they should not have any significant impact on biodiversity and ecosystem functioning, or on human activities. The project will have two main themes: 1 - identify prospective networks of existing or potential MPAs in the Mediterranean and the Black Seas, shifting from a local perspective (centred on single MPAs) to the regional level (network of MPAs) and finally the basin scale (network of networks). The identification of the physical and biological connections among MPAs will elucidate the patterns and processes of biodiversity distribution. Measures to improve protection schemes will be suggested, based on maintaining effective exchanges (biological and hydrological) between protected areas. The national coastal focus of existing MPAs will be widened to both off shore and deep sea habitats, incorporating them into the networks through examination of current legislation, to find legal solutions to set up transboundary MPAs. 2 - explore where OWF might be established, producing an enriched wind atlas both for the Mediterranean and the Black Seas. OWF locations will avoid too sensitive habitats but the possibility for them to act as stepping-stones through MPAs, without interfering much with human activities, will be evaluated. Socioeconomic studies employing ecosystem services valuation methods to develop sustainable approaches for both MPA and OWF development will also be carried out, to complement the ecological and technological parts of the project, so as to provide guidelines to design, manage and monitor networks of MPAs and OWF. Two pilot projects (one in the Mediterranean Sea and one in the Black Sea) will test in the field the assumptions of theoretical approaches, based on previous knowledge, to find emerging properties in what we already know, in the light of the needs of the project. The project covers many countries and involves researchers across a vast array of subjects, in order to achieve a much-needed holistic approach to environmental protection. It will help to integrate the Mediterranean and Black Seas scientific communities through intense collective activities, combined with strong communications with stakeholders and the public at large. Consequently, the project will create a permanent network of excellent researchers (with cross fertilization and further capacity building) that will also work together also in the future, making their expertise available to their countries and to the European Union.
Agency: Cordis | Branch: FP7 | Program: CP-IP-SICA | Phase: OCEAN.2011-3 | Award Amount: 16.99M | Year: 2012
The overall scientific objectives of PERSEUS are to identify the interacting patterns of natural and human-derived pressures on the Mediterranean and Black Seas, assess their impact on marine ecosystems and, using the objectives and principles of the Marine Strategy Framework Directive as a vehicle, to design an effective and innovative research governance framework based on sound scientific knowledge. Well-coordinated scientific research and socio-economic analysis will be applied at a wide-ranging scale, from basin to coastal. The new knowledge will advance our understanding on the selection and application of the appropriate descriptors and indicators of the MSFD. New tools will be developed in order to evaluate the current environmental status, by way of combining monitoring and modelling capabilities and existing observational systems will be upgraded and extended. Moreover, PERSEUS will develop a concept of an innovative, small research vessel, aiming to serve as a scientific survey tool, in very shallow areas, where the currently available research vessels are inadequate. In view of reaching Good Environmental Status (GES), a scenario-based framework of adaptive policies and management schemes will be developed. Scenarios of a suitable time frame and spatial scope will be used to explore interactions between projected anthropogenic and natural pressures. A feasible and realistic adaptation policy framework will be defined and ranked in relation to vulnerable marine sectors/groups/regions in order to design management schemes for marine governance. Finally, the project will promote the principles and objectives outlined in the MSFD across the SES. Leading research Institutes and SMEs from EU Member States, Associated States, Associated Candidate countries, non-EU Mediterranean and Black Sea countries, will join forces in a coordinated manner, in order to address common environmental pressures, and ultimately, take action in the challenge of achieving GES. | <urn:uuid:95fcc4b5-fa20-496b-923a-f0013be1bad8> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/consorzio-nazionale-interuniversitario-per-le-science-del-mare-1820113/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285001.96/warc/CC-MAIN-20170116095125-00387-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.91097 | 2,397 | 2.609375 | 3 |
Electronic Data Interchange (EDI): A pre-Internet system for exchanging data between organizations. EDI requires that organizations standardize terms and invest heavily in computers and the maintenance of the EDI software. Although some companies use EDI systems and will only phase them out slowly, EDI is being replaced by less expensive Internet systems and protocols like XML.
Related Terms: Data Warehouse, Database, ebXML (electronic business XML), Social Media Analytics
Sometimes, when a network is breached, when servers are compromised, or when unencrypted data is at risk, companies will get, or even seek, assistance from government offices. The nature of cybercrime points to the ways in which our digital architectures are interconnected – over the Internet, but also in terms of how sensitive information plays different roles in business and in civic life.
All this to say that leaders in the security community are always focusing on how to define threats, how to promote specific levels of response, and generally, how to more robustly protect systems.
With that in mind, it surprises some security-minded people to know that in some ways, the U.S. government and the Pentagon have not fully come to terms with the scope of cyberwarfare, and that key pieces of counter-cyber-espionage strategy are not yet in place.
At this late date, with the infamous DNC hack and big breaches of many Fortune-500 data systems, with the tech media fairly screaming about cybersecurity, the federal government still has no concrete idea of when a cybercrime constitutes an act of war.
The Cybercrime Controversy
This Slate piece by Fred Kaplan highlights some of the back-and-forth that has gone on over the issue, starting with queries from Robert Gates as Defense Secretary in 2006, and revealing a bit of dissembling on the part of the Pentagon Defense Science Board, along with implications of thorny questions such as how to create a “proportional response” or how to “expel” a piece of the malware as you could a human spy.
It also shows the limits of government involvement. Indeed, even common-sense federal protections to private infrastructure can easily be seen as “Orwellian” or as a government overreach.
However, steps to clarify something like a cyber act of war are unilateral, and therefore not so controversial. It seems likely that what has delayed the implementation of this type of standard is not so much dissent as simple procrastination.
Federal News Radio and other outlets have covered the investigation of Senator Mike Rounds (R-S.D.) into the issue, and a bill sponsored by Rounds, the Cyber Act of War Act of 2016, which was introduced in the House of Representatives in May. The bill still has to go through committee review, and a quick look at tracking site Congress.gov shows no action on the bill since its introduction.
Why is this Important for Private Businesses?
The less leaders address cybercrime and its corrosive effects on both business and civic life, the more businesses have to innovate and pioneer in the field of cyberdefense. In essence, a company is on its own to arm itself with what it needs to ward off hordes of hackers and assorted cybercriminals operating on a global network with few fences.
SentinelOne’s next-generation endpoint and server security tools anticipate this important work, and help to standardize the responses of enterprises. These versatile, proactive security tools are focused on the new perimeter – the endpoint – offering protection from unknown and zero-day attacks using automated behavior detection and machine learning. To what end? Using a heuristic model and machine learning principles, these resources promote threat visibility, where companies can see danger a mile away. Endpoint protection and related processes reduce dwell time, a term that has become something of a spine-shaking buzzword evoking unknown malice lurking in digital systems. There’s a real need for businesses to take those steps of initiative, to “expel” the attempts of hackers and keep a clean house, in an age when no place seems safe from cybercrime. | <urn:uuid:858ec36d-6691-4602-b3e7-4c37d577ca9c> | CC-MAIN-2017-04 | http://www.csoonline.com/article/3156464/security/cybercrime-not-an-act-of-war.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00260-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953676 | 748 | 2.59375 | 3 |
Although not usually covered in training materials, it is interesting to note where Ethernet originally came from. Like many of the early networking protocols, the principles of Ethernet were developed inside a corporation that was looking to solve a specific problem.
Xerox needed an effective way to allow a new invention, called the “personal computer”, to be connected in its offices. In 1973, at the Palo Alto Research Center, researcher Bob Metcalfe designed and tested the first Ethernet network. While working on a way to link Xerox’s Alto computer to a printer, Metcalfe developed the physical method of cabling that connected devices to each other on an Ethernet network.
From that, Ethernet was conceived. Eventually, Xerox teamed with Intel and Digital Equipment Corporation (DEC) to further develop Ethernet, so the original Ethernet became known as DIX Ethernet, referring to DEC, Intel, and Xerox.
The Institute of Electrical and Electronics Engineers (IEEE) took over the LAN standardization process in the early 1980s. And, since that time, the IEEE has defined many Ethernet standards to support the widely varying needs for building a LAN, such as the needs for different speeds, different cabling types, trading off distance requirements versus cost, and other factors.
Ethernet has since become the most widely accepted and deployed LAN network technology in the world. It has grown to encompass new technologies as computer networking has matured. However, the mechanics of operation for every Ethernet network today stem from Metcalfe’s original design.
The original Ethernet concept described communication over a single cable that is shared by all devices on the network. Once a device was attached to the cable, it had the ability to communicate with any other attached device. This process allows the network to expand to accommodate new devices without requiring any modification to those devices already on the network.
Author: David Stahl | <urn:uuid:5f1b331f-6667-41c9-b1f9-2f2f806ebff3> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2010/02/01/the-beginnings-of-ethernet/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00168-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.968505 | 387 | 3.953125 | 4 |
Before Windows was created, the most common operating system that ran on IBM PC compatibles was DOS. DOS stands for Disk Operating System, and it was what you would see when you started your computer, much as Windows is today. The difference was that DOS was not a graphical operating system but rather purely textual. That meant that in order to run programs or manipulate the operating system you had to manually type in commands. When Windows was first created, it was actually a graphical user interface created to make using the DOS operating system easier for a novice user. As time went on and newer versions of Windows were developed, DOS was finally phased out with Windows ME. Though the newer operating systems do not run on DOS, they do have something called the command prompt, which has a similar appearance to DOS. In this tutorial we will cover the basic commands and usage of the command prompt so that you feel comfortable using this resource.
When people refer to the command prompt, they may refer to it in different ways. They may refer to it as a shell, console window, a command prompt, a cmd prompt, or even DOS. In order to enter the command prompt you need to run a program that depends on your operating system. Below we list the programs that you need to run to enter a command prompt based on the version of Windows you are running.
|Windows 3.1, 3.11, 95, 98, ME||command.com||This program, when run, will open up a command prompt window providing a DOS shell.|
|Windows NT, 2000, XP, 2003||cmd.exe||This program will provide the native command prompt. What we call the command prompt.|
|Windows NT, 2000, XP, 2003||command.com||This program will open up an emulated DOS shell for backwards compatibility. Only use it if you must.|
To run these programs and start a command prompt you would do the following steps:
Step 1: Click on the Start Menu
Step 2: Click on the Run option
Step 3: Type the appropriate command in the Open: field. For example if we are using Windows XP we would type cmd.exe.
Step 4: Click on the OK button
After following these steps you will be presented with a window that look similar to Figure 1 below.
Figure 1. Windows Command Prompt
The command prompt is simply a window that by default displays the current directory, or in windows term a folder, that you are in and has a blinking cursor ready for you to type your commands. For example in Figure 1 above you can see that it says C:\WINDOWS>. The C:\WINDOWS> is the prompt and it tells me that I am currently in the c:\windows directory. If I was in the directory c:\program files\directory the prompt would instead look like this: C:\PROGRAM FILES\DIRECTORY>.
To use the command prompt you would type in the commands and instructions you want and then press enter. In the next section we will discuss some useful commands and how to see all available built in commands for the command prompt.
The command.com or cmd.exe programs have built in commands that are very useful. Below I have outlined some of the more important commands and further instruction on how to find information on all the available commands.
The Help command - This command will list all the commands built into the command prompt. If you would like further information about a particular command you can type help commandname. For example help cd will give you more detailed information on a command. For all commands you can also type the command name followed by a /? to see help on the command. For example, cd /?
The Exit command - This command will close the command prompt. Simply type exit and press enter and the command prompt will close.
The CD command - This command allows you to change your current directory or see what directory you are currently in. To use the CD command you would type cd directoryname and press enter. This would then change the directory you are currently in to the one specified. When using the cd command you must remember how paths work in Windows. A path to a file is always the root directory, which is symbolized by the \ symbol, followed by the directories underneath it. For example, the file notepad.exe, which is located in c:\windows\system32, would have the path \windows\system32\notepad.exe. If you want to change to a directory that is currently in your current directory you do not need the full path, but can just type cd directoryname and press enter. For example, if you are in a directory called c:\test, and there were three directories in the test directory called A, B, and C, you could just type cd a and press enter. You would then be in c:\test\a. If on the other hand you wanted to change your directory to the c:\windows\system32 directory, you would have to type cd \windows\system32 and press enter.
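For instance, using the hypothetical directories above, a short session might look like this:

C:\test> cd a
C:\test\a> cd \windows\system32
C:\WINDOWS\system32>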
The DIR command - This command will list the files and directories contained in your current directory, if used without an argument, or the directory you specify as an argument. To use the command you would just type dir and press enter and you will see a listing of the current files in the directory you are in, including information about their file sizes, date and time they were last written to. The command will also show how much space the files in the directory are using and the total amount of free disk space available on the current hard drive. If I typed dir \test I would see the contents of the c:\test directory as shown in Figure 2 below.
Figure 2. DIR of c:\test
If you examine the screen above you will see a listing of the directory. The first 2 columns are the date and time of the last write to that file. Followed by whether or not the particular entry is a directory or a file, then the size of the file, and finally the name of the file. You may have noticed that there are two directories named . and .., which have special meaning in operating systems. The . stands for the current directory and the .. stands for the previous directory in the path. In the example above, .. stands for c:\windows.
Also note for many commands you can use the * symbol which stands for wildcard. With this in mind, typing dir *.txt will only list those files that end with .txt.
The Copy command - This command allows you to copy files from one location to another. To use this command you would type
copy filetocopy copiedfile. For example if you have the file c:\test\test.txt and would like to copy it to c:\windows\test.txt you would type
copy c:\test\test.txt c:\windows\test.txt and press enter. If the copy is successful it will tell you so and give you back the prompt. If you are copying within the same directory you do not have to use the path. Here are some examples and what they would do:
|copy test.txt test.bak||Copies the test.txt file to a new file called test.bak in the same directory|
|copy test.txt \windows||Copies the test.txt file to the \windows directory.|
|copy * \windows||Copies all the files in the current directory to the \windows directory.|
The Move command - This command allows you to move a file from one location to another. Examples are below:
|move test.txt test.bak||Moves the test.txt file to a new file renaming it to test.bak in the same directory.|
|move test.txt \windows||Moves the test.txt file to the \windows directory.|
|move * \windows||Moves all the files in the current directory to the \windows directory.|
At this point you should use the help command to learn about the other available commands.
Redirectors are an important part of using the command prompt, as they allow you to manipulate how the output or input of a program is displayed or used. Redirectors are used by appending them to the end of a command, followed by what you are redirecting to. For example: dir > dir.txt. There are four redirectors that are used in a command prompt and they are discussed below:
|>||This redirector will take the output of a program and store it in a file. If the file exists, it will be overwritten. If it does not exist it will create a new file. For example the command dir > dir.txt will take the output of the dir command and place it in the dir.txt file. If dir.txt exists, it will overwrite it, otherwise it will create it.|
|>>||This redirector will take the output of a program and store it in a file. If the file exists, the data will be appended to the current data in the file rather than overwriting it. If it does not exist it will create a new file. For example the command dir >> dir.txt will take the output of the dir command and appends it to the existing data in the dir.txt file if the file exists. If dir.txt does not exist, it will create the file first.|
|<||This redirector will take the input for a program from a specified file. For example the date command expects input from a user. So if we had the command date < date.txt, it would take the input for the date program from the information contained in the date.txt file.|
||||This redirector is called a pipe. It will take the output of a program and pipe it into another program. For example dir | sort would take the output of the dir command and use it as input to the sort command.|
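Putting a few of these together (the file names are just examples): the first command below saves a listing of .txt files to list.txt, the second appends a listing of .log files to the same file, and the third displays that file and pipes it through the sort command.

dir *.txt > list.txt
dir *.log >> list.txt
type list.txt | sort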
Batch files are files that have an extension ending in .bat. They are simply scripts that contain command prompt commands that will be executed in the order they are listed. To create a batch file, just make a file that ends in .bat, such as test.bat, and inside the file have the commands you would like. Each command should be on its own line and in the order you would like them to execute.
Below is an example batch file. It has no real use, but it will give you an idea of how a batch file works. This test batch file contains the following lines of text:
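(The two commands below are placeholders chosen purely for illustration; any commands listed one per line will be run in order.)

dir c:\windows
dir c:\windows\system32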
If I was to run the test.bat file I created I would have output that looks like the following:
Figure 3: Example of a batch file running.
As you can see from the figure above, my batch file executed each command in the sequence it was written in the batch file.
If a program is created for the express purpose of running within a command prompt, or console window, that program is called a console program. These are programs that are not graphical and can only be run properly from within a command prompt window.
Below is a list of sites that contain console programs that may be useful to you:
There are many more sites that have tools available. Just do a Google search on windows console programs.
The command prompt can be a very powerful and versatile tool for a computer user. Hopefully this brief introduction into the command prompt will enable you to use your computer more efficiently. If you have any questions on how to use the command prompt, please do not hesitate to ask us in the computer help forums.
Unipeak is a free anonymous proxy; it encodes the URLs it handles like this:
Suppose you had to reverse engineer the encoding scheme: how could you proceed? You are in a comfortable position, because you can execute a Chosen Plaintext Attack.
First we need to find out if the encoding scheme is reversible, because it could also be a hash or another key used to access the cache of the proxy (if it’s a caching proxy).
So we add a letter ‘a’ to the encoded URL and see what Unipeak replies:
and we see the Google website.
So it’s not a hash, it’s reversible.
We add another ‘a’:
and now we get an error message:
unable to connect to http://www.google.comi:80/
It’s definitely reversible.
Searching with Google via Unipeak gives another URL:
This URL starts with the same sequence as our first URL, so it’s probably a simple encoding scheme where the characters are processed from left to right.
So let’s start another experiment, we enter this URL: aaaaaaaaaa
The encoded URL is:
Very interesting, we also get a repeating pattern, but the cycle is 4 characters long (YWFh).
Ok, now let’s use a trick: we enter a series of characters Us. The character U is special, its ASCII encoding written in binary is 01010101. Thus UU is 0101010101010101, UUU is 010101010101010101010101, …
Entering UUUUUUUUUU gives us:
Another nice sequence!
This is a strong indication that the encoding is done at the bit level: the input is seen as a stream of bits, the bits are grouped in groups of X bits (where X is unknown). Each group is transformed to another sequence of bits by a function F, and the same function F is used for each group. We can also assume that X is even, otherwise we wouldn’t get a sequence of identical characters, but a sequence of identical pairs.
We perform some extra tests to prove (or disprove) our hypothesis.
We encode sequences of different lengths and compare the length of the cleartext and the ciphertext: the ratio is about 3 to 4, 3 input characters generate 4 output characters (BTW, the fact that we get a cycle of 4 characters for aaaaa… is also a strong indication for this ratio).
So X can be 3, 6, 9, 12, … . Except we assume X is even: 6, 12, …
Let’s test X = 6.
We try URL 000, this gives us MDAw (http://www.unipeak.net/gethtml.php?_u_r_l_=MDAw)
Now 000 is 30 30 30 (in hexadecimal ASCII)
or 00110000 00110000 00110000 in binary, grouped in 8 bits (1 byte)
or 001100 000011 000000 110000 in binary, but grouped in 6 bits (X = 6)
Now increment the first group:
001101 000011 000000 110000
or 00110100 00110000 00110000 in binary, grouped in 8 bits (1 byte)
or 34 30 30 (in hexadecimal ASCII)
So 000 becomes 400 when you increment the first group of 6 bits.
Testing URL 400 gives NDAw: changing the first 6 bits changes only the first character!
We do the same for the remaining groups:
000 -> 0@0 -> MEAw
000 -> 00p -> MDBw
000 -> 001 -> MDAx
So X is indeed 6, because changing a group of 6 bits at a time changes only one encoded character.
And we can also assume that function F is linear, because incrementing the input by 1 increments the output by 1 (M -> N, D -> E, A -> B and w -> x).
Now we could try every possible value of a 6-bit group (0 through 63), and see what the corresponding encoded character is.
We would discover that F maps 0..63 to ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/
And this is a very common encoding scheme: base64 | <urn:uuid:27ef8a55-38a8-4466-a571-be245b56c42a> | CC-MAIN-2017-04 | https://blog.didierstevens.com/2006/10/02/reversing-an-anonymous-proxy/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00316-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.817411 | 942 | 3.15625 | 3 |
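A quick way to confirm this analysis is to compare the observed outputs against a standard base64 encoder. This short Python sketch (not part of the original experiment) reproduces the values seen above:

    import base64

    # The repeating patterns from the chosen plaintexts
    print(base64.b64encode(b"aaa"))  # b'YWFh' - the 4-character cycle seen for 'aaaaaaaaaa'
    print(base64.b64encode(b"UUU"))  # b'VVVV' - the repeating sequence seen for 'UUUUUUUUUU'

    # The 6-bit group experiments
    print(base64.b64encode(b"000"))  # b'MDAw'
    print(base64.b64encode(b"400"))  # b'NDAw' - only the first character changed
    print(base64.b64encode(b"0@0"))  # b'MEAw'
    print(base64.b64encode(b"00p"))  # b'MDBw'
    print(base64.b64encode(b"001"))  # b'MDAx'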
There are two kinds of Microsoft Excel users in the world: Those who make neat little tables, and those who amaze their colleagues with sophisticated charts, data analysis, and seemingly magical formula and macro tricks. You, obviously, are one of the latter--or are you? Check our list of 11 essential Excel skills to prove it--or discreetly pick up any you might have missed.
Vlookup is the power tool every Excel user should know. It helps you herd data that's scattered across different sheets and workbooks and bring it into a central location to create reports and summaries.
Say you work with products in a retail store. Each product typically has a unique inventory number. You can use that as your reference point for Vlookups. The Vlookup formula matches that ID to the corresponding ID in another sheet, so you can pull information like an item description, price, inventory levels and other data points into your current workbook.
Summon the vlookup formula in the formula menu and enter the cell that contains your reference number. Then enter the range of cells in the sheet or workbook from which you need to pull data, the column number for the data point you're looking for, and either "True" (if you want the closest reference match) or "False" (if you require an exact match).
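As an illustration (the cell references and sheet name here are hypothetical), a Vlookup that pulls an item description into the current sheet might look like this:
    =VLOOKUP(A2, Products!A:D, 2, FALSE)
Here A2 holds the inventory number you are matching on, Products!A:D is the range being searched, 2 is the column within that range that holds the description, and FALSE asks for an exact match.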
To create a chart, enter data into Excel with column headers, then select Insert > Chart > Chart Type. Excel 2013 even includes a Recommended Charts section with layouts based on the type of data you're working with. Once the generic version of that chart is created, go to the Chart Tools menus to customize it. Don't be afraid to play around in here--there are a surprising number of options.
IF and IFERROR are the two most useful IF formulas in Excel. The IF formula lets you use conditional formulas that calculate one way when a certain thing is true, and another way when false. For example, you can identify students who scored 80 points or higher by having the cell report "Pass" if the score in column C is 80 or above, and "Fail" if it's 79 or below.
IFERROR is a variant of the IF Formula. It lets you return a certain value (or a blank value) if the formula you're trying to use returns an error. If you're doing a Vlookup to another sheet or table, for example, the IFERROR formula can render the field blank if the reference is not found.
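Using the pass/fail scoring example above (column references are hypothetical), the two formulas might look like this:
    =IF(C2>=80, "Pass", "Fail")
    =IFERROR(VLOOKUP(A2, Products!A:D, 2, FALSE), "")
The first reports Pass for scores of 80 or above; the second runs the earlier Vlookup but leaves the cell blank instead of showing an error when no match is found.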
PivotTables are essentially summary tables that let you count, average, sum, and perform other calculations according to the reference points you enter. Excel 2013 added Recommended PivotTables, making it even easier to create a table that displays the data you need.
To create a PivotTable manually, ensure your data is titled appropriately, then go to Insert > PivotTable and select your data range. The top half of the right-hand-side bar that appears has all your available fields, and the bottom half is the area you use to generate the table.
For example, to count the number of passes and fails, put your Pass/Fail column into the Row Labels tab, then again into the Values section of your PivotTable. It will usually default to the correct summary type (count, in this case), but you can choose among many other functions in the Values dropdown box. You can also create subtables that summarize data by category--for example, Pass/Fail numbers by gender.
Part PivotTable, part traditional Excel chart, PivotCharts let you quickly and easily look at complex data sets in an easy-to-digest way. PivotCharts have many of the same functions as traditional charts, with data series, categories, and the like, but they add interactive filters so you can browse through data subsets.
Excel 2013 added Recommended Pivot Charts, which can be found under the Recommended Charts icon in the Charts area of the Insert tab. You can preview a chart by hovering your mouse over that option. You can also manually create PivotCharts by selecting the PivotChart icon on the Insert tab.
Easily the best new feature in Excel 2013, Flash Fill solves one of the most frustrating problems of Excel: pulling needed pieces of information from a concatenated cell. When you're working in a column with names in "Last, First" format, for example, you historically had to either type everything out manually or create an often-complicated workaround.
In Excel 2013, you can now just type the first name of the first person in a field immediately next to the one you're working on, click on Home > Fill > Flash Fill, and Excel will automagically extract the first name from the remaining people in your table.
Excel 2013's new Quick Analysis tool minimizes the time needed to create charts based on simple data sets. Once you have your data selected, an icon appears in the bottom right hand corner that, when clicked, brings up the Quick Analysis menu.
This menu provides tools like Formatting, Charts, Totals, Tables and Sparklines. Hovering your mouse over each one generates a live preview.
Power View is an interactive data exploration and visualization tool that can pull and analyze large quantities of data from external data files. Go to Insert > Reports in Excel 2013.
Reports created with Power View are presentation-ready with reading and full-screen presentation modes. You can even export an interactive version into PowerPoint. Several tutorials on Microsoft's site will help you become an expert in no time.
For most tables, Excel's extensive conditional formatting functionality lets you easily identify data points of interest. Find this feature on the Home tab in the taskbar. Select the range of cells you want to format, then click the Conditional Formatting dropdown. The features you'll use most often are in the Highlight Cells Rules submenu.
For example, say you're scoring tests for your students and want to highlight in red those whose scores dropped significantly. Using the Less Than conditional format, you can format cells that are less than -20 (a 20-point drop) with the Red Text or Light Red Fill with Dark Red Text function. You can create many different kinds of rules, with unlimited formats available via the custom format function within each item.
Transposing columns into rows (and vice versa)
Sometimes you'll be working with data formatted in columns and you really need it to be in rows (or the other way around). Simply copy the row or column you'd like to transpose, right click on the destination cell and select Paste Special. A checkbox on the bottom of the resulting popup window is labeled Transpose. Check the box and click OK. Excel will do the rest.
Essential keyboard shortcuts
Keyboard shortcuts are the best way to navigate cells or enter formulas more quickly. We've listed our favorites below.
Control + Down/Up Arrow = Moves to the top or bottom cell of the current column
Control + Left/Right Arrow = Moves to the cell furthest left or right in the current row
Control + Shift + Down/Up = Selects all the cells above or below the current cell
Shift + F11 = Creates a new blank worksheet within your workbook
F2 = opens the cell for editing in the formula bar
Control + Home = Navigates you to cell A1
Control + End = Navigates to the last cell that contains data
Alt + '=' will autosum the cells above the current cell
Excel is arguably one of the best programs ever made, and it has remained the gold standard for nearly all businesses worldwide. But whether you're a newbie or a power user, there's always something left to learn. Or do you think you've seen it all and done it all? Let us know what we've missed in the comments.
This story, "Real Excel power users know these 11 tricks" was originally published by PCWorld. | <urn:uuid:867d0b4f-e2ea-4cbd-94e9-27656187f1be> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2175275/data-center/real-excel-power-users-know-these-11-tricks.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00160-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.880309 | 1,641 | 2.578125 | 3 |
Double duty for video cards
When the next version of the Mac operating system, code named Snow Leopard, is released later this year, users might experience some surprising boosts in speeds, at least for some applications. The time it takes, for instance, to re-encode a high-definition video for an iPod could dramatically decrease from hours to a few minutes.
The secret sauce for this boost? Snow Leopard will have the ability to hand off some of the number crunching in that conversion to the graphics processing unit (GPU). The new OS is scheduled to include support for Open Computing Language (OpenCL), which allows programmers to have their programs tap into the GPU.
Typically, the GPU, usually embedded in a graphics card, renders the screen display for computers. But ambitious programmers are finding that GPUs can also be used to speed certain types of applications, particularly those involving floating-point calculations.
For instance, researchers at Belgium’s University of Antwerp outfitted a commodity server with four dual-GPU Nvidia GeForce 9800 GX2 graphics cards. The server would be used to look for ways to improve tomography techniques. They found that this configuration could reconstruct a large tomography image in 59.9 seconds, which is faster than the 67.4 seconds it took an entire server cluster of 256 dual-core Opterons from Advanced Micro Devices.
The cluster cost the university $10 million to procure, whereas the researchers' server only ran $10,000.
For a certain group of problems, GPUs can provide a lot more computational power than an equivalent number of central processing units (CPUs), argued Sumit Gupta, senior product manager for Nvidia’s Tesla line of GPU-based accelerator cards.
In order to render visual displays, GPUs have been tweaked to do lots of floating-point computations. This sort of computation differs from the integer-based operations that CPUs usually perform, insofar as integer computation truncates calculations on the right side of the decimal point, which could lead to small rounding errors. Floating-point operations carry out rounding to 32 bits (and double-precision floating point carries it out to 64 bits). The hard number crunching of scientific research, in particular, requires the accuracy of floating-point operations.
Graphics cards have always excelled at this kind of floating-point computation, Gupta said. In order to portray tree leaves fluttering in the wind or water trickling over a streambed in the latest computer game, the GPU has to calculate the color, depth and other factors of each screen pixel, which requires heavy matrix multiplication at floating-point precision. These sorts of calculations are not unlike those scientists need to do to solve mathematical conundrums in molecular dynamics, computational chemistry, signals processing and the like.
Nvidia, for one, has seen the interest in having the GPU do double-duty and has modified some of its cards to make them fully programmable. The Nvidia Tesla C1060 computing board is being offered for the scientific crowd. It has one GPU with 240 processor cores and can deliver 933 billion floating-point operations per second.
To help programmers tap into this computational power, Nvidia has created a package of tools named Cuda. Part of this package is a library for the C programming language, called C Cuda. It offers a number of parallel keywords that developers can use to break off portions of their code to run on the GPU. They just insert the name of the library in their C code, and then they are able to use the functions to signify chunks of the code that can be run in parallel.
Cuda has proven popular with developers. More than 75 research papers have been written on Cuda, and more than 50 universities teach how to use the platform, Gupta said. Certainly, the Cuda sessions were among the best-attended at the SC08 conference in Austin, Texas, last fall.
Even with tools such as Cuda, however, writing for GPUs certainly makes the job of programming a little bit more complicated. For its own developers, government integrator Lockheed Martin, via its Advanced Technology Laboratories, is looking at ways to ease programming in heterogeneous processor environments.
"If you use a GPU, you need to learn the Nvidia compiler and learn how to put the appropriate extensions into your code in the GPU," noted Daniel Waddington, principal research scientist at Lockheed Martin’s labs. He is leading an effort to build what he calls a refactoring engine. Called Chimera, this software will be able to recompile code written in well-known languages so it can be reused across a wider variety of processors without the programmer needing to know the low-level implementation details of the GPUs or other new types of processors.
"The problem is not only are designers moving to multicore processors, but designers are coming out with new designs a few times a year," said Lockheed Martin research scientist Shahrukh Tarapore, who also is working on Chimera. “They have different programming models and different capabilities.”
Right now, Chimera works with the C and C++ languages, which are widely used within Lockheed Martin. If successful, Chimera could be used by the company’s programmers to quickly build programs that can take advantage of the latest processors — be they CPUs, GPUs or even some other design.
"Your source code is first transformed into an abstract syntax tree [so] it can be translated into other forms," explained Tarapore. This approach will also identify which sections can be broken into chunks that could be run in parallel. Those pieces are then pulled from the main body of the program and replaced with pointers to components that can execute the tasks on specific pieces of hardware
Posted by Joab Jackson on Jun 10, 2009 at 9:03 AM | <urn:uuid:d159cec6-e72b-4bc5-9aea-591b546bd7ac> | CC-MAIN-2017-04 | https://gcn.com/blogs/gcn-tech-blog/2009/06/tips-for-programming-gpus.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00068-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93771 | 1,199 | 3.0625 | 3 |
From the archives.
- "It is often put simply that in e-business, authentication means that you know who you're dealing with. Authentication is inevitably cited as one of the four or five 'pillars of security' (the others being integrity, non-repudiation, confidentiality and, sometimes, availability).
- "To be a little more precise, let's examine the functional definition of authentication adopted by the Asia Pacific Economic Co-operation (APEC) E-Security Task Group, namely the means by which the recipient of a transaction or message can make an assessment as to whether to accept or reject that transaction.
- "Note that this definition does not have identity as an essential element, let alone the complex notion of 'trust'. Identity and trust all too frequently complicate discussions around authentication. Of course, personal identity is important in many cases, but it should not be enshrined in the definition of authentication. Rather, the fundamental issue is one’s capacity to act in the transaction at hand. Depending on the application, this may have more to do with credentials, qualifications, memberships and account status, than identity per se, especially in business transactions."
Making Sense of your Authentication Options in e-Business
Journal of the PricewaterhouseCoopers Cryptographic Centre of Excellence, No. 5, 2001.
See also http://lockstep.com.au/library/quotes. | <urn:uuid:39f2fdd7-478a-462d-8b9a-9e2b7e249f3c> | CC-MAIN-2017-04 | http://lockstep.com.au/blog/2012/10/29/i-never-trusted-trust | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00490-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932635 | 288 | 2.53125 | 3 |
ContactCenterWorld - Definition
A set of traffic engineering techniques utilized to determine the number of facilities required in various telecommunications scenarios. Developed by Danish mathematician A.K. Erlang in the early 1900s. Erlang B is used to determine required facilities in an "all calls cleared" situation such as automatic route selection in a PBX. Extended Erlang B is a modified technique used when there is measurable retry of calls taking place when calls are blocked. Erlang C assumes blocked calls will wait in queue and is therefore the Erlang technique used to determine staffing needs in a typical "hold for the next agent" contact center scenario.
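As a rough illustration of how these techniques are applied (this sketch is not part of the original definition), the Erlang B blocking probability and the Erlang C waiting probability can be computed in a few lines of Python, where A is the offered traffic in erlangs and N is the number of facilities or agents:

    def erlang_b(A, N):
        # Iterative, numerically stable form of the Erlang B blocking formula
        b = 1.0
        for k in range(1, N + 1):
            b = (A * b) / (k + A * b)
        return b

    def erlang_c(A, N):
        # Probability a call must wait in queue, derived from Erlang B (assumes N > A)
        b = erlang_b(A, N)
        return (N * b) / (N - A * (1 - b))

    # Example: 20 erlangs of offered traffic and 25 facilities/agents
    print(erlang_b(20, 25))  # chance a call is blocked in an "all calls cleared" scenario
    print(erlang_c(20, 25))  # chance a caller must hold for the next agent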
The internet is a relatively new technology, but it is one of the fastest in gaining popularity and in spreading across the world. We can hardly imagine our normal daily life without the internet. Sometimes we don't even realize how many devices and services around us are actually internet equipped. But what makes all these devices work? How does the internet work?
That's today's theme, and we will speak about the most important parts of internet technology and the way these parts cooperate to give us the ability to use network resources. To begin with, the simplest way to look at the internet is by splitting it into two main components: hardware and protocols.
Common-Core Teaching: Broadening the Scope (Encore Presentation)
While the Common Core State Standards specifically focus on mathematics and English/language arts, teachers of other subjects must also adjust their content and instruction to infuse the common core in their own classrooms. This professional-development series aims to tackle that periphery. For instance, how will the arts and cross-disciplinary writing play a role in implementing the common core? And how can educators help their English-language learners master advanced literacy demands? In this webinar series, educators, experts, and advocates share guidance on how schools can improve outcomes by integrating the common standards across a variety of teaching areas.
Choose one of these vital webinars for just $49, or select all three and pay only $129. You will also get a certificate of completion, 3 months of on-demand access, and a FREE download of the Education Week Spotlight on Literacy and the Common Core. | <urn:uuid:1f05b9ff-a991-4560-af9b-4f8721e5d5c1> | CC-MAIN-2017-04 | http://vts.inxpo.com/scripts/Server.nxp?LASCmd=AI:4;F:APIUTILS!51004&PageID=506CF378-6441-4E8C-ACF2-08D68251AF6C&AffiliateKey=16232&AffiliateData=EW-ENL | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00426-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931735 | 191 | 2.515625 | 3 |
How do social media impact decision making?
by Dan Power
Social media is increasing its penetration into our lives. These online technology tools help us use the Internet to communicate with friends and to share information and resources with our networks of contacts. Some evidence suggests the impact of social media on personal and managerial decision making can be extensive. Anecdotal evidence suggests social media are altering our opinions and influencing our choices. The impact may be on individual decisions by consumers or business decisions made by managers. We need to understand what is happening and how social media can and do influence us. What theories explain the manner in which real-time communications from other people in our social or professional network can alter our behavior?
Easy-to-use social media tools have increased connectedness exponentially. According to Metcalfe's law, the value of a telecommunications network is proportional to the square of the number of connected users (n) of the system. In 1993, George Gilder formulated Metcalfe's law for social networks. He asserted we need to focus on the number of interconnected and interacting users and that the value of a social network increases exponentially as we add users to the network. Because of improved technologies, the value of social networks for individuals is increasing.
Theories related to more general communication, social and media phenomena should be explored to explain the consequences of an expanded use of web-based social media on decision making. Some of the theories that seem relevant include media richness theory, crowd behavior theory, crowd convergence theory, conformity theory, peer pressure theory, and communications saturation theory. One or some combination of theories from more traditional media environments may make some sense of the web-based social networking services that are connecting a broad range of people who share interests and activities. Also, we should explore the philosophy of the Web 2.0 vision that encourages and promotes social interaction and information creation and sharing.
According to media richness theory (Daft and Lengel, 1984), social perceptions, message clarity, and ability to evaluate others impact how media richness alters decision quality. Richer media facilitate social perceptions and perceived ability to evaluate others' deception and expertise. Tools like electronic mail and electronic conferencing facilitate communication clarity when participants have less task-relevant knowledge. According to a study by Kahai and Cooper (2003) impacts of mediating constructs on decision quality were found to depend on the levels of participant expertise and deception. In general, it was found that richer media can have significantly positive impacts on decision quality when participants' task-relevant knowledge is high. Moreover, effects of participant deception can be mitigated by employing richer media.
Social media also create new forms of peer pressure that are more immediate and broader in scope than anything experienced in face-to-face situations. Peer pressure refers to the influence exerted by a peer group in encouraging a person to change his or her attitudes, values, or behavior. Both types of conformity discussed in the literature (Aronson et al., 2007) seem to occur in social networks. Informational conformity can have an impact on decision making because the decision maker turns to the members of his/her social network to obtain accurate information. Normative conformity may also bias decisions because the decision maker conforms in an effort to be liked or accepted by the members of one or more social networks.
It is also possible that social media encourage crowd or mob behavior. Sigmund Freud's contagion crowd behavior theory argues people who are in a crowd act differently and are less aware of the true nature of their actions. But convergence theory holds that crowd behavior is not a product of the crowd itself, but is carried into the crowd by particular individuals. Thus, crowds represent a convergence of like-minded individuals. In other words, while contagion theory states that crowds cause people to act in a certain way, convergence theory says that people who wish to act in a certain way come together to form crowds.
Another potential negative issue with social networks is the saturation effect that can impact decision makers. "Saturation refers to the communication overload experienced by group members in centralized positions in communications networks" (p. 148, Shaw, 1976). Also, Shaw argued "the greater the saturation the less efficient the group and the less satisfied the group members, although saturation probably influences effectiveness to a greater extent than it does satisfaction" (p. 148, Shaw, 1976). Two kinds of saturation can be investigated: channel saturation and message unit saturation. These phenomena are correlated and the numbers of channels a person must deal with influences the number of messages the person must read and respond to.
We may be able to monitor social media and identify patterns. From a business and organization perspective, a tool like sentiment analysis software discovers business value in opinions and attitudes in social media, news, and enterprise feedback. Supposedly sentiment analysis can help managers discover the "true Voice of the Customer". Proponents argue an automated tool is necessary to keep track of the vast amount of information on the Web related to customer satisfaction and support, brand and reputation management, financial services, or product design and marketing (cf., http://sentimentsymposium.com/).
So an initial review suggests social media impact decision making by creating more connections to receive information and opinions. We tend to trust opinions of participants in our online networks. Social media are rich information sources and these tools facilitate crowd behavior, increase peer pressure and may result in saturation and the negative results from saturation.
Aronson, E., Wilson, T.D., & Akert, A.M. Social Psychology (6th Ed.). Upper Saddle River, NJ: Pearson Prentice Hall, 2007.
Card, O. S., Ender's Game, Tor Publishing, January 1985, ISBN: 0-312-93208-1 .
Crowd Psychology, from Wikipedia, the free encyclopedia, URL http://en.wikipedia.org/wiki/Crowd_psychology
Conformity, from Wikipedia, the free encyclopedia, URL http://en.wikipedia.org/wiki/Conformity
Daft, R.L. & Lengel, R.H. (1984). Information richness: a new approach to managerial behavior and organizational design. In: Cummings, L.L. & Staw, B.M. (Eds.), Research in organizational behavior 6, (191-233). Homewood, IL: JAI Press.
El Nasser, H. "Mayoral recall drives go viral," USA Today, April 12, 2011, URL http://www.usatoday.com/news/nation/2011-04-11-mayors_N.htm?loc=interstitialskip
Gilder, G. Metcalf's Law and Legacy, Forbes ASAP, September 1, 1993 at URL http://www.discovery.org/a/41, also check http://www.gildertech.com/
Hynes, A. "Impact of social media impact beyond PR and marketing," URL http://www.youtube.com/watch?v=lCNv2imFO0A .
Kaplan, A. M. and M. Haenlein (2010). "Users of the world, unite! The challenges and opportunities of Social Media". Business Horizons 53 (1): 59–68.
Kahai, S. and R. Cooper, "Exploring the Core Concepts of Media Richness Theory: The Impact of Cue Multiplicity and Feedback Immediacy on Decision Quality," Journal of Management Information Systems, Volume 20 Issue 1, Number 1/Summer 2003, URL http://portal.acm.org/citation.cfm?id=1289809 .
Kirkpatrick, David D. (2011-02-09). "Wired and Shrewd, Young Egyptians Guide Revolt". The New York Times.
Metcalfe's law, from Wikipedia, the free encyclopedia, URL http://en.wikipedia.org/wiki/Metcalfe%27s_law .
Power, D. J. "How will Web 2.0 impact design and development of decision support systems?" DSS News, Vol. 8, No. 8, April 22, 2007, update October 22, 2010.
Power, D. J. "What is social media?" DSS News, Vol. 11, No. 9, April 24, 2011.
Power, D. J. "What is the impact of social media on decision making?" DSS News, Vol. 11, No. 10, May 8, 2011.
Reed, F., "Study Shows Social Media Impact Lags Search and Email," February 3, 2011 at URL http://www.marketingpilgrim.com/2011/02/study-shows-social-media-impact-lags-search-and-email.html Shapiro, C. and H. R. Varian (1999). Information Rules. Harvard Business Press. ISBN 087584863X.
Shaw, M. E. Group Dynamics: The Psychology of Small Group Behavior (2nd edition), New York: McGraw-Hill, 1976.
Social media, from Wikipedia, the free encyclopedia, URL http://en.wikipedia.org/wiki/Social_media .
Facebook discussion at http://www.facebook.com/topic.php?uid=6858164899&topic=4239
Topic: Impact of Social Networking - how it changes the decision making process
This week let's examine how social networking technologies have changed the classic diffusion process. Jay Deragon posted a link to his blog looking at how markets are being defined as "collective parties engaging in conversation" and it's these conversational transactions that create influence.
Looking at that notion in terms of how platforms such as facebook have expedited the conversational process, it is interesting to reexamine the classic Diffusion Model to see how it is being affected. Defining Diffusion as a process in which an innovation is communicated through a system of people over time creates 4 main elements to study
Innovation- ideas & products- disruptive techs, breakthroughs or fads
Communication - traditional advertising, interactive, word or mouth
Social System - how is it bounded- geographics, industry, age demographics
Time - how long until innovation is mainstream or is marginalized
Platforms such as facebook have greatly amplified the power of word-of-mouth influence, thus drastically reducing the time for 'catchy' innovations to reach the mainstream. Who is currently working in this area, have there been formal studies, where are the other communities of interest?
I'd definitely include wikis in this e.g. PBWiki for the enterprise; and another interesting site is Satisfaction (http://getsatisfaction.com/), which is seeking to leverage customer feedback & service into marketing opportunities.
Catherine Ann Fitzpatrick
Can you point to a decision that was made collectively through rapid diffusion that proves that social media is somehow changing the way people collaborate?
I find...Most wikis get started with great pomp and circumstance, and then people in a project don't use them, and end up talking on Skype, email, Twitter...
How are the little games on Facebook any different than the parlour games our grandparents played with pencil and paper a century ago?
The main use of FB for accelerating thinking and conversations seems to be the mini feed, which is a kind of streaming media for your friends, but do people do a lot of thinking when all they are doing is trading links?
I agree, it's not the tool that makes innovation but the people. Web 2.0 tools surely help, but if you don't change your map of mind in management it would be the same as before!
There are many examples of web 2.0 - type sites where open collaboration has made its way into the corporate world. They aren't using the standard Facebook/MySpace platform, they have their own, but the concepts are similar. These sites will either post problems to be solved and then invite the world to participate in solving them, or post solutions/innovations that companies have found and are looking for applications of the innovations.
http://innocentive.com/ where mainstream companies like P&G can post their problems and invite scientists around the world to help solve them (for $$)
http://www.collab.net/ which is an open source software development portal where many technology companies participate
Last update: 2011-05-29 05:28
Author: Daniel Power
David Meza, chief knowledge architect at the National Aeronautics and Space Administration’s Johnson Space Center, is trying to find a way to share visual data so that a NASA employee at the Kennedy Space Center in Florida can log in to a computer and see what Meza and his team have been working on in their Texas-based studio.
This project to streamline visualization tools falls under a broader initiative to link the agency, especially in regard to big data. Recently, Meza and about 50 other NASA scientists have collaborated to create and execute strategy for a master data management plan, which was concocted in December. These collaborators meet twice a year at one of the 10 NASA space centers; their next big data plan meeting will occur in either September or October.
“We’re starting small,” Meza said. “We’re looking at ways of doing it smart. It’s a living thing. Data changes every day.”
NASA manages and stores more data than most other Federal agencies. The agency must confront and control the data collected from its many branches, missions, and projects.
In addition to working on ways to visualize data, Meza and his team use text analytics to improve query results. Meza said that another focus is honing communication between those who specialize in knowledge management, information architecture, and data science.
“Big data is many different things to many different people. My group has to understand all the different languages,” Meza said. “It’s important to learn how to not talk across each other, but with each other.”
The data strategy’s projects extend beyond linking visual images across space centers. Other aspects of the master plan involve a Data Fellows Program, in which selected candidates will work for NASA on agency-specific problems for terms of 6-12 months. Meza stated that NASA anticipates the first fleet of fellows to arrive in September. The strategy will also incorporate a Data Steward Program, in which NASA scientists who are experts in a certain field can advise branches on how to manage data properly.
“The important part is how we actually use big data,” Meza said. “It’s important that we are able to manage and analyze big data.” | <urn:uuid:1a0f4904-4561-4aff-8da5-4d3b4b7f4e5b> | CC-MAIN-2017-04 | https://www.meritalk.com/articles/nasas-big-data-plan-takes-off/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00362-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942017 | 476 | 2.609375 | 3 |
What is CRLF?
When a browser sends a request to a web server, the web server answers back with a response containing both the HTTP headers and the actual website content. The HTTP headers and the HTML response (the website content) are separated by a specific combination of special characters, namely a carriage return and a line feed. For short they are also known as CRLF.
The server knows where one header ends and a new one begins because of the CRLF, which can also tell a web application or user that a new line begins in a file or in a block of text.
What is the CRLF Injection Vulnerability?
In a CRLF injection attack, the attacker inserts carriage return and/or line feed characters into user input to trick the server, web application, or user into thinking that an object has terminated and another one has started.
CRLF injection in web applications
In web applications a CRLF injection can have severe impacts, depending on what the application does with single items. Impacts can range from information disclosure to code execution. For example it is also possible to manipulate log files in an admin panel as explained in the below example.
An example of CRLF Injection in a log file
Imagine a log file in an admin panel with the pattern IP - Time - Visited Path. Therefore entries appear like:
18.104.22.168 - 08:15 - /index.php?page=home
If an attacker is able to insert the CRLF characters into the query he is able to fake those log entries and change them into:
/index.php?page=home&%0d%0a127.0.0.1 - 08:15 - /index.php?page=home&restrictedaction=edit
%0d and %0a are the URL-encoded forms of CR and LF. Therefore the log entries would look like this after the attacker inserted those characters and the application displays them:
IP - Time - Visited Path
22.214.171.124 - 08:15 - /index.php?page=home&
127.0.0.1 - 08:15 - /index.php?page=home&restrictedaction=edit
Therefore, by exploiting a CRLF injection vulnerability, the attacker can fake entries in the log file to obfuscate his own malicious actions. For example, imagine a scenario where the attacker has the admin password and executed the restrictedaction parameter, which can only be used by an admin.
The problem is that if the administrator notices that an unknown IP used the restrictedaction parameter, he will notice that something is wrong. However, since it now looks like the command was issued by the localhost (and therefore probably by someone who has access to the server, like an admin) it does not look suspicious.
The whole part of the query beginning with %0d%0a will be handled by the server as one parameter. After that there is another & with the parameter restrictedaction, which will be parsed by the server as another parameter. Effectively this would be the same query as:
/index.php?page=home&restrictedaction=edit
HTTP Response Splitting
Since the headers of an HTTP response and its body are separated by CRLF characters, an attacker can try to inject those. A combination of CRLFCRLF will tell the browser that the header ends and the body begins. That means that he is now able to write data inside the response body, where the HTML code is stored. This can lead to a Cross-site Scripting vulnerability.
An example of HTTP Response Splitting leading to XSS
Imagine an application that sets a custom header, for example:
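For instance, the response might contain a header like the following (the header name is made up for illustration):
    X-Your-Name: Bob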
The value of the header is set via a GET parameter called "name". If no URL encoding is in place and the value is directly reflected inside the header, it might be possible for an attacker to insert the above-mentioned combination of CRLFCRLF to tell the browser that the response body begins. That way he is able to insert data such as an XSS payload, for example:
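One illustrative request (the parameter value is hypothetical and shown URL-encoded) would be:
    ?name=Bob%0d%0a%0d%0a%3Cscript%3Ealert(document.domain)%3C%2Fscript%3E
The two encoded CRLF sequences close the headers, so everything after them is rendered as part of the response body.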
The above will display an alert window in the context of the attacked domain.
HTTP Header Injection
By exploiting a CRLF injection an attacker can also insert HTTP headers which could be used to defeat security mechanisms such as a browser's XSS filter or the same-origin policy. This allows the attacker to gain sensitive information like CSRF tokens. He can also set cookies, which could be exploited by logging the victim into the attacker's account or by exploiting otherwise unexploitable cross-site scripting (XSS) vulnerabilities.
An example of HTTP Header Injection to extract sensitive data
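As one possible illustration (the exact payload depends on the application), the same "name" parameter could be abused to append a header that switches off the browser's XSS filter, so that an otherwise blocked reflected XSS payload runs and can read data such as CSRF tokens:
    ?name=Bob%0d%0aX-XSS-Protection:%200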
Impacts of the CRLF injection Vulnerability
The impacts of CRLF injections vary and range from information disclosure to all the impacts of Cross-site Scripting. A CRLF injection can also deactivate certain security restrictions, like XSS Filters and the Same Origin Policy, in the victim's browser, leaving it susceptible to malicious attacks.
How to Prevent CRLF / HTTP Header Injections in Web Applications
The best prevention technique is to not let users supply input directly inside response headers. If that is not possible, you should always use a function to encode the CR and LF special characters. It is also advised to update your programming language to a version that does not allow CR and LF to be injected inside functions that set headers. | <urn:uuid:347d3f6d-0ed0-4222-bbdd-62183a94273a> | CC-MAIN-2017-04 | https://www.netsparker.com/blog/web-security/crlf-http-header/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00362-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.897672 | 1,098 | 4.15625 | 4 |
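A minimal sketch of that idea in Python (framework-agnostic; the function name and example values are made up) strips the dangerous characters before the value is placed in a header:

    def sanitize_header_value(value):
        # Remove carriage returns and line feeds so user input cannot
        # terminate the current header or start the response body
        return str(value).replace("\r", "").replace("\n", "")

    # Untrusted input that tries to smuggle in an extra header
    user_input = "Bob\r\nSet-Cookie: session=attacker"
    header_value = sanitize_header_value(user_input)  # -> "BobSet-Cookie: session=attacker", a single harmless line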
Storage networking is built on three fundamental components: wiring, storing, and filing.
BY MARC FARLEY
Storage networking provides storage applications on any number of suitable wiring technologies. In general, storage networking products have been associated with specific network technologies. Storage area networks (SANs) have been associated with Fibre Channel technology, and network-attached storage (NAS) is considered to be an Ethernet technology. Unfortunately, identifying storage network technologies with specific data networks has not helped people understand the abstract architectural components of storage networking. (See InfoStor, September 2001, p. 28, for Part I of the book excerpt.)
Storing and filing as network applications
Filing is familiar as a client/server application where both client and server perform similar communications functions. For example, a server for one group of clients may itself be a client of some other server. It is strange to think about it as such, but on a communications level, not an application level, clients and servers are peers.
Storing, however, is built on a different type of relationship. Storing-level communications is based on a master/slave model where host system initiators are master entities issuing commands and storage devices/ subsystems are slave entities responding to those commands. In general, the slave entities have much less flexibility than the masters that direct their operations. Notable exceptions to this arrangement include devices and subsystems with implemented embedded initiator functionality such as disk drives with integrated XOR processing and backup equipment with third-party-copy capabilities. Even in these cases, however, the embedded initiator in the device is used for specific applications and not for general-purpose storage communications.
Figure 1: The hierarchy of storing and filing in a single system.
There is an implied hierarchy between storing and filing, where users and applications access data on the filing level and where filing entities such as file systems and databases access the data on a storing level. This hierarchy exists as an internal relationship within nearly all systems used today. This hierarchy, along with the corresponding I/O stack functions, is depicted in Figure 1.
Although a hierarchy exists between storing and filing, it is not always necessary for it to be implemented as in Figure 1. Filing can access the wiring function independently without first passing through a storing function, as shown below.
The preceding drawing is the scenario usually used to show how NAS systems work. Analyzing the I/O path in more detail, however, one realizes the necessity for the client/server filing operation to be converted to a master/slave storing function and transmitted by the server over some sort of wiring to the destination storage devices. This conversion is done by a data structure function within the server's file system that determines where data is stored in the logical block address space of its devices or subsystems.
For most NAS products today, the wiring function used for storing operations is a storage bus. When all the pieces of the I/O path are put together for NAS, we see that the NAS system provides filing services to network clients and incorporates some type of storing function, typically on independent sets of wiring, as shown above.
While individual NAS vendors and their particular products may have specific storing and wiring implementations, no architectural requirements for storing or wiring are implied by the NAS concept. Therefore, NAS is considered to be mostly a filing application that uses the services provided by storing and wiring. Although a particular NAS product may implement specific wiring and storage technology, the primary external function provided to customers is its filing capabilities.
SAN as a storing application
Storing functionality can be generalized as the master/slave interaction between initiators and devices. Storing is deterministic by design to ensure a high degree of accuracy and reliability. To some degree this is a function of the underlying wiring, but it is also a function of the command sequences and exchanges used in storing. Several storing technologies are available, the most common being the various flavors of SCSI commands.
It can be very hard to separate the storing function from the wiring function when one looks for product examples. For instance, a Fibre Channel host bus adapter (HBA) is certainly a part of the wiring in a storage network, but it also provides functionality for processing SCSI-3 serial data frames. It is important to realize that the SCSI-3 protocol was developed independently of Fibre Channel technology and that nothing inherent in SCSI-3 ties it to Fibre Channel. It is independent of the wiring function and could be implemented on Ethernet or many other types of network.
Similarly, there is no reason another serial SCSI implementation could not be developed and used with Fibre Channel or any other networking technology. In fact, there is no reason that SCSI has to be part of the equation at all. It is one of the easiest storing technologies to adopt because it has been defined for serial transmission, but there certainly are other ways to control devices and subsystems.
So what is a SAN? It is the application of storing functionality over a network. SANs by definition exclude bus types of wiring. SANs provide deterministic control of storage transmissions, according to the implementation details of the storing protocol used and the capabilities of the underlying network.
Aligning the building blocks of storage networking
Storage networking is certainly not "child's play," but that doesn't mean we can't approach it that way. Certainly the SAN industry has made a number of ridiculous puns and word games surrounding SAN and sand, so with that as an excuse, we'll discuss building blocks. The three building blocks we are interested in, of course, are wiring, storing, and filing.
As discussed previously, the implied and traditional hierarchy of these building blocks within a single system is to place wiring on the bottom and filing on top, such that storing gets to be the monkey in the middle, like this:
Of course, in the worlds of NAS and SAN, these blocks have been assembled like this:
But if we want to take a detailed view of NAS, we know that NAS actually has a storing component as well, which is often parallel SCSI, and we place the building blocks within client and server respectively, like this:
But as we've been saying in this article, wiring is independent from both storing and filing and, in fact, can be the same for both. So we've structured the building blocks of filing (NAS) and storing (SAN) on top of a common wiring, like this:
Now the preceding drawing is probably only interesting in theory, as something to illustrate the concept. In actual implementations, it is probably a good idea to segregate client/server traffic from storage traffic. This provides the capability to optimize the characteristics of each network for particular types of traffic, costs, growth, and management.
That said, it might also be a good idea to base the two different networks on the same fundamental wiring technology. This allows organizations to work with a single set of vendors and technologies. As long as a common wiring technology can actually work for both types of networks, there is the potential to save a great deal of money in the cost of equipment, implementation, training, and management. This type of environment, shown in Figure 2, includes a storage device as the final destination on the I/O path.
Race for wiring supremacy
Three networking technologies have the potential to provide a common wiring infrastructure for storage networks. The first is Fibre Channel, the next is Ethernet, particularly Gigabit Ethernet, and the third is InfiniBand. We'll make a brief comparison of their potential as a common wiring for storage networks.
Figure 2: Common wiring, but separate networks for filing and storing.
Fibre Channel strength
Fibre Channel's primary strengths are precisely where Ethernet has weaknesses. It is a high-speed, low-latency network with advanced flow control technology to handle bursty traffic such as storage I/O. However, its weaknesses are the major strengths of Ethernet. The Fibre Channel industry is still small compared to Ethernet, with limited technology choices and a relatively tiny talent pool for implementing and managing installations. The talent pool in Fibre Channel is heavily concentrated in storage development companies that have a vested interest in protecting their investment in Fibre Channel technology. This does not mean that these companies will not develop alternative wiring products, but it does mean that they will not be likely to abandon their Fibre Channel products.
Of the three technologies discussed here, Fibre Channel was the first to develop legitimate technology for common wiring. But technology alone does not always succeed, as has been proven many times throughout our history. The Fibre Channel industry has never appeared interested in its potential as a common wiring. Although it has a technology lead, having begun as the de facto standard for SANs, it is extremely unlikely that Fibre Channel will cross over to address the NAS, client/server market.
Ethernet has the obvious advantage of being the most widely deployed networking technology in the world. There is an enormous amount of talent and technology available to aid the implementation and management of Ethernet networks. While the 10Mbps and 100Mbps Ethernet varieties are sufficient for NAS, they are probably not realistic choices to support SANs because of their overall throughput limitations and lack of flow control implementations. Therefore, Gigabit Ethernet would likely be the ground floor for storing applications such as SANs. However, even though Gigabit Ethernet has the raw bandwidth and flow control needed for storage I/O, most Gigabit Ethernet switches do not have low enough latency to support high-volume transaction processing.
There is little question that Ethernet will be available to use as a common wiring for both filing and storing applications, but its relevance as an industrial-strength network for storing applications has to be proved before it will be deployed broadly as an enterprise common wiring infrastructure.
InfiniBand in the wings
The latest entrant in the field is InfiniBand, the serial bus replacement for the PCI host I/O bus. InfiniBand's development has been spearheaded by Intel with additional contributions and compromises from Compaq, Hewlett-Packard, IBM, Sun, and others. As a major systems component expected to be implemented in both PC and Unix platforms, InfiniBand is likely to become rapidly deployed on a large scale. In addition, a fairly large industry is developing the equivalent of HBAs and network interface cards for InfiniBand. Therefore, InfiniBand is likely to grow a sizable talent pool rapidly.
In relation to storage networks, the question is: Will storing and/or filing applications run directly across InfiniBand wiring, as opposed to requiring some sort of InfiniBand adapter? Immediately, soon, years away, or never? The technology probably needs to gain an installed base as a host I/O bus before it can effectively pursue new markets such as storage networking. However, InfiniBand certainly has the potential to become a legitimate storage wiring option at some point in the future.
As the apparent method of choice for connecting systems together in clusters, along with their associated storage subsystems, this could happen sooner than expected. As with any other networking technology, it is not so much a question of whether the technology can be applied but rather when attempts will be made and by whom with what resources.
There aren't any crystal balls to predict the future of storage networking. However, any time functions can be integrated together in a way that reduces cost and complexity, the only question is whether it can be marketed successfully. Common wiring is more than a theoretical abstraction for storage networks, but it represents a large opportunity to integrate data networks and storage channels under a single technology umbrella.
As Fibre Channel, Ethernet, and InfiniBand technologies evolve in response to this integration gravity, it is almost inevitable that NAS and SAN developers will look for ways to combine functionality, and their products will look more and more alike. The terms NAS and SAN will seem completely arbitrary or obsolete, and it will be necessary to distinguish storage products by the storing and filing applications they provide, as opposed to the limitations of their initial implementations. At that point, a whole new level of storing/filing integration will become visible and true self-managing storage networks may be possible. But first, the wiring slugfest!
The table briefly summarizes the competing technologies that could be used to form a common wiring and their current status.
This article discusses the fundamental components of storage networks-wiring, storing, and filing-in relation to the most common applications of storage networks today: NAS and SAN. More than just the similarity of the acronyms used, NAS and SAN have confused the industry and the market because of their similarities and the lack of an architectural framework to view them in.
NAS, the application of filing over a network, has two important roles. First, it provides a service that allows applications and users to locate data as objects over a network. Second, it provides the data structure to store that data on storage devices or subsystems that it manages.
SAN, on the other hand, is the application of storing functions over a network. In general, this applies to operations regarding logical block addresses, but it could potentially involve other ways of identifying and addressing stored data.
Wiring for storage networks has to be extremely fast and reliable. Fibre Channel is the incumbent to date, but Gigabit Ethernet and InfiniBand are expected to make runs at the storage network market in years to come. The development of a common wiring infrastructure for both filing (NAS) and storing (SAN) applications appears to be inevitable, and it will deliver technology and products that can be highly leveraged throughout an organization.
Marc Farley is a storage professional and author of Building Storage Networks, First and Second Editions.
This article is excerpted with permission from Building Storage Networks, Second Edition, by Marc Farley (Osborne/ McGraw-Hill, ISBN 0-07-213072-5, copyright 2001). | <urn:uuid:55ae3b19-abc5-4ec0-b402-5fe419089052> | CC-MAIN-2017-04 | http://www.infostor.com/index/articles/display/123544/articles/infostor/volume-5/issue-10/features/part-ii-building-storage-networks-a-book-excerpt.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281419.3/warc/CC-MAIN-20170116095121-00572-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948459 | 2,830 | 2.921875 | 3 |
IBM scores green cloud patent
IBM Labs announced Friday that the company has been awarded a patent for a method that allows datacentre operators to dynamically allocate compute and networking resources to lower-powered or underutilised systems. The company says the invention will help “green” cloud computing and reduce the energy consumption of datacentres, and allow service providers the ability to offer consumers a trade-off between performance and energy efficiency.
Submitted in September 2011, the patent (US 8,549,125B2, “Environmentally Sustainable Computing In A Distributed Computer Network”) which is the company’s third variation on a “dynamic resource allocation” theme, allows cloud service providers to identify services or deployments that can be implemented with the lowest level of environmental impact across the datacentre.
The cloud provider then routes the requests to the network devices and servers, down to the code functions that will process that service to consume the least amount of electricity.
“The efficient, distributed cloud computing model has made it possible for people to bank, shop, trade stocks and do many other things online, but the massive datacentres that enable these apps can include many thousands of energy-consuming systems,” said Keith Walker, master inventor at IBM and co-inventor on the recently awarded patent. “We have invented a way for cloud service providers to more efficiently manage their datacentres and, as a result, significantly reduce their environmental impact.”
Walker said the idea for the patent came from IBM’s experience in buying from energy companies: “They scale their service and price according to the energy – and kind of energy – they make available. For example, by paying a little more, they can guarantee a certain percentage of energy will come from renewable sources. Why not do this for cloud services?”
Proponents of cloud technology often suggest increased cloud computing adoption has the potential to significantly reduce greenhouse gas emissions by consolidating IT infrastructure in massive energy efficient datacentres. Researchers from Harvard University, Imperial College and Reading University recently explored cloud computing’s impact on lowering GHGs and found that if 80 per cent of organisations used cloud-based email, CRM and collaboration software the they could save 11.2 TWh annually, or a quarter of London’s annual energy usage.
The recently patented method would effectively allow cloud service providers to offer consumers a trade-off between performance and energy efficiency in an easily automatable way. It won’t necessarily make the datacentre “greener,” as that’s largely dependent on the energy sources, but it’s a step in that direction. A schematic describing the patent can be found below: | <urn:uuid:099262be-4f53-4549-a930-2dfa0d2a6ae7> | CC-MAIN-2017-04 | http://www.businesscloudnews.com/2013/11/11/ibm-scores-green-cloud-patent/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00207-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.935305 | 564 | 2.734375 | 3 |
Operating System (OS)
An Operating System (OS) is the software program that runs on a computer or mobile device and handles the most fundamental tasks. Examples of common operating systems include Microsoft Windows, Linux, and UNIX.
A10 Networks Advanced Core Operating System (ACOS) represents a new kind of operating system for New Generation Application Delivery Controllers and New Generation Server Load Balancers. ACOS leverages the best combination of hardware and software to maximize performance for 32-bit and 64-bit architectures. Additional hardware streamlines the packet flow and off-loads CPU-intensive applications, resulting in greater performance, and making A10 Thunder Series the most flexible, scalable, energy-efficient solution on the market today. | <urn:uuid:af602261-1cb6-4a07-8a5b-995eeeabb747> | CC-MAIN-2017-04 | https://www.a10networks.com/resources/glossary/operating-system-os | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00115-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.831537 | 147 | 3.078125 | 3 |
Backup tapes are produced to allow you to easily restore your data within your service-level timeframes if the original data is lost or corrupted. In a real disaster recovery (DR) crisis, any complexity or additional steps required when restoring your data must be minimized.
Once you decide that a solution is required in order to meet regulatory or good business governance requirements you must choose between a single platform or a corporate-wide solution. It is usually better to standardize on one solution for all platforms.
Next you must determine which option is best for your environment - software-only or hardware.
Software often does not offer compression.
Hardware units—whether built into the drive or of an inline design—use hardware compression prior to encryption.
Software compression relies upon the system processing power to do the work.
Hardware compression is not system-reliant.
Software normally involves several updates during the life of the system.
Hardware does not change even if the complete system or OS is changed.
Software encryption is not available for all systems.
Hardware encryption works on all system types.
Some backup packages do not include encryption
and therefore require a change of package.
Hardware encryption works on all backup solution
packages without the need for any configuration
With software, the user key is kept on the system, so the system or network is open to attack.
With hardware tape encryption, the key can be kept in the device and so cannot be read from any external device.
Software is normally restricted to a single operating system type.
Hardware is system-independent.
Software encryption usually needs to be upgraded when the OS is upgraded.
Hardware, being platform-independent, does not need to be changed when the OS is upgraded.
Software is often a low cost solution, and dependent on the OS being used.
Hardware is normally the same cost whatever the OS.
Software costs are often based on the capacity of the attached library.
Hardware costs are fixed.
What to Encrypt?
Another issue raised is whether to encrypt only the sensitive data or to encrypt everything.
The concept of encrypting only the sensitive data appears to be very attractive because it minimizes the amount of extra processing. The downside is that someone has to make the decision as to what is sensitive and what is not.
Another area of contention is when to implement a solution. Should you look at what is readily available and “field proven” or wait for the availability of the “ultimate solution” real soon?
From the beginning, it should be understood that there may be individuals within the business who will not understand the risks and will fight against any attempts to integrate a solution into the infrastructure. Many MIS departments see backup as non-productive. Another potential issue is the funding for this solution.
A vital point to consider is what to do with the existing pool of tapes used for backups and archives.
Is it possible to reuse the existing media?
Does the solution require continual monitoring and operational input?
Does your solution take into consideration migration to a new system?
An external hardware solution with dedicated compression and encryption engines will not suffer from the problems and complexities that software may suffer from.
The DR Implications
Any good tape encryption solution must be such that it does not hinder or overcomplicate this already stressful operation.
Statistics show that if you fail to restore your business data and get your business back up in a timely manner, the result 80% of the time is the total collapse of your business.
Be cautious against choosing a solution that is over-complex, needs specialists to install on the DR site, or has a difficult key-management system.
Where Should a Hardware Solution Reside?
In the Server
When encryption is built into the server, it is system-dependant and will be very disruptive to install.
The downside is that it must also reside in any DR or development systems in order to be utilized for DR or development.
With host-based encryption using a standard encryption card, any user who has decided to implement the same methodology will have exactly the same physical hardware as you.
In the Drive
There are only a limited number of truly integrated drive-based solutions on the market, and these are new and, so far, unproven. Most solutions are limited to a new media type in order to allow encryption.
The whole system’s security is based on a single external key, and as the drives are standard; hence, key management of such a product is of paramount importance.
With drive-based encryption using a standard encryption card, any user will have exactly the same physical hardware as you.
These devices are normally the simplest to install and cause the least disruption and the keys can be securely loaded into the appliance, which needs no network connection to the system so is inherently more secure. These systems are transparent, and drives can be rolled out across a heterogeneous environment very easily. These solutions also offer the easiest use in a DR situation.
Encryption is the best way for businesses to meet the increasing need for privacy protection.
10ZiG offers two storage security solutions to protect your data at rest. The Q3 is a
stand-alone storage encryption appliance. The Q3i is a
tape drive with built-in PCI
compliant encryption. For more details,
www.theq3.com or contact 10ZiG at | <urn:uuid:6d14457f-b09c-409a-89f1-fbb3e8fb575f> | CC-MAIN-2017-04 | http://www.10zig.com/choosingencryption.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00537-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.925807 | 1,106 | 2.671875 | 3 |
Interesting post by Nick Carr in which he points to the supposed first published evidence of the concept of Cloud Computing. The proof comes in the document, dated March 30, 1965 which outlines a Western Union executive's ambitious plan to create "a nationwide information utility, which will enable subscribers to obtain, economically, efficiently, immediately, the required information flow to facilitate the conduct of business and other affairs." In a nutshell Western Union invented cloud computing.
Specifically, "Just as a number of local or regional companies provide both electricity and gas, independent telephone companies would be encouraged to provide both telephone and information utility services in their respective territories"
The original copy of this intriguing document resides in the Smithsonian National Museum of American History, Lemuelson Center for the Study of Invention & Innovation, in the Western Union Telegraph Company Records archival collection covering the years 1820-1995.
Here is the complete text.
1965: Western Union's Future Role-as the Nation's First Cloud Utility | <urn:uuid:e3cc996b-9b07-4f99-97c8-d47091ea7fbc> | CC-MAIN-2017-04 | http://www.elasticvapor.com/2009/11/who-invented-cloud-computing-western.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00537-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.905765 | 197 | 2.765625 | 3 |
Sometimes unexpected detours are necessary to reach the goal. Take this simple
This compiler generated code calculates the length of the input string. If you do not remember
the exact definition of repne scasb, here is another snippet which does the same thing:
A straightforward decompilation of the first snippet yields this:
I can’t say that the C code is any better than the assembly code:
- Single repne scasb has been replaced by an obscure loop.
- An additional variable to represent the ZF flag has been introduced.
- The result is longer than the initial assembly code.
It would be nice if the decompiler could replace this assembly code
by a call to strlen. For a human reader, the difference would be spectacular:
Just one meaningful line, no puzzling x86 instructions,
just plain and understandable code!
Now, the question is, how do I transform the initial assembly code into this ideal
decompilation result? I could hardcode the decompiler to check if the first instruction
is mov, the second is xor, and so on. You know better than me that this naive approach
is severely limited: as soon as the compiler decides to shuffle instructions, use different
registers, or replace repne scasb with a loop, our decompiler would be hopelessly
confused and lost. Also, different compilers generate different code for built-in functions
(just remember the second strlen example).
I can not hope to hardcode all these variations by hand! What if I could specify
the sequence in an abstract form and match it against real assembly code?
This idea looked attractive for me: I just need to build the pattern matcher once
and specify patterns for built-in functions. Patterns could look like this:
- x86 instructions are gone – they have been replaced by abstract instructions for a virtual machine.
- Registers are gone – they have been replaced by abstract variable names.
Difficulties are not where we expect them – the most laborious part of the task turned out to be
the pattern reader utility which would read the above text representation and produce
something binary. And here I stopped and asked myself: what binary representation do I need?
The answer was surprising: the pattern reader would generate a C text! The main reason
is that C text is most portable, you just need to compile it. I could generate a binary
file but then I would need to design its format. I could generate another text file but then
I would need another reader. C code has a reader – a C compiler, it can also have any
format I want with the structure and union declarations.
The path to the result turned out to be not as straight as I hoped:
The decompiler would be based on a utility which generates C code
from an assembler for a virtual machine. Everything got mixed up. | <urn:uuid:623c0c84-581a-4061-abf1-8969b458f100> | CC-MAIN-2017-04 | http://www.hexblog.com/?p=39 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00077-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.919908 | 605 | 3.1875 | 3 |
This course is designed to introduce business professionals, the WebSphere Commerce catalog types, and the skills they need to manage their store catalogs, and core features of the Catalogs tool of Management Center that is provided by IBM WebSphere Commerce V7 Feature Pack 7.
The skills that are developed in this course enable WebSphere Commerce business users to create, and manage different store-specific business objects such as catalogs, categories, subcategories, and catalog entries with the use of Catalogs tool of the Management Center. The course also explains versions in the Catalogs tool.
The course begins with an overview of the catalogs. It introduces the students to WebSphere Commerce catalogs, supported by a scenario, and possible objectives for using the catalogs, and different catalog management tasks of business users such as catalog managers, product managers, and category managers. It also provides a brief description of other Management Center tools associated with catalogs. Subsequent units cover the catalog management terminology that includes catalog hierarchy, category types, versions and many more, and how to create and manage catalogs, categories, catalog entry types such as products, and SKUs by using the Catalogs tool. They also describe version management tasks that can be performed with the use of Catalogs tool.
The course provides check point questions that tests your understanding of the concepts that are explained in the course. | <urn:uuid:f6b5f116-f7bb-4f4a-bd44-217715df9bb8> | CC-MAIN-2017-04 | https://www.globalknowledge.com/ca-en/course/120861/introduction-to-product-catalog-for-ibm-websphere-commerce-version-7-fep-7/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00077-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92219 | 277 | 2.53125 | 3 |
GCN LAB IMPRESSIONS
IBM's dancing electrons create spintronics breakthrough
- By Greg Crowe
- Aug 13, 2012
Scientists at IBM have announced the discovery of a new process that just might make spin transport electronics, or “spintronics” a real possibility in commercial electronics.
As you probably know, today's computing technology processes data by means of charging electrons. Unfortunately, as the circuits in our microchips become smaller and smaller, they will soon pass the point where electron flow is impossible to fully control.
The idea behind spintronics is to to use electrons’ spin rather than their charge, by getting all of the electrons in the same area of a magnetic field to spin at the same rate, which IBM described as a waltz. This stabilizes the electron flow and extends an electron’s spin duration by up to 30 times, lasting just beyond the current time it takes a 1 GHz processor to cycle.
The IBM scientists, working with scientists at European research university ETH Zurich, were able to monitor and stabilize the electrons with really short laser pulses, IBM said in a release.
The actual paper was published at Nature Physics, but it is not for the faint of heart.
Since spintronics research takes place about 40 degrees above absolute zero, the folks at IBM admit that this new technology may take a while to appear in commercially available devices.
However, when it does, they say it could not only mean maintaining increases in processing power and storage capacity, but also greater energy efficiency. These will both be good things for network administrators to have at their disposal.
Greg Crowe is a former GCN staff writer who covered mobile technology. | <urn:uuid:5a5251b4-2fe7-406d-8ee2-88727d8e9bda> | CC-MAIN-2017-04 | https://gcn.com/articles/2012/08/13/ibm-spintronics-breakthrough-waltzing-electrons.aspx?admgarea=TC_EMERGINGTECH | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00289-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946631 | 349 | 3.25 | 3 |
Multimode fibers are identified by the OM (“optical mode”) designation as outlined in the ISO/IEC 11801 standard:
Today, this evolution continues with the development of OM4 fiber as the industry prepares itself for speeds of 40 and 100 Gb/s.
OM3 and OM4 are both laser-optimized multimode fiber (LOMMF) and were developed to accommodate faster networks such as 10, 40, and 100 Gbps. Both are designed for use with 850-nm VCSELS (vertical-cavity surface-emitting lasers) and have aqua sheaths.
When the 10 Gigabit Ethernet (10GbE) standard released in 2002, the fiber optic links of 10GBASE-SR were standardized to at least 300 meters over Optical Multimode 3 (OM3) fiber. OM3 is the leading type of multimode fiber being deployed today in the data center, but it isn’t the best fiber any more.
Optical Multimode 4 (OM4) fiber was standardized in 2009 by the Telecommunications Industry Association. OM4 is now the latest and greatest multimode fiber and the IEEE is setting the supported distance of 10GbE to at least 400 meters. Given the premium for single mode transceivers, OM4 fiber is the best option for the small percentage of users needing to run 10Gb/s over links between 300 and 550 meters (or the even smaller percent who anticipate running 40 or 100Gb/s between 100 and 150 meters).
OM4 cable could be regarded as improvement on the existed OM3 standards. The key performance differences lie in the bandwidth specifications for which the TIA standard stipulates the following three benchmarks: effective modal bandwidth of at least 4,700 MHz-km at 850 nm; overfilled modal bandwidth of at least 3,500 MHz-km at 850 nm; overfilled modal bandwidth of at least 500 MHz-km at 1,300 nm. Both rival single-mode fiber in performance while being significantly less expensive to implement.
When the 40GbE and 100GbE standard was released in 2010, OM4 was designed into the standard and achieved a distance of 150 meters on OM4 fiber while OM3 fiber went 100 meters. 90 percent of all data centers have their runs under 100 meters so it really just comes down to a costing issue right there. Laser-optimized multimode fiber is recognized as the medium of choice to support high-speed data networks. Actually, OM4 fiber defines the next generation multimode optical fiber for high speed fiber optic transmission.
Economically the cost for OM4 fiber is much lower than using singlemode due to the price of the Opti-electronics. At 40 Gbps the approximate cost is 3 times higher and at 100 Gbps the approximate cost is 10 times higher. Unlike the Multimode, the Singlemode solution does not use multiple strands and lasers to accomplish the speed of 40/100 Gbps; instead the use of CWDM is leveraged. | <urn:uuid:d54a0e00-02c4-4415-80bd-86355653a4f5> | CC-MAIN-2017-04 | http://www.fs.com/blog/om4-fiber-for-high-speed-applications.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00409-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940707 | 623 | 2.796875 | 3 |
As you know, wavelength-division multiplexing (WDM) is a technology which multiplexes a number of optical carrier signals onto a single optical fiber by using different wavelengths of laser light, this technology is widely used in fiber optic communications. This technique no only enables bidirectional communications over one strand of fiber, but also multiplication of capacity.
WDM systems are divided into different wavelength patterns, conventional/coarse (CWDM) and dense (DWDM). CWDM systems provide up to 8 channels in the 3rd transmission window (C-Band) of silica fibers around 1550nm. DWDM uses the same transmission window but with denser channel spacing. Channel plans vary, but a typical system would use 40 channels at 100 GHz spacing or 80 channels with 50 GHz spacing. Some technologies are capable of 12.5 GHz spacing (sometimes called ultra DWDM). Such spacings are today only achieved by free-space optics technology. New amplification options (Raman amplification) enable the extension of the usable wavelengths to the L-band, more or less doubling these numbers.
There are two basic types of WDM solutions – both are available for CWDM and DWDM implementations depending on customer requirements:
Transponder-Based Solutions: Allows connectivity to switches with standard 850 or 1310nm optical SFP transceivers. A wdm transponder is used to convert these signals using Optical-to-Electrical-to-Optical (O-E-O) conversion to WDM frequencies for transport across a single fiber. By converting each input to a different frequency, multiple signals can be carried over the same fiber.
SFP-Based Solutions: These eliminate the need for transponders by requiring switch equipment to utilize special WDM Transceiver (also known as colored optic), reducing the overall cost. Coarse or Dense WDM SFPs are like any standard transceiver used in Fibre Channel switches, except that they transmit on a particular frequency within a WDM band. Each wavelength is then placed onto a single fiber through the use of a passive multiplexer. The WDM transceivers utilize a single strand of fiber to transmit network traffic on separate receive and transmit wavelengths (1310/1550 nm). This innovative technology allows you to effectively use the two strands for two independent connections or to double the capacity without running a second fiber cable.
The Trends Of Larger Demand Of WDM Transceivers
Optical transceiver modules are key components for WDM equipment for large capacity/long distance transmission across optical fiber. The market for WDM modules is expected to reach more than $770 million in revenues as carriers make the shift to a WDM-based transport infrastructure in order to support new high bandwidth services and their promised 100 Gbps backbones.
ROADM deployment is also creating larger addressable markets for WDM modules. After years of merely being talked about, these WDM boxes are now being deployed much more widely and finally allowing operators to capitalize on their ability to provide greater flexibility of network design, integration of SDH and OTN, fast service activation and high bandwidth efficiency.
The newly developed optical WDM transceiver module has the advanges of low cost, high performance, and compact size. Choosing cwdm 10g or 10g dwdm at FiberStore.com. | <urn:uuid:766b4c52-c3ea-42c8-80f9-04e0829c06e5> | CC-MAIN-2017-04 | http://www.fs.com/blog/wdm-transceiver-of-wdm-devices.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00317-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.911276 | 690 | 3.609375 | 4 |
Sobey K.G.,Ontario Ministry of Natural Resources |
Walpole A.A.,Ontario Ministry of Natural Resources |
Rosatte R.,Ontario Ministry of Natural Resources |
Fehlner-Gardiner C.,Canadian Food Inspection Agency |
And 6 more authors.
Vaccine | Year: 2013
ONRAB® is a rabies glycoprotein recombinant human adenovirus type 5 oral vaccine developed for application in baits to control rabies in wildlife populations. Prior to widespread use of ONRAB®, both the safety and effectiveness of this vaccine required investigation. While previous research has focused on field performance and the persistence and pathogenicity of ONRAB® in captive animals, we sought to examine persistence and shedding of ONRAB® in populations of free-ranging target and non-target mammals. We collected oral and rectal swab samples from 84 red foxes, 169 striped skunks, and 116 raccoons during 2007 and 2008 in areas where ONRAB® vaccine baits were distributed. We also analyzed 930 tissue samples, 135 oral swab and 138 rectal swab samples from 155 non-target small mammals from 10 species captured during 2008 at sites treated with high densities of ONRAB® vaccine baits. Samples were screened for the presence and quantity of ONRAB® DNA using quantitative real-time PCR. None of the samples that we analyzed from target and non-target species contained quantities of ONRAB® greater than 103EU/mL of ONRAB® DNA which is a limit that has previously been applied to assess viral shedding. This study builds on similar research and suggests that replication of ONRAB® in animals is short-lived and the likelihood of horizontal transmission to other organisms is low. © 2013 Elsevier Ltd. Source
ARTEMIS, Inc. | Date: 2013-09-13
ARTEMIS, Inc. | Date: 2013-05-07
Fair-trade crude rubber, fair-trade natural rubber. Gloves for household and gardening purposes; containers for household or kitchen use; brushes for pets; household utensils, namely, graters, sieves, spatulas, strainers, pot and pan scrapers; all the above made from fair-trade rubber. Footwear, headwear, visors, nightwear, shirts, blouses, pants, jackets, coats, gloves, socks, suits, swimwear, underwear, rain wear; all the above made from fair-trade rubber. Balloons; balls for games or sports; pet toys; bats and rackets for games; fishing rods; skis; yoga mats; all the above made from fair-trade rubber.
Zaugg E.,ARTEMIS, Inc. |
Edwards M.,ARTEMIS, Inc. |
Wilmhoff B.,First RF |
Westbrook L.,Air Force Research Lab
IEEE National Radar Conference - Proceedings | Year: 2011
A SAR system with a single aperture that simultaneously transmits at a high and low frequency provides the distinct advantages of each individual frequency band, and the added benefit of having a collocated aperture. In this paper, the design of the multi-band SlimSAR and the unique multi-frequency antenna are presented. © 2011 IEEE. Source
Zaugg E.C.,ARTEMIS, Inc. |
Edwards M.C.,ARTEMIS, Inc.
Conference Proceedings of 2013 Asia-Pacific Conference on Synthetic Aperture Radar, APSAR 2013 | Year: 2013
The SlimSAR is a flexible, multi-frequency band, multi-mode synthetic aperture radar system with options suitable for operation on aircraft across a wide range of size and capabilities. From small unmanned aircraft flying at low altitude and covering a small area, to large aircraft covering very wide swaths, the SlimSAR has the versatility to perform in any mission scenario. One application is that of disaster monitoring, and the SlimSAR has shown its capabilities imaging the aftermath of Hurricane Sandy in 2012. © 2013 IEICE. Source | <urn:uuid:8c72b5b9-430b-458b-b3e5-dee2b1dae78d> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/artemis-inc-280181/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00225-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.903209 | 838 | 2.75 | 3 |
If you're conscientious, you probably think carefully about the words you choose in an e-mail message or a formal report. Making yourself understood helps get your message across, and it helps your readers benefit from what you're saying.
Many people, however, don't think twice about the way their words, specifically the letters, look on screen or paper. The particular form that letters take depends on the font you choose, and the art of choosing the right font is called typography.
The meaning of the word "font" has changed over the years, and in today's digital world it's largely synonymous with "typeface," meaning a stylistically coordinated set of letters, numbers and punctuation marks.
Typography has been around longer than personal computers, but PCs opened up typographic possibilities to the masses.
When desktop publishing was introduced in 1985, the surfeit of font choices led many people to create documents that resembled ransom notes written by an inspired 10-year-old. The opposite extreme is to always use the same font, which isn't much different from always wearing the same clothes. People make judgments about you and your writing because of the font you choose, just as people draw conclusions from your wardrobe.
The two most popular fonts today are Times New Roman and Arial. The former is a serif font, with small designs at the ends of letter strokes, and the latter is a sans-serif font, which lacks those designs. Sans-serif fonts, which are starker and bolder, are often used for titles and headlines; serif fonts aid legibility and are often used for the body of works.
People typically choose among the default fonts that come installed with word-processing programs, but you can also buy fonts separately. And there are thousands available. You can also visit Web sites where generous designers make fonts available to download for free, such as 1001 Free Fonts, at www.1001freefonts.com.
Choosing a font that is appropriate for your work is like choosing what clothes to wear to work, a formal party, a gathering of friends or a workout at the gym. You should aim for both image and utility.
A study by the Software Usability Research Laboratory at Wichita State University sheds light on this. Researchers analyzed 20 commonly used fonts by asking more than 500 people what images the fonts projected. For example, the study found the best font for projecting flexibility is Kristen, assertiveness is Impact, practicality is Georgia and creativity is Gigi. But there are two sides to a coin (or font): Kristen also projects instability and rebelliousness; Impact connotes rudeness and unattractiveness; and Gigi suggests impracticality and passivity.
Some people use Courier New because it's a monospaced font: Each letter takes up the same amount of horizontal space, just like a manual typewriter's font. It's useful if you need to align numbers in a column. But Courier New can project conformity, unimaginativeness and dullness, according to the Wichita State researchers. A better monospaced font choice is Consolas.
Times New Roman is a versatile, all-around font with an interesting history. It was commissioned by the British newspaper The Times in 1931, hence its name. Microsoft has included it in every copy of Windows since version 3.1, and it's the default font in many Windows programs. On the Apple Macintosh, it's called Times, and it's also the default for many Mac programs. In 2004, the U.S. State Department in 2004 mandated that all diplomatic documents use Times New Roman instead of previously mandated Courier New. But if you use Times New Roman reflexively, also consider Georgia, which is less stiff but equally legible.
Even though the Wichita State study looked at only 20 fonts, reading the results, at http://psychology.wichita.edu/surl/usabilitynews/81/PersonalityofFonts.htm,
gives you a feel for why type talks.
Fonts can be fun, but don't overdo it. One rule of thumb: Use a maximum of three different fonts per page. You should use minimally the varying font sizes. Too much variety can be jarring to the eye.
Avoid long stretches of text in italic, bold and uppercase, which can be more difficult to read than regular upright type. Similarly make sure there's enough contrast between the letters and their background.
Black on white is easier to read than white on black, and both are easier to read than green on blue. The most legible combination is black on cream.
Reid Goldsborough is a syndicated columnist and author of the book Straight Talk About the Information Superhighway. He can be reached at email@example.com or www.reidgoldsborough.com. | <urn:uuid:1735c2cc-7fb1-435f-8c52-d2ed4379da19> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/Personal-Computing-Choosing-the-Right-Font.html?page=2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00399-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944231 | 994 | 2.65625 | 3 |
The recent cyberattack on a public water utility in Springfield, Ill. has stoked considerable concerns about the vulnerability of U.S. critical infrastructure equipment.
The attack destroyed a pump at the facility when someone using a computer with an IP address based in Russia gained access to the Supervisory Control and Data Acquisition (SCADA) system controlling the pump.
Experts in the industrial control systems arena say that, while that attack was relatively inconsequential and not unsurprising given the vulnerabilities that exist, it may be a harbinger of things to come.
Here are four lessons from the incident, which is still under investigation:
Information sharing is critical
Though an initial report by the Illinois Statewide Terrorism and Intelligence Center called the incident a public water district cyber intrusion, the Department of Homeland Security (DHS) and other agencies that share information on such incidents have so far been relatively quiet about what happened. That led to speculation about the nature of the attack, how serious it was, and what the motives might have been. Some even question whether the pump could have failed in the manner reported in the incident report.
The water pump at the Springfield utility is supposed to have burned out after attackers used their access to the SCADA system to cycle the pump off and on continuously. Typically that should not have happened, said L.W. Brittian, a SCADA system consultant and training expert. "Rapid cycling of a large pump motor should not, by itself, have been enough to burn a pump motor up," Brittian said. While turning a pump motor on and off over and over can cause it to overheat. temperature and pressure control mechanisms built into it should have tripped, taking it safely offline.
"The SCADA system may have been accessible on the Internet, so someone could come in and get the pump to run and they could ask it to stop," Brittian said. "They could tell it to start and stop every three seconds until something happens," he said. But what they would not have been able to access over the Internet is the overload relay that is provided to protect the motor from overloading and burning up.
Even if hackers had accessed the operating controls, it's doubtful they could have also accessed the safety controls, he said. "We need more details of exactly what happened."
SCADA systems are easy to hack
A vast majority of the systems used to control critical equipment at places like power stations, nuclear power plants and water treatment facilities are inherently insecure. In many cases, anyone with logical access to an industrial control system or programmable logic controller can upload firmware on it without authentication. Passwords are often hardcoded into systems. And many systems have administrative backdoors and contain very basic buffer overflow errors.
Such vulnerabilities were acceptable for a long time because SCADA systems were not really connected to the outside world; An attacker usually needed physical access to a SCADA system to compromise it.
That's changed over the last few years. A growing number of SCADA systems are connected to the Internet, making them much more vulnerable to attack from external sources. Last week, a hacker named pr0f claimed he hacked into a SCADA system at a water utility in South Houston by overcoming a three-character password that was used to protect the system.
"The major thing about control system security that most people don't get is that there is none," said Ralph Langner, a German industrial control systems expert noted for his research on the Stuxnet worm last year. Stuxnet has been blamed for disrupting Iran's uranium enrichment efforts by causing SCADA problems. More recently, Iran said it had been affected by the Duqu trojan, which also targets SCADA systems.
Duqu is seen as a precursor to the next Stuxnet.
More people will attempt to break into SCADA systems
Expect to see many more such attacks. After Stuxnet, the SCADA community has been living in a fishbowl of sorts, said Eric Byres CTO and founder of Byres Security, a provider of industrial control system security products and consulting services. People who didn't know how to spell SCADA are now finding all sorts of vulnerabilities in SCADA products. So far this year, there have been over 200 vulnerabilities discovered in ICS products from various vendors, compared to just over 10 that were discovered in all of 2010.
The SCADA community is "no longer living in a little bubble," Byres said. "Security by obscurity no longer works."
Fixing SCADA systems is hard
After Stuxnet, there has been a greater effort to find and fix vulnerabilities in SCADA systems. But most of the focus has been on addressing issues in the front-end -- mostly Windows-based Human Machine Interface (HMI) systems that are used to interact with SCADA systems. But vendors are paying far less attention to vulnerabilities in the embedded control systems themselves. The ISA Security Compliance Institute last year launched a program to test and certify industrial control system products for vulnerabilities. So far just two companies have had their products certified under the program.
Utilities also often lack the resources needed to bolster the security of their control systems. This is especially true in the case of smaller utilities such as the one that was attacked last week in Springfield. "Smaller utilities have a harder time securing their SCADA and DCS [Distributed Control System] because they don't have the IT staff or other resources to allocate to this," said Dale Peterson, CEO of Digital Bond, a consultancy that specializes in control system security.
"We have seen municipal utilities that have two people on the IT staff that are responsible for keeping everything running," from desktops and email systems to SCADA and distributed control systems, he said.
Jaikumar Vijayan covers data security and privacy issues, financial services security and e-voting for Computerworld. Follow Jaikumar on Twitter at @jaivijayan or subscribe to Jaikumar's RSS feed. His e-mail address is firstname.lastname@example.org.
This story, "4 lessons from the Springfield, Ill. SCADA cyberattack" was originally published by Computerworld. | <urn:uuid:a15c77d1-d2bd-41f1-bf37-e878fdfa9c92> | CC-MAIN-2017-04 | http://www.itworld.com/article/2734897/security/4-lessons-from-the-springfield--ill--scada-cyberattack.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00519-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.969129 | 1,257 | 2.546875 | 3 |
GPUs are becoming more like CPUs. But in the critical area of error corrected memory, graphics hardware still lags. The lack of error correction is probably the single biggest factor that makes users of GPUs for high performance computing nervous. Some HPC applications are resistant to the occasional bad data value, but many are not. The good news is that graphics chip vendors are aware of the problem and it appears to be only a matter of time before GPUs get a memory makeover.
Before AMD and NVIDIA brought GPU computing onto the scene, graphics processors didn’t really need to be concerned with error-prone memory. If a pixel’s color is off by a bit or two, nobody is going to notice as the images go flying by. So it was natural (and cheaper) for GPU devices to be built without support for error corrected memory. In 2006, with the advent of general-purpose computing on graphics processing units, otherwise know as GPGPU, the issue of reliable memory came to the fore.
The problem is that when you’re using the GPU as a math accelerator and a memory bit flips in a data value, you’ve got a potential problem. Obviously in numerical calculations, accuracy matters. That’s why all standard CPU servers today come with memory that supports Error Correcting Codes (ECC) as well as with on-chip intelligence for error checking and correction in cache and local data structures. The reason that general-purpose computing can be done on GPUs at all has to do with the relatively infrequent occurrence of these errors on standard graphics hardware. Algorithms are typically run many times in a typical technical computing application, so anomalous results can be averaged out, or even manually discarded.
The only simple way to circumvent the problem on the current crop of GPUs is to run the code twice (or simultaneously on two separate devices). If the results don’t match, you assume an error occurred and you rerun the offending sequence. It’s relatively bulletproof, but you’ve cut your price-performance in half for the sake of error correction. A less brute-force method was devised by the Tokyo Institute of Technology, who came up with software-based ECC for GPUs (PDF). But the preliminary results showed the performance overhead was acceptable only for compute-intensive applications, not bandwidth-intensive ones.
There are different categories of memory errors. The kind most people focus on are thought to be the result of cosmic rays, alpha particles in packaging material, or possibly as a side-effect of harsh environmental conditions. They are called soft (or transient) errors and most commonly occur in off-chip DRAM, but can also strike the GPU ASIC itself in local memory or data registers.
Hard (or permanent) errors can also be present on memory chips, but these are easy to detect with simple diagnostic tests. Hard errors are usually dealt with by replacing the offending memory module, but theoretically could be handled in software too. The conventional wisdom is that soft errors are much more common than hard errors, although at least one study (PDF) by Google found just the opposite.
Data errors can also occur at the memory bus interface. Here, at least, the graphics world has made some progress. GDDR5 (Graphics Double Data Rate, version 5) memory, which first appeared in 2008, was the first memory specification for graphics platforms that contained an error detection facility. The motivation behind this was the high data rates of GDDR5, which made the odds of producing bad data much more likely. Since GDDR5 contains an error correction protocol, a compatible memory controller is able to take corrective action — basically a retry — to compensate.
That still leaves a lot of data on the GPU board exposed. Adding ECC memory to GPU boards intended for the technical computing market is a relatively straightforward product decision since the extra cost can be passed on to the GPGPU consumer. But changing the GPU core as well as the integrated memory controller to complete the protection requires a tradeoff, since extra transistors are needed for error detection and correction on the ASIC. And because of the expense of designing and testing chips, GPUs are shared across product lines at AMD and NVIDIA.
For example, the latest AMD FireStream products use the Radeon HD 4800 core, while the current NVIDIA Tesla platforms uses (presumably) the GeForce GTX 285. These are the same ASICs used in high-end graphics products. The challenge to the two GPGPU vendors is to figure out how to design processors that offer the data reliability of a CPU server, without impacting their core graphics business unduly.
Patricia Harrell, AMD’s director of Stream Computing, admits that the need for more robust data protection in GPUs already exists. She says error corrected memory is a requirement for a number of customers, especially those looking to deploy GPUs at scale, i.e., high performance computing users with large compute clusters. Although individual memory error rates are low, as you add more GPUs (and thus more graphics memory) to the system, and run applications for longer periods of time, the chances of hitting a flipped memory bit increases proportionally.
The AMD FireStream 9270 board already incorporates GDDR5 memory, so data protection is already in place at the memory interface in this product. In this case, whenever the memory controller sends and receives data to and from the DRAM, it buffers the data locally while the DRAM calculates the integrity of the value and returns a status code. If the code indicates an error, the memory controller does the retry automatically.
Overall though, AMD seems to be taking a cautious approach to error correcting GPUs. “It’s really important to put in the required features intelligently, and make sure you do the research and engineering to protect the data structures that are going to return the most value,” notes Harrell. If not, she says, you end up with devices that are too big and too hot, in which case you lose the performance advantages GPGPU was originally intended for.
Harrell says that they are continuing to look at the memory protection issue, but couldn’t offer more specific guidance on AMD’s roadmap. “I think it isn’t clear if that [error correction] is going to be required for the broad market yet,” she adds.
Unlike AMD’s more wait-and-see attitude, NVIDIA appears to be fully committed to bringing error protection to GPU computing. According to Andy Keane, general manager of the GPU computing business unit at NVIDIA, it is not a matter of if, but when. From his point of view, ECC memory is a hard requirement in datacenters. “We have to respond to that by building that kind of support into our roadmap,” Keane said unequivocally. “It will be in a future GPU.”
As far as when ECC-capable Tesla products will show up, Keane wouldn’t say. It’s likely that NVIDIA’s OEM partners and GPU computing developers already have a pretty good idea of the timeline (under NDA of course), so systems and software based on high-integrity GPUs may already be in the works. In a Real World Technologies article that spells out the major costs and benefits of error corrected memory in GPUs, analyst David Kanter predicts that NVIDIA’s next GPGPU product release will include ECC.
Presumably Intel is also mulling over its options, since Larrabee, the company’s first high-end graphics processor, is scheduled to be released into the wild next year. But Intel insists the first version of Larrabee will target the traditional graphics space, making it unlikely that they would introduce ECC into the mix. Of course, the company could reverse itself and release a true HPC processor variant with ECC bells and whistles.
My sense is that ECC will come to GPU computing products sooner (1-2 years) rather that later (3-5 years). Being able to ensure data integrity in these devices will widen the aperture for HPC applications and help push GPGPU into true supercomputers. Just like double precision performance and on-board memory capacity, error correction is destined to become an important differentiator in high-end GPU computing. | <urn:uuid:426f7858-0793-4021-9857-6781fdea1faa> | CC-MAIN-2017-04 | https://www.hpcwire.com/2009/09/02/reliable_memory_coming_to_a_gpu_near_you/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00335-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937921 | 1,702 | 3.078125 | 3 |
Effective data and information management is the foundation for competitive advantage for enterprises in any industry. Today’s information-infused world has added another essential component for differentiation: cognitive technology.
Where speed and scale were once technology’s most important requirements, today’s challenges are no longer simply a question of computing power. The opportunities presented by “making sense” of data and information come with new requirements for comprehension, context and connection. Therefore, today’s technologies must be fast, powerful and intelligent.
Cogito: Combining the Advantages of Semantic Technology with Machine Learning
The ability to think on a deep level distinguishes humans from most other species. Inspired by this concept, Cogito, Latin for “I think”, is a software that bases its cognitive capabilities on artificial intelligence algorithms that mimic the human ability to think at the speed of current technologies. Cogito is cognitive technology that:
- is based on a representation of knowledge, the Cogito Knowledge Graph, that is both deep and wide, and in a format compatible with current technologies. Cogito has the ability to read, comprehend and learn, up front, and out of the box.
- understands the meanings of words the way that people do. Thanks to semantic analysis, including word disambiguation, Cogito is cognitive technology that identifies the correct meaning of words and expressions in context, and understands the relationships between different concepts. Cogito is software that reads, a core component of its learning.
- emulates some processes that humans use to comprehend. Through comprehension, Cogito makes the knowledge discovered in text actionable.
- becomes more intelligent by learning from human experts and from written communications to acquire new knowledge, as well as slang, puns, jargon and other nuances of a language. Cogito is software that learns from human experience. | <urn:uuid:b412e83c-860f-474c-9f79-3df9928777cb> | CC-MAIN-2017-04 | http://www.expertsystem.com/cogito/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00087-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.945787 | 387 | 3.21875 | 3 |
Dec. 20 — The UK’s largest distributed supercomputing network, HPC Wales, has secured a lucrative contract to provide its services to an international research project seeking to extend the lifetime of batteries used to store renewable energy. The contract, worth £100,000 and creating two new jobs, will see HPC Wales supporting a research project examining the stationary batteries required to meet the demands of the energy industry and to play a key role within the national electrical grid.
The SIRBATT (Stable Interfaces for Rechargeable Batteries) Project, led by Liverpool University, is a three-year European-funded collaboration between six universities and five private sector companies across Europe.
As part of international efforts to reduce carbon dioxide emissions, power companies have turned to renewable energy sources such as wind, wave and solar. However, the power supplied by these sources is intermittent, as it depends on external variables, so researchers are examining new methods of storing energy and releasing it on demand.
Scientists believe that rechargeable batteries could be a solution to this problem, with lithium-ion batteries able to provide an uninterruptible power supply and high-quality power distribution. However, SIRBATT researchers believe their current lifespan needs to be extended by at least a factor of five, at a price point that is ultimately affordable for the energy industry.
Currently available batteries – used to offer extended levels of power during times of high demand, or as a backup power source during a blackout – only last up to five years on average, and the high cost to replace the units is covered by the taxpayer.
The SIRBATT project will explore the issues that currently limit the lifespan of batteries used in stationary battery storage. Using supercomputing, powered by HPC Wales, for modeling and simulation purposes, the project will perform advanced calculations to isolate the chemical processes that cause the battery to degrade. Researchers will then seek to provide a preventative solution to ensure the longer life of lithium-ion batteries in the future.
The collaboration brings together a wide range of research expertise in the study of both practical and theoretical battery physics.
Dr. Gilberto Teobaldi, at Liverpool University’s Stephenson Institute for Renewable Energy and Department of Chemistry, said:
“As it stands, the lifespan of a lithium-ion battery needs to be increased by at least a factor of five for the batteries to become a competitive, affordable solution to the renewable energy industry. Given the limited knowledge of the factors responsible for their short lifespan, this research is of fundamental importance. We want to make green energy cheaper and more accessible to everyone. Hopefully our research, which involves innovative modeling methods, will help us get closer to achieving our goal.”
David Craddock, Chief Executive Officer of HPC Wales, said:
“We are delighted to announce our new contract with Liverpool University, bringing further inward investment into Wales for the purchase of technological facilities. With the support of supercomputing, the SIRBATT project will make crucial progress in the creation of longer-lasting green energy resources in the UK for everyone. As HPC Wales’ network can be accessed remotely, increasing numbers of businesses and academics are benefiting from its power, and we hope more will follow suit in the next 12 months.”
Part-funded by the European Regional Development Fund through the Welsh Government, HPC Wales is committed to boosting the Welsh economy by providing academic researchers and businesses with some of the most advanced computing technology in the world.
About HPC Wales
High Performance Computing (HPC) Wales is Wales’ national supercomputing service provider. Host to the UK’s largest distributed supercomputing network, HPC Wales provides businesses and researchers with local access to world-class technology, as well as the support and training necessary to fully exploit it. HPC Wales is a unique collaboration between Aberystwyth University, Bangor University, Cardiff University, Swansea University, the University of South Wales and the University of Wales Trinity Saint David.
HPC Wales’ distributed infrastructure includes two Hubs at both Swansea and Cardiff, and a two-tier spoke model involving Tier-1 Spokes (at Aberystwyth, Bangor and the University of South Wales) plus Tier-2A clusters and associated Tier-2B workstations at a number of other installations across Wales.
Please visit www.hpcwales.co.uk to find out more.
SIRBATT (Stable Interfaces for Rechargeable Batteries) is a European funded FP7 multisite collaborative project. It consists of 12 partners from across Europe and includes six universities, five industry partners and one research institute. Collaboration with leading battery research groups at an international level will play an important part in the project. The diversity of the organisations will provide a wide range of complementary expertise in areas relating to the study of battery electrode interfaces, at both experimental and theoretical levels. Find out more about the project at www.liv.ac.uk/sirbatt. Research will be carried out within the frame of seven identified Work Packages.
Source: HPC Wales | <urn:uuid:5433b7d5-2fdb-4645-9d9e-770671998f04> | CC-MAIN-2017-04 | https://www.hpcwire.com/off-the-wire/renewable-energy-project-plugs-hpc-wales/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00087-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936708 | 1,052 | 2.53125 | 3 |
“If you think you understand quantum physics, you don’t understand quantum physics.”
— Richard Feynman, Quantum Theorist
The first commercial quantum computer was pioneered by Canadian firm D-Wave Systems, which unveiled its first prototype, a 16-qubit superconducting adiabatic quantum processor, in 2008. This novel type of superconducting processor uses quantum mechanics to massively accelerate computation.
In May, D-Wave’s current flagship product, the 512-qubit D-Wave Two computer, was installed at Google’s Quantum Artificial Intelligence Lab under the direction of NASA, Google and the Universities Space Research Association. The partners planned to use the AI Lab to explore the frontiers of quantum computing and space research. Yet in the months since, the work has remained largely secretive – until now.
Earlier this month, Google et al. debuted a brief film at the Imagine Science Films Festival at Google New York exploring various dimensions of the project. Quantum physics is not easily boxed and labeled. As an area of research, quantum theory is linked to such major philosophical and practical concerns as consciousness, intelligence, free will, determinism, black holes, protecting the planet from asteroids, ions, photons, artificial intelligence, machine learning, and time travel, among others.
As stated in the film, “Quantum physics puts everything into question. It defies every intuition you have about the modern world.”
In addition to raising these deeply provocative theoretical and philosophical concepts, the video also provides a close-up look at the D-Wave machine (the quantum processor) and the infrastructure required to power and cool it.
Then the focus turns to applications.
“The overwhelmingly obvious killer app for quantum computation is optimization,” says D-Wave CTO Geordie Rose. As problems get larger, and more and more data is generated, extracting useful insights from that data grows ever more challenging. That’s where optimization comes in.
While the film steers clear of the “big data” phrase, one of the main transformations of this big data age is identifying answers without having to know the question. It’s a point that is emphasized by NASA’s Eleanor Rieffel. “We don’t know what the best questions are to ask that computer,” she says. “That’s exactly what we’re trying to understand now.”
On the AI Lab Team’s website, the partners affirm that quantum computing holds the key to solving some of the world’s most complex computer science problems. They write: “We’re particularly interested in how quantum computing can advance machine learning, which can then be applied to virtually any field: from finding the cure for a disease to understanding changes in our climate.”
In related news, D-Wave announced today that it had selected a new foundry partner, Cypress. D-Wave transferred its proprietary process technology to the new site in January 2013, and Cypress delivered the first silicon parts on June 26. D-Wave states that the decision has already resulted in better yields, which it says validates the quality of Cypress’s production-scale environment. Cypress’s Wafer Foundry is located in Bloomington, Minnesota. | <urn:uuid:3f15e5e5-962f-424c-b43a-57971dae7868> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/10/23/behind-scenes-googles-quantum-ai-lab/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282926.64/warc/CC-MAIN-20170116095122-00481-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.920536 | 690 | 3.21875 | 3 |
It will be months before we know the true damage brought about by super typhoon Haiyan. The largest death tolls now associated with the storm are only estimates. Aid workers from across the world are now flying to the island nation or have only just arrived there. They—and Filipinos—will support survivors and start to rebuild.
But they will be helped by an incredible piece of technology, a worldwide, crowd-sourced humanitarian collaboration made possible by the Internet.
What is it? It’s a highly detailed map of the areas affected by super typhoon Haiyan, and it mostly didn’t exist three days ago, when the storm made landfall.
Since Saturday, more than 400 volunteers have made nearly three quarters of a million additions to a free, online map of areas in and around the Philippines. Those additions reflect the land before the storm, but they will help Red Cross workers and volunteers make critical decisions after it about where to send food, water, and supplies.
These things are easy to hyperbolize, but in the Philippines, now, it is highly likely that free mapping data and software—and the community that support them—will save lives.
The Wikipedia of maps
The changes were made to OpenStreetMap (OSM), a sort of Wikipedia of maps. OSM aims to be a complete map of the world, free to use and editable by all. Created in 2004, it now has over a million users.
I spoke to Dale Kunce, senior geospatial engineer at the American Red Cross, about how volunteer mapping helps improve the situation in the Philippines.
The Red Cross, internationally, recently began to use open source software and data in all of its projects, he said. Free software reduces or eliminates project “leave behind” costs, or the amount of money required to keep something running after the Red Cross leaves. Any software or data compiled by the Red Cross are now released under an open-source or share-alike license.
While OpenStreetMap has been used in humanitarian crises before, super typhoon Haiyan marks the first time the Red Cross has coordinated its use and the volunteer effort around it.
How the changes were made
The 410 volunteers who have edited OSM in the past three days aren’t all mapmaking professionals. Organized by the Humanitarian OpenStreetMap Team on Twitter, calls went out for the areas of the Philippines in the path of the storm to be mapped.
What does that mapping look like? Mostly, it involves “tracing” roads into OSM using satellite data. OSM has a friendly editor which underlays satellite imagery—on which infrastructure like roads is clearly visible—beneath the image of the world as captured by OSM. Volunteers can then trace the path of a road, as shown in a GIF created by the D.C.-based start-up Mapbox.
Volunteers can also trace buildings in Mapbox using the same visual editor. Since Haiyan made landfall, volunteers have traced some 30,000 buildings.
Maps, on the ground
How does that mapping data help workers on the ground in the Philippines? First, it lets workers there print paper maps using OSM data which can be distributed to workers in the field. The American Red Cross has dispatched four of its staff members to the Philippines, and one of them—Helen Welch, an information management specialist—brought with her more than 50 paper maps depicting the city of Tacloban and other badly hit areas.
Those maps were printed out on Saturday, before volunteers made most of the changes to the affected area in OSM. When those newer data are printed out on the ground, they will include almost all of the traced buildings, and rescuers will have a better sense of where “ghost” buildings should be standing. They’ll also be on paper, so workers can write, draw, and stick pins to them.
Welch landed 12 hours ago, and Kunce said they “had already pushed three to four more maps to her.”
The Red Cross began to investigate using geospatial data after the massive earthquake in Haiti in 2010. Using pre-existing satellite data, volunteers mapped almost the entirety of Port-au-Prince in OSM, creating data which became the backbone for software that helped organize aid and manage search-and-rescue operations.
That massive volunteer effort convinced leaders at the American Red Cross to increase the staff focusing on their digital maps, or geographic information systems (GIS). They’ve seen a huge increase in both the quality and quantity of maps since then.
But that’s not all maps can do.
The National Geospatial-Intelligence Agency (NGA), operated by the U.S. Department of Defense, has already captured satellite imagery of the Philippines. That agency has decided where the very worst damage is, and has sent the coordinates of those areas to the Red Cross. But, as of 7 p.m. Monday, the Red Cross doesn’t have that actual imagery of those sites yet.
The goal of the Red Cross geospatial team, said Kunce, was to help workers “make decisions based on evidence, not intuition.” The team “puts as much data in the hands of responders as possible.” What does that mean? Thanks to volunteers, the Red Cross knows where roads and buildings should be. But until it gets the second set of data, describing the land after the storm, it doesn’t know where roads and buildings actually are. Until it gets the new data, its volunteers can’t decide which of, say, three roads to use to send food and water to an isolated village.
Right now, they can’t make those decisions.
Kunce said the U.S. State Department was negotiating with the NGA for that imagery to be released to the Red Cross. But, as of publishing, it’s not there yet.
When open data advocates discuss data licenses, they rarely discuss them in terms of life-and-death. But, every hour that the Red Cross does not receive this imagery, better decisions cannot be made about where to send supplies or where to conduct rescues.
And after that imagery does arrive, OSM volunteers around the world can compare it to the pre-storm structures, marking each of the 30,000 buildings as unharmed, damaged, or destroyed. That phase, which hasn’t yet begun, will help rescuers prioritize their efforts.
OSM isn’t the only organization using online volunteers to help the Philippines: MicroMappers, run by a veteran of OSM efforts in Haiti, used volunteer-sorted tweets to determine areas which most required relief. Talking to me, Kunce said the digital “commodification of maps” generally had contributed to a flourishing in their quantity and quality across many different aid organizations.
“If you put a map in the hands of somebody, they’re going to ask for another map,” said Kunce. Let’s hope the government can put better maps in the hands of the Red Cross—and the workers on the ground—soon. | <urn:uuid:86ebf061-1189-47ec-be92-c1495f593359> | CC-MAIN-2017-04 | http://www.nextgov.com/cloud-computing/2013/11/how-online-mapmakers-are-helping-red-cross-save-lives-philippines/73637/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00510-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953388 | 1,505 | 3.125 | 3 |
Most people recognize the pervasive impact of geospatial technology even if they are not familiar with the terms. Location sensors are prevalent in cars, phones, and even inhalers. On a daily basis, constellations of satellites provide updated images of the natural and manmade events that fill Earth’s surface.
Government investments in geospatial data have become a major boon to our economy and our lives. The benefits of GPS and Census data are widely known, but fewer people are aware of the impact of the National Broadband Map, for example, which is guiding nearly $1 billion in investment to expand access to high-speed Internet across the country.
As the industry matures and evolves, it’s critical for government to adapt its thinking about the growth of geospatial data and its value. Federal overseers often are concerned with cost and duplication. Those are necessary considerations, but I suggest they are a small component in a larger discussion about the future of the geospatial industry. Generating three substantially similar data sets may seem duplicative, but that redundancy is potentially inconsequential when compared to the business value each could represent for its stated purpose. Creating a business framework for agencies to manage investments, quantify outcomes and make data and tools accessible to users should be the primary focus. The principles of the marketplace should apply.
One initiative with the ability to lead change is the Federal Geospatial Platform, which aims to make geospatial data, applications, services and infrastructure more readily accessible to government users as well as the public. Here are four considerations to maximize its potential:
First, engage users to drive the data. Consider that 80,000 of the 85,000-plus records on data.gov are tagged as geospatial. But the current platform is difficult to search and mine. Much of the coordination takes place among government IT leadership, yet users from government, academia and the private sector are the ultimate consumers. Metadata standards and integration are important, but why not create an API to see what users do with it? While platform managers are establishing communities to engage stakeholders, they would do a better job of incorporating user needs by bringing customers into the existing governance structure.
Second, use the platform to establish new partnerships. The Platform should be a collaboration space that helps agencies learn from each other, implement open standards and promote interoperability. Similar to the framework established by private developers in GitHub, the Platform is embracing open communities to share code to build applications faster. However, A-16 geospatial theme owners should revisit the business models unique to their theme and look for ways to engage the private sector or crowd to collaboratively build data sets. The existing marketplace should facilitate the unique partnerships and transactions of each theme.
Third, mandate place-based performance metrics. One way to encourage and catalog investment is to create incentives to use the data to develop and report on shared, place-based outcomes. There are Federal examples of shared local outcomes, such as the HUD/EPA/DOT sustainable communities initiative, but more can be accomplished. A shared mapping framework would enable agencies to better visualize the impact of their national priorities and programs, such as grants and rule-making, at a local scale.
Last, support agencies in their efforts to be more strategic in promoting geospatial activities. The Government Accountability Office recently reiterated that agencies lack a strategy and roadmap for geospatial activities. A better approach would be for IT leaders to develop an enterprise strategy focused on the mission, not the software, supported by a robust internal governance structure that engages users who can identify opportunities for shared investment. The evolution of GIS as a web-based platform allows agencies to take advantage of the shared geospatial resources promoted by the platform to quickly deliver maps and apps without a major IT investment.
The New York Times recently wrote, “‘What’ came first, conquered by Google’s superior search algorithms. ‘Who’ was next, and Facebook was the victor. But ‘where,’ arguably the biggest prize of all, has yet to be completely won.” The platform represents great promise as a tool to promote collaboration, innovation and user engagement with public data. That promise needs to become reality.
Matt Gentile is a principal at Deloitte Financial Advisory Services LLP. The views expressed here are his own. | <urn:uuid:5781d49f-3aae-4bba-a435-9b3aada2671a> | CC-MAIN-2017-04 | http://www.nextgov.com/big-data/2014/03/op-ed-unlocking-economic-potential-geospatial-data/79894/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00262-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943619 | 874 | 2.59375 | 3 |
Nmap is a tool that can perform various activities in a penetration test. The Nmap Scripting Engine (NSE) and the scripts that have been written for it so far can transform Nmap into a multi-purpose tool. For example, we can use Nmap during the information gathering stage of a penetration test just by using the appropriate scripts. In this article we will examine those scripts and the information that we can extract.
One of our first steps can be to determine the origin of the IP address that our client has given to us. Nmap includes in its database a couple of scripts for this purpose. If we want to run all of these scripts at once, we can do so with a single command, as shown below:
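The original post showed the exact command and its output as a screenshot. A typical invocation looks roughly like the following sketch, where the target address is only a placeholder and the exact set of ip-geolocation-* scripts available depends on the Nmap release:

# run every NSE script whose name begins with ip-geolocation-
nmap --script "ip-geolocation-*" 1.2.3.4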
As the output showed, the script called an external website (Geobytes) in order to determine the coordinates and location of our target.
The whois command can be run directly from the console in Linux environments. However, there is a specific Nmap script that performs the same job and can be used instead. This script will return information about the registrar and contact names.
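For example (a rough sketch; the NSE script has been renamed across Nmap releases, so the wildcard below matches whois, whois-ip and whois-domain depending on the version installed):

# classic console lookup
whois example.com
# the same job through the Nmap Scripting Engine
nmap --script "whois*" example.com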
Email accounts can also prove important in a penetration test, as they can be used as usernames, in social engineering engagements (i.e., phishing attacks) or in situations where we have to conduct brute force attacks against the mail server of the company. There are two scripts available for this job:
The http-google-email script uses Google Web and Google Groups searches in order to discover email addresses for the target domain, while http-email-harvest spiders the web server and extracts any email addresses that it discovers. http-email-harvest is in the official Nmap repository, and the http-google-email script can be downloaded from here.
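Typical invocations look like the following sketch; the hostname is a placeholder, and the externally downloaded script has to be referenced by whatever path it was saved to:

# spider the target web server and extract any email addresses it exposes
nmap -p 80 --script http-email-harvest www.example.com
# external script: search Google Web and Google Groups for addresses on the target domain
nmap -p 80 --script /path/to/http-google-email.nse www.example.com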
Brute Force DNS Records
DNS records contain a lot of information about a particular domain which cannot be ignored. Of course, there are specific tools for brute forcing DNS records which can produce better results, but the dns-brute script can also perform this job in case we want to extract DNS information during our Nmap scans.
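A minimal example is shown below (the domain is a placeholder; the script also accepts arguments such as a custom hostname list, although the exact argument names depend on the Nmap version):

# try common subdomain names against the target domain
nmap --script dns-brute example.com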
Discovering Additional Hostnames
We can discover additional hostnames that resolve to the same IP address with the Nmap script http-reverse-ip. This script can help us find other web applications that exist on the same web server. It is an external script that can be downloaded from here.
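Because it is not bundled with Nmap, the script is referenced by its file path; the path and target below are placeholders:

# discover other hostnames served from the same IP address
nmap -p 80 --script /path/to/http-reverse-ip.nse www.example.com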
In this article we examined some Nmap scripts (internal and external) that can be used during the information gathering stage of a penetration test, before we start the actual scanning. The information that we have obtained shows that Nmap can perform almost any task with its scripts. If it cannot do something that you want, then it is time to write your own Lua scripts and contribute to the community.
Richard Murphy, a computer architect at Sandia National Laboratory, recently weighed in on progress toward the goals set forth by the Ubiquitous High Performance Computing program (UHPC). For those who are not familiar, this initiative, which was set forth by the Defense Advanced Research Projects Agency (DARPA) aims to bring petascale and exascale computing innovations into military use via a program of focused research efforts on everything from power and efficiency to performance to applications.
The program, which got its start last year, posed a challenge to scientists to build a petaflop system that consumes no more than 57 kilowatts of electricity, in part so that the military could bring computing power out of large datacenters and into the field for immediate, on-the-spot use. Aside from this more practical military use of high-end HPC systems on the fly, massive benefits in computing efficiency, for cost savings and reduced environmental impact, would be realized as well.
To bring the kilowatt usage down to the challenge level of 57 kilowatts is no simple task; it will require a dramatic, almost unthinkable reduction in electricity use—all the while retaining the key performance required for military high performance computing applications.
Teams working on such initiatives are vying for the chance to win an award to build a supercomputer for DARPA. Those who come close to the power goals will need to dramatically rethink how computers are designed, particularly in terms of how memory and processors move data. As Discover Magazine pointed out, “The energy required for this exchange is manageable when the task is small—a processor needs to fetch less data from memory. Supercomputers, however, power through much larger volumes of data—for example, while modeling a merger of two black holes—and their energy can become overwhelming.”
According to Richard Murphy, “it’s all about data movement.” Those in the race to meet DARPA’s challenge are seeking ways to make data movement more efficient via distributed architectures, which clip the distance data travels by adding memory chips to processors. “We move the work to the data rather than move the data to where the computing happens,” Murphy says.
As Eric Smalley wrote today following a discussion with Richard Murphy:
“Sandia National Laboratory’s effort, dubbed X-caliber, will attempt to further limit data shuffling with something called smart memory, a form of data storage with rudimentary processing capabilities. Performing simple calculations without moving data out of memory consumes an order of magnitude less energy than today’s supercomputers.” | <urn:uuid:a467e5b3-b5cd-48c5-bd28-8720a988aa4a> | CC-MAIN-2017-04 | https://www.hpcwire.com/2011/08/30/uhpc_developments_move_darpa_closer_to_goals/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00106-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92739 | 535 | 2.90625 | 3 |
DARPA challenge: Program satellites to salvage space tech
The Defense Advanced Research Projects Agency has put out a challenge to the programming community to help address a vexing space challenge: how to harvest valuable components from retired or non-working satellites still spinning and tumbling around the planet.
The lack of gravity makes it difficult to manage the precision maneuvering necessary to salvage the precious flotsam and jetsam. So DARPA’s InSPIRE program (short for International Space Station SPHERES Integrated Research Experiments) is sponsoring what it calls the Zero Robotics Autonomous Space Capture Challenge to develop algorithms for guiding the maneuvers.
The challenge, which kicks off March 28, asks programmers from around the world to develop a “fuel-optimal control algorithm” to enable a satellite to capture a space object that’s in free float.
During four weeklong rounds, the algorithms will be programmed into bowling-ball sized satellites called SPHERES (short for Synchronized Position, Hold, Engage, and Reorient Experimental Satellites) aboard the International Space Station. The algorithm will need to direct the SPHERES satellite to approach the moving object and maneuver itself to contact with the object via Velcro on the SPHERES satellites.
The winners of each round will be invited to the Massachusetts Institute of Technology to view the finals via video link from the space station, where the four algorithms will be programmed into SPHERES and tested.
“If a programming team can solve this challenge of autonomous space object capture, it could...benefit...any space servicing system in the future,” said Dave Barnhart, DARPA's program manager.
The agency said it wants to accelerate solid-state lighting technology from the lab to the marketplace because it has the potential to more than double the efficiency of lighting systems, significantly reduce its carbon footprint and transform the environment.
SSL lighting is an advanced technology that creates light with considerably less heat than incandescent and fluorescent lamps, allowing for increased energy efficiency. Unlike incandescent and fluorescent bulbs, SSL uses a semi-conducting material to convert electricity directly into light, which maximizes the light’s energy efficiency, the DOE said in a release. Solid-state lighting encompasses a variety of light-producing semi-conductor devices, including light-emitting diodes (LEDs) and organic light-emitting diodes (OLEDs).
Once used only for indicator lights to illuminate the numbers on digital clocks and light up watches, LEDs are now found in a variety of applications including brake lights, flashlights, traffic signals, and more recently, streetlights. OLED technology is more commonly used commercially, such as in small screens for mobile phones, portable digital music players, digital cameras, and now televisions.
The companies receiving money and their projects are listed below (from DOE):
Add-Vision Inc. (Scotts Valley, CA): Low-Cost, High Efficiency Polymer OLEDs Based on Stable p-i-n Device Architecture. This project seeks to develop a polymer OLED (P-OLED) lamp technology using advanced material synthesis and a modified device architecture to enable large-scale manufacturing of robust P-OLED lamps.
Team Members: University of California, Los Angeles; University of Southern California, Santa Cruz
Project Value: $ 2,010,076
Estimated DOE contribution: up to $ 1,567,858
Crystal IS, Inc. (Green Island, NY): Gallium nitride (GaN) -ready Aluminum Nitride Substrates for Cost-effective, Very Low Dislocation Density III-nitride LEDs. This project seeks to develop GaN-ready substrates with defect densities below 105/cm-2. This GaN ready substrate will then be tested by growing high efficiency blue LEDs.
Team Member: Philip Lumileds Lighting Company, LLC
Project Value: $ 1,286,680
Estimated DOE contribution: up to $ 1,029,343
Georgia Institute of Technology (Atlanta, GA): Fundamental Studies of Higher Efficiency III-N LEDs for High-Efficiency High-Power Solid-State Lighting. This project seeks to understand the impact of strain, defects, polarization, and Stokes loss in relation to unique device structures upon the internal quantum efficiency of LEDs and to employ this understanding in the design and growth of high-efficiency LEDs capable of highly-reliable, high-current, high-power operation.
Team Member: Luminus Devices
Project Value: $ 2,241,097
Estimated DOE contribution: up to $ 1,508,110
Lehigh University (Bethlehem, PA): Enhancements of Radiative Efficiency with Staggered Indium gallium nitride (InGaN) Quantum Well Light Emitting Diodes. This project seeks to solve the problem of low radiative efficiency in green LEDs, which is caused by a reduced wavefunction overlap from the existence of polarization field inside the quantum well.
Project Value: $ 598,899
Estimated DOE contribution: up to $ 479,119
PhosphorTech Corporation (Lithia Springs, GA): High Extraction Luminescent Materials for SSL. This project seeks to develop highly efficient phosphors for high brightness LEDs. The proposed phosphors have broad and size-tunable absorption bands, size and impurity tuned emission bands, size-driven elimination of scattering effects, and a distinct separation between absorption and emission bands.
Project Value: $ 1,629,614
Estimated DOE contribution: up to $ 1,254,702
DOE’s Pacific Northwest National Laboratory (Richland, WA): Charge Balance in Blue Electrophosphorescent Devices. This project seeks to develop new organic phosphine oxide electron transporting/hole blocking materials in combination with ambipolar phosphine oxide host materials for achieving charge balanced blue phosphorescent OLED system, a necessary component of white OLEDs.
Project Value: $ 1,783,000
DOE’s Sandia National Laboratories (Albuquerque, NM): Novel Defect Spectroscopy of InGaN Materials for Improved Green LEDs. This project seeks to develop a novel defect spectroscopy platform centered around deep level optical spectroscopy (DLOS) capable of interrogating deep levels throughout the InGaN band gap.
Project Value: $ 1,340,000
Arkema Inc. (King of Prussia, PA): Application of Developed Atmospheric pressure chemical vapor deposition (APCVD) Transparent Conducting Oxides and Undercoat Technologies for Economical OLED Lighting. This project seeks to develop a commercially viable process for an OLED substrate, which would consist of the actual substrate of soda lime glass, a barrier undercoat, and a transparent conducting oxide.
Team Member: Philips Lighting
Project Value: $ 2,626,632
Estimated DOE contribution: up to $ 2,101,305
Cree, Inc. (Goleta, CA): Efficient White SSL Component for General Illumination. This project seeks to develop a high-efficiency, low-cost LED component for solid-state illumination applications that is capable of replacing standard, halogen, fluorescent and metal halide lamps based on the SSL system efficiency and life time cost savings.
Project Value: $ 2,558,959
Estimated DOE contribution: up to $ 1,995,988
General Electric (Niskayuna, NY): Affordable High-Efficiency Solid-State Replacement Down-Light Luminaries with Novel Cooling. This project seeks to develop an illumination quality SSL luminaire based on LED cooling using synthetic jets combined with optimized system packaging and electronics.
Team Members: GE Lumination; University of Maryland
Project Value: $ 2,886,040
Estimated DOE contribution: up to $ 2,164,530
Osram Sylvania Development Inc. (Danvers, MA): High–Quality, Down Lighting Luminaire with 73% Overall System Efficiency. This project seeks to develop a highly efficient integrated down lighting luminaire that minimizes thermal, optical and electronic losses and will achieve a luminous steady state output of 1300lm with a high quality of light.
Project Value: $ 1,092,038
Estimated DOE contribution: up to $ 873,525
Philips Lumileds Lighting, LLC (San Jose, CA): 135 LPW 1050 Lm Warm White LED for illumination. This project seeks to develop pre-production prototypes of a warm white LED that has efficiency of 135LPW while at the same time generating 1050lm of warm white light in the Correlated Color Temperature range between 2800K and 3500K with a Color Rendering Index of greater than 90.
Project Value: $ 5,306,000
Estimated DOE contribution: up to $ 2,653,000
Universal Display Corporation (Ewing, NJ): Development of High Efficacy, Low-Cost Phosphorescent OLED Lighting Luminaire. This project seeks to develop high efficiency OLED lighting luminaires as part of an integrated ceiling illumination system.
Team Members: Armstrong World Industries; University of Michigan; University of Southern California
Project Value: $ 2,662,489
Estimated DOE contribution: up to $ 1,905,467
BARCELONA, Spain, June 12 — The Barcelona Supercomputing Center – National Supercomputing Center (BSC-CNS) and IBM have celebrated the first anniversary of the creation of the “Supercomputing Technology Center,” whose main aim is the execution of research projects related to hardware and software technologies in high-performance computing. The collaboration of the two organizations is valued at 6 million euros.
During its first year in operation, ten projects have been selected. Today, scientists and researchers from the BSC-CNS, IBM Spain and the IBM Research labs in New York and Switzerland are collaborating in these projects, driving the study of an extremely interesting field for both organizations: high-performance computing (HPC). As technological and research partners both organizations develop projects to keep on improving new technologies that are essential in the current context, such as smarter cities modeling on the basis of semantic ontology, processor architecture or new programming models and execution environments that take into account performance and energy consumption.
Given the evolution that microelectronic technology has experienced in recent years, future generations of supercomputing systems need research projects like this to help solve the current challenges of high-performance computing (HPC). Through this collaboration, the scientists from both organizations expect to make progress in the design of new system architectures — from the processor to the interconnection network — according to performance, energy and cost efficiency criteria, and also to improve scalability to millions of processors and the programmability of future heterogeneous architectures.
Nine Years of Joint Research
This agreement is another landmark in the close relationship between IBM and the BSC. The first collaboration agreement was signed in 2005, which focused on the supercomputer MareNostrum. Thanks to the joint work carried out at the time, MareNostrum ranked first in the Top500 ranking of European supercomputers several times, and it even achieved the fourth position worldwide. Some relevant choices regarding design –such as the utilization of components, processors and interconnection networks of commercial use and open source software– that were adopted later for most supercomputers, were implemented for the first time in MareNostrum. Two years later, the BSC and IBM renewed and extended their collaboration commitment to cooperate in the project MareIncognito. That initiative was another landmark in the recent history of supercomputing because of its multidisciplinary character and also because it did not focus exclusively on matters such as power or speed. The project MareIncognito combined processor design, programming models, efficiency maximization and efficient load balancing mechanisms.
Source: Barcelona Supercomputing Center | <urn:uuid:1982cf89-a757-45ac-8172-a547650add35> | CC-MAIN-2017-04 | https://www.hpcwire.com/off-the-wire/bsc-ibm-recognize-anniversary-supercomputing-technology-center/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00098-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940607 | 529 | 2.609375 | 3 |
The Office of Naval Research today said it had successfully demonstrated a system that lets small-unmanned aircraft swarm and act together over a particular target.
The system, called Low-Cost UAV Swarming Technology (LOCUST) features a tube-based launcher that can send multiple drones into the air in rapid succession. The systems then use information sharing between the drones, allowing autonomous collaborative behavior in either defensive or offensive missions, the Navy said.
Since the launcher and the unmanned systems have a small footprint, the technology enables swarms of compact UAVs to take off from ships, tactical vehicles, aircraft or other unmanned platforms, the Navy said.
The ONR demonstrations, which took place over the last month in multiple locations, included the launch of Coyote UAVs capable of carrying varying payloads for different missions. Another technology demonstration of nine UAVs accomplished completely autonomous UAV synchronization and formation flight.
The BAE-developed Coyote drone is a 14 lb, three-foot long aircraft that has a cruising airspeed of 60 knots and can operate at altitudes up to 20,000 ft.
When launched from its sonobuoy container, a parachute deploys to slow and stabilize the Coyote before its x-wings unfold and its electric motor starts turning the pusher-style articulated propeller. Its flight is controlled via a line-of-sight radio link (VHF or UHF), as far as 20 miles from a human operator in an aircraft or on the ground. Once flying, Coyote follows an autonomous, pre-programmed path with real-time updates, BAE says.
“The recent demonstrations are an important step on the way to the 2016 ship-based demonstration of 30 rapidly launched autonomous, swarming UAVs,” said ONR program manager Lee Mastroianni.
Navy officials say unmanned aircraft reduce hazards and free personnel to perform more complex tasks, as well as requiring fewer people to do multiple missions. The small aircraft lower costs as well -- even hundreds of small autonomous UAVs cost less than a single tactical aircraft, they say.
LOCUST is just one example of systems the ONR has developed to swarm unmanned systems. Last year ONR showed off a system that lets it swarm a number of unmanned boat drones in unison that could be used for a number of intelligence gathering or military applications. Called CARACaS (Control Architecture for Robotic Agent Command and Sensing) the system can be put into a transportable kit and installed on almost any boat. It allows boats to operate autonomously, without a sailor physically needing to be at the controls—including operating in sync with other unmanned vessels; choosing their own routes; swarming to interdict enemy vessels; and escorting/protecting naval assets.
The Air Force too is pondering what it would take to develop a small, low-cost unmanned aircraft that it could fly in swarms to handle a number of applications such as protecting a given area or quickly gathering intelligence. From the Air Force in a Request For Information issued in 2014: “The thought is to develop an inexpensive, configurable and producible on demand air vehicle. A number of military applications can be envisioned for an air vehicle with such a capability. One potential application is to use hundreds or thousands of such units in a campaign to overwhelm an enemy’s air defenses and ‘punch a hole’ to enable higher value, less replaceable [aircraft] to engage or monitor enemy systems.”
Researchers at the Defense Advanced Research Projects Agency (DARPA) also are looking to develop swarms of drones. In this case the agency recently put out a Request For Information to explore the feasibility and value of launching and recovering volleys of small unmanned aircraft from one or more existing large airplanes – think B-52, B-1, C-130.
Hospitals are reporting a new threat of infection -- from computer malware.
Computer viruses are worming their way into everything from fetal monitors to radiology departments’ picture archiving and communication systems, which store and share images from X-rays and other diagnostic equipment, reports Technology Review, a publication of the Massachusetts Institute of Technology.
Kevin Fu, a computer scientist at the University of Michigan and the University of Massachusetts, Amherst, raised the red flag on Oct. 18, during a panel meeting of the National Institute of Standards and Technology’s Information Security and Privacy Advisory Board, according to the report. Malware attacks threaten thousands of network-connected devices, Fu reportedly said at the meeting. An increase in the number of such attacks poses challenges for hospital IT departments.
Health IT experts have had trouble countering the attacks because manufacturers of devices frequently ban modifications to their equipment, including virus protection, Technology Review reported. Interconnected medical equipment often runs on Microsoft Windows operating systems that are vulnerable to viruses.
Manufacturers fear that modifications, including installation of updated versions of Windows that fix many security vulnerabilities, will jeopardize devices’ Food and Drug Administration approval status, according to the report.
One hospital IT executive told the publication that trying to protect all of a hospital’s software-controlled equipment would require the installation of more than 200 firewalls. Mark Olson, chief information security officer at Beth Israel Deaconess Medical Center, in Boston, said his hospital has 664 pieces of medical equipment on which manufacturers will not allow software modifications or updates, according to the report. | <urn:uuid:3fd6bd5e-4c70-4a03-ab68-3de44a58c0be> | CC-MAIN-2017-04 | http://www.nextgov.com/health/health-it/2012/10/malware-threatens-medical-machines-and-systems/58929/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00428-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942975 | 323 | 2.53125 | 3 |
With mechanical hard drives, the bottleneck created by the storage networking infrastructure was not apparent because of the latency of the drives themselves. Memory-based storage has no such latency, and therefore the bottleneck is exposed. This in large part explains the success of PCIe-based solid state storage devices. PCIe solid state devices should have faced an uphill battle, as they went against the conventional wisdom of shared storage.
Instead, these components have seen wide adoption because of their cost effectiveness, simplicity to install, and raw performance. As we discussed in our article “What is Storage Class Memory,” vendors have been successful at positioning PCIe-based solid state storage devices as a second tier of memory instead of a faster tier of storage. This is because of their near zero-latency performance, since they are separated from the CPU only by the PCIe channel.
There is also a storage opportunity with PCIe-based solid state storage. The problem with PCIe-based solid state as storage is that it does create a separate tier of storage, one that is not only different than the mechanical hard drive but one that also is in a different physical location than the shared storage system that typically houses the mechanical hard drive. Automated tiering and caching systems will be the answer to these problems as they become location aware.
Today, we already have separate caching solutions being deployed in servers, leveraging PCIe solid state in parallel with solid state in shared storage. This allows for extremely active data to be cached on solid state storage inside the server and off of the network. With these configurations, active "read" data is stored inside the server--which means less data needs to transfer back and forth across the storage network. Implementing this type of technology could be an alternative to upgrading to the next faster network.
The challenge with these systems is that there is no orchestration of any kind, since the caching or automated tiering software on each side is unaware of the other. If the server-based solid state storage is used as a read cache, then data safety should be high and performance should certainly improve. But it will not be optimal. In the future, there needs to be some coordination between the locations of the two high-speed storage devices so that maximum performance can be achieved. That is something we will explore in greater detail in our next entry.
Open source's roots in the 19th century
Louis Daguerre set a precedent for others to follow – or not
- By Greg Crowe
- Aug 20, 2010
Who pioneered the concept of open source? Red Hat? Sun?
Wired states that the first instance of open-sourcing occurred in 1839, which is much earlier than most people might think. This progenitive incident also has some basic principles in common with more recent, somewhat dubious events in the computer field. I will leave those particulars out, but it shouldn’t be hard to figure out which companies I’m talking about in the events to follow.
Try to guess which modern examples follow the example set by Louis Daguerre if you want.
Before Daguerre came along, a permanent photo would take about eight hours to make. At the time, photographers could only make a negative image on a pewter plate. Daguerre worked out a chemical process that reduced this time to mere minutes, and etched out a positive image. Without that process, a significant step in the history of photography might never have happened. So what tips can we glean from Daguerre's example?
1) Partner with someone, but then take primary credit, preferably after that partner has died. Daguerre began corresponding with Joseph Niepce, an inventor who held the aforementioned eight-hour developing record, in 1829. They worked together in an attempt to hasten developing time. Niepce died in 1833, and though his son took over, Daguerre is the one the process is named after.
2) Name it after yourself whenever possible. He called it the “Daguerreotype,” and as far as I can tell, he coined it himself. I guess we should count ourselves lucky his name wasn’t Louis Sterey or Louis Heeznoughtmy.
3) Having a rival with an arguably better product makes for a fun patent battle.
While Daguerre was working in France, William Talbot had been working on a similar process (the “calotype”) in England, and was able to show pictures he had made as early as 1834. They both had patents in England, which they both stridently enforced. The calotype eventually won out as the precursor to later photographic methods, and the Daguerreotype became the Betamax, or Neanderthal (take your pick).
4) Get a government grant whenever possible. The reason that Daguerre and “Son of Niepce” made their process free to the world was that the French parliament gave them a quite healthy pension, so that, unencumbered by financial concerns, they – and France – could generously give this new process to the world. If this hadn’t been set up for them, the first occurrence of open-source might have been decades later.
So, you see? Those modern battles in the computer industry are not really a new thing at all. And open-source is hardly a babe in the woods, either. I guess people have been stabbing each other in the back for a long time before the computer era.
Greg Crowe is a former GCN staff writer who covered mobile technology. | <urn:uuid:95f36fbe-115a-477e-84ac-dc984fe7eed8> | CC-MAIN-2017-04 | https://gcn.com/articles/2010/08/20/open-source-has-deep-roots.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00364-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.973194 | 719 | 3.015625 | 3 |
I love seeing crazy comparisons and statistics, especially if it has anything to do with science.
Deoxyribonucleic acid (DNA)
- If laid out end-to-end, all DNA within your body would go from the Earth to Pluto and back to the Earth (too bad Pluto is not a planet anymore.)
- Humans share 99% of their DNA with everyone else (makes the 7 degrees of separation game kind of pointless.)
- If you could type 60 words per minute, eight hours a day, it would take approximately 50 years to type the human genome (Well, I can type 61 words per minute).
- Humans have roughly 40-50% of the same DNA as cabbage (I don’t like cabbage, which means I don’t like 1/2 of me).
- Back in 2001, it cost roughly $100,000,000 to map the genome, while in 2014 the cost dropped to around $6,000
By understanding the DNA, we can better understand how something works and how it interacts with other things. However, trying to map the DNA is not something you would ever think about doing manually, it would take a lifetime and be prone to many mistakes (plus it would be a pretty boring life). By understanding how something works and how it interacts, we can take preventative actions.
Apps (Hello, World)
- There are roughly 200,000 lines of code in a pacemaker
- The Space Shuttle contained 400,000 lines of code
- The Hubble Space Telescope has 2,000,000 lines of code
- While Microsoft Office 2001 had 25,000,000 lines of code, Office 2013 has grown up to 45,000,000 lines
- Guess what has the same number of lines of code as Windows Vista? How about the Large Hadron Collider. Each with roughly 50,000,000 lines
- And to top things off, it is reported that the United States healthcare.gov website has 500,000,000 lines.
As we all know, upgrading from Windows XP to 7 to 10, from Windows 2003 to 2008 to 2012, or from Office 2010 to 2013 to 2016 is not something we do overnight. Just look at the amount of code that is involved in these things. We often spend months and years debating whether we should upgrade and then how we should upgrade. We do this because it is not easy: we have a nagging fear that our applications might not work, and for good reason, considering Windows 7 had 40 million lines of code. We’ve been bitten too many times by the compatibility bug, so we are willing to forego the added value of the latest releases because we don’t want to experience that nasty bite again.
This is why understanding your application’s DNA is so important. This is why there were a couple of sessions at Citrix Synergy 2015 that focused on AppDNA:
- SYN232: Get the most out of AppDNA for app migrations and updates
- SYN320: Never let me down again: the future of XenApp and XenDesktop upgrades
To get you started, we’ve put together the following video demonstrating what you can do with AppDNA
Therefore, my question for you is “Why haven’t you looked at AppDNA to help you with Windows upgrades, XenApp upgrades, application upgrades?”
Genome mapping: http://en.wikipedia.org/wiki/Whole_genome_sequencing
From the virtual mind of Virtual Feller | <urn:uuid:c1235ce9-2d68-4c97-97eb-6f71498b659b> | CC-MAIN-2017-04 | https://www.citrix.com/blogs/2015/06/12/dna-can-tell-you-a-lot-about-your-application/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00272-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94615 | 728 | 2.953125 | 3 |
In the age of big data and cloud computing, fast and accurate data transfer is more important than ever. University of Illinois scientists address the network bottleneck.
Scientists at the University of Illinois have figured out a way to transmit error-free data over fiber optic networks at 40 gigabits per second, a US record according to a press release.
With the computational arms race pushing top machines into petascale territory and beyond, processor speeds have advanced faster than transfer speeds. The lag in transfer speeds creates a bottleneck that stymies applications. Even the fastest supercomputer can’t make a file download any faster than the network allows.
The twin technologies of big data and cloud computing – which rely on moving data from point a to point b quickly and accurately – have put a spotlight on this compute-network disconnect.
“Information is not useful if you cannot transmit it,” said Milton Feng, the Nick Holonyak Jr. Chair in Electrical and Computer Engineering. “If you cannot transfer data, you just generate garbage. So the transfer technology is very important. High-speed data transfer will allow tele-computation, tele-medicine, tele-instruction. It all depends on how fast you can transfer the information.”
Feng and his colleagues demonstrated the tiny, fast device and published the results in the journal IEEE Photonics Technology Letters.
The researchers describe a new breed of laser devices called oxide VCSELs (vertical cavity surface emitting lasers), which transmit data over fiber optic cables using light signals. They are known for being faster and more energy efficient than traditional electrical cables.
“The oxide VCSEL is the standard right now for industry,” Feng said. “Today, all the optical interconnects use this technology. The world is in a competition on how to make it fast and efficient, and that’s what this technology is. At the U. of I., we were able to make this technology the fastest in the U.S.”
Compared to the so-called consumer Internet, which can reach speeds of about 100 megabits per second, oxide VCSEL technology at 40 gigabits per second is 400 times faster. And because of its small size, the device uses 100 times less energy than electrical wires.
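To make the comparison concrete, here is a rough, back-of-the-envelope sketch of those figures in Python; the 1-terabyte example file size is an assumption chosen purely for illustration.

```python
# Rough sanity check of the speed comparison above.
# The 1 TB example file size is an assumed figure for illustration.

consumer_internet_bps = 100e6   # ~100 megabits per second
vcsel_link_bps = 40e9           # 40 gigabits per second

print(f"Speedup: {vcsel_link_bps / consumer_internet_bps:.0f}x")  # 400x

file_size_bits = 1e12 * 8       # 1 terabyte expressed in bits
for name, rate in [("consumer Internet", consumer_internet_bps),
                   ("oxide VCSEL link", vcsel_link_bps)]:
    seconds = file_size_bits / rate
    print(f"1 TB over {name}: {seconds:,.0f} s ({seconds / 3600:.1f} h)")
```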
Currently these oxide VCSELs operate at room temperature, but the Illinois team is working to make them compatible with the higher temperatures that are characteristic of datacenters. Feng believes the laser device could be coaxed to perform at 60 gigabits per second before encountering certain inherent limitations. But when that happens, he’s counting on another cutting-edge device, the transistor laser, to carry the performance torch forward.
Update – Editor’s note:
After receiving a question about how this technology is better than the standard 40G and 100G Ethernet that is currently available, researcher and report co-author Fei Tan kindly provided a thorough explanation, as summarized in the following paragraph:
The direct-modulation-based, short-range (< 100 meter) 40G (100G) Ethernet employs 4 (10) lasers to achieve 40 (100) Gb/s data transmission. As compared with the standard 40G Ethernet, our VCSEL technology employs only a single laser device. Hence our VCSEL technology is more energy efficient, simpler in driver circuits, and more cost effective. In addition, our VCSEL technology not only provides 40 Gb/s error-free data transmission, but also provides an ultralow laser RIN (relative intensity noise), which is essential in achieving and maintaining error-free data transmission through the optical fiber link.
More than 36,000 Photos Used to Build NASA's 'Global Selfie'
May 27, 2014
On April 22, 2014, NASA asked people worldwide to answer the question, “Where are you on Earth right now?” with a selfie on social media -- and the world responded.
According to the agency, it had to work its way through more than 50,000 images submitted using the hashtag #globalselfie on Twitter, Instagram, Facebook, Google+ and Flickr. The goal? To use each picture as a pixel in the creation of a "Global Selfie" – a mosaic image showing Earth as it appeared from space on Earth Day 2014, according to NASA.
One month later, on May 22, NASA released the finished product, which was built using 36,422 individual photos from people on every continent – a total of 113 countries and regions.
"With the Global Selfie, NASA used crowd-sourced digital imagery to illustrate a different aspect of Earth than has been measured from satellites for decades: a mosaic of faces from around the globe," said Peg Luce, deputy director of the Earth Science Division in the Science Mission Directorate at NASA Headquarters, Washington, in a statement. "We were overwhelmed to see people participate from so many countries."
Zoom in below to see the individual photos that make up this mosaic: | <urn:uuid:29500fab-22fb-4cb9-a38c-47294f9dd34e> | CC-MAIN-2017-04 | http://www.govtech.com/photos/Photo-of-the-Week-More-than-36000-Photos-Used-to-Build-NASAs-Global-Selfie.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00145-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950688 | 279 | 3.15625 | 3 |
NASA employee mixes SVG format and, presto: Gov Webicons
Building websites to work on every computer at every resolution isn't an easy task anymore. It used to be that you could set your Web page to display optimally at 800 by 600 resolution, and 95 percent of the time it would look fine. Today, if your Web page was at that resolution, it would look like something out of the Flintstones.
Sean Herron, a technology strategist for NASA, blogged about running into this problem because he needed the NASA logo to display in the corner of a webpage, yet he didn't know what resolution to set it at. Too small and it would shrink to almost nothing on a screen with a high resolution. Too large and it would take up most of a page for someone viewing it at a lower resolution. Not only that, but the logo needed to appear on multiple pages of the website, so its unknown resolution was messing with the overall site design.
The solution Herron found was using Scalable Vector Graphics. SVGs are the emerging format for Web page design, because using just a few lines of code lets you create a logo or graphic that scales and displays to whatever size a user needs for their monitor. Most SVGs use open-source code with a PNG fallback so that an older browser that does not support SVG images will have something to display in its place.
SVG, an open standard developed by the World Wide Web Consortium, is XML-based, so images can be searched and indexed, and they can even be created using a text editor. W3C has a working group for anyone who needs to learn how to make the best use of the new technology.
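To show just how little is involved, here is a minimal Python sketch that writes a simple scalable icon by hand; the design is invented for illustration and is not one of Herron's actual Gov Webicons.

```python
# A tiny SVG is just a few lines of XML text -- no graphics software needed.
# The shape and colors below are made up for this example.

svg_icon = """<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <rect width="100" height="100" rx="12" fill="#1a3e6e"/>
  <circle cx="50" cy="50" r="28" fill="#ffffff"/>
</svg>
"""

with open("demo-icon.svg", "w", encoding="utf-8") as f:
    f.write(svg_icon)

# Because the coordinates are relative to the viewBox rather than to fixed
# pixels, a browser can render the same file crisply at 16x16 or at 1600x1600.
```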
Herron said he came across FC Webicons, “resolution independent” social icons from Fairhead Creative using SVG graphics. He adapted the open-source code to solve his logo problem for the NASA site he was creating.
But he didn't stop there. He created 41 government logos, and flags from every sovereign country on the planet, using the new format, calling them Gov Webicons, which can be downloaded and used freely by anyone who needs them. All of the government logos and flags created by Herron are open-source and hosted on GitHub, the open-source development site being used by government agencies like the Health and Human Services Department to quickly make websites whose data can be shared freely.
The actual SVGs are elegant in their simplicity, as they only take up two lines of code, not much more than simply inserting a normal graphic onto a page. There is a tutorial that explains how it all works for designers looking to implement them or who want to use any of the 41 government logos Herron created.
SVGs are certainly going to be part of the larger picture of how webpages will be designed in the future. It's great to see a government techie like Herron not only already using them, but also helping out other government designers who may be faced with the same problems.
Posted by John Breeden II on Jun 06, 2013 at 9:39 AM | <urn:uuid:36556e29-c2e1-4614-8059-536e946bc55e> | CC-MAIN-2017-04 | https://gcn.com/blogs/emerging-tech/2013/06/agency-logos-svg-gov-webicons.aspx?admgarea=TC_EmergingTech | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00079-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.962363 | 625 | 3.046875 | 3 |
It is hard to overestimate the importance of open data commons to science. That's why if I were the CEO of a company, I'd be building an open data commons for my business.
The simplest idea behind an open data commons is that there is value in making massive amounts of data available for any and every purpose anyone can come up with. So, make a giant pile of data and invite the world in to play. From that come the first two principles of an open data commons: Put in everything, and let anyone use it.
But, of course, it then gets more complicated. You can't really put in everything. And people won't be able to use any of it if you don't provide the metadata that lets them know what's what. You can't put in everything if only because your commons is going to be about something. It's going to be a commons of data about genomics, economics or Barbie doll models. So, you may want to exclude data about Formula One racing and panda insemination frequencies. Even so, if you provide usable metadata, you can include all the borderline cases, because the commons' users will be able to sort out what isn't relevant to their project.
Nevertheless, you'll want to do a few more things to make your data commons useful.
First, you'll want to encourage the use of additional metadata so that people know where the data is coming from and what its quality is. That's important because allowing the inclusion of raw data vastly lowers the hurdle for those who have data to contribute. If they have to verify each stat and line up all the decimal points, they'll never get around to releasing the data in the first place. Half-baked data that is available is infinitely more valuable than fully baked data that is not.
Then you'll want to consider the matter of a license for your data. For example, OpenDataCommons.org has two licenses ready-made for your data commons: one that puts the data into the public domain, and another that requires those who use it to attribute where they got the data from and to share the data under the same conditions of openness. ScienceCommons.org (a part of CreativeCommons.org) also will help you out, as well as give you reasons why requiring attribution and share-alike are generally bad ideas for databases.
You'll also want to decide how you want the data structured. Or maybe you'll simply want to require the contributors to describe how they've structured their contribution. The first way makes the data more easily searchable and reusable, but it also requires contributors to conform to your standards. Plus, data standards can inadvertently result in obscuring or eliminating data that might be useful for some unanticipated purpose.
More and more commons
In fact, you might want to make your data available as linked open data, which prescribes the general form of the data (RDF "triples" of the form "A is in relation B to C") without over-specifying what the universal standard semantics and terminology for your commons' topic should be. Linked data makes it easier to pool data into a commons.
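For readers unfamiliar with the triple format, here is a minimal sketch using Python's rdflib package; the namespace, identifiers and relations in it are invented placeholders, not part of any real commons.

```python
# Minimal linked-data sketch (pip install rdflib).
# The example namespace and relations are invented for illustration.

from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/commons/")

g = Graph()
# Each statement is a triple of the form "A is in relation B to C".
g.add((EX.observation42, EX.hasValue, Literal(3.14)))
g.add((EX.observation42, EX.measuredBy, EX.stationBoston))

print(g.serialize(format="turtle"))
```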
Having wended your way through these questions, you will have a commons of data, the main value of which is that its value cannot be predicted. Researchers and innovators will come with questions you would never ever have thought of, and they'll get answers that no one ever anticipated.
That's why we're seeing more and more data commons. Data.gov gathers tons of information from U.S. executive branch agencies. The Genome Commons has data that help interpret genetic information. The Proteome Commons has a huge amount of information about proteins. MetroBoston includes data about cities and towns in Massachusetts, with topics including public health, housing, arts and culture, and education. The list is getting longer and the data pool is getting wider and deeper, quite rapidly.
So, why should you do this in your company?
Because your company has lots and lots of data, and you can't know what value the data has until everyone has a crack at finding its value. So, create a company data commons, and put everything you possibly can into it. Sure, you're going to hold back on some personnel information, and information that is truly confidential. But all the rest should go into the commons. Keep it behind the firewall if you must, or require a company ID to log in, but let your folks know about it, and encourage them to fish through it. You might even want to provide them with some analytic and visualization tools. Provide a place for people to post their findings, and publicly reward the most innovative and pragmatic results.
Open up your data to your community. You never know what they will make of it. And that is exactly the point. | <urn:uuid:0e01b0b1-a784-4aae-9961-443d3b5c8e59> | CC-MAIN-2017-04 | http://www.kmworld.com/Articles/Column/David-Weinberger/Open-data-commons-for-business-76351.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00567-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954894 | 988 | 2.90625 | 3 |
Intelligence community seeks energy-efficient supercomputer
- By Frank Konkel
- Sep 17, 2013
The MIRA supercomputer at Argonne National Laboratory. (File photo)
The intelligence community wants to develop superconducting supercomputers that could potentially be much faster and use far less energy than today's traditional supercomputers.
Such a supercomputing alternative could emerge as a viable option as the United States races to become the first nation to break the exaflop barrier in supercomputing, or might one day replace current high-performance computers that face power and cooling demands that "are rapidly becoming unmanageable problems," according to the public solicitation published by the Office of the Director of National Intelligence.
The IC's research arm, the Intelligence Advanced Research Projects Activity (IARPA), seeks partners for its Cryogenic Computing Complexity program, which aims to "demonstrate a small-scale computer based on superconducting logic and cryogenic memory that is energy-efficient, scalable, and able to solve interesting problems."
The program is in its early phases, seeking new and improved approaches to cryogenic memory processes and logic, communications and systems, but such an effort wouldn't even be possible were it not for recent innovations.
"In the past, significant technical obstacles prevented serious exploration of superconducting computing, but recent innovations have created foundations for a major breakthrough," the solicitation states. "For example, the new single flux quantum logic circuits have no static power dissipation, and new energy-efficient cryogenic memory designs would allow operation of memory and logic in close proximity within the cold environment."
Superconducting supercomputers might sound like a super-cool technological proposition, but it's one driven by two things: performance and cost.
The world's fastest known supercomputers, the Department of Energy's 27-petaflop Oak Ridge National Laboratory-based Titan and China's 55-petaflop Milky Way 2, both consume vast amounts of energy, on the order of tens of megawatts. Existing technology scaled out to an exascale computing system – one capable of a quintillion, or 1,000,000,000,000,000,000 floating point operations per second (FLOPS) – would consume upwards of one gigawatt, half the maximum power output of the Hoover Dam.
Superconducting systems would offer a significant reduction in power demand, potentially as low as 200 kilowatts for a 100-petaflop system, according to IARPA. Assuming a completely scalable system, that equates to two megawatts for an exascale system using super-cooling to achieve zero- or near-zero electrical resistance.
Energy savings alone in those superconducting supercomputers could top tens or hundreds of millions of dollars versus traditional counterparts.
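The arithmetic behind those claims is easy to reproduce. The sketch below assumes an industrial electricity price of $0.07 per kilowatt-hour, an illustrative figure rather than anything cited in the solicitation.

```python
# Back-of-the-envelope power and cost comparison for an exascale system.
# The $0.07/kWh electricity price is an assumed figure for illustration.

HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.07  # assumed, USD

conventional_kw = 1_000_000     # ~1 gigawatt, per the estimate above
superconducting_kw = 200 * 10   # 200 kW per 100 petaflops, scaled 10x to an exaflop

def annual_cost_usd(kw):
    return kw * HOURS_PER_YEAR * PRICE_PER_KWH

print(f"Conventional:    ${annual_cost_usd(conventional_kw) / 1e6:,.0f}M per year")
print(f"Superconducting: ${annual_cost_usd(superconducting_kw) / 1e6:,.1f}M per year")
# Roughly $613M versus $1.2M per year -- consistent with savings in the
# tens to hundreds of millions of dollars.
```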
According to the DOE, developing an exascale computer is going to be an expensive endeavor. As reported by FierceGovernmentIT on Sept. 15, DOE's June report to Congress says that building an exascale supercomputer by 2022 would require $1 billion to $1.4 billion in funding.
Frank Konkel is a former staff writer for FCW. | <urn:uuid:be3d11f3-b55d-4bc7-844a-4287661683a3> | CC-MAIN-2017-04 | https://fcw.com/articles/2013/09/17/superconducting-supercomputer.aspx?admgarea=TC_ExecTech | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00227-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927306 | 650 | 2.765625 | 3 |
GCN LAB IMPRESSIONS
Solar-powered processor raises new possibilities
- By Greg Crowe
- Sep 23, 2011
At Intel's Developer Forum, Chief Technology Officer Justin Rattner demonstrated one of Intel's latest research items — a microprocessor that consumes insanely low levels of power.
The circuits on this processor, code-named Claremont, operate very close to their "threshold voltage," which is the minimum voltage at which a circuit can change states and pass a current. This allows the entire processor to run on less than 10 milliwatts at its minimum, which is a significant improvement.
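The power savings follow from the standard rule of thumb that dynamic CMOS power scales with the square of the supply voltage. The sketch below illustrates that relationship; the activity, capacitance, voltage and clock figures are assumptions made for the sake of the example, not Intel's published numbers.

```python
# Dynamic CMOS power scales roughly as P = a * C * V^2 * f.
# All numbers below are illustrative assumptions, not Intel's figures.

def dynamic_power_watts(activity, capacitance_farads, voltage_volts, frequency_hz):
    return activity * capacitance_farads * voltage_volts ** 2 * frequency_hz

nominal = dynamic_power_watts(0.1, 1e-9, 1.0, 1e9)     # ~1 V supply, 1 GHz clock
near_vt = dynamic_power_watts(0.1, 1e-9, 0.45, 10e6)   # near-threshold, slower clock

print(f"Nominal operation:        {nominal * 1e3:.1f} mW")
print(f"Near-threshold operation: {near_vt * 1e6:.1f} uW")
print(f"Reduction:                {nominal / near_vt:.0f}x")
```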
“So where does the solar power come in?” I hear you ask. “Your headline promised solar power!” OK, stop yelling, I’ll tell you. In order to demonstrate how little power this chip needs to run, the demonstration model was powered by a small photovoltaic cell that was about the size of the processor itself. That is pretty impressive.
Claremont might not ever be in any publicly available products, so don’t start standing in line for one just yet. However, the data they’ve accumulated from it in the lab will probably enable Intel to integrate aspects of this technology with a wide variety of platforms.
Among the possibilities Intel suggests are longer battery lives, energy-efficient multicore processors in everything from handhelds to servers to supercomputers, and generally greener computing.
Who knows, maybe one day they can use offshoots of this technology to make digital devices that are entirely powered by solar energy. Wouldn't that be cool? If you ever needed more power, you could simply shine more light at your computer.
At the very least, the folks at Intel have figured out how to keep their demo going even if the building’s power goes out.
Greg Crowe is a former GCN staff writer who covered mobile technology. | <urn:uuid:742e5ab6-a393-4338-b8a8-34d69de51755> | CC-MAIN-2017-04 | https://gcn.com/articles/2011/09/23/intel-solar-powered-processor.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00135-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944307 | 407 | 3.0625 | 3 |
Researchers at Stanford University have demonstrated the first functional computer built using only carbon nanotube transistors, according to an article published Wednesday on the cover of the scientific journal Nature.
Scientists have been experimenting with transistors based on carbon nanotubes, or CNTs, as successors to silicon transistors. Silicon is expected to hit its physical limits in delivering the ever smaller transistors required for higher performance in smaller, cheaper and less power-hungry computing devices. Digital circuits based on the long chains of carbon atoms are expected to be more energy-efficient than silicon transistors.
The rudimentary CNT computer, developed by the researchers at Stanford, is said to run a simple operating system that is capable of multitasking, according to a synopsis of the article.
Made of 178 transistors, each containing between 10 and 200 carbon nanotubes, the computer can do four tasks summarized as instruction fetch, data fetch, arithmetic operation and write-back, and run two different programs concurrently.
As a demonstration, the researchers ran counting and integer sorting simultaneously, according to the synopsis, and also implemented 20 different instructions from the MIPS instruction set "to demonstrate the generality of our CNT computer," wrote Max Shulaker and other doctoral students in electrical engineering in the article. The research was led by Stanford professors Subhasish Mitra and H.S. Philip Wong.
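To give a feel for what running programs on such a small instruction set means, here is a purely illustrative Python sketch of a register machine that walks through the same four phases named above; it is not the Stanford team's actual design or code.

```python
# Illustrative only: a toy register-machine loop with instruction fetch,
# data fetch, arithmetic and write-back. Not the Stanford CNT design.

def run(program, registers, max_steps=1000):
    pc = 0
    for _ in range(max_steps):
        if pc >= len(program):
            break
        op, *args = program[pc]                    # instruction fetch
        if op == "addi":                           # rd = rs + immediate
            rd, rs, imm = args
            registers[rd] = registers[rs] + imm    # data fetch, arithmetic, write-back
        elif op == "bne":                          # branch if registers differ
            rs, rt, target = args
            if registers[rs] != registers[rt]:
                pc = target
                continue
        pc += 1
    return registers

# Counting with only ADDI and BNE: increment r1 until it reaches r2.
counting = [
    ("addi", "r2", "r0", 5),
    ("addi", "r1", "r1", 1),
    ("bne",  "r1", "r2", 1),
]
print(run(counting, {"r0": 0, "r1": 0, "r2": 0}))  # r1 ends at 5
```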
"People have been talking about a new era of carbon nanotube electronics moving beyond silicon," said Mitra, an electrical engineer and computer scientist in a press release issued by Stanford University. "But there have been few demonstrations of complete digital systems using this exciting technology. Here is the proof."
Carbon nanotubes still have imperfections. They do not, for example, always grow in parallel lines, which has led researchers to devise techniques to grow 99.5 percent of CNTs in straight lines, according to the press release. But at billions of nanotubes on a chip, even a small misalignment of the tubes can cause errors. A fraction of the CNTs also behave like metallic wires that always conduct electricity, instead of acting like semiconductors that can be switched off.
The researchers describe a two-pronged approach they call an "imperfection-immune design." To eliminate the metallic nanotubes, they switched off the good CNTs and passed electricity through the circuits to burn up the metallic ones; they also developed an algorithm to work around the misaligned nanotubes in a circuit.
The basic computer was limited to 178 transistors, which was the result of the researchers using the university's chip-making facilities rather than an industrial fabrication process, according to the press release.
Other researchers are also working on CNTs as they worry about silicon hitting its physical limits. IBM said last October its scientists had developed a way to place over 10,000 transistors made from the nano-sized tubes of carbon on a single chip, up from a few hundred carbon nanotube devices at a time previously possible. This density was, however, far below the density of commercial silicon-based chips, but the company said the breakthrough opened up the path for commercial fabrication of "dramatically smaller, faster and more powerful computer chips." | <urn:uuid:f47bf1e8-36fc-47d7-91b4-e59a1a9f0426> | CC-MAIN-2017-04 | http://www.cio.com/article/2382238/hardware/stanford-researchers-develop-first-computer-using-only-carbon-nanotube-transistors.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00401-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948945 | 659 | 3.921875 | 4 |
When you cover the high performance computing community as I do, it can be easy to get lost inside our little corner of the world. This is especially true if you just talk to HPC vendors all day. Of course, the whole reason to use cutting-edge computing in the first place is to solve big problems in the real world. And some of these represent the most interesting challenges of the day: climate change, genomics, the nature of the universe, and artificial intelligence, to name just a few.
I was reminded of the last topic because of a presentation this week by Henry Markram, director of the Blue Brain Project, based in Lausanne, Switzerland. The project's goal is to simulate mammalian-style brains in silico. The researchers just happen to use a 10,000-processor IBM Blue Gene supercomputer to get the job done. To date, they have been able to simulate about 50,000 neurons of a rat's neocortical column in something approaching real time.
At the TED Global conference in Oxford, England, on Wednesday, Markram predicted that within 10 years they'll be able to simulate a human brain (presumably with a much more powerful computer than the current Blue Gene). In principle, if the model is accurate, the artificial brain should respond like a real human, or at least like Dick Cheney. According to a BBC report, Markram quipped "And if we do succeed, we will send a hologram to TED to talk."
I bring this up not so much to spotlight the Blue Brain work, which is certainly fascinating in its own right, but to point to the rarity of HPC visibility in venues like the TED conference. TED (which stands for Technology, Entertainment, Design) is quite an interesting organization. It's a non-profit, run by curator Chris Anderson's Sapling Foundation, which he established in 1996. The organization is all about the confluence of technology, science, art, and culture. Its catch-phrase is "Ideas Worth Spreading."
Besides hosting conferences, TED maintains a Web site that hosts blogs and video presentations of movers and shakers in the government, arts and sciences. The content manages to be intellectual and fun at the same time. It’s certainly not a melting pot of content like you find on YouTube. Another Web site along the same lines, although somewhat newer, is Big Think. I peruse this one from time to time and always find something worth watching.
What's hard to find at TED or Big Think are the HPC visionaries. This may be because most of the top-end academicians in high performance computing are used to attending the same ACM and IEEE sponsored events every year. At the other end are the vendors, which go mostly to trade shows. HPC users, like Markram, may venture further afield, but are usually focused on talking about their applications rather than the wonders of supercomputing. That's understandable.
I don’t want to leave you with the incorrect impression that our community is totally invisible. Besides the Blue Brain presentation at this week’s conference, I also came across an interesting TED video of something called the AlloSphere. It’s a 3D immersive theater at the University of California, Santa Barbara that uses visualization and audio to present complex data. Rather than trying to describe it further, I’ve inserted the 7-minute video below. If you want more information about the project and the woman behind it, follow this link. | <urn:uuid:41629169-9c25-41c8-8153-c9119c31abbf> | CC-MAIN-2017-04 | https://www.hpcwire.com/2009/07/23/big_ideas_in_hpc/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00217-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932368 | 743 | 2.546875 | 3 |
A huge crater on Mars once may have contained water -- and therefore possibly life -- data from NASA's Mars Reconnaissance Orbiter indicates. Researchers analyzing spectrometer data from the orbiter say there's evidence that underground water once flowed into the interior of McLaughlin Crater, which measures 57 miles wide and 1.4 miles deep. From NASA:
Layered, flat rocks at the bottom of the crater contain carbonate and clay minerals that form in the presence of water. McLaughlin lacks large inflow channels, and small channels originating within the crater wall end near a level that could have marked the surface of a lake. Together, these new observations suggest the formation of the carbonates and clay in a groundwater-fed lake within the closed basin of the crater. Some researchers propose the crater interior catching the water and the underground zone contributing the water could have been wet environments and potential habitats.

"The observations in McLaughlin Crater provide the best evidence for carbonate forming within a lake environment instead of being washed into a crater from outside," Joseph Michalski, lead author of the paper, said in a statement. Michalski is affiliated with the Planetary Science Institute in Tucson, Ariz., and the Natural History Museum in London. He had five co-authors for the paper, which was published Sunday in the journal Nature Geoscience.

While the Curiosity Rover has been the focus of attention since landing on the Red Planet last summer, NASA says the Reconnaissance Orbiter, which has been circling Mars since March 2006, has "provided more high-resolution data about the Red Planet than all other Mars orbiters combined." This has allowed researchers back on Earth to learn more about the daily weather and surface conditions on Mars, which is critical in choosing the best landing sites.