Learning keyboard shortcuts is one of the best ways to boost your productivity and save time. You probably already know the basic computer shortcuts, such as CTRL+C to copy or CTRL+V to paste (my favorite is CTRL+Z: Undo!), but there are a handful of shortcuts for your browser too. Here are the keyboard shortcuts you should know to make searching and working online as efficient as possible. Most of them work on all the major browsers, except where noted below.
Find a word or phrase in the current page: CTRL+F. This is one of the most useful shortcuts ever, and it works in every browser and most applications too (e.g., you can search a Word document or PDF). Just hit CTRL and F together to bring up a search text box to find all instances of a word or phrase in the page.
Quickly move focus to the address bar or search input box: CTRL+L or CTRL+K. It’s annoying to have to move your mouse back up to the search box or address bar every time you want to go to another site or look something else up.
Chrome/Firefox/IE/Safari (Windows & Mac): Hit CTRL+L to quickly highlight whatever’s in the address bar so you can start typing to replace it; in Chrome and Safari this means you can also perform a new search right away.
To move to the search box in Firefox (Windows & Mac), hit CTRL+K. (On Windows, this also replaces whatever is in the address bar in Chrome with a “?” prompt so you can search for a new term, but I prefer the CTRL+L shortcut because it lets you either type in a new address or perform a search.)
Open a new tab: CTRL+T. Quickly add a new tab without having to mouse over to that “+” or new tab button.
Close a tab: CTRL+W. When you no longer need the tab you currently have open, you can close it without shutting down your browser (assuming you have at least one other tab open).
Reopen an accidentally closed tab: CTRL+Shift+T. Oops, you didn’t mean to close that tab. Undo it by adding the Shift key to the open a new tab combination.
Open a link in a new tab: CTRL+Shift+Left Click. This is the combination for when you want to follow a link but keep your current page open. It brings focus to the new tab (if you’d rather open the tab in the background, just don’t use the Shift key: hold down CTRL and left-click on the link).
Go to a specific tab: CTRL+[number of tab]. Counting your tabs from left to right, starting at 1, you can jump to a tab in that numbered position with CTRL+[number]. So, for example, if your webmail tab is the fourth one from the left, hit CTRL+4 to switch to it.
Go back or forward in history: Backspace or Shift+Backspace. You don’t have to click the back or forward buttons in your browser to revisit pages you’ve previously been on. Press the Backspace button to go back or Shift+Backspace to go forward. (Notice a pattern? Adding Shift to a keyboard shortcut tends to do the reverse action.)
Scroll down or up a page: Space or Shift+Space. Finally, this is my favorite browser shortcut, useful for quickly reading web pages or PDF docs. Hit the Space key to scroll down one page at a time. You can guess what Shift+Space will do.
These aren’t the only keyboard shortcuts for working in your browser, but they cover the majority of tasks for navigating without using your mouse. If you have a favorite or others you’d like to share, please add them in the comments.
Photo by Book Glutton
Read more of Melanie Pinola’s Tech IT Out blog and follow the latest IT news at ITworld. Follow Melanie on Twitter at @melaniepinola. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook. | <urn:uuid:708f2cdf-c750-4c11-afc2-061dc6d2234f> | CC-MAIN-2017-04 | http://www.itworld.com/article/2715983/consumerization/9-browser-shortcuts-everyone-should-know.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00352-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.85748 | 873 | 2.703125 | 3 |
In most industries, new technologies and products emerge every few years to replace the old ones and satisfy people's growing demands. The same has happened in the telecommunication industry. Over the past decades, successive transmission technologies have been developed to meet the growing demand for bandwidth, from Plesiochronous Digital Hierarchy (PDH) to Synchronous Digital Hierarchy (SDH) and Wavelength-Division Multiplexing (WDM).
PDH adapts well to point-to-point communication, but a PDH network lacks management capability. SDH, with advantages such as standard optical interfaces and powerful network management capabilities, replaced PDH. However, SDH cannot provide large capacity and high speed. WDM then emerged, offering large bandwidth, low transmission costs, and suitability for high-speed, large-capacity transmission. It seems a perfect solution, but a WDM network is inflexible and hard to manage effectively. To combine the advantages of SDH and WDM, the Optical Transport Network (OTN) came into being.
Optical Transport Network (OTN), also called a "digital wrapper," is a standard for optical transport developed by the ITU-T (International Telecommunication Union Telecommunication Standardization Sector). An OTN is composed of a set of optical network elements connected by fiber optic links. It is a transport network based on WDM technology that builds more network functionality into optical networks. OTN is capable of providing optical channel transport, multiplexing, routing, management, supervision, and survivability.
The optical layers of OTN can be divided into the Optical Channel (OCh), Optical Multiplex Section (OMS), and Optical Transmission Section (OTS), while the OCh can be further divided into three electrical sub-layers: Optical Payload Unit (OPU), Optical Data Unit (ODU), and Optical Transmit Unit (OTU).
(The original article included a diagram of the OTN hierarchical structure, illustrating the functions of the three optical layers: the Optical Channel, the Optical Multiplex Section, and the Optical Transmission Section.)
In 1998, the ITU-T formally proposed the OTN concept and took it as an ideal basis for future network evolution. In February 1999, G.872, the first OTN recommendation, was approved. As technology has advanced, the OTN standards have been steadily improved. Some of OTN's main advantages are as follows.
Cost effective—OTN bandwidth aligns well with Ethernet and SDH rates. However, OTN is asynchronous, so it does not carry the costs and complexity associated with the SDH timing hierarchy. OTN simplifies multiplexing/demultiplexing of sub-rate traffic and reduces signal overhead requirements.
Stronger forward error correction (FEC)—Although SDH has an FEC defined, it allows only a limited amount of FEC check information, which limits FEC performance. For OTN, a Reed-Solomon 16-byte-interleaved FEC scheme is defined, which uses 4×256 bytes of check information per OTU frame. In addition, enhanced (proprietary) FEC schemes are explicitly allowed and widely used.
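To put those numbers in concrete terms, the overhead of the standard FEC scheme can be worked out from the G.709 frame arithmetic (4 rows of 4080 bytes per OTU frame, each row carrying 16 interleaved RS(255,239) codewords). A quick sketch:

```python
# G.709 OTU frame arithmetic (standard values): 4 rows x 4080 bytes,
# each row carrying 16 interleaved RS(255,239) codewords.
codewords_per_row = 16
n, k = 255, 239                               # RS codeword: 255 bytes, 239 of them data

row_bytes = codewords_per_row * n             # 4080 bytes per row
parity_per_row = codewords_per_row * (n - k)  # the 256 check bytes noted above
fec_overhead = (n - k) / k                    # FEC bytes added per payload byte

print(row_bytes, parity_per_row, f"{fec_overhead:.2%}")   # 4080 256 6.69%
```

So the standard FEC adds roughly 6.7% to the line rate in exchange for its coding gain, which is why the OTU line rates sit slightly above the corresponding client rates.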
Full-service access and large-capacity transmission—OTN supports transparent transmission of SDH, Ethernet, IP, and ATM traffic (via GFP mapping). It provides a simple transition to 40G and 100G transmission speeds and even has Tbit-level transport capability.
Rich maintenance and management—OTN provides a wealth of overhead bytes and supports six levels of independent Tandem Connection Monitoring (TCM).
Networking and protection—OTN supports traditional WDM optical-layer protection as well as intelligent protection and restoration in mesh networks.
Flexible—OTN's flexibility shows in several areas: optical-layer cross-connection; multiplexing and grooming of sub-wavelength services (ODUk/GE); and ODUk concatenation and virtual concatenation.
OTN is being adopted to satisfy increasing demands for high-speed communication, and technological advancements like 40G and 100G have significantly driven the OTN hardware market. The research company MarketsandMarkets recently estimated the OTN market at $11.35 billion in 2014 and expects it to grow to $23.64 billion by 2019, a Compound Annual Growth Rate (CAGR) of 15.8%. Fiberstore, a vendor focusing on optical communication, provides a wide range of optical communication products, including 100G CFP transceivers for OTN, WDM products, optical transceivers, fiber optic cables, etc.
The Evolution of BGP NetFlow Analysis, Part 1
Enabling comprehensive, integrated analytics for network visibility
Clear, comprehensive, and timely information is the essential prerequisite for effective network operations. For Internet-related traffic, there’s no better source of that information than NetFlow and BGP. But while these protocols have been around for a couple of decades, their potential utility to network operators was initially unrealized, and the process of exposing more value has been a long, gradual evolution. The journey started with simply making the data available. It then progressed to the development of collection and analysis techniques that make the data useable, but with real limitations. The next leap forward has been to transcend the constraints of legacy approaches, both open source and appliance, using big data architecture. With a distributed, multi-tenant HA datastore in the cloud, Kentik has created a SaaS that enables network operators to extract far more practical value from NetFlow and BGP information. In this series we’ll look at how we got from the first iterations of NetFlow and BGP to the fully realized network visibility systems that can be built around these protocols today.
In the beginning…
Border Gateway Protocol (BGP) was first introduced in 1989 to address the need for an exterior routing protocol between autonomous systems (AS). By 1994 BGP4 had become the settled protocol for inter-AS routing. Then in 1996, as the Internet grew into a commercial reality and the need for greater insight into IP traffic patterns grew, Cisco introduced the first routers featuring NetFlow. Support for BGP was added in 1998 with NetFlow version 5, which is still in wide use today.
Support for BGP in NetFlow v5 enabled the export of source AS, destination AS, and BGP next hop information, all of which was of great interest to engineers dealing with Internet traffic. BGP next hop data provided the possibility for network engineers to know which BGP peer, and hence which neighbor AS, outbound traffic was flowing through. With that insight, network engineers could better plan their outbound traffic.
A key use case for next hop data arises when determining which neighbor ASes to peer with. If both paid-transit and settlement-free peering options are available, and those options all provide equivalent and acceptable traffic delivery, then you’ll want to maximize cost savings by ensuring that the free option is utilized whenever possible. Armed with BGP next hop insights, engineers can favor certain exit routers by tweaking IGP routing, either by changing IGP link metrics or (with a certain more-proprietary protocol) by employing weights.
Kick AS and take names
While NetFlow 5’s BGP support was helpful with the above, simple aggregation of the supported raw data left many use cases unaddressed. Knowing ASNs is a first step, but it’s not that helpful unless you can also get the corresponding AS_NAME so that a human can understand it and take follow-up action. In addition, engineers wanted more visibility into the full BGP paths of their traffic. For example, beyond the neighbor AS, what is the 2nd hop AS? And how about the source and destination ASes? NetFlow’s v5 BGP implementation didn’t offer that full path data, and while v9 introduced greater flexibility it still provided only a partial view.
In the early 2000’s, a first generation of vendors figured out how to address this gap by collecting BGP routing data directly and blending it with NetFlow. This was done by establishing passive BGP peering sessions and recording all of the relevant BGP attributes. A further enhancement came from integrating information from GeoIP databases to augment the NetFlow and BGP data by providing source and destination IP location. Now, with a GUI tool, network engineers could make practical use of NetFlow and BGP information.
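The blending step can be sketched as a longest-prefix match of each flow's destination address against routes learned from the passive BGP session. Everything in the sketch below is illustrative: the prefixes, AS numbers, and record layout are invented for the example.

```python
import ipaddress

# Hypothetical mini "RIB" learned from a passive BGP peering session:
# prefix -> (AS_PATH, BGP next hop). All values are invented.
rib = {
    ipaddress.ip_network("203.0.113.0/24"): ([64512, 65001, 65010], "198.51.100.1"),
    ipaddress.ip_network("198.51.100.0/22"): ([64512, 65020], "198.51.100.9"),
}

def lookup(dst_ip):
    """Longest-prefix match of a flow's destination IP against the RIB."""
    addr = ipaddress.ip_address(dst_ip)
    candidates = [net for net in rib if addr in net]
    if not candidates:
        return None
    best = max(candidates, key=lambda n: n.prefixlen)
    return rib[best]

# Enrich a raw flow record with the BGP attributes engineers wanted:
# neighbor AS, 2nd hop AS, origin (destination) AS, and next hop.
flow = {"src": "192.0.2.10", "dst": "203.0.113.5", "bytes": 48_000}
match = lookup(flow["dst"])
if match:
    as_path, next_hop = match
    flow.update(neighbor_as=as_path[0], second_hop_as=as_path[1],
                dst_as=as_path[-1], bgp_next_hop=next_hop)

print(flow["neighbor_as"], flow["second_hop_as"], flow["dst_as"])  # 64512 65001 65010
```

A GeoIP lookup on the source and destination addresses slots into the same enrichment step, which is essentially what the first-generation tools did at collection time.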
These enhancements helped engineers with a number of use cases. One was DDoS detection. Looking at a variety of IP header and BGP data attributes on inbound traffic, you could use pattern-matching to detect volumetric as well as more-nuanced denial of service attacks. Another use case was to find opportunities for transit cost savings, including settlement-free peering, by looking at traffic going through 2nd and 3rd hops in the AS_PATH. For companies delivering application traffic to end-users, the ability to view destination AS and Geography helps in understanding how best to reach the application’s user base.
Struggling to keep up
The integration of fuller BGP data with NetFlow and other flavors of flow records created a combined data set that was a huge step forward for anyone trying to understand their Internet traffic. But at the same time, the overall volume of underlying traffic was skyrocketing. Constrained by the technologies of the day, available collection and storage systems struggled to keep up, and network operators were prevented from taking full advantage of their richer data.
One key issue was that the software architecture of NetFlow-based visibility systems was based on scale-up assumptions. Whether the software was packaged commercially on appliances or sold as a downloadable software-only product, this meant that any given deployment had a cap on data processing and retention. With most of the software functionality written to optimize single-server function and performance, stringing together a number of servers only yielded a sum-of-the-parts in aggregate price performance.
Another issue was that the databases of choice for early NetFlow and BGP analysis implementations were either proprietary flat files, or, even worse, relational databases like MySQL. In this scenario, one process would strip the headers off of the NetFlow packets and stuff the data fields into one table. Another process would manage the BGP peering(s) and put those records into another table. A separate process would then take data from those tables and crank rows of processed data into still more tables, which were predefined for specific reporting and alerting tasks. Once those post-processed report tables were populated, the raw flow and BGP data was summarized, rolled up, or entirely aged out of the tables due to storage constraints.
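A toy version of that pipeline makes the limitation concrete. The schema below is hypothetical, but it mirrors the flow just described: raw rows land in one table, a job pre-aggregates them into a canned report table, then the raw rows are aged out.

```python
import sqlite3

# Toy model of the legacy rollup pipeline (schema is illustrative).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE flows (ts INT, src_as INT, dst_as INT, bytes INT)")
db.executemany("INSERT INTO flows VALUES (?,?,?,?)", [
    (1, 64512, 65010, 1000),
    (2, 64512, 65010, 2500),
    (3, 64512, 65020, 400),
])

# Post-processing job: pre-aggregate per destination AS for a canned report.
db.execute("""CREATE TABLE report_dst_as AS
              SELECT dst_as, SUM(bytes) AS total_bytes
              FROM flows GROUP BY dst_as""")

# Age out the raw data due to storage constraints.
db.execute("DELETE FROM flows")

rows = dict(db.execute("SELECT dst_as, total_bytes FROM report_dst_as"))
print(rows)   # {65010: 3500, 65020: 400}
# Any question the report schema did not anticipate (say, a per-source-AS
# breakdown) can no longer be answered once the raw rows are gone.
```

The canned report still answers the question it was built for, but nothing else, which is exactly the rigidity described above.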
While it was possible in some cases to run non-standard reports on the raw data, it was painfully slow. Waiting 24 hours to process a BGP traffic analysis report from raw data was not uncommon. In some cases, you could export that raw data, but given the single-processor nature of the software deployment, and considering all of the other processes running at the same time, it was so slow to do so that 99% of users never did. You might have to dedicate a server just to run those larger periodic reports.
Steep deployment costs
Yet another major issue was the cost of deployment. NetFlow and BGP are both fairly voluminous data sets. NetFlow, even when sampled, produces a lot of flow records because there are many short-lived flows. Whenever a BGP session is established or experiences a hard or soft reset, the software has to ingest hundreds of thousands of routes. Plus, you have the continuous flow of BGP UPDATE messages as route changes propagate across the Internet.
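A back-of-envelope calculation shows why the volume matters. The rates below are assumptions chosen for illustration; only the 48-byte NetFlow v5 record size comes from the protocol itself.

```python
# Back-of-envelope NetFlow sizing with assumed, illustrative numbers.
flows_per_sec = 50_000        # sampled flow records/sec across all routers (assumed)
record_bytes = 48             # a NetFlow v5 flow record is 48 bytes
seconds_per_day = 86_400

daily_records = flows_per_sec * seconds_per_day
daily_bytes = daily_records * record_bytes
print(f"{daily_records:,} records/day = {daily_bytes / 1e9:.0f} GB/day raw")
# 4,320,000,000 records/day = 207 GB/day raw

# On top of that, a full BGP table means ingesting hundreds of thousands
# of routes on every session reset, plus a continuous stream of UPDATEs.
full_table_routes = 600_000   # order-of-magnitude figure, assumed
```

At those rates a single-server collector spends most of its capacity just keeping up with ingest, before any analysis runs at all.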
Using single-server software, you may end up needing a bunch of servers to process all of that data. If you buy those servers pre-packaged with software from the vendor, you’ll pay a big mark-up. Consider a typical 1U rackmount server appliance from your average Taiwanese OEM. Raw, the cost of goods sold (COGS) may be anywhere from $1,000.00 to $2,000.00, but loaded with software, and after a basic manufacturing burn-in, you can expect to pay a steep $10K to $25K, even with discounts. And even if you’re buying a software-only product that isn’t pre-packaged onto an appliance, you still have the cost of space, cooling, power, and — most importantly — overhead for IT personnel. So owning and maintaining your own hardware in a scale-up software model is still expensive from a total cost of ownership (TCO) point of view.
Considering the limited, inflexible reporting you get for these high costs, most users of legacy NetFlow and BGP analysis tools have been left hungry for a better way. In part 2 of this post, we’ll look at an alternative approach based on big data SaaS, and consider how this new architecture can dramatically boost the value of BGP and NetFlow in network visibility, operations, and management. | <urn:uuid:d462d012-72f2-4905-906e-0d7202b59586> | CC-MAIN-2017-04 | https://www.kentik.com/the-evolution-of-bgp-netflow-analysis-part-1/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00398-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949057 | 1,765 | 2.53125 | 3 |
Google Gives Online Class on Making Websites Accessible to the Blind
Google is offering a free online course from Sept. 17 to 30 to help Web developers make their sites more accessible to visually impaired users.

Google will offer a free online course Sept. 17 to 30 to teach Web developers and designers how they can make their Websites more accessible and friendly for blind and visually impaired users. The course, "Introduction to Web Accessibility," will offer a host of practices and design elements that will allow sites to serve visually impaired users who wish to have better access to the online world, Eve Andersson, the manager of accessibility engineering at Google, wrote in a Sept. 9 post on the Google Developers Blog.

"You work hard to build clean, intuitive Websites," wrote Andersson. "Traffic is high and still climbing, and your Website provides a great user experience for all your users, right? Now close your eyes. Is your Website easily navigable? According to the World Health Organization, 285 million people are visually impaired. That's more than the populations of England, Germany and Japan combined."

That's where the Web accessibility course comes in, she wrote. "As the Web has continued to evolve, Websites have become more interactive and complex, and this has led to a reduction in accessibility for some users. Fortunately, there are some simple techniques you can employ to make your Websites more accessible to blind and low-vision users and increase your potential audience. 'Introduction to Web Accessibility' is Google's online course that helps you do just that."
The two-week online course, which will include support from Google engineers, will teach developers "how to make easy accessibility updates, starting with your HTML structure, without breaking code or sacrificing a beautiful user experience," wrote Andersson. "You'll also learn tips and tricks to inspect the accessibility of your Websites using Google Chrome extensions." | <urn:uuid:5d264cfc-fdef-457a-ad72-a22c6d0ac0d0> | CC-MAIN-2017-04 | http://www.eweek.com/developer/google-gives-online-class-on-making-websites-accessible-to-the-blind.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00518-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937888 | 387 | 2.765625 | 3 |
Traditional network equipment has been unable to cope with the huge data traffic
One morning in 2011, an engineer at Facebook, the world's most popular social network, pressed a button, and the company's entire business ground to a standstill. The unnamed engineer hadn't done anything he wasn't supposed to do. He was simply trying to run a routine task on Hadoop, the distributed data-analysis platform the social networking giant had long operated. As a result, Facebook began analyzing the data generated by its hundreds of millions of users, data stored on tens of thousands of servers across the company's multiple data centers. And to analyze that data, all of those servers had to start talking to each other.
Recalling the incident at a meeting in the spring of last year, Facebook engineer Donn Lee described how the Hadoop task overwhelmed the company's network and brought other business nearly to a standstill. "I clearly remember that morning," Lee said. "It paralyzed Facebook, very serious paralysis."
In the past, the majority of network data traffic moved back and forth between servers and Internet users trying to access web pages. But now, as the businesses of companies like Facebook, Google, and Amazon have grown larger and more complex, server-to-server traffic within the data center has exploded, and the traditional network equipment these networking giants rely on can no longer handle so much traffic.
So the network is undergoing a generational change. Companies like Facebook and Google are building faster network hardware and revising their network topologies to accommodate the heavy traffic flowing between servers. But such improvements only go so far. Network experts like Donn Lee have begun to consider new kinds of network equipment: devices that use beams of light to move data within the data center.
Electronic and fiber optic networks, neck and neck
Yes, some Internet data is already transmitted in the form of light, over fiber optic networks. Standard electronic signals are converted into photons and transmitted along glass fiber optic cables.
Under normal circumstances, however, such optical transmission typically occurs between data centers, rarely within them. The next step is using fiber optic networking to rebuild the data center itself, so that optical fabric switches can match, and then greatly exceed, the speed of traditional electronic switches for data transmission between servers.
"If we can do this, such a hybrid network, one able to handle much larger volumes of data traffic, is very attractive," said George Papen, a fiber optics researcher at the University of California, San Diego. "We are not there yet, but we are closer than ever before."
Papen's research team at UC San Diego has developed such a hybrid network, which is still in the testing phase, to demonstrate the working principle of optical fiber switches. Their research project, known as Helios, is funded by Google and other tech giants.
According to Papen, there is still a long way to go before the Helios vision is fully realized. Meanwhile, a startup named Plexxi, in Cambridge, Massachusetts, recently launched a fiber optic network switch designed to rebuild the data center. Although its technology is completely different from Helios, the two share the same basic goal.
"Photonic switching is very powerful. Once your business moves into the field of optical switching rather than electronic switching, you gain an inherent performance advantage," said Plexxi CEO Dave Husak. "We are committed to achieving that effect."
Future of the Helios project
Traditionally speaking, networks are hierarchical. Servers sit at the bottom level; above them is the level of network switches to which the servers connect. Those switches connect to a higher level of faster networking equipment, which in turn connects to a level higher still. By the time you reach the network core, you are running hardware that is much faster, and much more expensive, than the switches at the server level.
You need that faster speed to handle all the aggregated traffic from the network, or so the conventional wisdom went. Amin Vahdat and his colleagues showed that this hierarchy is wrong: using network equipment that is uniformly cheaper, you can run your network more efficiently.
"This is a revolution," Papen said. "Before this, people built their data centers the way they built telecommunications networks. Vahdat's research team realized that this approach makes it hard to save costs, and they proved that you can build a data center in a completely different way."
This uniform network architecture is known as the "fat tree" design. It has become the standard form for large-scale network operators, which is why companies such as Google have begun to abandon expensive equipment from vendors like Cisco, switching instead to low-cost hardware purchased from manufacturers in Asia.
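For the curious, the capacity of the canonical k-ary fat tree from the research literature associated with Vahdat's group can be computed directly from the switch port count. This sketch uses the standard formulas for that topology:

```python
def fat_tree(k):
    """Capacity of a k-ary fat tree built from identical k-port switches.

    Standard formulas: k pods, each with k/2 edge and k/2 aggregation
    switches; (k/2)^2 core switches; k/2 hosts per edge switch.
    k must be even.
    """
    assert k % 2 == 0
    edge = agg = k * (k // 2)
    core = (k // 2) ** 2
    hosts = (k ** 3) // 4
    return {"edge": edge, "agg": agg, "core": core, "hosts": hosts}

# With commodity 48-port switches:
print(fat_tree(48))
# {'edge': 1152, 'agg': 1152, 'core': 576, 'hosts': 27648}
```

In other words, a network of identical cheap 48-port switches can connect over 27,000 hosts at full bisection bandwidth, with no expensive core-class hardware anywhere.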
The basic idea of the Helios project is to create a true fiber optic network, eliminating the burden carried by the traditional electronic network.
In a sense, the project goes back to the future. Today's networks use "packet switching" to transfer data back and forth, breaking it into small chunks of information before sending it; this is how the Internet operates. The fiber portion of the Helios project, however, uses "circuit switching," establishing a dedicated link between two endpoints. That is how old-fashioned telephone networks ran.
"If you look at every packet in the data center, you find that you are not using your resources efficiently," Papen said. "If you can understand, even partially, where data traffic is headed, you don't have to examine the header of every packet; you can create a dedicated circuit to transmit large amounts of data rather than pushing it through the packet-switched network."
This architecture is attractive because fiber optic circuit-switched networks are more flexible than traditional designs. "A circuit is a pipeline; it doesn't care what speed the data runs at. It is speed-agnostic," he said. "You can transmit data over it at almost any speed, which is very attractive."
There is still a long way to go before this architecture becomes a reality in the data center, but Papen believes it will eventually be realized.
Vahdat and Google are likely the closest to making it real, though Papen stressed that even he doesn't know what Google will do. "I have a lot of friends working on fiber in large data centers, and I still don't know what they are prepared to do," he said.
For Google, the design of its internal architecture is among its most important competitive advantages, one the company does not want to disclose even to the outside researchers it funds. But Google is not the only company exploring the switched optical fabric of the future; so are Facebook, Cisco, IBM, and now Plexxi.
There are several reasons why fiber optic cable is replacing copper cable for many computer room applications. The most important is the greatly increased bandwidth required by high definition conferencing and HDTV systems, which demand higher speeds than copper can deliver; fiber optic cable offers far more bandwidth than copper. Another important reason is that the connectors on fiber cables (LC especially) are smaller than those on copper cables and require less backplane real estate on the servers, switches, and routers in the data system. This is why manufacturers are switching to LC connectors on some of their servers and switches.
Fiber optic cables are preferable in many cases where power lines run in close proximity to the data cables in order to avoid electromagnetic crosstalk to which fiber cables are immune. Where there are air conditioning or fan motors, which may be necessary for the computer system and are close to the data cabling, the use of fiber optic cables will avoid crosstalk interference with the data signals.
Distance is also a very important factor in a computer room data system. With fiber optic cable, we can run high-speed data over longer distances than copper cables allow. Where legacy copper cabling systems must remain in operation, fiber optic media converters can bridge the two: these interface the fiber system through MTP 12-fiber mass-termination connector ports and provide multiple RJ45 ports on the front for the copper cables. Smaller media converters are also available that take one or two copper duplex ports on the input and convert the signal to one or two fiber duplex output ports for 10/100/1000 gigabit Ethernet systems. By utilizing these devices, an existing legacy system can be extended to reach more remote systems than was previously possible.
If you have hundreds of copper cables running from servers to switches, you may actually impede air flow under the computer room floor. Thinner fiber cables obstruct far less of that air flow. Where cables pass through computer room walls, the necessary holes are smaller and easier to firestop. The weight of the fiber cables carried by the tray on top of the cabinets is also considerably less, and the cables are far less bulky.
The LC cassette module is a compact, high-density fiber optic solution that conserves equipment cabinet space and allows data to travel longer distances. It is a distribution module with one or two 12-fiber MTP connectors on one side connected to twelve or 24 fiber connectors, such as LC, on the other side. A metal case gives the fibers good protection, and the cassette can be easily swapped during a maintenance cycle. The LC connector is half the size of the SC connector, which helps reduce the space required.
These cassettes reside in rack-mount or wall-mounted fiber distribution enclosures holding 3, 6, 9, or 12 cassettes, and each cassette can handle 12 or 24 fibers. The trunk cables are high-density, multi-fiber cables terminated in MTP connectors that can be pulled or laid quickly from point A to point B. Using a cassette or a transition cable assembly, the data center designer can break out the 12 or 24 fibers from each MTP connector into simplex or duplex connectivity. The fiber is pre-tested, pre-terminated, and essentially plug and play; the errors that can be introduced by pulling individual strands across multiple floors, then polishing and terminating them, are avoided. To expand to new servers, a trunk cable with MTP connectivity can be dropped in and be up and running very quickly, so the cabling and cassettes do not all have to be installed up front with the associated capital expenditure. MTP trunk cables are available with 24, 36, 48, 72, or 144 fibers, and the network can be deployed as servers and switches are added.
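The density math is easy to check. Using the cassette and enclosure sizes mentioned above (a fully loaded 12-cassette enclosure with 24-fiber cassettes; the duplex and MTP groupings follow from the connector types):

```python
# Capacity of one fully loaded fiber distribution enclosure,
# using the cassette/enclosure sizes from the text.
fibers_per_cassette = 24
cassettes_per_enclosure = 12

fibers = fibers_per_cassette * cassettes_per_enclosure
duplex_ports = fibers // 2        # each duplex LC link uses 2 fibers
mtp_trunks = fibers // 12         # one 12-fiber MTP connector per 12 fibers

print(fibers, duplex_ports, mtp_trunks)   # 288 144 24
```

That is 144 duplex LC links fed by just 24 MTP trunk connectors in a single enclosure, which is the space-saving argument in concrete numbers.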
Proactive Password Auditor helps network administrators to examine the security of their networks by executing an audit of account passwords. By exposing insecure passwords, Proactive Password Auditor demonstrates how secure a network is under attack.
A single weak password exposes your entire network to an external threat. Password-hacking is one of the most critical and commonly exploited network security threats. Network users employ short and simple passwords that are easy to remember, but also easy to break. They often use repeating characters, simple words, and names because those are easier to memorize. Making them use computer-generated passwords that consist of random characters will only make the problem more severe, as users will write the passwords down on the proverbial yellow stickers. There is more information available on the issue in the Elcomsoft whitepaper Proactive Is Better than Reactive: Testing Password Safety – a Key to Securing a Corporate Network.
Network administrators are also part of the problem, as they may forget to purge terminated employees' accounts, to force people to change passwords regularly, or to lock out users after a certain number of failed login attempts.
Weak passwords are easy to break, while complex passwords are difficult to memorize. Having an elaborate security policy is the only way to ensure the security of your network. Force network users to change passwords regularly, and audit the network after every change. Did it take Proactive Password Auditor 30 days to break a password when you performed the last audit? Then examine your network at least once a month to ensure ongoing security.
Proactive Password Auditor examines the security of your network by attempting to break into it, running common attacks against account passwords in an attempt to recover them.
Proactive Password Auditor allows you to carry out a password audit within a limited period of time. If Proactive Password Auditor can recover a password within a reasonable time, the network cannot be considered secure.
Network administrators can use Proactive Password Auditor to recover Windows account passwords, too. Proactive Password Auditor analyzes user password hashes and recovers plain-text passwords, allowing access to the corresponding accounts, including EFS-encrypted files and folders.
Proactive Password Auditor uses several basic methods for testing and recovering passwords, including brute force attack, mask attack, dictionary search, and Rainbow table attack. The Rainbow attack is particularly effective, as it uses pre-computed hash tables that allow finding up to 95% of passwords in just minutes instead of days or weeks. Fortunately, the Rainbow attack cannot be performed from outside of your network! You’ll need either administrative access or a dump file exported by Elcomsoft System Recovery.
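As a rough illustration of the simplest of these methods, a dictionary search just hashes each candidate word and compares the result against the stolen hash. The sketch below uses MD5 purely for brevity; real Windows account audits target LM/NTLM hashes, and nothing here reflects Proactive Password Auditor's actual implementation:

```python
import hashlib

def dictionary_attack(target_hash, wordlist):
    """Return the first candidate whose MD5 digest matches, else None."""
    for candidate in wordlist:
        if hashlib.md5(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None

# A user picked a short, memorable password...
stolen = hashlib.md5(b"letmein").hexdigest()
found = dictionary_attack(stolen, ["password", "123456", "letmein", "qwerty"])
print(found)  # letmein
```

A brute force attack works the same way but enumerates every possible character combination, which is why recovery time grows so sharply with password length and complexity.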
Proactive Password Auditor supports off-line recovery of account passwords by analyzing dump files saved by Elcomsoft System Recovery, local Registry, binary Registry files (SAM and SYSTEM), memory of the local computer, and memory of remote computers (Domain Controllers), including ones running Active Directory.
The off-line recovery speed can be greatly enhanced by using Elcomsoft Distributed Password Recovery. Thanks to the patent-pending GPU acceleration technology available in Elcomsoft Distributed Password Recovery, recovery on a single PC is up to 50 times faster than CPU-only mode and other password-recovery applications. Adding more computers increases the recovery speed linearly, with zero scalability overhead.
Proactive Password Auditor supports Windows NT4, 2000, XP, Vista, Windows Server 2003, Windows Server 2008 and Windows 7.
You can order a fully registered version of Proactive Password Auditor by credit card (online or by fax), by check/money order, by bank/wire transfer, or with a purchase order. The cost of the license depends on the number of user accounts to be audited (typically, the number of user accounts in your local network).
If you are interested in purchasing a license to audit more than 2500 user accounts, please contact us for a quote.
Please note that the licenses are not "cumulative". For example, if you purchase two licenses for 20 accounts each, you will still not be able to audit 40 accounts; instead, you should purchase one 100-account license. Likewise, to audit 600 accounts, you should get the 2500-account license, not one 500-account license plus another 100-account license.
We accept Visa, MasterCard, American Express, Diners Club, and JCB. When you pay by credit card, your order will be processed immediately. Postal mail shipments are initiated immediately. Products available electronically are generally ready for download immediately, or no more than 48 hours after you place your order.
Using the online order form, you can also order the software with Solo/Switch/Maestro cards (only if issued in the UK), by bank/wire transfer, by online wire transfer (only for orders placed within Germany in euros), by check (we accept all personal, business, and cashier's checks), or with cash.
Online orders are processed automatically and therefore more quickly than orders placed by fax, e-mail, or phone, because processing is not dependent on our customer service center's business hours. Alternatively, you can order through customer service or with a purchase order.
Lillian Ingster, director of the National Death Index (NDI), has access to the entire universe of death data in the United States since 1979. Ingster shares her experience managing this data with her Federal peers as often as she can.
NDI, a branch of the National Center for Health Statistics (NCHS), is a centralized database that houses death record information for 54 jurisdictions, including all 50 states, New York City, Washington, D.C., the Virgin Islands, and Puerto Rico. NDI collects death records for those who perished in the U.S.
“We don’t really collect marriage and divorce records. That kind of bit the dust in the 1990s,” Ingster said. “But we do collect birth and death records.”
The Centers for Disease Control and Prevention (CDC), which encompasses NCHS, was founded 70 years ago. Ingster, who began work with NDI five years ago, said she likes to share her experience with data management, and frequently attends meetings with representatives from other Federal agencies to discuss big data.
“A lot of people use big data. We’ve been doing it for decades,” Ingster said. “We talk to our peers throughout the Federal government. I hope maybe I can give people a little insight into what we do on our end.”
Ingster warned big data collectors to be mindful of information that comes from many sources. She said that sometimes two different sources can provide different readings of the same information.
She said, for example, that asking someone their height and weight would yield a different result from measuring their height and weight. In her experience, she said, people tend to grow a couple of inches and lose 10 pounds when asked what their measurements are.
In addition to advising people to recognize the exact sources of their data, Ingster also urged big data managers to analyze information with a hypothesis established in advance. She said that an established hypothesis will prevent analysts from getting lost in vast amounts of data.
“When you have big data lakes and you’re doing data mining, you have to have an a priori hypothesis. If you just go fishing in those big data lakes, and you find significant results, those results are likely to be garbage,” Ingster said. “Any statistician worth his or her salt will tell you your results aren’t worth the paper they’re printed on if you don’t have a prior hypothesis.”
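Her point about fishing in data lakes can be demonstrated with pure noise: test enough random "predictors" against a random outcome and a handful will look statistically significant by chance alone. This toy simulation is my own illustration, not anything from NDI:

```python
import random

random.seed(42)
n_samples, n_predictors = 100, 1000

# A purely random binary outcome and 1,000 purely random binary "predictors".
outcome = [random.randint(0, 1) for _ in range(n_samples)]
predictors = [[random.randint(0, 1) for _ in range(n_samples)]
              for _ in range(n_predictors)]

# Call a predictor "significant" if it agrees with the outcome on at least
# 60 of 100 samples -- roughly the p < 0.05 threshold for a fair coin.
hits = sum(
    1 for p in predictors
    if sum(a == b for a, b in zip(p, outcome)) >= 60
)
print(hits)  # a few dozen "discoveries" -- every one of them garbage
```

With no a priori hypothesis, each of those hits would look like a publishable finding; in reality all of them are coin-flip coincidences.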
Researchers with private industries submit applications to NDI and pay a fee for the chance to match the database’s information with their own, which includes death date, death state, death certificate number, and cause of death. Ingster said these researchers frequently use the data matching service to create mortality profiles or industry assessments. She stressed that the NDI’s data was for research only, and that it was not a tool to look up how a favorite aunt died.
Ingster said she looks forward to attending big data talks in the future. In addition to imparting lessons from her experiences with data, she also uses such events to listen to what other agencies are working on with their data.
“There are a lot of diverse areas of big data, but, conceptually, there are overlaps,” Ingster said. “It’s a lot of data. There’s a lot of information that can be gleaned, but you’ve got to do it the right way.” | <urn:uuid:f587f1bd-53d8-4a3b-ad18-d18d6e85c923> | CC-MAIN-2017-04 | https://www.meritalk.com/articles/death-index-director-shares-big-data-expertise/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00390-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.962951 | 729 | 2.609375 | 3 |
The demands of a career in IT can, if unchecked, lead to burnout
This feature first appeared in the Summer 2016 issue of Certification Magazine.
While an IT career can be stimulating and lucrative, it is also very demanding. Excessive work pressure is common in the IT industry, potentially leading to high stress levels. If left to build up over a long period of time, this accumulation of stress can result in burnout.
Stress, by definition, is any uncomfortable “emotional experience accompanied by predictable biochemical, physiological and behavioral changes.” We all experience stress from time to time — and a certain amount of stress can be beneficial, giving us a boost that provides drive and energy to accomplish things.
Chronic stress, however, which occurs when we are exposed to extreme amounts of stress on a continual basis, can drain any person of physical and emotional vitality, leaving one feeling utterly exhausted all the time. Chronic stress sufferers lose interest in things, and often become cynical and disillusioned, overcome by feelings of hopelessness.
Chronic stress can affect one’s health and morale, jeopardizing both professional and personal ambitions. This level of stress is often a major contributing factor to career or job burnout — defined as “an extended period of time where someone experiences exhaustion and a lack of interest in things, resulting in a decline in job performance.”
What is Burnout?
Burnout doesn’t necessarily ensue from workplace factors alone. Colleagues often experience the same circumstances at work but do not respond in the same manner. Individual personality, life approaches, and lifestyle all influence how individuals respond. A stressful situation may be easy to brush off for one employee, yet deeply disturbing to another.
Over time, burnout can build up stealthily. Some of the signs of burnout include exhaustion, lack of motivation, difficulty focusing, declining job performance, interpersonal problems at home and work, and engaging in unhealthy coping strategies like eating too much junk food, being too sedentary, or drinking too much alcohol.
Preventing burnout is easier than recovering from it. One needs to recognize the early signs of increasing stress and act fast to prevent them from accumulating. Since stress can also result from internal factors, you need to first determine what is stressing you out, and then reduce the demands on yourself and develop the resources you have.
There are potential triggers of job burnout and being able to recognize these warning signs is the first step to controlling your situation. These factors include:
Long hours — Long workdays are the norm in the IT industry. The reasons could be both internal and external. Excessive job pressure, the nature of job, and your approach to work are the main reasons why IT professionals put in long hours day after day. Industry-wide downsizing has also led to heavier workloads, with fewer workers having to shoulder greater responsibility.
Lack of clarity — When you know precisely what’s expected of you, then you’re able to plan your work and perform effectively. On the other hand, a vague job description and uncertain expectations can adversely affect performance.
Chaotic work conditions — IT professionals don’t work in a vacuum. They continuously collaborate with others in an environment that’s influenced by multiple external factors. When these factors are not in sync, work can suffer.
Responsibility creep — Frequently having to do work that’s not part of your job description can be taxing, which increases stress.
Lack of recognition and respect — Many IT professionals, particularly support and network staff, don’t always get the appreciation and respect that their work deserves.
Politics — Unfortunately, merit and hard work aren’t always rewarded. Some work less, but get a fat raise or a promotion. That’s because they aggressively market themselves, something which many techies who are engrossed in their work aren’t inclined to do.
Drudgery — Doing the same thing, the same way, day after day, can be draining. Eventually, unchallenging work can rob you of enthusiasm and energy, sometimes making you want to quit.
Lack of sleep — Not everybody needs the same amount of sleep. Lack of sufficient sleep night after night, however, is considered to adversely affect physical and mental well-being, leading to poor work performance.
Lack of relaxation and entertainment — Everyone needs time away from work to help them recharge their batteries. Working excessive hours leaves you with little time to unwind or socialize.
Outside pressures and personality — Away from the office, having more responsibilities than one can cope with and a lack of support can also create stress. Additionally, perfectionists and high-achievers are often more vulnerable to stress, as are those who are wary of delegating. Pessimists also tend to worry more because they’re inclined to look at the dark side of things.
Strategies to Help Avoid Burnout
As bad as job burnout can be, there are a number of relatively simple things you can do to avoid it. Now that we know what to watch for, let’s look at ways of lessening the impact of some of these dangerous symptoms.
Relax — Find some time each day, ideally in the morning, to unwind. Meditation, an early morning walk, light exercise, reading something uplifting, or listening to soothing music can help you start the day in a positive frame of mind. Meditating for at least 30 minutes every day will not only help clear and calm your mind, it will gradually make you more aware of yourself.
Cut back on work — One of the best things you can do is to say “No” to extra work that extends beyond your realm of responsibility. Focusing on what’s important will help improve your performance.
Rediscover the thrill — If work doesn’t excite you anymore, then it may be time to ask for a new assignment. A different function might rekindle your interest.
Attend to your health — A healthy diet, regular exercise, and adequate sleep can strengthen your well-being and make you more resistant to the negative effects of stress.
Develop outside interests — Engaging in non-work activities, whether a hobby, a creative pursuit, fitness program, or volunteer work not only takes your mind off of work and gives you a sense of satisfaction, it can also enrich your life and instill positive feelings.
Find some device-free time — Just because you are in IT doesn’t mean you have to be connected all day long. Switch off your devices and disconnect totally for a few minutes every day.
Recovering from Burnout
If you have reached the burnout stage, then you need to work on correcting the problem immediately. The good thing is that burnout is reversible if you act soon and with intention. To reverse burnout consider the following:
Step on the brakes — Continuing to push yourself can result in a total breakdown. The first thing you need to do is decelerate. Take some time off if you can, or at least slow down; you need to rest and recuperate.
Turn to loved ones for support — Trying to cope in isolation can sometimes make you feel worse. Share your predicament with someone you trust. A good listener can make you feel better. Confiding in a friend or relative can also help put your problems in perspective.
Seek professional help — If you are experiencing physical symptoms like persistent headaches, heart palpitations or chest pains, dizziness, and stomach upsets, then you might need medical attention. Don’t let any of these physical warning signs persist without consulting a physician.
Consider changing jobs or reskilling — If burnout is largely work-related, then a job change may be in order. Specializing in a new technology or function could also help revive your interest.
Pause and take stock of your life — Going on vacation or sabbatical is one of the best ways to recover from burnout. You need to reflect and ask yourself what is most important. Revisit your goals and prioritize accordingly. You might find yourself wanting to change direction.
Job burnout is a real problem for IT professionals. Facing the problem is crucial, and the sooner you act, the better. Learn to identify and address the causes of job burnout. This might require not just work-related changes, but changes in life as well.
Remember that we are more than drones attached to our jobs. As Swami Bhaskarananda said, “A spiritually illumined soul lives in the world, yet is never contaminated by it.” | <urn:uuid:43ac5948-09d5-44f3-8139-6d2c693582d4> | CC-MAIN-2017-04 | http://certmag.com/demands-career-can-unchecked-lead-burnout/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00235-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943931 | 1,794 | 2.671875 | 3 |
An identity, within the context of a system or application, is a reference to a person or a person-like entity. Identities consist of a unique identifier plus a set of attributes describing the person.
For example, the unique identifier may be a login ID and attributes may include first name, last name, department code, location code and e-mail address. In this case, the login ID must be unique within the system in question. Alternately, a fully qualified e-mail address may be used as a unique identifier. Valid e-mail addresses are unique across the Internet.
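As a minimal sketch of these concepts, the record below carries the example attributes from above and enforces identifier uniqueness within one system. The field names and helper are illustrative, not drawn from any particular product:

```python
from dataclasses import dataclass

@dataclass
class Identity:
    login_id: str          # unique identifier within this system
    first_name: str
    last_name: str
    department_code: str
    location_code: str
    email: str             # alternative identifier, unique across the Internet

directory = {}  # the system's view: login_id -> Identity

def register(identity):
    # The identifier must be unique within the system in question.
    if identity.login_id in directory:
        raise ValueError("duplicate login ID: " + identity.login_id)
    directory[identity.login_id] = identity

register(Identity("jsmith", "Jane", "Smith", "ENG", "NYC", "jsmith@example.com"))
```

Attempting to register a second "jsmith" would raise an error, which is exactly the uniqueness guarantee the identifier exists to provide.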
The white paper Best Practices for Managing User Identifiers is an excellent introduction to how identifiers can and should be assigned to people in the context of a system or application. | <urn:uuid:21fb9547-7ee8-4bf9-afda-5d91f131d7db> | CC-MAIN-2017-04 | http://hitachi-id.com/resource/concepts/identities.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00143-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.890399 | 142 | 2.875 | 3 |
The attacks being experienced today on computer networks are increasingly targeted and sophisticated. Attackers carefully research their victims and increasingly write one-off exploits that are highly unlikely to be caught using traditional rules and signature-based security controls. There are a variety of technology vendors coming up with innovative ways to prevent those attackers from being successful.
One such vendor is LightCyber, a relatively young company founded in Israel but that is setting its sights on wider international markets. It states that its mission is to enable organisations "to effectively detect subtle anomalies in the network and identify targeted attackers at early phases of attack, before real damage has been done."
What makes LightCyber's approach innovative is that its technology develops and maintains profiles of every user and device on a network to identify the unique patterns of behaviour associated with each, since behavioural patterns can vary widely from one role to another. For example, a user from the engineering department will operate in a manner vastly different from a user in human resources, and will have different needs in terms of the applications they use. What is considered normal behaviour for one user may be considered suspicious for another.
The technology works by constantly monitoring and tracking all user and device behaviour in real time and comparing activity against the behaviour profiles developed for each user and device, in order to detect when a user is behaving differently from what is expected. It does this without the use of rules or signatures that identify known threats and exploits or that block specific types of traffic. Rather, it passively monitors all traffic and looks for anomalous behaviour, such as an attacker reconnoitring the network or performing activity outside the norm, such as attempting to elevate privileges or move laterally across the network.
In this way, the technology provides an automated means of identifying malicious behaviour, a task long performed by human analysts on incident response teams, and it is a tool that can be used by anyone, not just analysts. It provides a means of detecting an attacker at an early stage of an attack so that networks can be protected against even the most advanced threats. Further, it can also be used to forensically investigate incidents that have occurred, so that lessons can be learned to prevent them from recurring. As such, it is a light-touch tool that can help organisations protect and defend against even highly targeted, sophisticated attacks and improve the overall security posture of their networks.
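As a highly simplified sketch of the general idea (a toy of my own, not LightCyber's actual technology), a behavioural profile can be as crude as the mean and spread of some per-user activity count, with an alert when today's figure strays far from that baseline:

```python
from collections import defaultdict
import statistics

history = defaultdict(list)  # per-user daily activity counts

def record(user, daily_events):
    history[user].append(daily_events)

def is_anomalous(user, todays_events, threshold=3.0):
    past = history[user]
    if len(past) < 5:
        return False  # not enough baseline to judge yet
    mean = statistics.mean(past)
    spread = statistics.pstdev(past) or 1.0  # avoid division by zero
    return abs(todays_events - mean) / spread > threshold

# Build a baseline of ordinary activity for one user...
for day in [40, 42, 38, 41, 39, 40]:
    record("engineer1", day)

print(is_anomalous("engineer1", 41))    # False: an ordinary day
print(is_anomalous("engineer1", 900))   # True: a reconnaissance-scale spike
```

Real products model far richer features (destinations, protocols, privilege use), but the principle is the same: the alert is triggered by deviation from the individual's own history, not by any signature.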
Working with ESL Students
“Nous allons apprendre le français. Le français,” said the wrinkled mouth on the monitor of a wooden RCA television. It was strapped atop a wobbly metal cart, demanding engagement from its somewhat stunned audience. The mouth belonged to Pierre Capretz of La Method Capretz.
The audience was about 30 French I students who, for the entirety of that academic year, would not utter a syllable of their native tongues for 50-minute slots three days a week. NOOz-uh-LOHN uh-prOHndruh luh frohn-say. LUH frohn-say.
IT trainers’ challenges with non-native English speakers differ from those faced by students whose sole focus is proficiency or fluency in another language, but this anecdote is still useful.
Whether categorized as “English as a second language” (ESL), “English as a foreign language” (EFL) or otherwise, students who are non-native English speakers have been thrust into an unnatural environment. One aim for trainers should be to reclaim the comfort of the classroom environment, which can be achieved by establishing habits that transform the strange into the familiar.
Several factors might help provide a more relaxed atmosphere. First, it is important to establish a routine because routines create invaluable levels of comfort. If a student feels lost, this confusion will be mitigated by a consistent format of a lecture or exercise. Routines also emphasize the content of the task at hand instead of putting undue focus on the delivery.
This does not mean the session must be devoid of the sorts of dynamic activities that engage students. Rather, it means that well-ordered exercises and consistency in presentation would benefit comprehension levels all around. Fortunately for the French I students, the assumption that they all had the same native tongue was correct. As a trainer, part of the challenge lies in assessing areas in which both ESL and traditional students are proficient.
An instructor might have insight into the obstacles ahead – as in the case of an American off-site trainer teaching an entire group of ESL students – but it seems that the case often will be one in which the cultural or ethnic makeup of the students in question is altogether unknown. The trainer must recognize cultural differences and proclivities toward one learning modality or another without stereotyping or pigeonholing students.
This calls into question the profiling of students in general, as there is diversity in learning modalities even among native speakers. The French I students were subject to both visual and auditory stimulation because they needed to learn to both read and write the language. As the words were being pronounced, they simultaneously appeared on the monitor.
More importantly, however, this multisensory attack should be used, regardless of the subject being taught, because of the distinct manners in which individuals have been shown to fully process information. In addition to these two methods, the kinesthetic acquisition of knowledge should not be forgotten — students should move around and participate in hands-on exercises.
According to “Strategies and Resources for Mainstream Teachers of English Language Learners,” a study done by Bracken Reed and Jennifer Railsback, other modes of introducing information that are especially helpful for ESL students are stimulators that are heavily visual. These might include graphic organizers such as clusters or flowcharts.
Using graphics allows students to actually see the relationships between concepts as the trainer understands them. Students will gain insight into the logic of the trainer, and how he or she connects the information, through the use of such organizational charts. These charts might also benefit students later, as they attempt to parse the information beyond the classroom, because they will have a physical reference item.
A Mac OS X bug has surfaced whereby any local user can change that user's password using a simple Terminal command. This means that anyone who obtains physical or remote (such as via ssh) access to a Mac, and who knows this command - not something that your average user will know - can change the password for the current account, then log into it later and access their files, or, if it is an administrator's account, make changes to the system and access other files.
Until this is fixed, it's a good idea to take a number of precautions, especially if you leave your Mac accessible to others. First, disable automatic login. As we wrote in a recent Mac security tip, this means that you need to enter a password to access your Mac when you start it up. Next, make sure you use a different password for your keychain, so if someone does access your account, they still can't get at your passwords. Finally, in the General tab of the Security & Privacy preferences, check Require password immediately after sleep or screen saver begins. This means that you'll need to enter your password more often, but it's a lot safer. If you put your Mac to sleep when you leave it, then no one will be able to access it without your password.
Full protection can be obtained by running the following command in Terminal:
sudo chmod 100 /usr/bin/dscl
This restricts execution of the dscl command to the root user, so other local users cannot run it.
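To see what mode 100 actually does, you can try it on a scratch file rather than the real /usr/bin/dscl; the effect is execute-only permission for the file's owner and no access at all for anyone else:

```python
import os
import stat
import tempfile

# Scratch file standing in for /usr/bin/dscl (purely illustrative).
fd, path = tempfile.mkstemp()
os.write(fd, b"#!/bin/sh\necho hi\n")
os.close(fd)

os.chmod(path, 0o100)  # --x------ : owner may execute; nobody may read or write
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o100

os.remove(path)
```

Once Apple ships a fix, the permissions on the real binary can be restored to their usual values.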
Apple will undoubtedly issue a security update to fix the bug quickly. In the meantime, the above tips should help you protect your Mac and your files. | <urn:uuid:bf921b28-6f62-4ada-919a-e3100225a0db> | CC-MAIN-2017-04 | https://www.intego.com/mac-security-blog/password-change-issue-affects-mac-os-x/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00353-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933583 | 338 | 2.734375 | 3 |
OpenSSL Flaw Discovered: Patch Now
It's No Heartbleed, But Attackers Could Decrypt, Modify Traffic
Beware of a newly discovered bug in OpenSSL, the open-source implementation of the SSL and TLS protocols that's used to secure data sent between clients and servers. The flaw exists in client versions of OpenSSL, as well as the most recent version for servers, which many organizations adopted to mitigate the Heartbleed vulnerability.
The team behind the OpenSSL Project sounded that warning in a June 5 security alert, noting that all versions of the OpenSSL client produced since the project began in 1998 - and recent versions of their server code - are vulnerable to a man-in-the-middle attack that would force servers and clients to use weak keys, which would allow attackers to decrypt traffic. They've also released new versions of OpenSSL to patch the bugs and security flaws.
The OpenSSL team emphasized that such an attack could only be carried out against both a client and server running vulnerable versions of the software. "OpenSSL clients are vulnerable in all versions of OpenSSL," according to the advisory. "Servers are only known to be vulnerable in OpenSSL 1.0.1 and 1.0.2-beta1," which were the latest versions. Even so, they recommended that anyone with OpenSSL servers prior to version 1.0.1 "upgrade as a precaution."
One risk is that hackers could launch a MITM attack to not just read encrypted data, but alter it. "There are ways that you can decrypt data, view it, modify it and pass it along encrypted, and neither the client nor the server will be aware," says Nicholas Percoco, vice president of strategic services at security vendor Rapid7. Such an attack could be used, for example, to intercept banking sessions and create fraudulent transactions that still looked legitimate to both the client and server. "As an end user, you have no indication; nothing will pop up to tell you you're being 'man-in-the-middled,'" he says.
The latest version of OpenSSL also patches five other vulnerabilities, some of which could be abused by attackers to create a distributed-denial-of-service attack. A buffer overrun flaw, meanwhile, could be exploited to run arbitrary code on vulnerable machines.
OpenSSL After Heartbleed
OpenSSL has been in the spotlight since the Heartbleed flaw - which allowed attackers to steal private SSL keys as well as VPN session tokens - was first publicly detailed on April 7, 2014. The flaw was present in more recent versions of OpenSSL, which is used to secure millions of websites, as well as built into an untold number of hardware and software products, including many Android apps.
Due to Heartbleed, many businesses upgraded their servers to OpenSSL 1.0.1. That is now the focus of the man-in-the-middle vulnerability alert issued Thursday. Thankfully, the newly disclosed vulnerability is not on a par with Heartbleed. "It's different, because the Heartbleed flaw was a direct attack against a server that was vulnerable," Percoco says.
Furthermore, while many OpenSSL servers are vulnerable to this MITM attack, there are mitigating factors. "In spite of the fact that OpenSSL is wildly popular on servers, most browsers and tools a user uses have their own crypto libraries they use, which likely negates the vulnerability," says Jose Nazario, chief scientist at security product company Invincea. "That said, tons of stuff behind the scenes occurs that does use OpenSSL clients and servers, such as machine-to-machine Web API calls or data shuttling, including e-mail (SMTP+STARTTLS). So there is a bunch of potential exposure for data here if an attacker gets to MITM the right spot."
Heartbleed Drives Bug Discovery
The new MITM vulnerability was discovered by Japanese security researcher Masashi Kikuchi. Inspired by the Heartbleed flaw, he took a close look at the OpenSSL code base for other potential problems, and to his surprise quickly discovered another flaw. He then reported it to Japan's Computer Emergency Response Team, which shared the information with the OpenSSL Project on May 1. The project then took a month to prepare, test and ship a patch that it says is based on a fix that Kikuchi developed.
From a security and reliability standpoint, the fact that security researchers like Kikuchi are hammering away on OpenSSL is good, because it will help make the widely used code base even more secure. Expect further improvements via the recently launched Core Infrastructure Initiative, through which a number of leading technology firms - including Amazon Web Services, Cisco, Dell, Facebook, Google, HP and Microsoft, among others - will be directly funding the development of critical open source tools, including OpenSSL.
Thankfully, these sorts of large-scale man-in-the-middle vulnerabilities aren't common. Rapid7's Percoco estimates that over the past decade, they've only come along about once per year. For example, back in 2011, he and fellow security researcher Paul Kehrer demonstrated at the annual summertime Def Con conference in Las Vegas how attackers could craft fake SSL certificates to intercept traffic from devices running Apple iOS 4.3.5. "The vulnerability allowed you to intercept any communication, very similar to the newly disclosed OpenSSL vulnerability, except that it only affected the iPhone population," Percoco says.
Avoid 'Heartbleed Removers'
To patch the newly discovered vulnerabilities, anyone running OpenSSL on a client or server should upgrade to the latest version, available via the OpenSSL Project site.
At the same time, beware of ongoing phishing e-mails and social-engineering attacks that are attempting to trick people into installing fake fixes, for example in the form of so-called "Heartbleed removal" tools. "Some hackers are trying to convince potential victims that Heartbleed can be 'uninstalled' from their computers," Gary Davis of security vendor McAfee says in a recent blog post. "They're doing this by sending out e-mails loaded with a 'Heartbleed remover' tool attachment, which is really just a cleverly disguised package of malicious software." | <urn:uuid:acc4f4e7-a111-4b77-8904-f73a8f670900> | CC-MAIN-2017-04 | http://www.bankinfosecurity.com/open-ssl-a-6915/op-1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00079-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954507 | 1,284 | 2.59375 | 3 |
A manager is a container that you can use to arrange fields on the screen of a BlackBerry device. The fields that a manager contains are called the manager's controlled fields. Every field on a screen can be controlled by only one manager.
A manager is represented by the Manager class, which is included in the net.rim.device.api.ui package. The BlackBerry Java SDK includes several types of managers that extend this class, including the HorizontalFieldManager, VerticalFieldManager, and FlowFieldManager classes. Because managers are derived from the Field class, they can be nested. For example, you can create a horizontal field manager that consists of three vertical field managers, each of which contains its own fields. Using this approach, you can create complex layouts with very little code.
Constructors for the various manager classes can accept style bits as parameters that specify the scrolling behavior of the manager. For example, you can specify Manager.HORIZONTAL_SCROLL to indicate that the manager should allow horizontal scrolling if it contains fields that are wider than the manager's visible area. Similarly, you can specify Manager.VERTICAL_SCROLL to indicate that the manager should allow vertical scrolling if it contains fields that are taller than the manager's visible area. If you do not specify that a manager should allow scrolling, the screen displays as many fields as possible within the available screen space, and the rest of the fields are not shown. The fields exist but are not visible to the BlackBerry device user. This situation can create unexpected scrolling behavior for users.
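For instance, the sketch below nests three vertically scrolling columns inside one horizontally scrolling row, combining the nesting and scroll style bits described above (standard BlackBerry UI classes; the screen name and label text are illustrative):

```java
import net.rim.device.api.ui.Field;
import net.rim.device.api.ui.Manager;
import net.rim.device.api.ui.component.LabelField;
import net.rim.device.api.ui.container.HorizontalFieldManager;
import net.rim.device.api.ui.container.MainScreen;
import net.rim.device.api.ui.container.VerticalFieldManager;

public class ColumnsScreen extends MainScreen {
    public ColumnsScreen() {
        // The outer row allows horizontal scrolling if its columns are wider
        // than the visible area.
        HorizontalFieldManager row = new HorizontalFieldManager(Manager.HORIZONTAL_SCROLL);
        for (int i = 0; i < 3; i++) {
            // Each nested column scrolls vertically, independently of its siblings.
            VerticalFieldManager column = new VerticalFieldManager(Manager.VERTICAL_SCROLL);
            for (int j = 0; j < 20; j++) {
                // A vertical manager accepts horizontal style bits such as FIELD_HCENTER.
                column.add(new LabelField("Item " + i + "-" + j, Field.FIELD_HCENTER));
            }
            row.add(column);
        }
        add(row);
    }
}
```

If the scroll style bits were omitted here, any fields beyond the visible area would exist but remain unreachable, producing the unexpected behavior noted above.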
The Manager class is an abstract class. You can use the managers that are included in the net.rim.device.api.ui.container package in your applications, or you can extend the Manager class to create your own custom manager. If you choose to create a custom manager, you must implement sublayout(). The sublayout() method specifies how the manager arranges its controlled fields, including the size and position of each field.
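As a sketch of what a custom manager might look like, the following subclass stacks its controlled fields vertically with a fixed gap between them (the 5-pixel gap and class name are arbitrary choices for illustration):

```java
import net.rim.device.api.ui.Field;
import net.rim.device.api.ui.Manager;

// Illustrative custom manager: stacks children vertically with a fixed gap.
public class SpacedVerticalManager extends Manager {
    private static final int GAP = 5;

    public SpacedVerticalManager(long style) {
        super(style);
    }

    protected void sublayout(int width, int height) {
        int y = 0;
        for (int i = 0; i < getFieldCount(); i++) {
            Field field = getField(i);
            layoutChild(field, width, height - y);   // give the child its maximum extent
            setPositionChild(field, 0, y);           // place it below the previous child
            y += field.getHeight() + GAP;
        }
        setExtent(width, Math.min(y, height));       // report the space actually used
    }
}
```

Inside sublayout(), layoutChild() must be called before setPositionChild(), because a field's extent is only known once it has been laid out; setExtent() then reports how much space the manager itself consumed.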
Vertical field manager
A vertical field manager is represented by the VerticalFieldManager class. This layout manager arranges fields in a single vertical column starting at the top of the screen and ending at the bottom of the screen. Because this layout manager is designed to arrange fields vertically, you can apply horizontal style bits, but not vertical style bits, to the fields that this manager contains. For example, you can apply Field.FIELD_LEFT, Field.FIELD_RIGHT, or Field.FIELD_HCENTER style bits, but you cannot apply Field.FIELD_TOP, Field.FIELD_BOTTOM, or Field.FIELD_VCENTER style bits.
Horizontal field manager
A horizontal field manager is represented by the HorizontalFieldManager class. This layout manager arranges fields in a single horizontal row starting at the left side of the screen and ending at the right side of the screen. Because this layout manager is designed to arrange fields horizontally, you can apply vertical style bits, but not horizontal style bits, to the fields that this manager contains. For example, you can apply Field.FIELD_TOP, Field.FIELD_BOTTOM, or Field.FIELD_VCENTER style bits, but you cannot apply Field.FIELD_LEFT, Field.FIELD_RIGHT, or Field.FIELD_HCENTER style bits.
Flow field manager
A flow field manager is represented by the FlowFieldManager class. This layout manager arranges fields horizontally, wrapping to new rows as needed for the size of the screen. The first field is positioned in the upper-left corner of the screen, and subsequent fields are placed to its right until the width of the screen is reached. Once a field can no longer fit on the current row, it is placed on a new row below, whose height is equal to that of the tallest field in the row above it. You can apply vertical style bits, such as Field.FIELD_TOP, Field.FIELD_BOTTOM, or Field.FIELD_VCENTER, to align fields vertically within their row.
Certificate Transparency (CT) works within the existing Certificate Authority (CA) infrastructure as a way to provide post-issuance validation of an entity’s authorization for the issuance of SSL Certificates.
The certificate issuance process is shown below; the precertificate logging and SCT handling steps are the ones newly introduced by CT.
- Server operator purchases certificate from CA
- CA validates server operator
- CA creates a precertificate
- CA logs the precertificate with the log server, which returns a signed certificate timestamp (SCT)
- CA issues SSL Certificate
- SSL Certificate may include signed certificate timestamp (SCT)
- Browser validates SSL Certificate during the TLS handshake
- Browser validates the SCT provided during the TLS handshake, either through OCSP stapling, through a TLS extension, or from information embedded in the certificate
- Browser makes connection with the server
- SSL Certificate encrypts all data as it is passed from the browser to the server
There are three possible ways to deliver the SCT during the TLS handshake: embedded in the certificate via an X.509v3 extension, through a TLS extension, or via OCSP stapling. A diagram of this process for one of the SCT delivery methods is included below.
Certificate Transparency Components
The CT system has four components: CA, certificate log, certificate monitor, and certificate auditor. Below is a diagram with a likely configuration of these components. Each of these components is explained in more detail below.
CT works within the existing publicly-trusted CA system. With CT, CAs can include evidence of certificate issuance in a public log and browsers can check for these SCTs during the handshake. Logging certificates is evidence of the CA’s proper operation and gives insight on CA operations.
Ideally, certificate logs will maintain a record of all SSL Certificates issued, although the initial rollout is limited to EV certificates. Multiple independent logs are required for several reasons: 1) multiple logs provide a backup in the case of a log failure; 2) independent logs mean that if one log or log operator is compromised, certificates can still be validated; 3) independent logs mean a single government action cannot remove evidence of issuance from all logs; and 4) multiple independent logs mean a CA and log operator can't collude to obfuscate an embarrassing misissuance.
All logs are:
- Append only - Certificates can only be added to a log; they can’t be deleted, modified, or retroactively inserted.
- Cryptographically assured - Logs use a cryptographic mechanism called a Merkle Tree Hash to prevent tampering.
- Publicly auditable - Anyone can query a log and look for misissued or rogue certificates. All certificate logs must publicly advertise their URLs and public key.
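To make the tamper-evidence concrete, here is a minimal sketch of the Merkle Tree Hash rule that CT logs use (defined in RFC 6962): leaf inputs are hashed with a 0x00 prefix, interior nodes with a 0x01 prefix, and the tree splits at the largest power of two smaller than the entry count. This is an illustration of the hashing rule, not a log implementation, and the "cert" strings are placeholders:

```java
import java.security.MessageDigest;
import java.util.Arrays;

public class MerkleTreeHash {

    static byte[] sha256(byte[]... parts) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            for (byte[] p : parts) md.update(p);
            return md.digest();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // RFC 6962 Merkle Tree Hash over leaves[lo..hi): leaves get a 0x00 prefix,
    // interior nodes a 0x01 prefix.
    static byte[] mth(byte[][] leaves, int lo, int hi) {
        int n = hi - lo;
        if (n == 0) return sha256();                           // hash of the empty tree
        if (n == 1) return sha256(new byte[]{0x00}, leaves[lo]);
        int k = Integer.highestOneBit(n - 1);                  // largest power of two < n
        return sha256(new byte[]{0x01},
                      mth(leaves, lo, lo + k),
                      mth(leaves, lo + k, hi));
    }

    public static void main(String[] args) {
        byte[][] certs = { "cert-a".getBytes(), "cert-b".getBytes(), "cert-c".getBytes() };
        byte[] root = mth(certs, 0, certs.length);
        // Tampering with any logged entry changes the root hash, which is what
        // makes retroactive modification detectable.
        certs[1] = "cert-x".getBytes();
        System.out.println(Arrays.equals(root, mth(certs, 0, certs.length))); // prints false
    }
}
```

Because the root is recomputed over every entry, retroactively inserting, deleting, or modifying a logged certificate yields a root hash different from the one the log previously published.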
A certificate monitor is anyone who watches the certificate logs for suspicious activity, like large brand owners or CAs.
Monitors can fetch information from the logs by using HTTP GET requests. Each customer may act as their own log monitoring service or may delegate this to another party. DigiCert plans to provide log monitoring services for its customers.
Certificate auditors check the logs to verify that the log is consistent with other logs, that new entries have been added, and that the log has not been corrupted by someone retroactively inserting, deleting, or modifying a certificate.
Auditing will likely be an automated process that is built into browsers. However, auditors could be a standalone service or could be a secondary function of a monitor. | <urn:uuid:4ea3d09b-b0e1-47e1-8eb3-a815dd599555> | CC-MAIN-2017-04 | https://www.digicert.com/certificate-transparency/how-it-works.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00289-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.917428 | 707 | 2.515625 | 3 |
As scientists increasingly rely on big data to drive their research, a new set of software tools is emerging. Two of these new tools, developed by Microsoft’s External Research division, were launched on Monday at the Microsoft Research Faculty Summit in Redmond, Wash. They include the Project Trident workbench and the Dryad/DryadLINQ programming environment.
Project Trident was originally aimed at oceanographic applications (hence the name). The work began as a collaboration between Microsoft External Research, the University of Washington and the Monterey Bay Aquarium to provide a high-level workflow tool for oceanographers. Oceanography, like many scientific domains today, is being inundated with a deluge of data that researchers are struggling to manage.
Once the proof-of-concept stage for Project Trident was completed, Microsoft realized it could be used as a general-purpose platform for other areas, such as astronomy, environmental science, medicine or essentially any type of research that is dominated by workflow issues. The data is coming from a growing number of inexpensive sensors that collect information in real time as well as an ever-expanding collection of scientific databases being stored on the Internet or in private repositories. In many cases, both data rates and data volumes are growing beyond the capabilities of traditional software environments.
Unlike the commercial world, the science community tends to freely pass its data around. But turning the raw information into useful knowledge often requires weeks, months or even years of software development involving customized scripts and applications. The whole idea behind Trident is to enable workflow applications to be developed by scientists, rather than programmers, by structuring the process into modular steps.
“Why lock your knowledge up into scripts or programs when you could actually write it in a tool that other people stand a chance of reusing,” asks Roger Barga, who is leading Microsoft’s development of Project Trident. According to him, researchers are recognizing that the model of customized workflow development is not sustainable. Even if software maintenance were less expensive, scientists are looking for the kind of speed and flexibility that a code rewrite does not allow.
The Trident workbench is being used today by oceanographers at the University of Washington for seafloor-based research that uses thousands of ocean sensors and by researchers at the Monterey Bay Aquarium Research Institute to study typhoon intensification. The workbench is also being employed by astronomers at Johns Hopkins University to support the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) project, which is looking for objects in the solar system that could pose a threat to Earth. In this case the data being ingested comes from an array of 1.4-gigapixel digital cameras that capture images of the night sky.
In a nutshell, the Trident workbench tool provides a visual framework for managing and developing workflows. At startup, the user sees a library of existing workflows and activities (or workflow steps). In the GUI, one can add or delete steps from the pipeline by simply dragging and dropping. The idea is that domain experts with no programming knowledge can go in and mix and match existing workflow components to author new experiments and run them on the fly.
A typical workflow would start with reading in the raw data — data files and/or sensor devices. The next step would be to convert the various data sources into a common format. An analysis pipeline — filtering and conditioning algorithms — would come next. Typically the last step is to produce a visual representation of the result.
It’s not all just shuffling objects around a GUI, however. The individual activities, such as reading the raw data, analyzing it, and creating visualizations have to be developed in the first place, as you would any other piece of software. But once developed, the activities can be bound to any user-generated workflows. According to Barga, their experience has been that once you get more than a dozen or so workflows constructed, the users find they’re no longer writing much new code.
One of the important strengths of Trident is that it can utilize HPC clusters. Scientific analysis at scale often requires a high performance computing platform for reasonable performance. By default Trident assumes a single node execution, but users can schedule a job across multiple cluster nodes by creating a workflow application that communicates with the HPC job scheduler.
As one might have guessed, the assumed clustering environment here is Microsoft’s Windows HPC Server, but Trident does allow you to plug in your own scheduler too. This enables researchers to run on a Linux cluster, which remains a much more common platform today for high performance computing. Barga says plugging into a non-Windows scheduler is just one of the different ways Trident has been designed for extensibility, noting that even the tool’s GUI can be replaced should users wish to have a customized look and feel. One dependency that cannot be jettisoned, however, is the Windows .NET framework. The .NET environment contains the Windows Workflow, which is the foundation of the Trident workbench.
The other tools Microsoft released on Monday — Dryad and DryadLINQ — are aimed at developers rather than end users. Dryad itself is a general-purpose data parallel programming runtime designed to run distributed applications on Windows clusters. The runtime is responsible for scheduling resources, handling hardware and software failures, and distributing data and code across the cluster as needed. DryadLINQ is an abstraction layer that runs LINQ (Language Integrated Query) operations on top of Dryad, the idea being to be able to execute data queries that automatically get parallelized via the Dryad runtime.
Unlike MPI, Dryad is not for latency-sensitive computation. It is aimed at applications that can increase data throughput via loosely-coupled parallelization. Microsoft Research itself uses Dryad internally for search engine and machine learning research. Barga says they have scaled such applications up to 3,000 nodes on a Windows HPC Server cluster, noting that some of these jobs run for dozens of hours. “The beauty of the Dryad runtime is that if an individual node drops out or there’s a failure in one of the jobs, Dryad automatically recovers, moving the computation off the failed node and reproducing the inputs that node was responsible for,” says Barga.
Microsoft is really offering Trident and Dryad/DryadLINQ as two separate solutions, but with interoperability. Trident includes a pre-defined custom activity that invokes Dryad/DryadLINQ, allowing the programmer to pass it LINQ queries. But the real intention seems to be to encourage users to develop their own Dryad/DryadLINQ components to hook into the Trident workbench or use them in standalone applications.
Trident and Dryad/DryadLINQ will be released under the MSR-LA license (Microsoft Research License Agreement) and, as such, are for non-commercial academic use only. Barga says Microsoft is considering some sort of license arrangement for commercial users, but without any requirement for royalty paybacks. The bottom line here is that Microsoft is not looking to generate revenue directly from these tools, but rather to expand the Windows ecosystem for researchers and encourage use of the Windows HPC Server platform.
Barga couldn’t talk about any future interoperability between these tools and Microsoft’s Azure cloud computing platform, but it’s reasonable to assume that all these technologies are heading toward convergence. “Science is moving to the cloud and we want to make sure that all of the tools that we offer, including things like Dryad and Trident … will work on the cloud for scientists who want to do really big data challenges,” says Barga. | <urn:uuid:2daf3c6c-347a-46c5-a839-4df036ac6b71> | CC-MAIN-2017-04 | https://www.hpcwire.com/2009/07/13/microsoft_releases_new_software_tools_for_researchers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00107-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937628 | 1,577 | 2.609375 | 3 |
It’s quite possibly the most infamous screen image in computing: the Blue Screen of Death. Encountering this screen will often cause two thoughts to run through a user’s mind: what just happened, and how can I fix it?
The BSOD is the result that Windows resorts to when it experiences an error critical enough to require a reboot. This is what makes it different than a simple application crash, which will usually only require you to restart the application, not the entire computer.
Typically, a BSOD will occur if your computer encounters a problem in its hardware, a driver or software issue, or a virus infection. As a result, a STOP error is displayed, Windows crashes, and the only remedy is to perform a complete reboot, at the unfortunate cost of any unsaved data that the user may have been working on.
However, not all BSODs are caused by the same problem. Some are legitimate errors, but others are clever attempts by scammers to infiltrate a device more completely. An easy way to spot the difference between real and fraudulent BSOD messages is to check for a phone number. If one is provided, someone is trying to scam you. Microsoft has never provided contact information on their Blue Screen message, only the error code and instructions on how to reboot.
If there is no phone number provided, you’ve managed to incur a real BSOD. Fortunately, the steps to resolve it are pretty straightforward.
System Restore: If you’ve been encountering the BSOD fairly regularly, it could very well be a software problem causing it. If restoring the software to a previous version resolves the issue, you have confirmed that the BSOD was software-related. This is something you should have IT handle for you, since you don’t want to backtrack over something critical.
Malware Issues: Unsurprisingly, malware can often be the cause of this particularly irksome problem. Running a quick scan should quickly root out any malware floating around on your workstation that is causing your problem. Of course, it’s best to be proactive and have a properly managed antivirus that is kept updated and run regularly.
Boot Up in Safe Mode: By eliminating all but the essential drivers to boot your PC up, it will allow you to determine if a certain driver is causing the issue. Safe mode will allow you to work towards resolving the cause of the BSOD. You might not be able to get much work done in safe mode, but it allows a technician to access event logs and other tools to help determine the problem.
Reinstall Windows: This should be your last-ditch option, as it will completely wipe your existing operating system and replace it with a new installation of Windows. If this doesn’t fix the Blue Screen, your problem most likely lies in your hardware, such as a hard drive failing, memory error, or several other issues. Again, you should rely on an expert who understands the repercussions of reinstalling Windows. Your applications, settings, and many other factors will need to be put back into place before you can start working again.
If you’d rather not have to deal with a BSOD, Nerds That Care can help. By managing your IT, we can predict and prevent issues that would otherwise impact your productivity. To find out more, give us a call at 631-648-0026. | <urn:uuid:d8a5cd0c-621e-40fa-a24f-d7193a56e1fc> | CC-MAIN-2017-04 | https://nerdsthatcare.com/nerd-alerts/entry/tip-of-the-week-4-ways-to-resolve-the-blue-screen-of-death | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00133-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948867 | 699 | 2.671875 | 3 |
The Defense Advanced Research Projects Agency (DARPA) flew the fastest aircraft ever built – until it disappeared. The Falcon Hypersonic Technology Vehicle 2 (HTV-2) was launched aboard a rocket at 7:45 am Thursday, August 11 from Vandenberg Air Force Base in California. Designed to reach any point on the planet in under an hour, the launch was the second test of the HTV-2 system. Following separation from a Minotaur IV rocket, the HTV-2 began flying at Mach 20. Shortly thereafter, contact was lost.
“Here’s what we know,” said Air Force Maj. Chris Schulz, DARPA HTV-2 program manager and PhD in aerospace engineering in an agency news release. “We know how to boost the aircraft to near space. We know how to insert the aircraft into atmospheric hypersonic flight. We do not yet know how to achieve the desired control during the aerodynamic phase of flight. It’s vexing; I’m confident there is a solution. We have to find it.”
The first test of the HTV-2, conducted in April 2010, yielded similar results. DARPA officials said despite the loss of the aircraft, they collected valuable data and will attempt another test in the future.
“Filling the gaps in our understanding of hypersonic flight in this demanding regime requires that we be willing to fly,” said DARPA Director Regina Dugan. “In the April 2010 test, we obtained four times the amount of data previously available at these speeds. Today more than 20 air, land, sea and space data collection systems were operational. We’ll learn. We’ll try again. That’s what it takes.”
The video below shows how the test was supposed to have gone (no audio). | <urn:uuid:9c346f4d-1e1a-4115-92ff-676779858cb0> | CC-MAIN-2017-04 | http://www.govtech.com/technology/DARPAs-13000-mph-Test-Plane-Launches-Vanishes-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00345-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94574 | 384 | 3 | 3 |
Last week, the Cyber Security Industry Alliance issued a report titled "Teaching Children Cyber Security and Ethics" calling for the creation of a national K-12 curriculum for teaching children how to use the Internet safely and ethically.
According to Paul Kurtz, Executive Director of CSIA, many groups are providing cyber security education for young people. However, "the problem is that there is no national coordination of these programs and no clear leader in this area, leaving parents and teachers confused about where to turn to get information," he said.
Kurtz sees coordinated cyber security education for the nation's children as a way to improve the general state of cyber security in the United States.
The report discusses the current state of cyber awareness education for children K-12 by framing key challenges, describing elements of cyber awareness and providing snapshots of typical education programs for cyber security, cyber ethics and cyber safety.
The CSIA sees a number of challenges facing parents and educators who want to teach the nation's children about cyber security, ethics and Internet safety, including the existence of several different sources that often duplicate one another's advice and confuse those looking for guidance. Other challenges cited by the report include the lack of multimedia in cyber security training materials geared toward K-12 students.
"The materials designed to teach children about cyber security need to be as dynamic as the multimedia currently holding our children's attention, such as slick computer games. Software developers, particularly in the video gaming industry, need to incorporate cyber education tools that are visually appealing into the top-grade multimedia programs they design," the CSIA said in a statement accompanying the report.
In addition to the challenges facing providers of content, cyber security and Internet safety education is also hampered by a lack of coordinated funding. "CSIA believes that national coordination of funding would help pool resources and dramatically boost the ability to produce quality curricula and multimedia required for training children," the group said.
"Children today must be aware of the cyber threats that exist, which may cause inadvertent damage to their own PCs and other electronic devices or reveal sensitive, personal information," added Kurtz. "Congress and the [Bush] Administration have committed some resources already to cyber safety, but we believe it is equally important to focus on cyber security and ethics."
Following the findings of the report, the CSIA has several policy recommendations for Congress and the Bush Administration. The first of the recommendations is to design a national program for teaching students in K-12 schools cyber security, ethics and Internet safety to be coordinated between the Department of Homeland Security and the Department of Education.
The group also advises the creation of an accreditation standard for cyber security education materials provided on the Web.
Along with the creation of a national standard for cyber security curricula, Congress and the Bush Administration should coordinate funding sources for cyber security education efforts, including federal and state governments, private and public corporations, charitable foundations and parents.
One of the first uses of this coordinated funding, the CSIA urged, should include the development of high-grade educational multimedia that keeps students attention while teaching them how to stay safe and promote security of their computers on the Internet. "The Administration and Congress should encourage gaming companies to provide programs as a public service to schools," CSIA said in a statement. | <urn:uuid:d360d466-2264-4d5e-8e19-afc80d846cea> | CC-MAIN-2017-04 | http://www.govtech.com/security/Cyber-Security-Alliance-Calls-for-Coordinated.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.60/warc/CC-MAIN-20170116095120-00253-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953422 | 663 | 3.34375 | 3 |
October 31, 2011
Boo! It’s Halloween and also the end of National Cyber Security Awareness Month. Our phones are so spookily important to us that 80% of you who responded to our Facebook poll said you’d go without either coffee, TV or chocolate rather than give up your smartphone for a month. To keep our most precious mobile devices safe, we’ve had a full month of spreading awareness of simple steps to secure your phone and the sensitive data you put on it.
National Cyber Security Awareness Month by the numbers:
Here’s a recap of our tips to secure your smartphone. Don’t be spooked of the bogeymen of stolen data or hacked phones when you take a few simple precautions.
- Set a passcode for your phone. We love this xkcd comic about setting a strong password.
- Use discretion when downloading apps, especially when downloading from 3rd party markets. Always check the permissions the app is asking for and the developer’s name, ratings and reviews.
- Refrain from using unsecured Wi-Fi; it’s like sending your sensitive data over the air in a clear envelope where anyone can see the contents. If you are really dying of boredom at the airport, just window-shop; avoid email, online shopping and social networks.
- Keep your phone’s software up to date. Operating system updates often include patches to known security vulnerabilities.
- Download a mobile security app like Lookout, available for both Android and iPhone.
Just because National Cyber Security Awareness Month is ending doesn’t mean you can revert to bad habits like downloading shady apps or leaving your phone lying around without a passcode. Our smartphones and tablets are mini computers we rely on everyday. Spreading awareness for safe smartphone usage is an ongoing effort that involves everyone. Continue to share our tips with your friends and family—because cyber security awareness doesn’t end today—it lasts all year long!
*Photos courtesy: Gadgetsin.com and Applegazette.com.
October 28, 2011
The year was 2003. Our 8th grade valedictorian was deep into her very own “this is our time” graduation speech, but she lost me at “My fellow graduates.” I was too busy fantasizing about the sleek, cutting-edge Motorola cell phone that awaited me after I got my meaningless Middle School diploma. She was really something— fully stocked with a color screen, alarm clock, and insanely long battery life, my first cell phone will forever hold a special place in my heart.
Fast forward almost ten years later, and kids yet to hit puberty are sporting mobile devices that have revolutionized the way people communicate. Yes, in the year 2011, the smartphone is king. But even kings have their flaws. True, my first cell phone didn’t act as a flashlight or tell me how to tie a tie, but at least it didn’t have inconvenient two-a-day charging sessions. Smartphone battery life, or lack thereof, is a common complaint amongst most users. So how can someone enjoy Google and Apple’s gifts to mankind when they’re plugged into a wall three hours a day? You can’t— at least not fully.
Now, rumor has it that Lookout is a no-no for people who are concerned with battery life. Some claim that because our program runs quietly in the background during routine maintenance, it’s constantly draining energy from your smartphone without you knowing it. Objection! While it is true that Lookout performs daily/weekly scans or backups depending on your preference, our studies show that in these cases battery exhaustion is the power equivalent of making a 30-second phone call. Frequent use of the find my phone feature (and I’m talking daily) will have a more noticeable effect because of the GPS connection requirement, but even this takes just three minutes. Not to worry, that’s like listening to one song on Pandora.
For those who may be skeptical of Lookout’s internal battery testing, take a look at this Reddit post. This user was able to keep their HTC Wildfire running for 29 days without a single charge. And what did they have running in the background? Lookout Mobile Security, of course. So rest easy my fellow smartphoners, and consider the Lookout battery exhaustion myth busted!
October 27, 2011
Cupcake. Eclair. Gingerbread. Ice Cream Sandwich. All delicious treats in their own right, these sugary desserts have something else in common— they’re Android OS versions. Creativity aside, the importance of updating your smartphone to the latest version often gets overlooked by users who are unaware of the benefits. Whether they are iOS or Android, big or small, updates do much more than speed up your device or tweak your user interface. They often contain critical security patches that make sure your Android or iPhone doesn’t become a hacker’s playground.
Android updates are pushed to users over-the-air. You, the user, will see a notification on the home screen prompting you to accept the update. To check whether an install succeeded or to update your Android manually, follow these four simple steps:
1. Push the “Menu” button from the Home screen.
2. Touch the screen or use the navigator wheel to select the “Settings” option.
3. Within the “Settings” menu, select the “About Phone” option found towards the bottom of the list.
4. Touch the “System Updates” option. This causes the phone to look for any new Android updates. If an update is available, the phone will download and install it. If your system is up-to-date, then it will tell you that as well.
With the release of iOS 5, Apple has also instituted an over-the-air update process. If your phone is running an older operating system, however, you will need to physically connect your device to your computer and follow these directions:
1. Verify that you are using the latest version of iTunes (before connecting your iOS device).
2. Select your iOS device when it appears in iTunes under Devices.
3. Select the Summary tab.
4. Click “Check for Update.”
Along with checking for software updates, downloading a mobile security app like Lookout is another crucial step in protecting your phone. Just as you protect your PC, you should protect your phone against malware and spyware. When you download new apps, shop online, browse social networks, or use your phone for banking, security apps like ours will be there to protect you. So do your small part, download a security app and make sure your smartphone is always running on the most up-to-date operating system.
October 26, 2011
With more than 63 million units estimated to sell in 2011, tablets are unquestionably this year’s must-have device. If you don’t already own a tablet, you’re probably thinking—or dreaming—about buying one. Whether you own one now or are planning to purchase a tablet this holiday, keeping your tablet safe and secure will be top of mind.
Whether it’s WiFi-Only or it Has a Data Plan, We’ve Got You Covered.
At Lookout, we know tablets are the new mobile frontier, so we made our same smartphone security protection and find-my-phone functionality available on any tablet or iPad—including Honeycomb, Ice Cream Sandwich and WiFi-only tablets. So regardless of which kind of tablet or iPad you have, you can keep your device safe.
Already Have Lookout on Your Smartphone? Manage Your Tablet from the Same Lookout Account. If you’re already using Lookout on your smartphone, you can now easily add a tablet to your account so all of your mobile devices are managed in one place at lookout.com. Also, Lookout automatically updates over-the-air, making it easier on you to keep your most personal devices safe.
So if you’re toting a tablet, secure it with Lookout! The Lookout Mobile Security app is available for download in the App Store or Android Market for free.
Your tablet is just as important as your phone (and likely has a higher price tag), so we wouldn’t recommend skimping on keeping it safe!
October 21, 2011
The moment we’ve all been waiting for is almost here. A stunning upgrade to the Android operating system is just a few weeks away. Ice Cream Sandwich will be shipping on the stylish and blazingly fast Samsung Galaxy Nexus in November. In May, Google announced a commitment with every major carrier and device manufacturer to support upgrading capable devices to the latest version of Android for 18 months after the device ships, so hopefully we’ll all be seeing Ice Cream Sandwich on our own devices soon, too!
Ice Cream Sandwich is a feature-packed release alongside an entire redesign of the Android UI. Some of the noteworthy improvements include:
Improved multitasking. Ice Cream Sandwich allows you to see all open apps simultaneously and easily close apps you are done with by swiping them off the screen.
Single-motion panoramic photos. Take a large panoramic photo by simply moving your camera slowly from one side to the other.
Real-time voice dictation. Watch text appear in any input field as you speak naturally.
These are just some of the many features in Ice Cream Sandwich that I’m excited about. But we here at Lookout don’t just love our phones; we also love anything that makes our phones safer. There are a few ways in which Ice Cream Sandwich should help give you more control over your phone and keep your most personal computer and most personal data safe.
Owner info in the lock screen: You can now optionally include a personal message on your lock screen in case you lose your phone and someone else finds it while the screen is locked (you do have a passcode set, right?). This should help increase the chances and reduce the time it takes to recover a lost device. If you wish to include contact details in your message, remember not to use your cell phone number (unless it’s a Google Voice number you can check from another source).
Full device encryption: You can encrypt the entirety of your phone, and this feature will be available on all Android devices running Ice Cream Sandwich. Once your device is encrypted, it will be very difficult for anyone to access any of your data without knowing your PIN or passcode. Setup takes about an hour and is not reversible without factory resetting your phone. Also be aware that if you forget your passcode there is no “Lost Password” button: all your data will be lost permanently (you can still factory reset your device, though).
Enhanced Control/Management of Apps
Prior to Ice Cream Sandwich, if an app was preloaded on a mobile device, users were unable to remove it. Now, users will have two options at their disposal:
Disable preloaded apps: While you can’t uninstall preloaded applications since they are on the system partition of the device, you can now disable them. A disabled app cannot launch, access any information or even display an icon in your App Tray. It’s inoperable unless you re-enable it.
Disable background data for specific apps: If background data is disabled for an application, it can only access the network if it’s currently running in the foreground. While this feature seems to be intended to prevent a data-hogging app from using all your bandwidth if you’re not lucky enough to be on an unlimited data plan, it can also be used to protect your private information. For example, Google Maps needs access to your location while you are engaging with the app. But if you’d prefer it not collect and send information about you while you aren’t interacting with it, you can disable background data for that specific app. A word of warning though: just because an application can’t access your cellular network doesn’t mean it can’t send data over WiFi.
My face is my passport, verify me:
Step aside, standard PIN and swipe unlock codes: Ice Cream Sandwich will allow users to unlock their phones using facial recognition. Users simply point the phone’s camera at their face to unlock the device. If it doesn’t recognize you (because you are in the dark, shaved your beard, got plastic surgery or are wearing too much makeup), it will ask for an unlock code. (Note: there has been some speculation that you will be able to bypass this lock by pointing the camera at a good picture of the person in question; Tim Bray, a Developer Advocate for Android, insists via Twitter that you can’t unlock it with a photograph.)
I’ll reserve my judgment on this feature until I get a chance to play with it, as it is not included in the emulators that shipped with the Android 4.0 SDK as far as I can tell.
Overall, I think including your contact details on your lock screen, being able to encrypt your whole device and the enhanced control over applications included in Ice Cream Sandwich’s new security features look to offer enhanced protection for the Android platform. As an Android user, I can’t wait for Ice Cream Sandwich to start rolling onto my favorite Android devices. As a developer I can’t wait to get started playing around with the new APIs to deliver great new features to Lookout Mobile Security!
October 20, 2011
Recently, Lookout identified a new Android Trojan, LeNa, which is an evolution of the Legacy variant discovered earlier this year (also known as DroidKungFu). Previous Legacy variants were spotted only in alternative app markets and forums in China, collecting various details about users’ Android devices. More recently, we discovered a variant of Legacy, which we are calling LegacyNative (LeNa), that was predominantly found in alternative Chinese markets, though a couple of instances were also found on the Android Market. LeNa has capabilities similar to its predecessors, but it uses new techniques to gain a foothold on mobile devices.
All Lookout users are already protected against LeNa. We let Google know about the variants and all LeNa infected apps were promptly removed from the Android Market.
How it Works
Unlike its predecessors, LeNa does not come with an exploit to root the device; rather, it requests privileged access on a pre-rooted device. On un-rooted devices, it offers “helpful” instructions on how to root the phone. In some samples, LeNa is re-packaged into apps (a VPN management tool, for instance) that could conceivably require root privileges to function properly. Other samples attempt to convince the user that root access is required to update. Once the user grants LeNa root privileges, it starts its infection process in the background, while performing the advertised application tasks in the foreground.
Once on a user’s device, the Trojan uses a different tactic than previously seen to infect the phone and launch the malware. LeNa hides itself inside an application that is native to the device (an ELF binary). This is the first time an Android Trojan has relied fully on a native ELF binary as opposed to a typical VM-based Android application. In essence, LeNa trojanizes the phone’s system processes, latching itself onto an application that is native to the device and critical to making the phone function properly.
Our analysis shows it has a number of malicious capabilities after requesting root access:
- Communicating with a command and control (C & C) server
- Downloading, installing and opening applications
- Initiating web browser activity
- Updating installed binaries, and more.
While analyzing and watching LeNa, we’ve seen quite a few things that were pushed by the server. One of the applications being pushed by the C&C server was a DroidDream infected application. This may show a possible correlation between the creators of the DroidDream/DroidDreamLight variants of Android malware and the Legacy variants.
Click here for the complete technical teardown on LeNa.
Who is affected?
Though LeNa has primarily been distributed through third-party markets, a handful of samples were removed from the Android Market. Among the infected apps are One Key VPN and Easy VPN. In total, LeNa was repackaged in over 40 applications, often utilities (a VPN app, a reader app, a security app, etc.).
How to Stay Safe
- Only download apps from trusted sources, such as reputable app markets. Remember to look at the developer name, reviews, and star ratings.
- Always check the permissions an app requests. Use common sense to ensure that the permissions an app requests match the features the app provides.
- Be alert for unusual behavior on your phone. This behavior could be a sign that your phone is infected. These behaviors may include unusual SMS or network activity.
- Download a mobile security app for your phone that scans every app you download to ensure it’s safe. Lookout users automatically receive protection against this Trojan.
October 19, 2011
As part of National Cyber Security Awareness Month, we have been reminding our users to treat their smartphones and tablets as mini-computers. Just like your computer, your smartphone has access to WiFi networks. A quick Google search for “public WiFi” will give you plenty of articles with tips on how to stay safe while on public WiFi on your PC, like this slideshow from PC Mag. But how do you make sure the data on your phone is protected as well?
Public WiFi networks, the kind you find for free in coffee shops and airports, are usually unsecured; this means that all of the data sent over the network is unencrypted. Sending data unencrypted (e.g. via HTTP rather than HTTPS) is like sending your sensitive mail in a clear envelope, so that everyone can see its contents, rather than in an opaque one. So while the free Internet connection may seem convenient, if you are connected to an unencrypted network, anyone with the right tools would be able to see where you are surfing, the emails you are sending, and potentially even the passwords that you enter.
The key to securing the activity on your phone from prying eyes is pretty simple: you need to encrypt it. Here are 7 actions you can take to ensure you are surfing the web in the safest way possible:
If possible, connect to an encrypted WiFi network. In general, a network that requires a password is safer than a network without one, because the traffic is encrypted. Keep in mind, though, that anyone with the same password could potentially access your data. Tip: Many people think that paid WiFi hotspots are more secure than free hotspots. While this may be somewhat true, just because you are paying for WiFi doesn’t mean it is secure – paid hotspots are almost always unencrypted and just use a captive Web portal to prevent access if you haven’t paid yet.
Let your device forget any public networks to which you have previously connected. To prevent reconnection:
- On Android: Go to Settings > Wireless & networks > WiFi settings > Click on the open network name and hold down until you see a menu, then click “Forget Network”
- On iPhone: Go to Settings > WiFi > Click on the blue arrow next to the network name and then select “Forget this Network” at the top of the page
Use encrypted websites
Even if you aren’t able to connect to a secure WiFi network, you can still protect your data by using websites with SSL encryption (note: encrypted sites’ URLs start with HTTPS instead of HTTP). You will also see a lock icon next to the URL that lets you know your data is protected. Check out this video of Lookout’s CTO, Kevin Mahaffey, giving a demonstration of how to ensure you are using SSL encryption whenever possible.
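For the technically inclined, the HTTPS check described above is easy to automate. Here is a minimal, hypothetical Python sketch (the function name `is_encrypted_url` is our own, not from the original article) that flags whether a URL uses the encrypted HTTPS scheme:

```python
from urllib.parse import urlparse

def is_encrypted_url(url: str) -> bool:
    """Return True if the URL uses HTTPS (i.e., SSL/TLS-encrypted HTTP)."""
    return urlparse(url).scheme.lower() == "https"

# Example: only the first URL would protect your data in transit.
for url in ("https://bank.example.com/login", "http://bank.example.com/login"):
    label = "encrypted" if is_encrypted_url(url) else "NOT encrypted"
    print(url, "->", label)
```

Note that the scheme only tells you the connection is encrypted; it says nothing about who is on the other end, so you should still check the site’s certificate and address.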
Use your data connection
When you are away from your home or work network, you can’t go wrong with using your 3G or 4G cell data connection instead. Even though it is a little slower and it uses your battery more than sending data over WiFi, it is a secure connection. Most cell service providers encrypt the traffic between cell towers and your device, so you can send emails and check your bank account balance with the peace of mind that your data is secure.
Download a security app that notifies you as soon as you connect to an unsecured WiFi hotspot. iPhone users can download Lookout to alert them if they connect to an unsecured WiFi hotspot that could expose their personal data and passwords.
Only window shop
If you can’t take any of the actions above to protect your data, you can still surf the web, but we recommend that you wait until you are on a secure connection to transmit sensitive data. Just imagine that a stranger is looking over your shoulder the whole time and can see everything that you do on your phone – and don’t do anything that you wouldn’t want them to see!
Consider using VPN if your device supports it
For those of you that may be a bit more tech-savvy, the most secure way you can connect to WiFi on your phone is through a VPN, because all of your data is sent through an encrypted tunnel. Both Android and iOS include VPN support, and this article from eSecurityPlanet gives you some quick options for how to set it up.
The great thing about the Internet is the ability to be connected and online 24/7. There is an unlimited amount of information at our fingertips, and we can easily communicate with anyone at the click of a button. But this connectivity brings security risks for your personal information. We just want to make sure that you are aware of those risks so you can keep your data and your phone protected.
October 18, 2011
You go everywhere and do everything with your iPhone – it’s your social calendar, your address book, your photo bragbook, your checkbook, and your touchstone to the outside world. As much as we love and rely on our iPhones we want to keep them safe, and keeping your iPhone safe should be simple. That’s why we built Lookout for iPhone as a single, easy to use app that keeps your iPhone safe and secure. Now you can download it for free from the Apple App Store.
When developing Lookout for iPhone, we focused on the issues most important to iPhone users. In a recent survey by Javelin Research, we found that 93% of iPhone users have concerns about the security of data stored on their phones. In addition, four out of every ten users are unsure about the security of public WiFi, and more than a third of users do not regularly sync their iPhone. So we made sure that Lookout can quickly find your phone if it’s lost or stolen, back up your precious data without syncing, and help you avoid connecting to unsecured WiFi or taking other actions that might expose the personal information on your iPhone. We can also restore your data to a different smartphone or even an iPad or tablet.
Lookout unites complete security and privacy protection in a simple yet powerful app. Whether you’re concerned about a network connection, wondering about the security of the software on your phone, or have suddenly lost track of your iPhone, Lookout has you covered. You can always rely on Lookout to protect your phone and your personal information. Lookout for iPhone includes:
Missing Device. Lookout can quickly find your lost or stolen phone on a map and sound a loud alarm to find it nearby – even if it’s set on silent and stuck in the couch cushions!
Security. Never before could you keep your iPhone safe and secure with a single app. Lookout walks you through a few simple steps to protect your privacy and secure your iPhone.
- System Advisor notifies you of iPhone settings or software that could put your privacy at risk. Lookout tells you if your iPhone software is out-of-date, which could mean you are missing recent fixes to security vulnerabilities. It also lets you know if your iPhone is “Jailbroken” which could leave you more susceptible to security threats.
- Location Services enable you to take control of your privacy by showing you which apps can access your location, helping you make more informed decisions about the apps you download and keep.
- WiFi Security warns you if you connect to an unsecured WiFi network to ensure that you don’t expose sensitive personal data like passwords or account information.
Backup & Restore. With Lookout, your contacts are automatically backed up no matter where you are. Over-the-air backup means that your contacts are always safe – even when you haven’t had time to sync your iPhone. You can view your data on the secure Lookout website at any time and restore your data to the same iPhone or a new iPhone, other smartphone or iPad.
Management. Lookout for iPhone can help you keep tabs on all of your important mobile devices – from your iPhone or iPad to an Android phone or tablet – all from a single, easy to use dashboard on our secure website.
Stay tuned for more details on all the exciting and useful features in Lookout for iPhone later this week. In the meantime, try out our new app for your iPhone, iPad or iPod Touch and tell us what you think! Lookout Mobile Security is now available for download in the App Store for FREE! Your iPhone is your lifeline, why wouldn’t you protect it?
October 17, 2011
There is a new scam being sent around on Twitter, very similar to a phishing scam written about in July by NakedSecurity. It all starts when you receive a Direct Message from a friend letting you know that a ‘bad blog’ has been published about you, along with a link that urges you to check it out.
If you click on the link, you are taken to a page that looks almost identical to the Twitter homepage. However, the URL of this webpage is twittler.com instead of twitter.com, a difference that is even harder to spot on a mobile device because the screen is so small. If you mistake this fake page for the actual login screen and enter your login information, the people behind the phishing scam now have access to your account and can continue sending the scam to all of your Twitter contacts.
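As an illustration (not part of the original post), lookalike domains such as twittler.com can be caught programmatically by comparing a link’s hostname against the domain you expect. A minimal Python sketch, with a hypothetical function name of our own:

```python
from urllib.parse import urlparse

def is_expected_site(url: str, expected_host: str) -> bool:
    """True only if the URL's hostname is the expected domain or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    expected = expected_host.lower()
    return host == expected or host.endswith("." + expected)

print(is_expected_site("https://twitter.com/login", "twitter.com"))         # genuine
print(is_expected_site("https://mobile.twitter.com/login", "twitter.com"))  # genuine subdomain
print(is_expected_site("https://twittler.com/login", "twitter.com"))        # lookalike
```

The suffix check matters: a naive substring test would wrongly accept a domain like twitter.com.evil.example, while this hostname comparison does not.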
Many people are understandably worried after receiving a message that suggests there is a negative blog post written about them, and have fallen victim to this scam. If you were tricked, don’t worry; you aren’t alone. Here’s what to do:
- Change your Twitter password immediately. If you use that password for other accounts, change those too, and moving forward, don’t use the same login for two different accounts.
- Let all of your followers know about the scam and tell them not to click on any links from you
- Visit the Twitter Help Center for more tips
Some useful tips to stay safe in the future:
- Don’t click on a link if something looks fishy. (Tiny URLs are great to use on Twitter but you don’t always know where the link will lead you. A simple tool like LinkPeelr will help you get to the real destination of the link, and you can decide whether or not that destination is safe.)
- Use a strong password, and don’t use the same password for multiple websites.
- Follow Twitter’s @Spam and @Safety accounts for timely information on new scams.
- Download a security app like Lookout that reviews every link you click to make sure it’s safe.
October 13, 2011
A new Android phishing scheme posing as an unofficial Netflix app has been discovered outside of the official Android Market. The app asks for users’ Netflix usernames and passwords and sends them to a phishing server. The app was not posted to the Android Market, so the risk for most users is quite low.
When the app is launched, the user is presented with a login dialog requesting an email address and password. Instead of submitting those credentials to Netflix, the app collects the credentials and sends them to a remote server. This server now appears to be offline and unavailable. The app then presents an error screen to the user indicating incompatibility with the device.
While it is possible that the developers of this app sought access to Netflix accounts, we find it unlikely that that was the actual goal of the phishing scheme. Given the tendency of people to use the same password across many different accounts, we speculate that the authors sought to gather email addresses along with passwords that could likely be used to gain access into other accounts like email, Facebook, banking accounts and more.
Who Is Affected
The app seems to take advantage of the fact that the official Netflix Android application was not previously available for all Android devices. This app targets users who, due to being on a device that was unsupported by the official app, were looking for an alternative to watch Netflix movies. The official Netflix application has been available for some time, but it was only downloadable via the official Android Market by a restricted group of devices and platform versions, which Netflix said was due to wanting to provide the best possible experience for users.
With rumors circulating that the app actually does work on a broader range of platforms, users have extracted binaries and shared copies of the official application on Internet file sharing sites such as Mediafire.
How to Stay Safe
All Lookout users are already protected against this threat. If you have not downloaded an unofficial Netflix app outside of the Android Market, you are probably safe. If you believe you may have inadvertently downloaded this phishing app, you should change your Netflix password, as well as the password on any other accounts where you used the same one.
As always, we urge you to pay close attention to the apps you are downloading. Remember to:
- Only download applications from trusted sources, such as reputable application markets. Remember to look at the developer name, reviews, and star ratings.
- Always check the permissions an app requests. Use common sense to ensure that the permissions match the features the app provides.
- Be aware that unusual behavior on your phone or unexplained charges on your phone bill could be a sign that your phone is infected.
- Download a mobile security app for your phone that scans every app you download. Lookout users are automatically protected against this phishing app.
- Don’t share passwords across different logins. Create different passwords for all your online logins and avoid simplistic passwords, such as the last four digits of your phone number or public information (like your birthday). As a general rule of thumb, if the information may be available on Facebook—don’t use it for your code.
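One practical way to follow the advice above is to generate a different random password for every account. As a hedged illustration (not from the original post), this Python sketch uses the standard-library `secrets` module; the function name `generate_password` is our own:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A distinct password per account, so one leak can't unlock the others.
for account in ("email", "banking", "social"):
    print(account, "->", generate_password())
```

In practice a password manager does exactly this for you and remembers the results, so you never have to reuse a password.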
What is Cornsilk? Information & Medicinal Properties of Cornsilk
What is Cornsilk?
Corn silk is a collection of the stigmas (fine, soft, yellowish threads or tassels) from the female flowers of corn (maize); the stigmas are four to eight inches (10-20 cm) long with a faintly sweetish taste. Cornsilk (Zea mays) is the herbal remedy made from these thread-like strands, which are found inside the husks of corn, a grain that is a member of the grass family (Gramineae or Poaceae). The stigmas are collected for medicinal use before the plant is pollinated; they can also be removed from corn cobs for use as a remedy. If fertilized, the stigmas dry and become brown, and yellow corn kernels develop. Corn is native to North America and now grows around the world in warm climates.
Cornsilk is also known as mother's hair, Indian corn, maize jagnog, Turkish corn, yu mi xu, and stigmata maydis. Corn is a grass that can grow up to 3 meters tall. Corn forms thick stems with long leaves. The flowers of corn are monoecious: each corn plant forms both male and female flowers. The male flowers form the tassel at the top and produce yellow pollen. The female flowers sit in leaf axils and form stigmas, or corn silk (soft yellow threads). The purpose of the cornsilk is to catch the pollen. Cornsilk is normally light green but can also be other colours, such as yellow or light brown.
Only cornsilk (styles and stigmas) is harvested for medicinal properties. Cornsilk should be harvested just before pollination occurs. Cornsilk can be used fresh or dried. The corn kernels (or corn) are a well known food.
Cornsilk Medicinal Properties
Cornsilk has detoxifying, relaxing and diuretic activity. Cornsilk is used to treat infections of the urinary and genital system, such as cystitis, prostatitis and urethritis. Cornsilk helps to reduce frequent urination caused by irritation of the bladder and is used to treat bed-wetting problems. Cornsilk has been found to reduce kidney stones. In China, cornsilk is traditionally used to treat oedema and jaundice. Studies indicate that cornsilk can reduce blood clotting time and high blood pressure.
Corn originates from Central America but is cultivated in many countries as a food crop and as fodder. In countries with colder climate the whole corn plant is used a cattle feed.
Health Benefits of Cornsilk
Corn Silk is an old remedy for urinary tract ailments, including bed-wetting, painful and frequent urination, stones, bloating, gravel in the bladder and chronic cystitis and prostatitis. It is also thought to help relieve edema and the painful swelling of carpal tunnel syndrome and gout. Corn Silk is an old-fashioned, gentle, but effective, diuretic without the loss of potassium. Some new research claims that Corn Silk may help to lower blood sugar levels and reduce blood-clotting time.
Cornsilk also served as a remedy for heart trouble, jaundice, malaria, and obesity. Cornsilk is rich in vitamin K, making it useful in controlling bleeding during childbirth. It has also been used to treat gonorrhea. For more than a century, cornsilk has been a remedy for urinary conditions such as acute and inflamed bladders and painful urination. It was also used to treat the prostate. Some of those uses have continued into modern times; cornsilk is a contemporary remedy for all conditions of the urinary passage.
Corn Silk is an old and effective diuretic that promotes the flow of urine, relieving excess water retention, and it has been used to treat acute and chronic bladder infection, cystitis, urethritis, prostatitis (and other prostate disorders) and also combat urinary stones. Unlike other diuretics, however, the high level of potassium in Corn Silk offsets potassium loss caused by the increased urination when used as directed. The herb is also believed to relieve bladder irritation caused by the accumulation of uric acid and gravel and eases the pain of burning urination. When Corn Silk is given to children (or adults) several hours prior to bedtime, it is said to diminish the occurrence of enuresis (bedwetting). Because it soothes bladder irritation, Corn Silk generally helps to reduce the occurrence of frequent urination problems.
- Corn Silk helps to ease edema and swelling caused by many inflammatory conditions, such as gout and carpal tunnel syndrome, and as a demulcent, it helps to soothe inflammation, especially inflamed mucous membranes. It is also used to alleviate the bloating and discomforts of premenstrual syndrome (PMS).
- Drinking cornsilk tea is a remedy to help children stop wetting their beds, a condition known as enuresis. It is also a remedy for urinary conditions experienced by the elderly.
- Cornsilk is used to treat urinary tract infections and kidney stones in adults. Cornsilk is regarded as a soothing diuretic and useful for irritation in the urinary system. This gives it added importance, since today, physicians are more concerned about the increased use of antibiotics to treat infections, especially in children. Eventually, overuse can lead to drug-resistant bacteria. Also, these drugs can cause complications in children.
- Furthermore, cornsilk is used in combination with other herbs to treat conditions such as cystitis (inflammation of the urinary bladder), urethritis (inflammation of the urethra), and parotitis (mumps).
- Cornsilk is said to prevent and remedy infections of the bladder and kidney. The tea is also believed to diminish prostate inflammation and the accompanying pain when urinating.
- Since cornsilk is used as a kidney remedy and in the regulation of fluids, the herb is believed to be helpful in treating high blood pressure and water retention. Corn-silk is also used as a remedy for edema (the abnormal accumulation of fluids).
- Cornsilk is used to treat urinary conditions in countries including the United Sates, China, Haiti, Turkey, and Trinidad. Furthermore, in China, cornsilk as a component in an herbal formula is used to treat diabetes.
- In addition, cornsilk has some nonmedical uses. Cornsilk is an ingredient in cosmetic face powder. The herb used for centuries to treat urinary conditions acquired another modern-day use.
- Cornsilk is safe when taken in proper dosages.
- Before beginning herbal treatment, people should consult a physician, practitioner, or herbalist.
- If a person decides to collect fresh cornsilk, attention should be paid to whether the plants were sprayed with pesticides.
Cornsilk Side Effects
There are no known side effects when cornsilk is taken in designated therapeutic dosages.
This list is based on the content of the title and/or content of the article displayed above which makes them more relevant and more likely to be of interest to you.
We're glad you have chosen to leave a comment. Please keep in mind that all comments are moderated according to our comment policy, and all links are nofollow. Do not use keywords in the name field. Let's have a personal and meaningful conversation.comments powered by Disqus | <urn:uuid:b9979462-484d-49cf-9d63-01c81a83f8e9> | CC-MAIN-2017-04 | http://www.knowledgepublisher.com/article-889.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00373-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938291 | 1,593 | 3.03125 | 3 |
Attacking Asthma with Advanced Telehealth Monitoring
AT&T’s prototype infrastructure provides a complete, end-to-end solution for delivering sensor data from patient to end users.
Asthma is on the rise, affecting people of all ages, particularly the young. This rise is a result of biological and chemical factors, with a main culprit thought to be airborne pollutants, smoke, fragrances and perfumes, and cleaning solvents and other volatile organic compounds (VOCs) that evaporate into the air from fabrics and carpets. With homes sealed to save energy and cut heating and cooling bills, VOCs build up in homes in high enough concentrations to trigger asthma attacks in some people.
AT&T is currently testing a portable room-monitoring device capable of detecting VOCs and then issuing alerts so those susceptible to VOC-induced asthma attacks can take preventative action.
Because healthcare is becoming increasingly data-driven as a new generation of lightweight, low-power sensors is being incorporated into a variety of new and old medical devices. Sensor-equipped heart monitors, pill dispensers, pulse-oximeters, glucometers, and other devices such as the VOC detectors will soon be sending continuous, real-time data.
Data transmitted to physicians’ offices will give physicians something they rarely have today: a long-term, comprehensive, multidimensional view of health and wellness.
This data will enable people to monitor their own health parameters and the environment around them, allowing those with chronic conditions to more carefully manage their symptoms and giving those in good health early indications of developing health problems.
But it’s the transmission of medical sensor data—to doctors, specialists, and researchers—that will do the most to transform healthcare.
Communications as a healthcare tool
Data transmitted to physicians’ offices will give physicians something they rarely have today: a long-term, comprehensive, multidimensional view of health and wellness, as well as a baseline by which to evaluate changes and anomalies. Today, physicians typically get only a snapshot of a patient’s health taken during an office visit, usually when the patient is sick. This “data” is hardly representative, and says little about what is happening in the months, or even years, between office visits. With so little data about a patient, physicians tend to focus on the specific problem or complaint, while disregarding other health indicators that provide more context for a patient’s symptoms.
Anticipating the direction healthcare was taking, AT&T started years ago looking at what would be required for relaying health data over AT&T’s existing IP net.
But sensor data, captured continuously over a long period, will give physicians a bigger picture, allowing them to see the interactions among multiple factors, such as how glucose readings change relative to weight or blood pressure. In the same fashion, data from a VOC detector may enable a physician to correlate a patient’s asthma attacks with a specific VOC trigger. With more data, physicians have more information to treat the illness rather than just alleviate symptoms.
When transmitted to medical researchers, sensor data aggregated from large numbers, perhaps millions, of people can be mined to better understand the links between diseases and their underlying causes. This is especially important for asthma and other complex diseases that are caused by a combination of environmental, biological, genetic and other factors. In the case of asthma, researchers don’t yet have a good understanding of why some people suffer asthma attacks and some do not, or why some people are susceptible to chemical triggers, and others to biological ones.
Data is key to answering these questions, but it can only be provided by individuals. Sensors make it easier to collect data. Getting the data from individuals to those that need it is the current focus.
Tying together technologies for an end-to-end infrastructure
The difficulty in transmitting medical data has always been collecting and delivering the data in a format that healthcare professionals and researchers can immediately begin using and analyzing without being concerned with the technical, low-level details of data transmission.
AT&T is now close to providing that capability. Anticipating the direction healthcare was taking, AT&T started years ago looking at what would be required for relaying health data over AT&T’s existing IP network. One key decision was choosing an appropriate protocol for transmitting sensor data from the analog device to the IP network. AT&T researchers, after evaluating different protocols, chose IEEE ZigBee wireless technology for several reasons. It has low power demands, powerful local networking M2M capability, and, like Wi-Fi, ZigBee allows devices to relay data by passing it off to nearby devices to reach more distant ones. (This contrasts with peer-to-peer Bluetooth, which does not richly network devices or provide mesh connections.) Wi-Fi and ZigBee belong to the same IEEE family of standards, 802.11 and 802.15.4, respectively.
The current step is to demonstrate meaningful use: to show that the VOC detector can be used by real people in real circumstances to avoid asthma attacks.
To transmit ZigBee-based data from sensors, AT&T Research helped create the ActuariusTM gateway, which uses a fixed broadband connection or a ZigBee-enabled smartphone to collect and securely forward measurements from Personal Health Devices standardized by IEEE 11073 and the Continua Health Alliance.
Data transmission is just one part of a complete, end-to-end communications infrastructure, which must also encompass cloud computing and services, big-data analytics and management, and emerging sensor-equipped devices—all areas in which AT&T maintains active and far-ranging research efforts.
AT&T has one other critical advantage when it comes to transmitting sensitive medical data: AT&T is a trusted entity, and does not use the model of “free” services where the hidden cost is the service provider’s access to and use of data.
Proving the concept
The asthma VOC detector is the first device of its kind being tested in conjunction with this infrastructure.
Last year, researchers demonstrated in the laboratory that the VOC detector can detect a range of airborne VOCs and then alert when concentrations might be high enough to trigger attacks.
The current step is to demonstrate meaningful use: to show that the VOC detector can be used by real people in real circumstances to avoid asthma attacks. Preliminary trials are now under way in conjunction with major healthcare partners. Though limited in scope now, the trials will expand next year as the number of participants increases substantially.
Even while meaningful use is being tested, AT&T is looking to make the device more useful by creating improved software and analytics so the device can discriminate among the various types of VOCs. Inserted into the VOC detector, this code will allow the detector to correlate occurrences of specific elevated VOC concentrations with other measurements (e.g., heart rate, blood oxygen, and other asthma indications) to try to zero in on the specific compound that triggers an attack in a specific individual.
This ability to personalize healthcare for the individual, to know the specific VOC triggers or know which medical indicators are anomalous for a specific person, further points to the almost limitless potential of data-driven healthcare. As more data is collected and analyzed, physicians and medical researchers will understand better how to not only help those already sick, but to maintain wellness in those who are healthy. This is something healthcare has been needing for a long time. Proving that a VOC detector can prevent asthma attacks is a step toward a healthcare system centered on health maintenance.
Results of the trial are expected in 2013.
AT&T's history with medical devices
AT&T may seem a nontraditional healthcare participant, but the company’s technology has long been incorporated into medical devices.
Metal detector (1881). Alexander Graham Bell invents a device to locate bullets lodged in Civil War survivors. Coils generate small currents of electricity, producing a signal when near a metal bullet. The detector was used in an unsuccessful attempt to locate the bullet lodged in President Garfield’s body. The metal coils in the spring mattress, an innovation of the day, interfered with the ability to find the bullet.
AT&T wireless heart monitor (1974). A miniaturized FM transmitter sends analog heart data to a nearby FM radio using early FCC unlicensed spectrum rules; the monitor’s “tunnel diode” device (used for the low-power oscillator) is now being revisited for exploration of Terahertz spectrum use.
CCD imaging technology (1970s). Developed originally as a new type of computer memory, the CCD (for charged-coupled device) incorporated light-sensitive silicon 100 times more sensitive than film or camera tubes. The technology was quickly incorporated into all camera types, including (in the 1990s) endoscopes and other medical cameras that could be used to look inside the body. The inventors of the CCD, Willard Boyle and George Smith of Bell Labs at Murray Hill, shared half a Nobel Prize for the invention.
Smart Slipper (2008). AT&T researchers embed pressure sensors, accelerometers, and a ZigBee radio into a slipper's cushioned insoles to continuously gather data about a patient’s gait and weight distribution. This data is transmitted over AT&T's network to physicians who can evaluate who is at risk of falling or to see early warning signs of Alzheimer’s or other health problems. The Smart Slipper, produced by the company 24Eight LLC (now ACM Systems), is now in clinical trials.
Actuarius Gateway (2009). Named for the honorific bestowed on physicians during the middle ages (after the physician Joannes Zacharias Actuarius), this medical gateway converts ZigBee-protocol data to IP for transmission to healthcare professionals. Invented at AT&T Research, Actuarius can use a fixed broadband connection or a ZigBee-enabled smartphone to collect and securely forward measurements from Personal Health Devices and the Continua Health Alliance. Working with the AT&T Network, VitalSpan a (cloud-based data system), and a Health Information Exchange such as AT&T’s Healthcare Community Online, the system can automatically retrieve health data from devices and make them available to medical professionals.
About the author
Bob Miller heads the Communications Technology Research Department at AT&T Labs - Research. His department develops new concepts and technologies for next-generation AT&T wired and wireless broadband packet access systems and services.
He holds a variety of patents covering wireless transmission, advanced speakerphones and acoustics, digital telephones, and advanced networking applications using IP technologies. | <urn:uuid:37f498f6-339c-4b80-a68c-9bcf47696d07> | CC-MAIN-2017-04 | http://www.research.att.com/articles/featured_stories/2012_12/201212_asthma_VOC_detector.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00373-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.924785 | 2,215 | 2.734375 | 3 |
Adware is a form of malicious software which displays unwanted advertising on your computer. For more information, see this blog post.
Typically a type of Trojan malware that allows its creator or proponent to gain access to a system by bypassing its security. The term “backdoor” can also refer to the method of gaining access to user systems undetected; should not be mistaken for exploits.
Other form/s: backdooring
In the context of computer malware, behavior refers to the actions malware performs on an affected system once executed.
A derivative of the word “robot.” It usually pertains to (1) one or more compromised machines controlled by a bot master or herder for the purpose of spamming or launching DDoS attacks, and (2) an automated program coded with certain instructions to follow, which includes interacting with websites and humans via the use of Web interfaces (e.g. IMs). A collective of bots is called a botnet.
Synonym: zombie machine
A collection of bots. The term also refers to the malware run on a connected device to turn it into a bot.
A bundler is a group of programs that are bunched up together to be installed with a main program, which is usually what users desire to install onto their systems. These additional programs are other unwanted software, such as adware and toolbars.
Stands for command and control, which may pertain to a a centralized server or computer that online criminals use to issue commands to control malware and bots and to receive reports from them.
Other forms: command & control, C2
DDoS stands for Distributed Denial of Service. It is a network attack that involves attackers forcing numerous systems (usually infected with malware) to send network communication requests to one specific web server. The result is the receiving server being overloaded by nonsense requests and either crashing the server and/or distracting the server enough that normal users are unable to create a connection between their system and the server. This attack has been popularized in many “Hacktivism” attacks by numerous hacker groups as well as state-sponsored attacks conducted by governments against each other.
DNS stands for Domain Name Service. It is an internet protocol that allows user systems to use domain names/URLs to identify a web server rather than inputting the actual IP address of the server. For example, the IP address for Malwarebytes.com is 126.96.36.199, but rather than typing that into your browser, you just type ‘malwarebytes.com’ and your system reaches out to a ‘DNS Server’ which has a list of all domain names and their corresponding IP address, delivering that upon request to the user system. Unfortunately, if a popular DNS server is taken down or in some way disrupted, many users are unable to reach their favorite websites because without the IP address of the web server, your system cannot find the site.
Pertains to (1) the unintended download of one or more files, malicious or not, onto the user’s system without their consent or knowledge. This usually happens when a user visits a website or views an email on HTML format. It may also describe the download and installation of files bundled with a program that users didn’t sign up for. These files can be adware, spyware, or PUPs; (2) the general term used for files that were downloaded unintentionally; i.e. “drive-by downloads.”
Pertains to (1) a type of malware programmed to take advantage of a software bug or vulnerability on a system in order to compromise it and allow the exploit’s creator or proponent to take control of it; (2) the act of successfully taking over a system by taking advantage of certain software vulnerabilities installed on it. A collection of exploits is called an “exploit kit.”
A collection of exploits which are packaged up for use by criminal gangs in spreading malware.
In the context of malware, a keylogger is a type of Trojan spyware that is capable of stealing or recording user keystrokes.
Other forms: key logger, keylogging
Synonyms: keystroke logger, system monitor
Malware which is delivered by email messages. For more information, see https://blog.malwarebytes.com/threats/malspam/
The shortened version of “malicious software.” Malware is the generic or umbrella term to refer to any malicious programs or code that are harmful to systems.
Penetration Testing (or “pen testing”) is the practice of running controlled attacks on a computer system (network, application, Web app, etc.) in an attempt to find unpatched vulnerabilities or flaws. By performing pen tests, an organization can find ways to harden their systems against possible future real attacks, and thus to make them less exploitable.
An attempt to fraudulently obtain credentials without permission, often done by email but also appears on social networks, in fake programs asking for login details, and over the phone.
Stands for “potentially unwanted program.” A program (or bundle of programs) which may be included with software the person downloading it wants. The PUP component may include unnecessary offers, add-ons, deals, adverts, toolbars, and pop-ups, all of which may be entirely unrelated to the functionality of the sole wanted program.
A type of software which locks users out of their computer and/or encrypts their files, offering to unlock on the condition that the victim pays a ransom. The ransom may involve Bitcoin or more traditional forms of payment. Ransomware ranges from crude to highly sophisticated, and only a few types are able to have their encryption successfully decrypted.
A common technique malware uses: running the original executable, suspending it, unmapping from the memory, mapping the payload on its place, and running it again.
A program which claims to perform one function but actually does another, typically malicious. Trojans can take the form of attachments, downloads, and fake videos/programs. Once on board a PC, the Trojan may do a number of things including steal sensitive data, monitor webcams, upload files to a third-party server, or just play pranks on the system owner by opening the CD tray, switching off the screen, or redirecting them to shock sites and other unwanted content.
Typosquatting is the practice of deliberately registering a domain name which is similar to an existing popular name, in the hope of getting traffic by people who mis-type the URL of the popular domain. For more information, see the article typosquatting.
A virus is malware attached to another program (such as a document) which can replicate and spread after an initial execution on a target system where human interaction is required. Many viruses are harmful and can destroy data, slow down system resources, and log keystrokes.
A worm is much the same as a virus, with the key difference being it does not need to be attached to another program to spread. | <urn:uuid:43a78045-0ce8-4797-981f-e87b2b32435a> | CC-MAIN-2017-04 | https://blog.malwarebytes.com/glossary/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00373-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934023 | 1,474 | 3.171875 | 3 |
Broadband services are lucrative in and of themselves, but ancillary services that can ride on top of data such as digital cable and digital telephone services push broadband over the top. Is it any wonder then why telecoms and cable companies, once organizations with exclusive offerings, are not competing with each other in markets across the country? Even utility companies are trying to get a piece of the action, especially power grids. After all, who would be better positioned to deliver broadband to consumers than the same organizations that offer then electricity?
BPL is an acronym for broadband over power lines, and is generic term used to describe using power lines for broadband communications. What makes BPL so attractive from a technical standpoint is that it has the potential to deliver broadband access to areas that are not being serviced by telecoms or cable companies. This leaves only wireless and satellite broadband systems to compete with, and that may be attractive to many in rural or sparsely populated regions.
How Fast is BPL?
At this stage, BPL systems destined for consumer usage offer speeds ranging from around 256 Kbps to approximately 3 Mbps depending on the underlying technology. While these speeds are nothing compared to the latest generation VDSL, cable-broadband, and fiber optic-based solutions, they are certainly competitive with satellite broadband services. There are certainly some arguments about whether or not existing wireless broadband holds an edge over BPL, but it all depends on how one looks at it.
If wireless broadband is not available in a given region, then BPL wins by default. In cases where wireless broadband services are available, there may be limitations such as bandwidth caps or frequent weather conditions that would make BPL a better choice. Of course, there are also plenty of times when BPL would be an inferior choice to wireless services, so it is very hard to definitively state which choice is superior overall.
How Does BPL Work?
If one were to envision a BPL network, it would start at a fiber-optic based hub that connects the power company’s data center to the Internet’s backbone. From here data would be relayed, probably via fiber, to different substations, though there are some BPL options that would allow thick power lines to carry massive quantities of data too and from the main hub.
It is the thick wires that make BPL an attractive option in some regards. The problem with metal wires used by DSL systems and even those by cable-based broadband networks tend to be very thin. Thicker wires have greater tolerance for a wider range of frequencies and amplitudes as a general rule, but unfortunately for BPL systems, the fact is that the power regulating equipment around the country has very fine tolerances that negate much of this advantage. Instead of being able to handle amplitudes and frequencies that would turn thinner copper wiring into slag, these thick wires are going under-utilized at this time, even when BPL data is being transferred over them.
The frequencies used by existing BPL solutions typically range in the low hundred kilohertz range, and is usually split at the coupler and bridge box that is essentially the street-cabinet of the power utility universe. Couple and bridge boxes are found on electric poles and even on the ground throughout the country, and most neighborhoods have at least one. Data typically arrives at these boxes after traveling a long distance from the utility substation and through a backhaul point that is the BPL equivalent of a DSLAM.
The Future of BPL
Companies such as Enikia and Ameren are working tirelessly to help develop and promote a new generation of power-station equipment that would help BPL become a serious possibility, but the work is ongoing. The current effort to rebuild the nation’s power grid as well as the push for complete coast to coast broadband penetration would seem to bode well for the future of BPL, though there are some who would suggest that power line transmissions are not the greenest choice. This is certainly a worthy argument, as fiber optics would seem to be the best way forward in terms of speed and environmental friendliness, but it may not be practical to bury fiber optic cables throughout the entire country.
It is hard to say exactly what the future of BPL is at this point, but there are major players backing BPL. Additionally, the possibility of IPTV and digital telephone services delivered over BPL systems is enticing, and could cause a whole new round of price wars that would benefit consumers. Even if BPL does not become a serious contender in the broadband arena, it will still be appreciated by those with few alternatives. | <urn:uuid:5f1ba95a-b942-498d-bbfe-81b6c026fdc6> | CC-MAIN-2017-04 | http://www.highspeedexperts.com/bpl-basics/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00005-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.964509 | 943 | 3.15625 | 3 |
There is a group armed with a $7.5 million National Science Foundation (NSF) award that is bringing the latest research, insights and innovations from the lab to the voting booth and hopefully make such systems more secure and error free.
A Center for Correct, Usable, Reliable, Auditable and Transparent Elections (ACCURATE) brings together computer experts from across the country and across academic disciplines to find areas that need further research and determine how to apply existing technology and research insights to voting systems. Some of the team's research focuses on system-level issues that affect many aspects of an election, the group said.
The team headed by Avi Rubin, a professor of computer science at Johns Hopkins University, has created several new tools using existing theories and approaches commonly used in computer science to test voting technologies and systems, which state and local elections officials can use to test their election plans and find possible vulnerabilities.
One such tool is the open source AttackDog, a threat modeling system developed by David Dill, a co-primary investigator on the project and a professor at Stanford University. Using algorithms, AttackDog looks at more than 9,000 potential ways a voting system can be attacked, including computer hacking, ballot tampering and voter impersonation. The program contains certain assumptions about each kind of potential attack and countermeasure, and then creates an attack tree, a way of conceptualizing potential faults that is commonly used in computer science and engineering. As new potential attack methods become apparent, the system can be adapted to consider the new threat.
AttackDog works by factoring in all the characteristics of an election system--the number of polling places, the type of voting machine used, the number of poll workers, and so forth. AttackDog will then look at each step in the elections process, from when the ballots are designed to the point that they are counted, and try to find possibilities for an attack at each stage. Planners enter in the details of their countermeasures for each potential vulnerability. AttackDog then takes that new information and tries to find new weaknesses in the election's security precautions, Dill said.
According to Dill, AttackDog is a good example of how the ACCURATE project uses computer science tools and techniques to help local officials improve the security of their elections. "It's using computers to get a grip on problems that are too complex for the mind to understand unaided," Dill says.
Other ACCURATE members from Rice University have designed and implemented a system called "Auditorium" that forms the base of a voting system prototype called "VoteBox." Auditorium is a networked logging and auditing system built from timeline entanglement and broadcast messages. Auditorium allows anybody to audit the events, in the order that they occurred, with strong cryptographic guarantees to protect against tampering with the timeline. Further research on secure logging is considering how such log verification might scale to an entire election in real time.
Further, other ACCURATE members at the University of California, Berkley, are studying methods for building trustworthy audit logs in electronic voting systems. In particular, their goal is to design a mechanism that records the entire user interaction between the voter and the voting machine and allows auditors to replay a "movie" of that interaction after the election. The research challenges are to ensure that this audit log does not compromise ballot secrecy and that it is trustworthy.
Other parts of ACCURATE's research focuses on more specific issues such as identifying the role of cryptology in voting security, designing voter verification systems, relating election policies to new technologies and improving the usability and accessibility of the voting process, the group said.
Layer 8 in a box
Check out these other hot stories: | <urn:uuid:50d0ce01-2f75-41b4-a0fe-e94a667e30ed> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2344622/security/can-computer-scientist-dream-team-clean-up-e-voting-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00033-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.945055 | 753 | 2.734375 | 3 |
In my current studies I did some work on security inside networking data paths. In a recent experiment I needed source based routing in order to complete the setup. Like much scientific work that tries to get from paper to experiment and then to something useful, it failed at the very beginning. To be more precise, and to dress up my failure a bit, let me explain what happened and what I came across while researching the idea. It's something as simple as this:
Source based routing, per IETF recommendation, should be disabled by default on networking devices. The feature is recognized as a major security threat, and the IETF itself has been trying to get rid of it; the IPv6 Type 0 routing header, for example, was deprecated (RFC 5095) for exactly this reason.
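On Linux, for instance, acceptance of source-routed packets is controlled per interface through sysctls. A minimal sketch of turning it off (these are the standard kernel knobs; verify the defaults on your own distribution before relying on them):

```shell
# Refuse IPv4 packets carrying a source route option (LSRR/SSRR)
sysctl -w net.ipv4.conf.all.accept_source_route=0
sysctl -w net.ipv4.conf.default.accept_source_route=0

# IPv6: refuse packets with the (now deprecated) Type 0 routing header
sysctl -w net.ipv6.conf.all.accept_source_route=0
```

On most modern distributions these already default to off, which is exactly the stop sign my experiment ran into.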
Of course, that reads like a stop sign in an experiment where you are relying solely on source based routing to get your thing running. (:
When you look at networking technology these days, it's probably the IP protocol you are talking about. Okay, maybe you are a new age junkie and then you are probably speaking about the IPv6 protocol. Either way, the first and main principle of routing packets across a data network is destination-based decision making: a router decides where to send a packet based, more or less solely, on its destination IP address. It does so by consulting its locally built routing table of destination subnets. From that table the router learns out of which interface it will send a packet destined for some address.
When the whole architecture and the logic behind routing were developed, the idea was all about getting the packet to the destination by the best path possible. Based on the routing table built from the information received from their neighbors, routers can calculate the best path to a destination subnet. But what if we want to force the packets to go some other way? What if we want the packet to visit some other destinations before being routed to the final destination written inside the destination address of its IP header? It was made possible.
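To make the destination-based part concrete, here is a minimal sketch of the lookup a router performs: find every prefix in the table that contains the destination address and pick the most specific one (longest prefix match). The table and interface names are made up for illustration; real forwarding tables use specialized data structures, not a linear scan.

```python
import ipaddress

# Toy routing table: prefix -> outgoing interface.
routes = {
    ipaddress.ip_network("0.0.0.0/0"): "eth0",    # default route
    ipaddress.ip_network("10.0.0.0/8"): "eth1",
    ipaddress.ip_network("10.1.2.0/24"): "eth2",
}

def lookup(dst):
    """Destination-based forwarding: among all prefixes that
    contain dst, the longest (most specific) one wins."""
    dst = ipaddress.ip_address(dst)
    matches = [net for net in routes if dst in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

lookup("10.1.2.9")  # both /8 and /24 match, /24 wins -> "eth2"
lookup("10.9.9.9")  # only the /8 matches -> "eth1"
lookup("8.8.8.8")   # falls through to the default route -> "eth0"
```

Note that nothing in this decision looks at the source address or at who sent the packet; that is precisely the gap source routing exploits.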
In both the IPv4 and IPv6 protocols there is a built-in option that lets a sending device force the path of the packets it sends. This means for the sender to influence and change the normal best-path routing implemented by every routing protocol is called source routing.
To be clear, there are all sorts of traffic-engineering techniques out there that give you different ways to control the path a datagram will take. Almost all of those methods assume you have access to some routers along the way and can change the routing decisions on them. That is of course not scalable for everyday use, particularly if you are not the owner of those routers. And if we are talking about the Internet here, and we are, then you are surely not the owner of those routers.
Source routing, on the other hand, is basically one of those methods that lets you influence routing while staying at home. In this case you do not need access to any router on the network except the first one, the one sending the packet. You can steer packets onto a different path after they leave your source router without changing anything along the way. But how is that possible?
You simply send packets that already carry, written inside their headers, all the waypoints they need to visit along the way. In the form of IP addresses, of course.
Nothing is as clear as a real example!
A packet normally follows the path calculated by the routing protocol from source to destination IP address. In the image above, I drew that path in orange. Inserting a source-routing extension into the packets forces them to turn, at the second orange router, toward the waypoint IP address. The source-routing extension is simply the IP address of an intermediary router, the waypoint (one of the red routers labeled “waypoint”, of course). The thing is that when a waypoint IP address is inserted into the packet, the packet is still forwarded by ordinary destination-based routing, but toward the waypoint address first. When it arrives at the waypoint device, it is routed along the normal path again, toward its real destination. It is like telling the packet: “Go to the waypoint!” (that is your first destination) and then “Go to the real destination.”
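The waypoint logic above can be sketched as a toy forwarding model. This is a simplification of how routing-header-style source routing behaves; the function and names are illustrative only, not any real router's API:

```python
def next_hop_target(dst, waypoints, segments_left):
    """Toy model: which address is destination-based routing chasing now?

    While waypoints remain, the packet is forwarded toward the current
    waypoint; each visited waypoint decrements segments_left. When none
    remain, normal routing toward the real destination takes over.
    """
    if segments_left == 0:
        return dst  # ordinary destination-based routing
    return waypoints[len(waypoints) - segments_left]

waypoints = ["2001:db8::a", "2001:db8::b"]
print(next_hop_target("2001:db8::ff", waypoints, 2))  # -> 2001:db8::a
print(next_hop_target("2001:db8::ff", waypoints, 1))  # -> 2001:db8::b
print(next_hop_target("2001:db8::ff", waypoints, 0))  # -> 2001:db8::ff
```

In the real IPv6 RH0 mechanism the current target is carried in the destination field itself and swapped at each waypoint, but the effect is the same: ordinary destination-based routing toward one waypoint at a time.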
You can define more than one waypoint on the way to the destination. Including more waypoint routers gives you even more control over the path choice and provides more discovery capability. Somewhere in here was the basis of one of my PhD experiments, and it promptly fell apart after I read the IETF suggestions on deprecating this way of doing things.
In IPv4 there are the Loose and Strict Source Routing options; in the IPv6 world there is the Type 0 Routing Header extension:

- LSRR – Loose Source and Record Route option (IPv4)
- SSRR – Strict Source and Record Route option (IPv4)
- RH0 – Type 0 Routing Header extension (IPv6)
IPv4 networks are fairly safe now, because almost all network administrators have listened to the IETF and other security folks: to prevent security issues, source-routing options on routers are almost always disabled. Over time this led most vendors to ship default configurations with source routing disabled, at least on router devices.
In IPv6, the negative security impact of source routing via the Type 0 Routing Header is even worse than in IPv4. This led the IETF to deprecate the Type 0 Routing Header entirely (RFC 5095), which basically means that source routing has been kicked out of the IPv6 specification.
Possible attacks using source routing take various forms; some involve network discovery, others denial of service. The DoS attack is one of the main reasons for the deprecation suggestions.
Different implementations of source routing in IPv4 vs. IPv6
The IPv4 source-routing mechanism works by adding the waypoint addresses to the options part of the header. The address list is limited in size: the options field can be at most 40 bytes long. The reason for that limit is not security but simply the IPv4 standard, which caps the options part of the header at 40 bytes. The upside is that this allows a list of at most 9 waypoint addresses.
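A quick back-of-the-envelope check of that limit, assuming the standard LSRR layout of 3 bytes of option header (type, length, pointer) followed by 4-byte IPv4 addresses:

```python
IPV4_OPTIONS_MAX = 40   # the IPv4 options field is capped at 40 bytes
LSRR_OVERHEAD = 3       # option type, option length, and pointer bytes
ADDR_LEN = 4            # one IPv4 address

max_waypoints = (IPV4_OPTIONS_MAX - LSRR_OVERHEAD) // ADDR_LEN
print(max_waypoints)    # -> 9
```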
IPv6 is a bit different, of course. Source routing there is implemented in an extension header that sits after the IPv6 header and before the upper-layer payload. Extension headers differ from ordinary option headers in that they are not limited in size: they take space from the payload when they need more room, and can therefore become huge. If we do not permit fragmentation, the number of waypoint addresses in the IPv6 extension header is limited only by the maximum packet size. That maximum is defined by the path MTU, and assuming an MTU of at most 1500 bytes, we can fit a list of 90 waypoints inside a single packet.
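The same arithmetic for IPv6, assuming a 1500-byte MTU, the 40-byte fixed IPv6 base header, and the standard RH0 layout of 8 fixed bytes followed by 16-byte IPv6 addresses:

```python
MTU = 1500          # assumed link MTU
IPV6_HEADER = 40    # fixed IPv6 base header
RH0_OVERHEAD = 8    # next header, hdr ext len, routing type, segments left, reserved
ADDR_LEN = 16       # one IPv6 address

max_waypoints = (MTU - IPV6_HEADER - RH0_OVERHEAD) // ADDR_LEN
print(max_waypoints)  # -> 90
```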
You can probably see that, from a security perspective, IPv6 source routing is far worse than the IPv4 implementation of the same thing. It sounds great to be able to send a packet to 90 waypoints before it reaches its real final destination address. But let's take an example like this into account:
You send out some traffic over a high-speed connection and tell all those packets to visit a list of 90 waypoints on the way to the end of the path, the list alternating between just two addresses:
If those two addresses exist on devices on the IPv6 Internet, then it is clear that the best route between them will fill up with our junk traffic. The two routers will forward our packets back and forth, one sending the packet maybe 45 times and the other maybe 44, and in that way quite probably congest the link. Imagine they have a 1 Gb/s link between them and I send 1 GB of data, all carrying these IPv6 extension headers. Do you see the issue here?
Basically, an attacker with only a few Mbit/s of upload bandwidth can congest a link of more than 100 Mbit/s without any problem.
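A rough sketch of that amplification, assuming the 90 waypoints alternate between the two routers so that every hop between consecutive waypoints crosses the same link (the uplink speed below is a made-up example value):

```python
waypoints = 90
link_traversals = waypoints - 1   # 89 hops between consecutive waypoints

attacker_uplink_mbps = 1.5        # hypothetical attacker upload bandwidth
traffic_on_link_mbps = attacker_uplink_mbps * link_traversals
print(link_traversals, traffic_on_link_mbps)  # -> 89 133.5
```

So even a residential-grade uplink, multiplied by roughly 89 traversals of the victim link, is enough to saturate a 100 Mbit/s connection.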
Who is not doing RH0?
There have been various conferences and public talks on this, and the IETF made a considerable effort to stop the use of RH0 in routers and operating systems. Modern Linux kernels, most BSD versions, and Apple's Mac OS X after 10.4.10 are among the systems that disabled RH0 some years ago. Cisco, as far as I know, still forwards packets based on the RH0 header by default, and I think Juniper is another vendor that did not change the default configuration.
On Cisco devices you can easily disable IPv4 source routing by typing:

(conf)#no ip source-route

and, on IOS versions that support it, RH0 processing with the IPv6 equivalent:

(conf)#no ipv6 source-route
Conclusion and reading suggestions
There are more attacks possible using source routing, but I will stop here for now and perhaps write another article solely about those attack techniques. For now, here is the list of suggested articles about source routing that I stumbled upon during my research.
Violante R.A., Paterlini C.M., Marcolini S.I., Costa I.P. (all of the Argentine Hydrographic Survey), and 9 more authors.
Geological Society Memoir | Year: 2014
The Argentine continental shelf is one of the largest and smoothest siliciclastic shelves in the world. Although it is largely emplaced in a passive continental margin, the southernmost regions are related to transcurrent and active margins, respectively associated with the Malvinas Plateau and the Scotia Arc. Sea-level fluctuations, sediment dynamics and climatic/oceanographic processes were the most important conditioning factors in the modelling of the shelf, with a minor influence from isostatic and tectonic factors that are more relevant in the southernmost regions. The shelf is shaped by diverse geomorphic features, among which the most significant are four sets of terraces genetically associated with sea-level stillstands during the post-glacial transgression; the final one occurred at around 11 ka and is associated with the Younger Dryas event. The Last Glacial Maximum (LGM) sedimentary sequence is composed of, on average, 5-15 m-thick terrigenous, siliciclastic, relict-palimpsest sands mainly sourced from the Andean region, with minor amounts of bioclasts and gravels, resulting from the reworking of pre-transgressive coastal environments. © The Geological Society of London 2014.
Antennas Reduce Absorption Rate of Wireless Devices Without Sacrificing Signal
Viera, Fla. — May 31
SkyCross, a global antenna solutions company, has developed a way to reduce specific absorption rate (SAR) levels in small wireless devices without sacrificing signal strength. Prior to this latest SkyCross innovation, reducing transmitted power was a means to reduce SAR for regulatory compliance, but the signal strength could drop to a level that network operators would not accept.
Incorporating exclusive iMAT technology from SkyCross in small wireless devices enables them to operate using half the power without any change in signal strength while reducing SAR levels and increasing battery life.
SAR measures the rate that the body absorbs radio-frequency (RF) energy when exposed to an electromagnetic field. Various governments around the world regulate SAR levels for wireless devices including handsets, USB dongles, and laptops as a safety precaution. Signal-to-noise ratio (SNR) compares the strength of a desired signal to the level of background noise. Engineers widely understand that reducing power nominally by 50 percent also reduces the SAR by 50 percent, but prior to availability of the SkyCross solution this would also mean that SNR was cut in half.
iMAT (Isolated Mode Antenna Technology) is a design technique that enables a single antenna element to behave like multiple antennas. By using iMAT in a small device such as a USB dongle, where it is particularly challenging to support multiple antennas, iMAT enables transmit beam forming so SNR is not affected when the output power is reduced.
This SAR breakthrough is the latest in a series of iMAT developments. Since iMAT was launched last year, it has been incorporated in many small wireless devices, successfully replacing many antennas traditionally required for diversity or MIMO. The technology simplifies integration, saves space and reduces bill of material (BOM) costs. iMAT has also consolidated the number of antennas required in smart phones and enabled a game-changing handset architecture that lowers the cost of ownership by eliminating the switchplexer.
“iMAT continues to demonstrate its versatility as SkyCross develops novel solutions for today’s small wireless devices,” said Joe Gifford, executive vice president at SkyCross. “We are very pleased to offer any device manufacturer or network operator a way of addressing SAR issues that not only provides safer levels of RF energy but is also environmentally friendly by requiring less power.”
Users delete information daily. Before you support deleting in your app, think about the type of information users might delete and how important it is. If users delete something accidentally, they might want to retrieve it, but prompting users to confirm every deletion can slow them down.
You can apply most of the information about deleting to the actions of removing and resetting.
- Deleting: Deleting an item from the BlackBerry device
- Removing: Removing an item but not deleting it from the device (for example, deleting a song from a playlist)
- Resetting: Returning values to a predefined state and losing all changes
Use the criteria below to determine how you should support deleting information in your app:
- Users might lose valuable data that affects how the device functions (for example, resetting the device or an app, or removing an email account). In this case, show a dialog box that describes the outcome of the deletion and requires users to confirm that they understand the consequences before the deletion occurs.
- Users might lose valuable application data or content in an application (for example, deleting an email, a contact, or a playlist). In this case, show an intrusive toast that lets users undelete. The toast should disappear within 3 seconds of the user interacting with the screen. Toasts with buttons must be used to undo deletions only.
- Users might reproduce the content easily (for example, removing a song from a playlist, removing an alarm setting, or removing a tag from a photo). In this case, don't ask users to confirm deletion.
Implementation of the delete function
Users can delete or remove items in the following ways. Choose the way that works best for your app.
- In a context menu, users can touch and hold an item or use the multi-select gesture to open the context menu. Place the Delete action at the bottom of the menu.
- In an action bar, users can open the action menu. Place the Delete action at the bottom of the menu. You can use this approach when users are in a content view (such as reading an email, looking at a photo, or viewing the details for a contact).
- Users tap an Edit button to act on a lot of data at one time.
Place a reset action on a Settings screen. Don't use a context menu or action bar.
Don't place a Delete action in an action bar. Use an action menu instead to minimize the risk of users deleting an item accidentally. Since the action menu button appears at the bottom right of the screen and the Delete action appears at the bottom of the action menu, users can double-tap to delete an item.
If users delete an item from a list or grid, remove the item from the screen using a delete animation.
If users delete an item from a content view (like an email or contact), remove it from the screen using a delete animation and return users to the previous screen.
Using the Internet has become customary, and a daily necessity that many Americans take for granted. High-speed broadband Internet service is an indispensable tool when studying, conducting a job search, getting healthcare, and making payments, to name just a few tasks. It is fundamental to many everyday responsibilities and has greatly enhanced the way we communicate with our friends, families and communities.
The number of Americans using the Internet has steadily increased since high-speed broadband connections became the norm more than a decade ago, but 15 percent of adults still are not connected, according to recent research by the Pew Internet & American Life Project. Even more concerning is that Internet adoption among low-income households is lower, at only 54 percent for households with an income of $30,000 or less.
New Federal Communications Commission (FCC) Chairman Tom Wheeler spoke recently about the importance of broadband adoption, identifying “accessibility” as one of the key elements to the public interest work of the Commission. “There is nothing more fundamental to the FCC’s work than ensuring every American has access to our wired and wireless networks,” Wheeler stated.
To help close this digital divide, cable companies in hundreds of communities across the nation are tackling the Internet adoption challenge head on through partnerships with local organizations that can provide low-cost computers and Internet access and digital skills training. Over 250,000 low-income families have already been connected. That’s only part of the way to getting all Americans online but the experience shows that efforts that directly address Internet adoption barriers can be successful.
Cable operators plan further enhancements to broadband adoption programs to increase participation even more by increasing speed, streamlining the enrollment process, expanding eligibility criteria, developing an online application module, and reducing the cost.
Bridging the digital divide is a complex issue driven by many factors, but cable companies are committed to doing their part to connect all Americans.
October 31, 2016
It’s generally accepted that modern middleware came about because of the emergence of distributed computing systems. These new systems created a need for certain types of software that hadn’t existed before or existed mainly for internal use. This new software came about to support client-server computing. However, modern middleware software has firm roots in software that was developed for centralized computing.
The Batch Roots of Middleware
Modern centralized computing started with support for the batch workload. Batch means collecting up a bunch of work and putting it through the system as a group. “Batching work” made for a more productive use of precious computer resources. The centralized-computing OS developers facilitated this effort for application developers by creating a new and major subsystem, the Job Entry Subsystem (JES).
JES handled inputs to the OS and supported the creation of outputs to files and print. Middleware comes into this discussion through Job Control Language (JCL). JCL was a kind of centralized-computing middleware. JCL was in the middle between the application program and the OS and infrastructure. Like middleware of today, it provided a layer of abstraction to reduce complexity and improved the productivity of the programmer by providing a simple interface involving JOB, EXEC and DD statements.
A JCL Example
JCL improves the productivity of the application developer by freeing them from issuing OS commands to allocate memory and files and to serialize the use of system files like output queues. Through its layer of abstraction and conventions, JCL made life simpler for developers, allowing them to focus on the logic of their application program. Here is a JCL example:
//IS198CPY JOB (IS198T30500),'COPY JOB',CLASS=L,MSGCLASS=X
//COPY01 EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1 DD DSN=IS198.TEST.INPUT,DISP=SHR
//SYSUT2 DD DSN=IS198.TEST.OUTPUT,
// DISP=(NEW,CATLG,DELETE)
//SYSIN DD DUMMY
JCL is elegant and simple. Consider that there are just three basic statements:
1. JOB identifies a unit of work. It tells the system (JES) how to handle it like what input queue and output queue and whom to charge for the work.
2. EXEC identifies the system or application program to run.
3. DD (there can be several) identifies the input and output files that the program named in the EXEC statement will need to run.
The complexity in JCL comes with the conventions of the programs it invokes. What are SYSUT1, SYSUT2 and SYSIN? What do these names mean? The answer comes from the utilities manual for IEBGENER, a program written by IBM to perform a utility function. Specifically, it copies the file associated with SYSUT1 to the file associated with SYSUT2. There are many other details in this simple example that can be learned quickly, like how "DISP=(NEW,CATLG,DELETE)" causes the dataset to be created in real time and then cataloged or deleted depending on the success or failure of the job execution. JCL has sig·nif·i·cance because it is simple, elegant and yet powerful.
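To make the IEBGENER step concrete: it simply copies the SYSUT1 dataset to the SYSUT2 dataset. A rough everyday-language sketch in Python, where the file names stand in for the datasets named in the DD statements (this is an analogy, not a mainframe API):

```python
import pathlib
import shutil
import tempfile

workdir = pathlib.Path(tempfile.mkdtemp())
sysut1 = workdir / "IS198.TEST.INPUT"    # the DD SYSUT1 dataset
sysut2 = workdir / "IS198.TEST.OUTPUT"   # the DD SYSUT2 dataset

sysut1.write_text("RECORD 1\nRECORD 2\n")  # pretend the input dataset exists
shutil.copyfile(sysut1, sysut2)            # the EXEC PGM=IEBGENER step
print(sysut2.read_text())
```

The JCL version adds what Python's runtime hides: where the files live, how they are allocated, and what happens to the output on success or failure.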
The Story Continues
Next post, I’ll discuss how middleware also had its roots in real-time processing on centralized computers.
Posted October 31, 2016
Use the RENUM command (may be abbreviated REN ) to renumber the data starting at 100 and incrementing by 100.
The RENUM command also sets number mode on. It accepts the same parameters as the NUMBER command.
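As a toy illustration of what renumbering does (a sketch of the idea only, not ISPF itself): starting at 100 and incrementing by 100, each line gets a fresh sequence number.

```python
def renum(lines, start=100, incr=100):
    """Toy model of ISPF RENUM: assign fresh sequence numbers to lines."""
    return [(start + i * incr, text) for i, text in enumerate(lines)]

for seq, text in renum(["MOVE A TO B.", "DISPLAY B."]):
    print(f"{seq:06d} {text}")
# -> 000100 MOVE A TO B.
# -> 000200 DISPLAY B.
```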
If number mode is on and you enter:
Command ===> renum leaves number mode unchanged.
If number mode is off and you enter:
Command ===> renum sets "number on std"
If you enter the optional parameters:
Command ===> renum std sets "number on std"
Command ===> renum cobol sets "number on COBOL"
Command ===> renum std cobol sets "number on std COBOL"
In all cases, the sequence fields in the data are renumbered.
To turn off number mode use the NUMBER OFF command.
The RENUM COBOL commands set sequence numbers in the first 6 columns of your data, and those columns cannot be overtyped, so you would only do this if you were writing COBOL code.
Why don't you try it and see for yourself what happens.
This column is available in a weekly newsletter called IT Best Practices. Click here to subscribe.
In October 2012, then-U.S. Secretary of Defense Leon Panetta gave a speech in which he warned that the United States was facing the possibility of a “cyber Pearl Harbor” and was increasingly vulnerable to foreign computer hackers who could dismantle the nation’s power grid, transportation system, financial networks and government. According to Panetta, the nation's adversaries have been acquiring technologies that could allow an aggressor nation or extremist group to gain control of critical infrastructure. “They could derail passenger trains, or even more dangerous, derail passenger trains loaded with lethal chemicals. They could contaminate the water supply in major cities, or shut down the power grid across large parts of the country.”
None of those things have happened in the U.S. – yet – but there have been recent high profile attacks on industrial and critical infrastructure systems elsewhere.
Investigators have confirmed that the Ukrainian power grid was knocked offline in December 2015 by a cyber attack that used malware to damage computers and sensitive control systems. A troubling aspect of this attack is that other countries' power systems aren't much better protected than the Ukrainian system, meaning this could happen anywhere. Even in the U.S.
Germany's Federal Office for Information Security reported a cyber attack on a steel manufacturing plant in late 2014. According to the agency, attackers used a spear phishing email to gain access to the plant's office network, and from there made their way into the company's production network. Commands were sent to the network's control components, preventing the plant from appropriately shutting down a blast furnace. This resulted in significant physical damage to the plant, costing millions of dollars and shutting down productivity for months.
Many experts believe this is just the beginning of cyber warfare events that will be waged around the world. Unfortunately, the industrial world is years behind the information technology (IT) world in preparing for cyber attacks, largely because this is something new in the operational technology (OT) world.
Traditionally, OT systems have been isolated and protected by the means of "security through obscurity." Industrial systems run on proprietary operating systems from companies like Schneider Electric, Honeywell, Emerson, Siemens and a handful of other vendors. Until recently, they have had no connections to the IT world where malware is prevalent.
This is changing, however, as plant operators seek the benefits of creating connections between IT and OT systems. Operators want to gather important metrics that can help them improve their production processes and gain better insight into the business overall. But this is creating vulnerabilities on the OT side of the house, and as a result, plant operators need to harden their OT environments—something that is easier said than done.
The IT and OT environments are quite different. In the IT world, if a vulnerability is discovered, say on a Windows or Linux system, it's simple to install a patch and reboot the system. In an industrial environment, you can't just shut down a production system to apply an update or patch and then reboot. Something like a petroleum refinery, a water treatment plant or an electricity generating station has to be scheduled for downtime, and even at that the maintenance is typically performed only once or twice a year.
In the IT world, systems get replaced every three to five years. In the OT world, the replacement cycle for machines might be 10, 20 or 30 years or more. Equipment that is decades old was never built with security in mind, so there might not even be a way to update the OS.
There are many issues in the industrial world that make security hardening a real challenge. NextNine is one company stepping up to address the challenges, offering a distributed platform for security management of the Supervisory Control and Data Acquisition/Industrial Control Systems (SCADA/ICS) environments.
NextNine's platform consists of a centralized security center, virtual security engines that are located at each plant location (one per plant), and a secure communications tunnel that connects the security center to each plant's security engine.
The plant operator defines the enterprise security policy in the security center, and it is pushed out to the various plant locations. The virtual security engine can implement the security policy semi-automatically – with help from a human, given the industrial environment. The engine also measures compliance with the defined policy and sends the results back to the central office for presentation in a dashboard. This process fits well with the concept of the hardening circle that security people like to see: set the security policy, measure current compliance with the policy, report the gaps, address the gaps, repeat.
NextNine offers a variety of services with its platform. One of them is a granular remote access solution that enables a vendor – say, Honeywell – to get into the plant to provide updates and patches to its own devices. The remote access solution would allow a specific engineer at Honeywell to see only Honeywell devices, and only the ones that he is allowed to deal with. He can only perform tasks he is authorized to do, and even those can be overseen by local personnel and aborted in case the engineer does something dangerous or that isn’t allowed. It's a strong mechanism with audit trails that are required by various security regulations such as NERC CIP.
Another NextNine capability is inventory. Maybe this sounds trivial, but many of these industrial companies don't know what assets they have. If they don't, they definitely cannot defend them. It's not as easy as it sounds to do an asset discovery and inventory because in this fragile, proprietary world of industrial systems, if the asset discovery is done too aggressively, it's possible to bring the plant down, which is the worst possible scenario. NextNine says it enables an asset inventory without putting availability at risk.
NextNine keeps a whitelist of approved applications and a blacklist of things that aren't permitted to run on these systems. The solution also collects log files to send to a centralized security information and event management (SIEM) system for analysis. NextNine supports a variety of third party anomaly detection tools and does compliance measurement. Everything is presented in a dashboard to keep management informed of the security status.
The end result is that there is an inventory report, a site compliance report for the security policy, and a dashboard that managers can look at and act upon to improve the hardening of the plant. While this is commonplace in IT environments, it's truly a new experience in many industrial situations that have never done this before.
NextNine says its solution is vendor-agnostic and will work with equipment from a variety of industrial vendors, as well as with active security protection solutions (like anomaly detection and patching) from specialty software vendors.
This platform is said to fully conform to the NERC CIP 5.1 standards, and NextNine says it is even helping define industrial security standards being put forth by the White House and NIST. It's all done with the goal of reducing industrial cyber risks and bringing a more mature security posture to this vulnerable environment.
How to Protect Your Privacy
It's undeniable that we live in a digital world. We shop online. We bank online. We share personal information on a host of social networks and leave an electronic footprint everywhere we go. Unfortunately, predators are everywhere on the Internet waiting to steal and use our information for their personal gain.
We must learn to protect our privacy. Our personal information is stored in many electronic public and private databases. Our information becomes readily available to criminals when it is accidentally exposed due to carelessness, user-error, or through malicious cyber-attacks.
As our lives are increasingly lived online, we have to take a proactive approach to protection.
Walking the Walk
-- Researchers at the Georgia Institute of Technology and elsewhere are developing technologies to recognize a person's gait.
Still in its infancy, the technology is gaining attention because of federal studies such as several Georgia Tech projects, funded by the federal Defense Advanced Research Projects Agency (DARPA).
One Georgia Tech study explores gait recognition by computer vision, and another takes an even more novel approach -- gait recognition with a radar system, similar to those police officers use to catch speeders.
The ultimate goal is to detect, classify and identify humans from as far away as 500 feet, day or night, and in all weather conditions. These capabilities would strengthen the protection of U.S. forces and facilities from terrorist attacks, according to DARPA officials.
Because gait-recognition technology is so new, researchers are assessing the uniqueness of gait and methods by which it can be evaluated.
With two years of experiments and analysis almost complete, researchers on both Georgia Tech projects are seeking continued funding for further studies. They still must address numerous technical issues; it will be at least five years before the technologies are commercialized, researchers said.
In the radar-system project, results from experiments, data analysis and algorithm design are promising, researchers said. The technique focuses on the gait cycle formed by the movements of a person's various body parts over time.
Researchers correctly identified 80 percent to 95 percent of individual subjects, with variances in that range during the three experiment days.
The next step is to build a more powerful radar system and subject it to lab and field testing. In experiments last year, subjects walked 50 feet away from the radar and then within 15 feet of it. Researchers are now building a radar system that can detect people from a distance of 500 feet or more.
In the study of gait recognition by computer vision, researchers identified subjects by focusing on "activity-specific static biometrics." These are static properties -- such as a person's leg length -- that can be measured from a single image.
Researchers also are developing statistical analysis tools that will allow them to use a small database with easily gathered data to predict how well a particular biometric, including gait recognition, will work on a larger population.
-- Georgia Institute of Technology
Raising Broadband Awareness
-- When the New Hampshire Division of Economic Development surveyed more than 500 businesses across the state about their awareness of high-speed Internet access availability, the results were surprising.
Just 25 percent of the businesses knew high-speed access was available in their area -- despite the fact that all cities in the state have at least 12 ISPs actively operating, said Stuart Arnett, director of the division.
"It was somewhat expected from the smaller companies, but that some of the larger companies still were a little confused [about the availability of broadband] was a surprise," Arnett said. "The bigger the company, the more likely they are to be aware of the accessibility of T1 [service] and related offerings."
Arnett, who also chairs the New Hampshire Telecommunications Development Advisory Board, believes it's the state's job to educate businesses about the benefits of high-speed Internet access as a productivity tool.
The board also found a vigorous local ISP industry, which also was a bit of a surprise.
"This local ISP industry, in addition to providing a roughly equivalent commodity, what they really excel at is the hand-holding," he said. "They're, often times, companies' IT departments. It's one of those things that, when you hear it, you say, 'Of course, that makes sense,' but it took us a while to realize it."
Convincing companies to invest in broadband often requires hand-holding -- and that's a service many local ISPs have become good at providing.
Businesses need to turn to somebody who understands the best solution for their building or their town, and they need somebody who can help them maximize their investment -- by helping them break up a T1 line so several businesses in one building can share it, according to Arnett.
"[Local ISPs] really are IT consultants," he said. "A lot of them told us they make a regular point of going into their clients' shops and fixing a half-dozen things, from getting servers back up and running to untangling other problems. That's the sort of thing you're not going to get large companies to do for small clients."
Online Cigarette Sales Hurt State Revenues
-- Nineteen states have hiked tobacco taxes this year, causing scores of smokers to buy "tax-free" cigarettes over the Internet.
States are trying to collect taxes on these sales, but a federal law that could help them isn't being effectively enforced, officials said. As a result, some states, especially those with high tobacco taxes, said they're being shortchanged by the surge in online cigarette buying.
Internet tobacco sales are expected to reach $5 billion nationwide by 2005, and states stand to lose approximately $1.4 billion in tax revenue from these sales, according to research cited in a recent report
from the General Accounting Office (GAO).
Taxing online tobacco sales is complicated. Cash-strapped states are attempting to collect what revenue they can under a 54-year-old federal law -- but they often lack the resources and legal authority to enforce it.
"State law doesn't extend into other states where retailers are located," said Jim Jenkins, chief of alcohol and tobacco enforcement for the Wisconsin Department of Revenue. "We need federal assistance."
The law in question is the Jenkins Act, which was enacted in 1949. It requires online retailers to provide sales records to states where goods are shipped so states can collect excise taxes.
However, violation of the act is only a misdemeanor with a penalty of $1,000, six months in prison or both. The act is supposed to be enforced by the Department of Justice and the FBI -- organizations with greater priorities, such as fighting terrorism, some state officials say.
No online cigarette vendors have been prosecuted for Jenkins Act violations, the GAO report said. To remedy the situation, the GAO recommended transferring enforcement of the Jenkins Act to the Federal Bureau of Alcohol, Tobacco and Firearms.
At least seven states -- Alaska, California, Iowa, Massachusetts, Rhode Island, Washington and Wisconsin -- have tried to enforce the Jenkins Act on their own. They've called and written letters to consumers and online cigarette retailers notifying them of their responsibilities to pay taxes and comply with federal law.
Some online retailers simply don't respond. Others claim immunity from the Jenkins Act because of Native American status. Roughly half of online cigarette retailers are Native American-owned, according to Eric Lindblom, manager for policy research at the Campaign for Tobacco-Free Kids, an advocacy group in Washington, D.C.
A New Jersey legislator has taken a different approach. State Sen. Peter Inverso recently proposed legislation requiring shippers -- such as Federal Express or UPS -- to ensure cigarettes transported into his state are labeled as tobacco products. Inverso's bill would require shippers to check a consumer's ID upon delivery to verify age and to submit an invoice to the New Jersey Department of Taxation so the state could collect due taxes.
The bill, originally intended to deter minors from buying cigarettes online, also would help the state collect revenue lost to Jenkins Act noncompliance.
"The current enforcement mechanism isn't effective," said Steven Cook, Inverso's chief of staff. "We're targeting shippers because we know they will fall under our jurisdiction." -- Erin Madigan, Stateline.org
Thieves Take the Bait
-- Three men who thought they had successfully stolen computer equipment from a parked car instead found themselves on their way to jail.
Unfortunately for the thieves, the car they broke into belonged to the Arlington County Police Department's "bait car" program.
During the evening of Aug. 25, the Arlington County Emergency Communications Center (ECC) received a signal that the bait car was broken into. ECC personnel notified officers in the area, who immediately put the vehicle under surveillance. The officers watched three male subjects standing near the bait vehicle; one of the men was holding a bag that looked full. The officers knew the bait car contained computer equipment and stopped two of the subjects, determining that the computer equipment in the bag had been taken from the bait car.
Police officials said the arrests were the department's second successful activation of a bait car. The first occurred on April 13, when an Arlington resident was arrested and charged with grand larceny auto, possession of burglary tools and driving with a suspended license.
Bait cars are specially equipped vehicles placed on the street in the hopes of attracting thieves. The cars, camouflaged to look like normal vehicles, transmit a signal to the ECC when entered or started.
ECC personnel can then track the vehicle via GPS technology and remotely control several of the vehicle's functions, including the engine, which significantly reduces the possibility of a stolen bait car being involved in a high-speed pursuit and decreases the risk to officers when apprehending a suspect in a bait car.
The program is the result of a partnership between the county and HGI Wireless, a Canadian company that also has worked with Minneapolis, Minn., to deploy bait cars in the United States. | <urn:uuid:99044756-6529-48c7-912b-e0c63ea7ffdc> | CC-MAIN-2017-04 | http://www.govtech.com/security/99410409.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00126-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.96617 | 1,917 | 3.140625 | 3 |
End of Privacy
By John Parkinson | Posted 08-06-2007
The answer might surprise you. A combination of just gender and the U.S. five-digit zip code (or its foreign equivalent) for your address would on average eliminate all but about 35,000 people. In most zip codes, a date of birth would narrow it down to around 95 people. That's just three data items, none of which would generally be regarded as unique to you, and you're down to fewer than one in 100.
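One reading of the column's back-of-the-envelope arithmetic can be reproduced in a few lines, using the column's own figures. The assumption below -- that date of birth narrows the gender-plus-zip pool by day of year, uniformly across 365 days -- is ours, not the author's:

```python
# Re-identification arithmetic using the column's own numbers.
# Assumption (ours): birthdays are spread uniformly over 365 days
# within the pool that remains after filtering on gender + zip code.
gender_plus_zip_pool = 35_000   # candidates after gender + 5-digit zip
birthdays = 365                 # distinct day-of-year values

candidates_after_dob = gender_plus_zip_pool / birthdays
print(round(candidates_after_dob))  # ~96, matching the column's "around 95"
```

Each additional quasi-identifier divides the candidate pool again, which is why only a dozen non-unique data items usually suffice to single someone out.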
If we add in some situational data -- facts about you that don't identify you specifically but build a recognizable context around you -- the kind of car you drive, the restaurants you frequent -- I can typically identify you with just a couple more items, usually a dozen max, none of which would be considered personally identifying information. It's this ability to build context and use it as an efficient information filter that makes privacy so hard to maintain.
We all leave a trail of data items as we move through the world, and we always have. Technology has simply made it easier and cheaper to record and analyze these traces. Today, for about half the world, there is no real privacy. The key questions, therefore, become: Who owns our personally identifying information? Who assures its accuracy and relevance? Who can access and use it? What are its permitted uses? Too many of the answers depend on where you live and how the laws there constrain or allow data use. This leaves businesses and technology managers facing some complex issues even beyond the ethical debate on how the information can be used.
But as bad as things have gotten from a privacy viewpoint, they're about to get a lot worse. As the world becomes more routinely instrumented (think E911-enabled cell phones, GPS, WiFi access points, black-box recorders in autos, and surveillance in the name of public safety), event correlation software will make it possible to construct a nearly complete record of your life and make it very hard to hide. This can be a blessing if you have to prove where you were (or weren't) at some point, but I'm not sure we as a society are ready for this level of transparency. And as information managers, we have to be careful, where appropriate use is not yet defined, to avoid making post hoc decisions on what can and can't be done with the data our systems collect.
For CIOs, that means staying on top of the debate and getting some appropriate policy defined -- or at least getting a discussion of the issues under way internally -- even if you have to modify process and practice later.
Determine what compliance and audit requirements you'll have to meet before you have to meet them. Consider training and awareness needs. And add one more item to the long list of concerns demanding your attention.
John Parkinson has been a business and IT consultant for more than two decades. Please send questions and comments to email@example.com. | <urn:uuid:4650c940-064f-4fcf-853e-75a11e521360> | CC-MAIN-2017-04 | http://www.cioinsight.com/print/c/a/Past-Opinions/End-of-Privacy | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00520-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960284 | 606 | 2.546875 | 3 |
What we call the Internet was not our first attempt at building a data network that spanned the globe. It was just the first one that worked.
In this talk, I'll lay out what I see as how the Internet actually works. It's increasingly likely that there will be attempts to *change* the principles of the net, and the reality is that widespread hacking is the exact sort of force that brought us this working-ish system in the first place.
We need to talk about the values of cryptography, of open software and networks, of hackers being a force for measurable good. We need to talk about how infrastructure like DNS -- it was there 25 years ago, and we can imagine it will be there 25 years from now -- acts as a foundation for future development in a way that the API of the hour doesn't.
Things do need to be better, and we need to talk about the role of Government in that. The things that need to be better are technical in nature, and they point to research priorities that are simply not being addressed at present.
Essentially, I'd like to provide a model for comprehending the Internet as it stands -- one that prevents harm to it (how much could we have used EC2 if SSH were illegal?) while providing the useful resources to promote its continued operation.
We can't keep screwing this up forever. NTIA has noted half (!) of the population warily backing away. Let's talk about how it really works, so we can discuss how we can do it better.
The winning submissions to Pwn2Own 2016 provided unprecedented insight into the state of the art in software exploitation. Every successful submission provided remote code execution as the super user (SYSTEM/root) via the browser or a default browser plugin. In most cases, these privileges were attained by exploiting the Microsoft Windows or Apple OS X kernel. Kernel exploitation using the browser as an initial vector was a rare sight in previous contests.
This presentation will detail the eight winning browser to super user exploitation chains (21 total vulnerabilities) demonstrated at this year's Pwn2Own contest. We will cover topics such as modern browser exploitation, the complexity of kernel Use-After-Free exploitation, and the simplicity of exploiting logic errors and directory traversals in the kernel. We will analyze all attack vectors, root causes, exploitation techniques, and possible remediations for the vulnerabilities presented.
Reducing attack surfaces with application sandboxing is a step in the right direction, but the attack surface remains expansive and sandboxes are clearly still just a speed bump on the road to complete compromise. Kernel exploitation is clearly a problem which has not disappeared and is possibly on the rise. If you're like us, you can't get enough of it; it's shell on earth.
OAuth has become a highly influential protocol due to its swift and wide adoption in the industry. The initial objective of the protocol was specific: it served the authorization needs of websites. However, the protocol has been significantly repurposed and re-targeted over the years: (1) all major identity providers, e.g., Facebook, Google and Microsoft, have repurposed OAuth for user authentication; (2) developers have re-targeted OAuth to the mobile platforms, in addition to the traditional web platform. Therefore, we believe that it is necessary and timely to conduct an in-depth study to demystify OAuth for mobile application developers.
Our work consists of two pillars: (1) an in-house study of the OAuth protocol documentation that aims to identify what might be ambiguous or unspecified for mobile developers; (2) a field study of over 600 popular mobile applications that highlights how well developers fulfill the authentication and authorization goals in practice. The result is really worrisome: among the 149 applications that use OAuth, 89 of them (59.7%) were incorrectly implemented and thus vulnerable. In the paper, we pinpoint the key portions in each OAuth protocol flow that are security critical but confusing or unspecified for mobile application developers. We then show several representative cases to concretely explain how real implementations fell into these pitfalls. Our findings have been communicated to vendors of the vulnerable applications. Most vendors positively confirmed the issues, and some have applied fixes. We summarize lessons learned from the study, hoping to provoke further thoughts about clear guidelines for OAuth usage in mobile applications.
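A minimal sketch of one class of mistake this kind of study describes -- a backend that accepts any valid access token is vulnerable to token substitution, so it must also check that the token was issued to *its own* client ID. The `token_info` fields and `MY_CLIENT_ID` below are illustrative stand-ins, not any real provider's API:

```python
MY_CLIENT_ID = "my-app-client-id"  # illustrative value

def vulnerable_login(token_info: dict) -> str:
    # Wrong: trusts the user id carried by ANY valid token. An attacker
    # can replay a token the victim granted to a different, malicious app.
    return token_info["user_id"]

def safer_login(token_info: dict) -> str:
    # Better: verify the token's audience is this app before trusting
    # the identity it asserts.
    if token_info.get("audience") != MY_CLIENT_ID:
        raise PermissionError("token was not issued to this client")
    return token_info["user_id"]

stolen = {"user_id": "victim", "audience": "attacker-app-client-id"}
print(vulnerable_login(stolen))        # "victim" -- account takeover
try:
    safer_login(stolen)
except PermissionError as e:
    print("rejected:", e)
```

The audience check is cheap, but it only works when performed server-side against token metadata the provider vouches for, which is exactly the step many of the studied apps skipped.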
JNDI (Java Naming and Directory Interface) is a Java API that allows clients to discover and look up data and objects via a name. These objects can be stored in different naming or directory services such as RMI, CORBA, LDAP, or DNS.
This talk will present a new type of vulnerability named "JNDI Reference Injection," found in malware samples attacking Java Applets (CVE-2015-4902). The same principles can be applied to attack web applications running JNDI lookups on names controlled by attackers. As we will demo during the talk, attackers will be able to use different techniques to run arbitrary code on the server performing JNDI lookups.
The talk will first present the basics of this new vulnerability including the underlying technology, and will then explain in depth the different ways an attacker can exploit it using different vectors and services. We will focus on exploiting RMI, LDAP and CORBA services as these are present in almost every Enterprise application.
LDAP offers an alternative attack vector where attackers not able to influence the address of an LDAP lookup operation may still be able to modify the LDAP directory in order to store objects that will execute arbitrary code upon retrieval by the application lookup operation. This may be exploited through LDAP manipulation or simply by modifying LDAP entries as some Enterprise directories allow.
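The vulnerable pattern is Java's, but the defensive idea can be sketched in a few lines of Python: never pass attacker-influenced strings to a lookup API that dereferences URL schemes. The scheme names below come from the services named in the talk; the allow-list approach itself is our suggestion, not the speakers':

```python
# Sketch of an allow-list guard for lookup names, assuming the JNDI-style
# rule that names carrying a URL scheme (ldap://, rmi://, iiop://, ...)
# cause the naming layer to fetch -- and possibly instantiate -- remote
# objects. Only local, scheme-less names pass through.
DANGEROUS_SCHEMES = ("ldap:", "ldaps:", "rmi:", "iiop:", "corbaname:", "dns:")

def safe_lookup_name(name: str) -> str:
    lowered = name.strip().lower()
    if lowered.startswith(DANGEROUS_SCHEMES):
        raise ValueError(f"refusing remote lookup: {name!r}")
    return name

print(safe_lookup_name("java:comp/env/jdbc/appDB"))  # local name: allowed
try:
    safe_lookup_name("ldap://attacker.example/a")    # attacker-controlled
except ValueError as e:
    print(e)
```

Note that, as the abstract points out, filtering the name is not sufficient when the attacker can instead write poisoned entries into a directory the application already trusts.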
Could a worm spread through a smart light network? This talk explores the idea, and in particular dives into the internals of the Philips Hue smart light system, and details what security has been deployed to prevent this.
Examples of hacking various aspects of the system are presented, including how to bypass encrypted bootloaders to read sensitive information. Details on the firmware in multiple versions of the Philips Hue smart lamps and bridges are discussed. This talk concentrates on examples of advanced techniques used in attacking IoT/embedded hardware devices.
TLS has experienced three major vulnerabilities stemming from "export-grade" cryptography in the last year -- FREAK, Logjam, and DROWN. Although regulations limiting the strength of cryptography that could be exported from the United States were lifted in 1999, and export ciphers were subsequently deprecated in TLS 1.1, Internet-wide scanning showed that support for various forms of export cryptography remained widespread, and that attacks exploiting export-grade cryptography to attack non-export connections affected up to 37% of browser-trusted HTTPS servers in 2015. In this talk, I'll examine the technical details and historical background for all three export-related vulnerabilities, and provide recent vulnerability measurement data gathered from over a year of Internet-wide scans, finding that 2% of browser-trusted IPv4 servers remain vulnerable to FREAK, 1% to Logjam, and 16% to DROWN. I'll examine why these vulnerabilities happened, how the inclusion of weakened cryptography in a protocol impacts security, and how to better design and implement cryptographic protocols in the future. Having been involved in the discovery of all three export vulnerabilities, I'll distill some lessons learned from measuring and analyzing export cryptography into recommendations for technologists and policymakers alike, and provide a historical context for the current "going dark" and Apple vs. FBI debate.
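A quick local sanity check that your own TLS stack no longer offers export-grade suites (OpenSSL names them with an `EXP-` prefix). This inspects the client-side default context only; it is not a server scan:

```python
import ssl

# Enumerate the cipher suites the default (secure) context would offer.
ctx = ssl.create_default_context()
names = [c["name"] for c in ctx.get_ciphers()]

# Export-grade suites carry an "EXP" prefix in OpenSSL naming
# (e.g. EXP-RC4-MD5, EXP1024-DES-CBC-SHA).
export_grade = [n for n in names if n.startswith("EXP")]
print(f"{len(names)} ciphers offered, {len(export_grade)} export-grade")
assert not export_grade, "export-grade suites still enabled!"
```

On any reasonably modern OpenSSL build the export suites are compiled out entirely, which is the outcome the talk argues protocols should have been designed for from the start.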
Through cooperation between browser vendors and standards bodies in the recent past, numerous standards have been created to enforce stronger client-side controls for web applications. As web appsec practitioners continue to shift from mitigating vulnerabilities to implementing proactive controls, each new standard adds another layer of defense for attack patterns previously accepted as risks. With the most basic controls complete, attention is shifting toward mitigating more complex threats. As a result of the drive to control for these threats client-side, standards such as SubResource Integrity (SRI), Content Security Policy (CSP), and HTTP Public Key Pinning (HPKP) carry larger implementation risks than others, such as HTTP Strict Transport Security (HSTS). Builders supporting legacy applications actively make trade-offs between implementing the latest standards and accepting risk, simply because of the increased implementation risks newer web standards pose.
In this talk, we'll strictly explore the risks posed by SRI, CSP, and HPKP; demonstrate effective mitigation strategies and compromises which may make these standards more accessible to builders and defenders supporting legacy applications; and examine emergent properties of standards such as HPKP to cover previously unforeseen scenarios. As a bonus for the breakers, we'll explore and demonstrate exploitations of the emergent risks in these more volatile standards, including multiple vulnerabilities uncovered quite literally during our research for this talk (which will hopefully be mitigated by d-day).
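Of the three standards, SRI is the most mechanical to adopt: the `integrity` attribute is a hash of the exact bytes you expect to load. A sketch of generating one -- the `sha384-<base64 digest>` format follows the SRI specification, while the resource bytes and filename below are made up:

```python
import base64
import hashlib

def sri_integrity(resource_bytes: bytes) -> str:
    # SRI integrity metadata: "<hash-alg>-<base64 digest>" per the spec.
    digest = hashlib.sha384(resource_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

script = b'console.log("hello");'  # stand-in for the fetched file's bytes
print(f'<script src="app.js" integrity="{sri_integrity(script)}" '
      f'crossorigin="anonymous"></script>')
```

The implementation risk the talk highlights is operational rather than cryptographic: any legitimate change to `app.js` silently breaks the page until the hash is regenerated, which is exactly the kind of trade-off legacy teams weigh.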
AWS users, whether they are devops in a startup or system administrators tasked with migrating an enterprise service into the cloud, interact on a daily basis with the AWS APIs, using either the web console or tools such as the AWS CLI to manage their infrastructure. When working with the latter, authentication is done using long-lived access keys that are often stored in plaintext files, shared between developers, and sometimes publicly exposed. This creates a significant security risk, as possession of such credentials provides unconditional and permanent access to the AWS API, which may yield catastrophic events in case of credential compromise. This talk will detail how MFA may be consistently required for all users, regardless of the authentication method. Furthermore, this talk will introduce several open-source tools, including the release of one new tool, that may be used to allow painless work when MFA-protected API access is enforced in an AWS account.
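A common building block for enforcing MFA on API access is an IAM policy that denies almost everything when the session lacks MFA. A sketch of such a policy built as JSON -- the `aws:MultiFactorAuthPresent` condition key and `BoolIfExists` operator are AWS's, but treat the exact statement (the `Sid` and `NotAction` carve-outs here) as an illustrative starting point to test in your own account, not a drop-in:

```python
import json

deny_without_mfa = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllWithoutMFA",
        "Effect": "Deny",
        # Carve out just enough to let a user bootstrap an MFA session.
        "NotAction": ["iam:ChangePassword", "sts:GetSessionToken"],
        "Resource": "*",
        # BoolIfExists also denies when the key is absent entirely,
        # e.g. long-lived access keys used with no MFA context at all.
        "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
    }],
}

print(json.dumps(deny_without_mfa, indent=2))
```

Because an explicit Deny overrides any Allow in IAM evaluation, attaching this policy makes MFA a hard requirement rather than a convention.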
The widespread adoption of AWS as an enterprise platform for storage, computing and services makes it a lucrative opportunity for the development of AWS-focused APTs. We will cover pre-infection, post-infection and advanced persistency techniques on AWS that allow an attacker to access staging and production environments, as well as read and write data and even reverse its way from the cloud to the corporate datacenter.
This session will cover several methods of infection, including a new concept -- "account jumping" -- for taking over both PaaS (e.g., Elastic Beanstalk) and IaaS (EC2, EC2 Containers) resources, discussing poisoned AMIs and dirty account transfers, as well as leveraging S3 and CloudFront to perform AWS-specific credential thefts that can easily lead to full account access. We will then discuss the post-infection phase and how attackers can manipulate AWS resources (public endpoints such as EC2 IPs, Elastic IPs, load balancers and more) for complete MITM attacks on services. We will demonstrate how attacker code can be well hidden via Lambda functions, cross-zone replication configuration, and the problem of storage affinity to a specific account. We'll examine hybrid deployments and how attackers can compromise the on-premises datacenter from the cloud by leveraging and modifying connectivity methods (HW/SW VPN, Direct Connect or CloudHub). Finally, we'll end with a discussion of best practices that can be used to protect against such attacks, such as bastion SSH/RDP gateways, understanding the value of CASB-based solutions and where they fit, leveraging audit and HSM capabilities in AWS, and looking at different isolation approaches to create separation between administrators and the cloud while still providing access to critical services.
Although 0-day exploits are dangerous, we have to admit that the largest threat to Android users is the kernel vulnerabilities that have been disclosed but remain unfixed. Having been in the spotlight for weeks or even months, these kernel vulnerabilities usually have clear and stable exploits; therefore, underground businesses commonly utilize them in malware and APTs. The reasons for the long periods of remaining unfixed are complex, partly due to the time-consuming patching and verification procedures, or possibly because the vendors care more about innovating new products than securing existing devices. As such, there are still a lot of devices all over the world subject to root attacks. The different patching status of various vendors causes fragmentation, and vendors usually don't provide the exact up-to-date kernel source code for all devices, so it is extremely difficult to patch vulnerable devices at scale. We will provide stats on the current Android kernel vulnerability landscape, including the device model population and the corresponding vulnerability rates. Some vulnerabilities with great impact but slow fixing progress will be discussed. The whole community strives to solve this problem, but obviously this cannot be done discretely with limited hands.
In this talk, we present an adaptive Android kernel live patching framework, which enables open and live patching for kernels. It has the following advantages: (1) It enables online hotpatching without interrupting the user experience. Unlike existing Linux kernel hotpatching solutions, it works directly on binaries and can automatically adjust to different device models with different Android kernel versions. (2) It enables third-party vendors, who may not have access to the exact source code of the device kernel and drivers, to perform live patching. (3) Besides the binary patching scheme, it also provides a Lua-based patching scheme, which makes patch generation and delivery even easier and provides stronger confinement. This framework saves developers from repeating the tedious and error-prone patch-porting work, and patches can be provided by various vendors, so the patch deployment period can be greatly shortened. Offering the power to perform adaptive live patching alone is not enough -- we need to regulate it in case the hotpatches introduce further vulnerabilities and backdoors. So a special alliance with membership qualification is formed: only selected vendors can provide patches and audit patches submitted by other alliance members. Furthermore, we will build a reputation-ranking system for patch providers, a mechanism similar to app stores. The Lua-based patching scheme can impose even more restrictive regulations on the operations of patches. Finally, this framework can be easily extended and applied to general Linux platforms. We believe that improving the security of the whole ecosystem is not a dream of our own. We call for more and more parties to join in this effort to fight the evils together.
The end goal of a remote attack against a vehicle is physical control, usually by injecting CAN messages onto the vehicle's network. However, there are often many limitations on what actions the vehicle can be forced to perform when injecting CAN messages. While an attacker may be able to easily change the speedometer while the car is driving, she may not be able to disable the brakes or turn the steering wheel unless the car she is driving meets certain prerequisites, such as traveling below a certain speed. In this talk, we discuss how physical, safety critical systems react to injected CAN messages and how these systems are often resilient to this type of manipulation. We will outline new methods of CAN message injection which can bypass many of these restrictions and demonstrate the results on the braking, steering, and acceleration systems of an automobile. We end by suggesting ways these systems could be made even more robust in future vehicles.
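On Linux, injected traffic of the kind described here ultimately travels as 16-byte SocketCAN `can_frame` structs. A sketch of packing one -- the layout (32-bit ID, 8-bit DLC, 3 pad bytes, 8 data bytes) follows the Linux `struct can_frame`, while the arbitration ID and payload below are invented, not a real vehicle's:

```python
import struct

def pack_can_frame(can_id: int, data: bytes) -> bytes:
    # Linux struct can_frame: u32 can_id, u8 len, 3 pad bytes, 8 data bytes.
    assert len(data) <= 8, "classic CAN payloads are at most 8 bytes"
    return struct.pack("<IB3x8s", can_id, len(data), data.ljust(8, b"\x00"))

frame = pack_can_frame(0x244, b"\x00\x00\x01\xf4")  # made-up ID and payload
print(len(frame), frame.hex())  # 16 bytes, ready to send on an AF_CAN socket
```

The talk's point is that crafting the frame is the easy part; whether the target ECU honors it depends on the vehicle's state checks, which is what the new injection methods bypass.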
What's scarier, letting HD Moore rent your house and use your home network for a day, or being the very next renter who uses that network? With the colossal growth of the vacation rental market over the last five years (Airbnb, HomeAway), travellers are now more vulnerable than ever to network-based attacks targeted at stealing personal information or outright pwnage. In 2006, the security industry desperately warned of the dangers of using public Wi-Fi at coffee shops. In 2010, we reshaped the conversation around the frightful security of Internet access provided at hotels. And now, in 2016, we will start a new battle cry against the abysmal state of network security enabled by short-term rentals. Both renters and property owners have a serious stake in this game. Whether you're renting a room in a foreign city to attend a conference or profiting off of your own empty domicile, serious risks abound: MitM traffic hijacking, accessing illegal content, device exploitation, and more. Common attacks and their corresponding defenses (conventional or otherwise) will be discussed, with a strong emphasis on practicality and simplicity. This talk will contain demos of attacks, introduce atypical hardware for defense, and encourage audience participation.
In Windows 10, Microsoft introduced the Antimalware Scan Interface (AMSI), which is designed to target script-based attacks and malware. Script-based attacks have been lethal for enterprise security, and with the advent of PowerShell, such attacks have become increasingly common. AMSI targets malicious scripts written in PowerShell, VBScript, JScript, etc., and drastically improves the detection and blocking rate of malicious scripts. When a piece of code is submitted for execution to the scripting host, AMSI steps in and the code is scanned for malicious content. What makes AMSI effective is that, no matter how obfuscated the code is, it must be presented to the script host in clear, unobfuscated text. Moreover, since the code is submitted to AMSI just before execution, it doesn't matter if the code came from disk, came from memory, or was entered interactively. AMSI is an open interface, and Microsoft says any application will be able to call its APIs. Currently, Windows Defender uses it on Windows 10. Has Microsoft finally killed script-based attacks? What are the ways out? The talk will be full of live demonstrations.
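The core idea -- scan the text the script host is about to execute, not the bytes that arrived on disk -- can be illustrated in a few lines of Python. The signature and payload are invented toy values; real AMSI consumers call the Win32 `AmsiScanBuffer`/`AmsiScanString` APIs:

```python
import base64

SIGNATURES = ["invoke-mimikatz"]  # invented one-string "AV signature"

def scan(script_text: str) -> bool:
    """Return True if the plaintext script matches a signature."""
    return any(sig in script_text.lower() for sig in SIGNATURES)

payload = "Invoke-Mimikatz -DumpCreds"
obfuscated = base64.b64encode(payload.encode()).decode()

print(scan(obfuscated))      # False: the encoded form evades a naive disk scan
deobfuscated = base64.b64decode(obfuscated).decode()
print(scan(deobfuscated))    # True: the host must decode before executing,
                             # and an AMSI-style hook sees the plaintext
```

Hooking at the last point before execution is what makes layered obfuscation (encoding, string concatenation, compression) largely irrelevant to detection.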
In recent years, cyber defenders protecting enterprise networks have started incorporating malware code-sharing identification tools into their workflows. These tools compare new malware samples to large databases of known malware samples in order to identify samples with shared code relationships. When unknown malware binaries are found to share code "fingerprints" with malware from known adversaries, this provides a key clue to which adversary is generating the new binaries, helping defenders develop a general mitigation strategy against that family of threats. The efficacy of code-sharing identification systems is demonstrated every day, as new families of threats are discovered and countermeasures are rapidly developed for them.
Unfortunately, these systems are hard to maintain, deploy, and adapt to evolving threats. First and foremost, these systems do not learn to adapt to new malware obfuscation strategies, meaning they continuously fall out of date with adversary tradecraft, periodically requiring manually intensive tuning to adjust the formulae used to compute similarity between malware. In addition, these systems require an up-to-date, well-maintained database of recent threats in order to provide relevant results. Such a database is difficult to deploy, and hard and expensive to maintain, for smaller organizations. In order to address these issues we developed a new malware similarity detection approach. This approach not only significantly reduces the need for manual tuning of the similarity formulae, but also allows for a significantly smaller deployment footprint and provides a significant increase in accuracy. Our family/similarity detection system is the first to use deep neural networks for code-sharing identification, automatically learning to see through adversary tradecraft and thereby staying up to date with adversary evolution. Using traditional string similarity features, our approach increased accuracy by 10%, from 65% to 75%. Using an advanced set of features that we specifically designed for malware classification, our approach has 98% accuracy. In this presentation we describe how our method works, why it is able to significantly improve upon current approaches, and how this approach can be easily adapted and tuned to the individual or organizational needs of attendees.
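The "traditional string similarity features" baseline mentioned above can be sketched as a Jaccard index over the sets of printable strings extracted from two binaries. The string sets here are toy examples, not real malware features:

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: size of intersection over size of union."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

sample1 = {"CreateRemoteThread", "cmd.exe", "http://c2.example", "VirtualAlloc"}
sample2 = {"CreateRemoteThread", "cmd.exe", "http://c2.example", "WriteProcessMemory"}
unrelated = {"GetTickCount", "notepad.exe"}

print(jaccard(sample1, sample2))    # 0.6 -- likely shared code / same family
print(jaccard(sample1, unrelated))  # 0.0 -- no evidence of sharing
```

This baseline is exactly what obfuscation defeats (packers strip or mangle the strings), which is the gap the learned-feature approach above is meant to close.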
Many critical communications now take place digitally, but recent revelations demonstrate that these communications can often be intercepted. To achieve true message privacy, users need end-to-end message encryption, in which the communications service provider is not able to decrypt the content. Historically, end-to-end encryption has proven extremely difficult for people to use correctly, but recently tools like Apple's iMessage and Google's End-to-End have made it more broadly accessible by using key-directory services. These tools (and others like them) sacrifice some security properties for convenience, which alarms some security experts, but little is known about how average users evaluate these tradeoffs. In a 52-person interview study, we asked participants to complete encryption tasks using both a traditional key-exchange model and a key-directory-based registration model. We also described the security properties of each (varying the order of presentation) and asked participants for their opinions. We found that participants understood the two models well and made coherent assessments about when different tradeoffs might be appropriate. Our participants recognized that the less-convenient exchange model was more secure overall, but found the security of the registration model to be "good enough" for many everyday purposes.
In Windows 10, Microsoft introduced virtualization-based security (VBS), a set of security solutions based on a hypervisor. In this presentation, we will talk about details of the VBS implementation and assess the attack surface, which is very different from that of other virtualization solutions. We will focus on the potential issues resulting from the underlying platform complexity (UEFI firmware being a primary example).
Besides a lot of theory, we will also demonstrate actual exploits: one against VBS itself and one against vulnerable firmware. The former is non-critical (it provides a bypass of one of the VBS features); the latter is critical.
Before attending, one is encouraged to review the two related talks from Black Hat USA 2015: "Battle of the SKM and IUM: How Windows 10 Rewrites OS Architecture" and "Defeating Pass-the-Hash: Separation of Powers."
Machine learning techniques have been gaining significant traction in a variety of industries in recent years, and the security industry is no exception to their influence. These techniques, when applied correctly, can assist in many data-driven tasks, providing interesting insights and decision recommendations to analysts. While these techniques can be powerful, for researchers and analysts who are not well versed in machine learning there can be a gap in understanding that prevents them from looking at, and applying these tools to, problems machine learning could assist with.
The goal of this presentation is to help researchers, analysts, and security enthusiasts get their hands dirty applying machine learning to security problems. We will walk the entire pipeline, from idea to functioning tool, on several diverse security-related problems, including offensive and defensive use cases for machine learning. Through these examples and demonstrations, we will explain in a very concrete fashion every step involved in tying machine learning to the specified problem. In addition, we will release every tool built, along with source code and related datasets, to enable those in attendance to reproduce the research and examples on their own. Machine learning based tools released with this talk include an advanced obfuscation tool for data exfiltration, a network mapper, and a command-and-control panel identification module.
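As a minimal, self-contained illustration of such a pipeline (standard library only, toy data, and a deliberately simple nearest-centroid model, not one of the tools released with the talk), consider classifying domain names as benign or DGA-like using three hand-crafted features:

```python
import math
from collections import Counter

def featurize(domain: str):
    """Toy features: length, digit ratio, and character entropy."""
    counts = Counter(domain)
    entropy = -sum((c / len(domain)) * math.log2(c / len(domain))
                   for c in counts.values())
    digits = sum(ch.isdigit() for ch in domain) / len(domain)
    return (len(domain), digits, entropy)

def centroid(rows):
    return tuple(sum(col) / len(rows) for col in zip(*rows))

def train(benign, malicious):
    """'Training' here is just computing one centroid per class."""
    return centroid([featurize(d) for d in benign]), \
           centroid([featurize(d) for d in malicious])

def predict(model, domain):
    b, m = model
    f = featurize(domain)
    dist = lambda p: sum((x - y) ** 2 for x, y in zip(f, p))
    return "malicious" if dist(m) < dist(b) else "benign"

model = train(
    ["google.com", "github.com", "python.org"],
    ["xj2k9q0vlz7.net", "q8w7e6r5t4y3.biz", "zk1m2n3b4v5c.info"],
)
```

Every real pipeline step is visible in miniature: feature engineering, fitting on labeled examples, and scoring unseen inputs; swapping the centroid model for a proper classifier changes only the middle step.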
Software-Defined Networking (SDN), by decoupling the control logic from the closed and proprietary implementations of traditional network devices, allows researchers and practitioners to design new innovative network functions/protocols in a much easier, more flexible, and powerful way. This technology has gained significant attention from both industry and academia, and it is now at its adoption stage. When considering the adoption of SDN, security vulnerability assessment is an important process that must be conducted against any system before deployment, and arguably the starting point toward making it more secure.
In this briefing, we explore the attack surface of SDN by actually attacking each layer of the SDN stack. The SDN stack is generally composed of the control plane, the control channel, and the data plane. Control plane implementations, commonly known as SDN controllers or Network OSes, are typically developed and distributed as open-source projects. Of the various Network OS implementations, we attack the most prevalent ones, OpenDaylight (ODL) and Open Network Operating System (ONOS). These Network OS projects are both actively led by major telecommunication and networking companies, and some of those companies have already deployed them in their private clouds or networks [3, 4]. For the control channel, we also attack a well-known SDN protocol, OpenFlow. In the case of the data plane, we test OpenFlow-enabled switch products from major vendors, such as HP and Pica8.
Of the attacks that we disclose in this briefing, we demonstrate some of the most critical ones, which directly affect network (service) availability or confidentiality. For example, one of the attacks arbitrarily uninstalls crucial SDN applications running on an ODL (or ONOS) cluster, such as routing, forwarding, or even security service applications. Another attack directly manipulates the logical network topology maintained by an ODL (or ONOS) cluster to cause network failures. In addition, we introduce some of the SDN security projects. We briefly go over the design and implementation of Project Delta, an official open-source SDN penetration testing tool pushed forward by the Open Networking Foundation security group, and Security-Mode ONOS, a security extension that protects the core of ONOS from the possible threats of untrusted third-party applications.
References
[1] Medved, Jan, et al. "OpenDaylight: Towards a model-driven SDN controller architecture." 2014 IEEE 15th International Symposium on. IEEE, 2014.
[2] Berde, Pankaj, et al. "ONOS: Towards an open, distributed SDN OS." Proceedings of the Third Workshop on Hot Topics in Software Defined Networking. ACM, 2014.
[3] Jain, Sushant, et al. "B4: Experience with a globally-deployed software defined WAN." ACM SIGCOMM Computer Communication Review. Vol. 43, No. 4. ACM, 2013.
[4] CORD: Reinventing Central Offices for Efficiency and Agility. http://opencord.org (2016).
[5] OpenFlow Switch Specification, version 1.1.0. Tech. rep., 2011. http://www.openflow.org/documents/openflow-spec-v1.1.0.pdf.
Ablation is a tool built to extract information from a process as it executes. This information is then imported into the disassembly environment, where it is used to resolve virtual calls, highlight regions of code executed, or visually diff samples. The goal of Ablation is to augment static analysis with minimal overhead or user interaction.
C++ binaries can be a real pain to audit due to virtual calls. Instead of having to reverse class, object, and inheritance relationships, Ablation can resolve any observed virtual calls and create fully interactive x-refs in IDA; disassembled C++ reads like C!
When augmenting analysis by importing runtime data, much of the information is displayed using a color scheme. This allows the info to be passively absorbed, making it useful rather than obtrusive.
Ablation makes it simple to diff samples and highlight where they diverge. This is achieved by comparing the code executed, rather than just comparing data. Consider comparing a heavily mutated crash sample with the source sample: finding the root cause of the crash is normally tedious and unrewarding. Using Ablation, the root cause can often be determined simply by running each sample and using the appropriate color scheme. This also means that visualizing the code coverage of a sample set becomes as simple as running each sample.
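The execution-diffing idea can be sketched in a few lines: treat each run as the sequence of basic-block addresses it executed and look at where the traces diverge. The addresses below are invented for illustration; Ablation itself surfaces this inside the disassembler:

```python
def coverage_diff(trace_ok, trace_crash):
    """Compare executed-block traces: return blocks unique to each run and
    the first address where the crashing run leaves known-good code."""
    ok, crash = set(trace_ok), set(trace_crash)
    divergence = next((addr for addr in trace_crash if addr not in ok), None)
    return ok - crash, crash - ok, divergence

# Blocks executed by the original sample vs. the mutated, crashing one
base  = [0x401000, 0x401020, 0x401080, 0x4010c0]
crash = [0x401000, 0x401020, 0x401100, 0x401150]
only_base, only_crash, first_div = coverage_diff(base, crash)
```

The first divergent address is usually the most useful root-cause hint: it points at the branch the mutation flipped, before the crash site itself.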
Recent findings have indicated that highly traversed code is not particularly interesting, while code that is infrequently executed, or adjacent to it, is more interesting. Ablation could be used to identify undocumented features in a product, given a sample set.
Vulnerability research is all about the details. Having this information passively displayed could be the difference between confusion and discovery. Ablation will be made open source at BH2016.
AVLeak is a tool for fingerprinting consumer antivirus emulators through automated black box testing. AVLeak can be used to extract fingerprints from AV emulators that may be used by malware to detect that it is being analyzed and subsequently evade detection, including environmental artifacts, OS API behavioral inconsistencies, emulation of network connectivity, timing inconsistencies, process introspection, and CPU emulator "red pills."
Emulator fingerprints may be discovered through painstaking binary reverse engineering, or with time-consuming black box testing using binaries that conditionally choose to behave benignly or drop malware based on the emulated environment. AVLeak significantly advances upon prior approaches to black box testing, allowing researchers to extract emulator fingerprints in just a few seconds and to script out testing using powerful APIs.
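The core black-box trick behind this style of testing can be sketched as a one-byte-per-detection encoding: the probe running inside the emulator "drops" one of 256 known-detectable marker samples chosen by the byte it wants to leak, and the detection name the AV reports on the host decodes that byte. The marker names below are hypothetical, not AVLeak's actual corpus:

```python
# One detectable "marker" payload per possible byte value (names invented).
markers = {i: f"Marker.Family.{i:03d}" for i in range(256)}
decode = {name: i for i, name in markers.items()}

def probe(emulated_value: bytes):
    """What runs inside the emulator: pick one marker to drop per byte."""
    return [markers[b] for b in emulated_value]

def recover(detections):
    """What runs outside: map reported detection names back to bytes."""
    return bytes(decode[name] for name in detections)

# e.g., leaking the username string the emulator presents to malware
leaked = recover(probe(b"JohnDoe"))
```

Because each leaked byte costs only one scan, a whole environmental artifact can be extracted in seconds rather than through one compile-test-observe cycle per hypothesis.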
AVLeak will be demoed live, showing real-world fingerprints discovered using the tool that can be used to detect and evade popular consumer AVs, including Kaspersky, the Bitdefender engine (licensed to 20+ other AV products), AVG, and VBA. This survey of emulation detection methods is the most comprehensive examination of the topic ever presented in one place.
The global market for Bring Your Own Device (BYOD) and enterprise mobility is expected to quadruple in size over the next four years, hitting $284 billion by 2019. BYOD software is used by some of the largest organizations and governments around the world: Barclays, Walmart, AT&T, Vodafone, the United States Department of Homeland Security, the United States Army, the Australian Department of Environment, and numerous other organizations, big and small. Enterprise Mobile Security (EMS) is a component of BYOD solutions that promises data, device, and communications security for enterprises. Amongst other things, it aims to address data loss, network privacy, and jailbreaking/rooting of devices.
Using the Good Technology EMS suite as an example, my talk will show that EMS solutions are largely ineffective and in some cases can even expose an organization to unexpected risks. I will show attacks against EMS-protected apps on jailbroken and non-jailbroken devices, putting to rest the rebuttal that CxOs and solution vendors often give penetration testers: "We do not support jailbroken devices." I will also introduce a groundbreaking tool, Swizzler, to help penetration testers confronted with apps wrapped in EMS protections. The tool conveniently automates a large number of attacks, allowing pen-testers to bypass each of the protections that Good and similar solutions implement. In a live demonstration of Swizzler I will show how to disable tampering detection mechanisms and application locks, intercept and decrypt encrypted data, and route "secure" HTTP requests through Burp into established Good VPN tunnels to attack servers on an organization's internal network. Swizzler will be released to the world along with my talk at Black Hat USA. Whether you are a CxO, administrator or user, you can't afford not to understand the risks associated with BYOD.
This presentation will introduce a new threat model. Based on this threat model, we found a flaw in Windows that affects all Windows versions released in the last two decades, including Windows 10. It also has a very wide attack surface: the attack can be performed against all versions of Internet Explorer, Edge, Microsoft Office, many third-party applications, USB flash drives, and even web servers. When this flaw is triggered, YOU ARE BEING WATCHED.
We will also show you how to defend against this threat, particularly on systems that are no longer supported by Microsoft.
WPAD (Web Proxy Auto-Discovery) is a protocol that allows computers to automatically discover web proxy configurations. It is primarily used in networks where clients are only allowed to communicate with the outside through a proxy. The WPAD protocol has been around for almost 20 years (RFC draft dated 1999-07-28), but it carries well-known risks that have been largely ignored by the security community. This session will present the results of several experiments highlighting the flaws inherent to this badly designed protocol and bring attention to the many ways in which they can easily be exploited. Our research expands on these known flaws and proves a surprisingly broad applicability of "badWPAD" for possible malicious use today by testing it in different environments. The speaker will share how his team initially deployed a WPAD experiment to test whether WPAD was still problematic or had been fixed by most software and OS vendors. This experiment included attacks on 1) intranets and open-access networks (e.g., free Wi-Fi spots and corporate networks) and 2) DNS, targeting clients leaking HTTP requests to the internet.
Attendees will hear the rather surprising results that this experiment yielded: the DNS portion revealed more than 38 million requests to the WPAD honeypot domain names from oblivious customers, while the intranet free Wi-Fi experiment proved that almost every second Wi-Fi spot can be used as an attack surface. The test included Wi-Fi at airport lounges, conferences, hotels, and on board aircraft, and we were amazed that apparently nobody realized what their laptops were secretly requesting. This neglected WPAD flaw appears to be growing, while it is commonly assumed to be fixed. The paper will be backed up by statistics and reveal why badWPAD remains a major security concern and what should be done to protect against this serious risk.
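The DNS devolution behavior that lets WPAD lookups escape to the public internet can be sketched as follows (simplified; real resolver behavior varies by OS and configuration):

```python
def wpad_candidates(fqdn: str, min_labels: int = 2):
    """Hostnames a client may try when discovering WPAD via DNS devolution:
    it walks up its own domain suffix, prefixing 'wpad' at each level."""
    labels = fqdn.lower().split(".")
    out = []
    for i in range(1, len(labels)):
        suffix = labels[i:]
        if len(suffix) < min_labels:
            break
        out.append(".".join(["wpad"] + suffix))
    return out

candidates = wpad_candidates("pc1.hq.corp.example.com")
```

The last candidate, wpad.example.com, sits outside the intranet's control and resolves on the public internet, which is the kind of leak the honeypot domains in this experiment were positioned to capture.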
With over a billion active devices and in-depth security protections spanning every layer from silicon to software, Apple works to advance the state of the art in mobile security with every release of iOS. We will discuss three iOS security mechanisms in unprecedented technical detail, offering the first public discussion of one of them new to iOS 10.
HomeKit, Auto Unlock and iCloud Keychain are three Apple technologies that handle exceptionally sensitive user data – controlling devices (including locks) in the user's home, the ability to unlock a user's Mac from an Apple Watch, and the user's passwords and credit card information, respectively. We will discuss the cryptographic design and implementation of our novel secure synchronization fabric which moves confidential data between devices without exposing it to Apple, while affording the user the ability to recover data in case of device loss.
Data Protection is the cryptographic system protecting user data on all iOS devices. We will discuss the Secure Enclave Processor present in iPhone 5S and later devices and explain how it enabled a new approach to Data Protection key derivation and brute force rate limiting within a small TCB, making no intermediate or derived keys available to the normal Application Processor.
Traditional browser-based vulnerabilities are becoming harder to exploit due to increasingly sophisticated mitigation techniques. We will discuss a unique JIT hardening mechanism in iOS 10 that makes the iOS Safari JIT a more difficult target.
Active Directory (AD) is leveraged by 95% of the Fortune 1000 companies for its directory, authentication, and management capabilities. This means that both Red and Blue teams need a better understanding of Active Directory: its security, how it is attacked, and how best to align defenses. This presentation covers key Active Directory components which are critical for security professionals to know in order to defend AD. Properly securing the enterprise means identifying and leveraging appropriate defensive technologies. The information provided is immediately useful and actionable, helping organizations better secure their enterprise resources against attackers. Highlighted are areas attackers go after, including some recently patched vulnerabilities and the weaknesses they exploited. This includes the critical Kerberos vulnerability (MS14-068) and Group Policy Man-in-the-Middle (MS15-011 & MS15-014), and how they take advantage of AD communication.
Some of the content covered:
Let's go beyond the standard MCSE material and dive into how Active Directory works focusing on the key components and how they relate to enterprise security.
Solving the "people problem" of cyber security requires us to understand why people fall victim to spear phishing. Unfortunately, the only proactive solution being used against spear phishing is user training and education. But, judging from the number of continued breaches, training appears to be limited in its effectiveness. Today's leading cybersecurity training programs focus on hooking people in repeated simulated spear phishing attacks and then showing them the nuances they missed in the emails. This "gotcha game" presumes that users merely lack knowledge, and that if they are told often enough and repeatedly shown what they lack, they will become better at spear phishing detection. This is akin to trying to teach people to drive by constantly causing accidents and then pointing out why they had an accident each time.
We propose a radical change to this "one-size-fits-all" approach. Recent human factors research, captured in the Suspicion, Cognition, Automaticity Model (SCAM), identifies a small set of factors that lead to individual phishing victimization. Using the SCAM, we propose the development of an employee Cyber Risk Index (CRI). Similar to how financial credit scores work, the CRI will give security analysts the ability to pinpoint the weak links in organizations and identify who is likely to fall victim, who needs training, how much training, and what the training should focus on. The CRI will also allow security analysts to decide which users get administrative access, replacing the current, mostly binary, role-based apportioning method, where individuals are given access based on their organizational role and responsibilities, with a system based on individuals' quantified cyber-risk propensity. The CRI-based approach we present will lead to individualized, cognitive-behavioral training and an evidence-based approach to awarding users admin privileges. These are paradigm-changing solutions that will altogether improve individual cyber resilience and blunt the effectiveness of spear phishing.
The state of authentication is in such disarray today that a black hat is no longer needed to wreak havoc. One avenue to authentication improvement is offered by the FIDO Alliance's open specifications built around public key cryptography. Does FIDO present a better mousetrap? Are there security soft spots for potential exploitation, such as man-in-the-middle attacks, exploits aimed at supporting architecture, or compromises targeting physical hardware? We will pinpoint where vulnerabilities are hidden in FIDO deployments, how difficult they are to exploit, and how enterprises and organizations can protect themselves.
Hardware-enforced security is touted as a panacea for many modern computer security challenges. While these mechanisms certainly add robust options to the defender's toolset, they are not without their own weaknesses. In this talk we will demonstrate how low-level technologies such as hypervisors can be used to subvert the claims of security made by these mechanisms. Specifically, we will show how a hypervisor rootkit can bypass the DRTM (dynamic root of trust measurement) of Intel's Trusted Execution Technology (TXT) and capture keys from Intel's AES-NI instructions. These attacks against TXT and AES-NI have never been published before. Trusted computing has had a varied history, including technologies such as TXT, ARM TrustZone, and now Microsoft Isolated User Mode and Intel SGX. All of these technologies attempt to protect user data from privileged processes snooping on or controlling execution. They claim that no elevated process, whether kernel-based, System Management Mode (SMM)-based, or hypervisor-based, will be able to compromise the user's data and execution.
This presentation will highlight the age-old problem of misconfiguration of Intel TXT by exploiting a machine through the use of another Intel technology, the Type-1 hypervisor (VT-x). Problems with these technologies have surfaced not as design issues but during implementation, whether as a hardware weakness through which attestation keys can be compromised, or as a combination of software and hardware, such as exposed DMA, that permits exfiltration, and sometimes modification, of user process memory. This presentation will highlight one of these implementation flaws as exhibited by the open source tBoot project and the underlying Intel TXT technology. The summation will offer defenses against all-too-common pitfalls when deploying these systems, including proper deployment design using sealed storage, remote attestation, and hardware hardening.
Kernel hardening has been an important topic, as many applications and security mechanisms often consider the kernel their Trusted Computing Base (TCB). Among various hardening techniques, kernel address space layout randomization (KASLR) is the most effective and widely adopted technique that can practically mitigate various memory corruption vulnerabilities, such as buffer overflow and use-after-free. In principle, KASLR is secure as long as no memory disclosure vulnerability exists and high randomness is ensured. In this talk, we present a novel timing side-channel attack against KASLR, called DrK (De-randomizing Kernel address space), which can accurately, silently, and rapidly de-randomize the kernel memory layout by identifying page properties: unmapped, executable, or non-executable pages. DrK is based on a new hardware feature, Intel Transactional Synchronization Extension (TSX), which allows us to execute a transaction without interrupting the underlying operating system even when the transaction is aborted due to errors, such as access violation and page faults. In DrK, we turned this property into a timing channel that can accurately distinguish the mapping status (i.e., mapped versus unmapped) and execution status (i.e., executable versus non-executable) of the privileged address space. In addition to its surprising accuracy and precision, the DrK attack is not only universally applicable to all OSes, even under a virtualized environment, but also has no visible footprint, making it nearly impossible to be detected in practice. We demonstrate that DrK breaks the KASLR of all major OSes, including Windows, Linux, and OS X with near-perfect accuracy in a few seconds. Finally, we propose potential hardware modifications that can prevent or mitigate the DrK attack.
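The hardware measurement loop of DrK is TSX- and OS-specific, but the classification step it feeds can be sketched with synthetic data: aborts on mapped kernel pages complete consistently faster than on unmapped ones, so a simple threshold on the median abort timing recovers the layout. The cycle counts and threshold below are illustrative, not measured values from the paper:

```python
from statistics import median

# Hypothetical abort timings (cycles) from repeated TSX probes of each page.
samples = {
    0xffff880000000000: [186, 184, 190, 185],   # behaves like a mapped page
    0xffff880000001000: [186, 188, 183, 187],   # mapped
    0xffff880000002000: [242, 245, 240, 244],   # unmapped: slower aborts
}

def classify(samples, threshold=210):
    """Label each probed page by the median of its abort-timing samples."""
    return {page: ("mapped" if median(t) < threshold else "unmapped")
            for page, t in samples.items()}

layout = classify(samples)
```

Taking the median over repeated probes is what gives the attack its near-perfect accuracy: individual timings are noisy, but the two distributions barely overlap.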
The payment industry is becoming more driven by security standards. However, the cornerstones remain broken even in the latest implementations of these payment systems, mainly due to a focus on the standards rather than on security. The best example is the ability to bypass protections put in place by point-of-interaction (POI) devices by simply modifying several files on the point of sale or manipulating the communication protocols. In this presentation, we will explain the main flaws and provide live demonstrations of several weaknesses on a widely used pinpad. We will not exploit the operating system of the pinpad, but actually bypass the application layer and the business-logic protections; i.e., the crypto algorithm is secure, but everything around it is broken. As part of our demos, we will include EMV bypassing, avoiding PIN protections, and scraping PANs from various channels.
This presentation demonstrates a method of brute-forcing an AES-256 encrypted hard drive by spoofing the front-panel keyboard. In addition, it tears into the internal design of the hard drive and extends the work by J. Czarny & R. Rigo to validate the (in)security of any encrypted drive based on the MB86C311 chipset.
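Stripped of the hardware spoofing itself, the attack reduces to enumerating keypad codes through whatever channel drives the fake front panel. A sketch with an injectable try_pin callable follows; the lambda standing in for the drive, the 4-digit code space, and the interface shape are all assumptions for illustration:

```python
from itertools import product

def brute_force_pin(try_pin, digits=4):
    """Enumerate every keypad code through an injected try_pin(code) callable,
    which in the real attack would pulse the spoofed front-panel keyboard."""
    for combo in product("0123456789", repeat=digits):
        code = "".join(combo)
        if try_pin(code):          # drive unlocked: we found the code
            return code
    return None                    # exhausted the space without a hit

# Stand-in for the target: unlocks on one specific code
found = brute_force_pin(lambda code: code == "4617")
```

With no lockout or rate limiting in the drive's firmware, the search cost is bounded only by how fast the spoofed keypad can inject keypresses.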
You've received vulnerability reports in your application or product; now what? On the positive side, there is an abundance of incident response guidance for network security, and a number of companies have published their Product Security Incident Response Team (PSIRT) processes for customers at a high level. Yet there is a dearth of detailed resources on how to implement PSIRT processes for organizations that have reached Stage 7 of the SDL process (Response). To not only build but also maintain secure products, organizations need to create mechanisms enabling their incident response teams to receive and respond to product incident reports, effectively partnering with development teams, customer support, and communications teams.
This session will be targeted at small to medium companies that have small or overstretched security teams, and will share content and best practices to support these teams' product incident response programs. Attendees will be provided with templates and actionable recommendations based on successful best practices from multiple mature security response organizations.
Voice enabled technology provides developers with great innovation opportunities as well as risks. The Voice Privacy Alliance created a set of 39 Agile security stories specifically for voice enabled IoT products as part of the Voice Privacy Innovation Toolkit. These security stories help product owners and security developer focals bake security into their voice enabled products to save time, money and decrease incidents and reputation damage. This is a very practical, hands-on tool for developers that the Voice Privacy Alliance believes is needed to secure voice enabled technologies and promote innovation.
Robocalling, voice phishing and caller ID spoofing are common cybercrime techniques used to launch scam campaigns through the telephony channel that many people have long trusted. More than 660,000 online complaints regarding unwanted phone calls were recorded on the top six phone complaints websites in 2015. More reliable than online complaints, a telephony honeypot provides complete, accurate and timely information about unwanted phone calls across the United States. By tracking calling patterns in a large telephony honeypot receiving over 600,000 calls per month from more than 90,000 unique source phone numbers, we gathered threat intelligence in the telephony channel. Leveraging this data we developed a methodology to uniquely "fingerprint" bad actors hiding behind multiple phone numbers and detect them within the first few seconds of a call. Over several months, more than 100,000 calls were recorded and several millions call records analyzed to validate our methodology. Our results show that only a few bad actors are responsible for the majority of the spam and scam calls and that they can be quickly identified with high accuracy using features extracted from the audio. This discovery has major implications for law enforcement and businesses that are presently engaged in combatting the rise of telephony fraud.
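The early-seconds fingerprinting idea can be sketched with two crude per-call audio features, RMS energy and zero-crossing rate, compared by cosine similarity. These are stand-ins for the richer spectral features actually used, and the signals below are synthetic:

```python
import math

def features(samples):
    """Crude per-call features: RMS energy and zero-crossing rate."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    zcr = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0) \
          / (len(samples) - 1)
    return (rms, zcr)

def similarity(f1, f2):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(f1, f2))
    norm = math.sqrt(sum(a * a for a in f1)) * math.sqrt(sum(b * b for b in f2))
    return dot / norm

# Two toy "calls": same synthetic source (phase-shifted) vs. a different one
call_a = [math.sin(0.20 * n) for n in range(2000)]
call_b = [math.sin(0.20 * n + 0.3) for n in range(2000)]
call_c = [math.sin(0.71 * n) * 0.2 for n in range(2000)]
same = similarity(features(call_a), features(call_b))
diff = similarity(features(call_a), features(call_c))
```

The same source scores near 1.0 regardless of when the recording starts, while a different voice and channel lands well below; the real system does the same kind of comparison over a much richer feature set within the first few seconds of a call.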
Before we dive into specific mobile vulnerabilities and talk as if the end times are upon us, let us pop the stack and talk about how the mobile environment works as a whole. We will explore the assumptions and design paradigms of each player in the overall mobile space, along with the requirements and inheritance problems they face. The value of this approach is that it allows us to understand and couch the impacts and implications of all mobile vulnerabilities, be it bugs existing today or theoretical future vulnerabilities. The approach also allows us to catalogue all the design assumptions made and search for any generalized logical flaws that could serve as a lynchpin to undermine the entirety of mobile security and trust.
This talk focuses on the entirety of the mobile ecosystem, from the hardware components to the operating systems to the networks they connect to. We will explore the core components across mobile vendors and operating systems, focusing on bugs, logic, and root problems that potentially affect all mobile devices. We will discuss the limitations of mobile trusted computing and what can be done to protect both your data and the devices your data reside on. From the specific perspectives of trusted computing and hardware integrity, there are a handful of smartphone hardware platforms on the market. OEMs are constrained to release devices based on selecting and trusting one of these platforms. If a skilled attacker can break trust at the hardware level, the entire device becomes compromised at a very basic (and largely undetectable) level. This talk is about how to break that trust.
In the past few years, several tools have been released allowing hobbyists to connect to CAN buses found in cars. This is welcome, as the CAN protocol is becoming the backbone for embedded computers found in smart cars. Its use is now even spreading outside the car through the OBD-II connector: usage-based policies from insurance companies, air-pollution control from law enforcement, or engine diagnostics from smartphones, for instance. Nonetheless, these tools will do no more than what professional tools from automobile manufacturers can do. In fact, they will do less, as they do not have knowledge of upper-layer protocols.
Security auditors are used to dealing with this kind of situation: they reverse-engineer protocols before implementing them on top of their tool of choice. However, to be efficient at this, they need more than just being able to listen to or interact with what they are auditing. Precisely, they need to be able to intercept communications and block them, forward them or modify them on the fly. This is why, for example, a platform such as Burp Suite is popular when it comes to auditing web applications.
In this talk, we present CANSPY, a platform giving security auditors such capabilities when auditing CAN devices. Not only can it block, forward or modify CAN frames on the fly, it can do so autonomously with a set of rules or interactively using Ethernet and a packet manipulation framework such as Scapy. It is also worth noting that it was designed to be cheap and easy to build, as it is mostly made of inexpensive COTS components. Last but not least, we demonstrate its versatility by turning around a security issue usually considered when it comes to cars: instead of auditing an electronic control unit (ECU) through the OBD-II connector, we partially emulate ECUs in order to audit a device that connects to this very connector.
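A frame-rewriting rule of the kind applied on the fly can be sketched against the Linux SocketCAN can_frame layout (4-byte identifier, 1-byte DLC, 3 bytes of padding, 8 data bytes). The 0x244 "speed" frame and its byte layout below are invented for illustration, not taken from a real ECU:

```python
import struct

CAN_FRAME = struct.Struct("<IB3x8s")   # Linux SocketCAN can_frame layout

def pack(can_id, data):
    """Build a 16-byte can_frame from an arbitration ID and payload."""
    return CAN_FRAME.pack(can_id, len(data), data.ljust(8, b"\x00"))

def unpack(frame):
    can_id, dlc, data = CAN_FRAME.unpack(frame)
    return can_id, data[:dlc]

def mitm_rule(frame):
    """Example on-the-fly rewrite: clamp the reported speed in frame 0x244
    (toy layout: data[0] = speed) before forwarding; pass everything else."""
    can_id, data = unpack(frame)
    if can_id == 0x244 and data[0] > 90:
        data = bytes([90]) + data[1:]
    return pack(can_id, data)

speed = pack(0x244, bytes([120, 0, 0]))
forwarded = mitm_rule(speed)
```

A rule set of such functions, applied between the two CAN interfaces, is what distinguishes an interception platform from a mere sniffer: frames can be dropped, altered, or forwarded untouched, per arbitration ID.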
Put a low-level security researcher in front of hooking mechanisms and you get industry-wide vulnerability notifications, affecting security tools such as Anti-Virus, Anti-Exploitations and DLP, as well as non-security applications such as gaming and productivity tools. In this talk we reveal six(!) different security issues that we uncovered in various hooking engines. The vulnerabilities we found enable a threat actor to bypass the security measures of the underlying operating system. As we uncovered the vulnerabilities one-by-one we found them to impact commercial engines, such as Microsoft's Detours, open source engines such as EasyHook and proprietary engines such as those belonging to TrendMicro, Symantec, Kaspersky and about twenty others.
In this talk we'll survey the different vulnerabilities and deep-dive into a couple of them. In particular, we'll take a close look at a vulnerability in the most popular commercial hooking engine of a large vendor. This vulnerability affects the most widespread productivity applications and forced the vendor not only to fix their engine, but also to have their customers fix their applications prior to releasing the patch to the public. Finally, we'll demonstrate how security tools can be used as an intrusion channel for threat actors, ironically defeating security measures.
The security industry has gone to great lengths to make exploitation more difficult. Yet we continue to see weaponized exploits used in malware campaigns and targeted attacks capable of bypassing OS and vendor exploit mitigation strategies. Many of these newly deployed mitigations target code-reuse attacks like return-oriented-programming. Unfortunately, the reality is that once attackers have control over code execution it's only a matter of time before they can circumvent these defenses, as the recent rise of EMET bypasses illustrates. We propose a new strategy to raise the bar significantly. Our approach blocks exploits before they gain execution, preventing the opportunity to bypass mitigations.
This presentation introduces a new cross-platform, hardware-assisted Control-Flow Integrity (CFI) approach to mitigate control-flow hijack attacks on the Intel architecture. Prior research has demonstrated the effectiveness of leveraging processor-provided features such as the Performance Monitoring Unit (PMU) in order to trap various events for detecting ROP behaviors. We extend and generalize this approach by fine-tuning low-level processor features that enable us to insert a CFI policy to detect and prevent abnormal branches in real-time. Our promising results have shown this approach capable of protecting COTS binaries from control-flow hijack attempts stemming from use-after-free and memory corruption vulnerabilities with acceptable overhead on modern Windows and Linux systems.
In this talk, we will cover our research methodology, results, and limitations. We will highlight novel solutions to major obstacles we faced, including: proper tracking of Windows thread context swapping; configuration of PMU interrupt delivery without tripping Microsoft's PatchGuard; efficient algorithms for discovery of valid branch destinations in PE and ELF files at run-time; and the impact of operating in virtualized environments. The effectiveness of our approach using hardware-assisted traps to monitor program execution and enforce CFI policies on mispredicted branches will be demonstrated in real-time. We will prevent weaponized exploits targeting Windows and Linux x86-64 operating systems that nominally bypass anti-exploit technologies like Microsoft's EMET tool. We will also present collected metrics on performance impact and the real-world applications of this technology.
Malware developers are constantly looking for new ways to evade the detection and prevention capabilities of security solutions. In recent years, we have seen many different tools, such as packers and new encryption techniques, help malware reach this goal of hiding its malicious code. If the security solution cannot unpack the compressed or encrypted malicious content (or at least unpack it dynamically), then it will not be able to identify that it is facing malware. To further complicate the matter, we present a new technique for hiding malware (encrypted and unencrypted) inside a digitally signed file (while keeping the file's certificate valid) and executing it from memory, using a benign executable that acts as a reflective EXE loader, written from scratch. Our research demonstrates our Certificate Bypass tool and the Reflective EXE Loader. During the presentation, we will focus on the research we conducted on the PE file structure. We will take a closer look at the certificate table and how we can inject data into the table without damaging the certificate itself (the file will still look like, and be treated as, a validly signed file). We will examine the tool we wrote to execute PE files from memory (without writing them to the disk). We will cover the relevant fields in the PE structure, as well as the steps required to run a PE file directly from memory without requiring any files on disk. Last, we will conclude the demonstration with a live example and show how we bypass security solutions based on the way they inspect the certificate table.
You're in a potentially malicious network (free WiFi, guest network, or maybe your own corporate LAN). You're a security conscious netizen so you restrict yourself to HTTPS (browsing to HSTS sites and/or using a "Force TLS/SSL" browser extension). All your traffic is protected from the first byte. Or is it?
You've probably heard of network neutrality. In 2015, the Federal Communications Commission enacted transformative rules that prohibit Internet service providers from blocking, throttling, or creating "fast lanes" for online content. The Open Internet Order protects your right to enjoy the lawful content, applications, services, and devices of your choosing. But it also empowers the FCC to protect the security and privacy of your Internet traffic. This talk will give an overview of the FCC's security and privacy authorities, which now cover broadband Internet service, as well as telephone, cable, and satellite connectivity. We will explain how the FCC investigates violations of federal communications law, and how it brings enforcement actions against offenders. In just the past two years, the FCC's Enforcement Bureau has initiated several high-profile law enforcement actions related to security and privacy. We required Verizon to stop injecting a unique identifier "supercookie" into third-party web requests, unless a customer consents. We also required AT&T and Cox to improve their customer information safeguards, after their security failures led to information on hundreds of thousands of customers getting unacceptably and unnecessarily exposed.*
Most recently, the FCC formally proposed new Internet security and privacy rules. The Commission recommended that, if your Internet service provider wants to share information from or about you, it should first obtain your affirmative, opt-in consent. We will explain how the rulemaking process functions, and how you can file comments on FCC proceedings. We will also leave time for a Q&A session. Whether you'd like to ask about net neutrality, robocalls, wifi router firmware (we know many of you have thoughts about that mixup!), or anything else communications-related, this is your opportunity. In fact, you can even ask about your cable appointment; we bet you didn't know the FCC has rules about that, too!
Secure Channel (Schannel) is Microsoft's standard SSL/TLS Library underpinning services like RDP, Outlook, Internet Explorer, Windows Update, SQL Server, LDAPS, Skype and many third party applications. Schannel has been the subject of scrutiny in the past several years from an external perspective due to reported vulnerabilities, including an RCE. What about the internals? How does Schannel guard its secrets?
This talk looks at how Schannel leverages Microsoft's CryptoAPI-NG (CNG) to cache the master keys, session keys, private and ephemeral keys, and session tickets used in TLS/SSL connections. It discusses the underlying data structures and how to extract both the keys and other useful information that provides forensic context about a connection. This information is then leveraged to decrypt a session that uses an ephemeral key exchange. On modern configurations, information in the cache lives for at least 10 hours by default, with up to 20,000 entries each for clients and servers. This makes it forensically relevant in cases where other evidence of the connection may have dissipated.
The conflict between Russia and Ukraine appears to have all the ingredients for "cyber war". Moscow and Kyiv are playing for the highest geopolitical stakes, and both countries have expertise in information technology and computer hacking. However, there are still many skeptics of cyber war, and more questions than answers. Malicious code is great for espionage and crime, but how much does it help soldiers on the battlefield? Does computer hacking have strategic effects? What are the political and military limits to digital operations in peacetime and war? This NATO-funded research project, undertaken by 20 leading authorities on national security and network security, is a benchmark for world leaders and system administrators alike, and sheds light on whether "cyber war" is now reality -- or still science fiction. Further, it helps decision makers to understand that national security choices today have ramifications for democracy and human rights tomorrow.
To tailor Android to different hardware platforms, countries/regions and other needs, hardware manufacturers (e.g. Qualcomm), device manufacturers, carriers and others have aggressively customized Android into thousands of system images. This practice has led to a highly fragmented ecosystem in which the complicated relations among its components and apps, through which one party interacts with another, have been seriously compromised. This leads to the pervasiveness of Hares (hanging attribute references, e.g. package, activity and service action names, authorities and permissions), a type of vulnerability never investigated before.
In this talk, we will show that such flaws can have serious security implications: a malicious app can acquire critical system capabilities by pretending to be the owner of an attribute that is referenced on a device while the party defining it does not exist due to vendor customizations. On the factory images of the 97 most popular Android devices, we discovered 21,557 likely Hare flaws, demonstrating the significant impact of the problem, from stealing a user's voice notes, controlling the screen-unlock process and replacing Google Email's account settings to injecting messages into the Facebook app and Skype. We will also show a set of new techniques we developed for automatically detecting Hare flaws across different Android versions, which device manufacturers and other parties can use to secure their custom OSes. And we will provide guidance for avoiding this pitfall when building future systems.
DNS is an essential substrate of the Internet, responsible for translating user-friendly Internet names into machine-friendly IP addresses. Without DNS, navigating the Internet would be an impossible mission. As we have seen in recent years, DNS-based attacks launched by adversaries remain a constant lethal threat in various forms. The record-breaking 300 Gbps DNS amplification DDoS attack against Spamhaus, presented by Cloudflare at Black Hat 2013, is still vivid in our minds. Since then (in the last 3 years), thanks to the dark force's continuous innovations, the dark side of the DNS force has become far more pernicious. Today, the dark side is capable of assembling an unprecedented massive attacking force of unimaginable scale and magnitude. As an example, leveraging up to 10X of the Internet's domain names, a modern DNS-based attack can easily take down any powerful online service, disrupt well-guarded critical infrastructure, and cripple the Internet, despite all the existing security postures and hardening techniques we have developed and deployed.
In this talk, we will present and discuss an array of new secret weapons behind the emerging DNS-based attacks from the dark side. We will analyze the root causes of the recent surge in Internet domain counts, from 300 million a year ago to over 2 billion. Real use cases will illustrate the surge's impact on the Internet's availability and stability, especially during spikes of up to 5 billion domains. We will focus on the evolution of the random subdomain weapon, which generates a large number of queries for nonexistent fully qualified domain names such as 01mp5u89.arkhamnetwork.org and 01k5jj4u.arkhamnetwork.org to overload and knock down both authoritative name servers and cache servers along the query paths. Starting about five years ago as a simple, primitive tool used to disrupt competitors' gaming sites in order to win users in the Chinese online gaming community, the random subdomain attack has become one of the most powerful disruptive weapons available today. As attack targets move toward higher-profile and top-level domains, the random subdomain weapon has also become much more sophisticated, blending attack traffic with legitimate operations. Distinguishing bad traffic from benign traffic in a cost-effective manner is a challenge for the cyber security community.
We will address this challenge by dissecting the core techniques and mechanisms used to boost attack strength and to evade detection. We will discuss techniques such as multiple levels of random domains, mixed use of constant names and random strings, innovative use of timestamps as unique domain names, as well as local and global escalations. We will demonstrate and compare different solutions for the accurate detection and effective mitigation of random subdomain and other active, ongoing DNS-based attacks, including DNS tunneling for data exfiltration on some of the most restricted networks, made possible by the pervasiveness of DNS.
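To make the random-subdomain primitive described above concrete, here is a minimal sketch of the query generation it relies on. The apex domain and label length are arbitrary placeholders, not taken from any real attack tooling; the point is that each unique leftmost label defeats every cache along the query path.

```python
import random
import string

def random_subdomain(apex: str, label_len: int = 8) -> str:
    """Return a pseudo-random FQDN under the target apex domain.

    A query for a name like 01mp5u89.example.org misses every resolver
    cache and is forwarded to the authoritative server, which must
    answer (typically NXDOMAIN) for every unique name - that load is
    the attack.
    """
    alphabet = string.ascii_lowercase + string.digits
    label = "".join(random.choices(alphabet, k=label_len))
    return f"{label}.{apex}"

# A flood is just this in a loop; defenders can key detection on the
# resulting high NXDOMAIN rate and the entropy of the leftmost label.
queries = [random_subdomain("example.org") for _ in range(5)]
```

Variants mentioned in the talk, such as timestamps as labels or constant names mixed with random strings, are small changes to how the label is built.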
Cyber attackers have had the advantage for decades over defenders but we can and must change this with a more defensible cyberspace.
This talk describes the results of a recent task force to identify the top technologies, operational innovations and public policies that have delivered security at scale, enabling defenders to catch up with attackers. All of these innovations have one thing in common: a dollar of defense buys far more than a dollar of offense. Now that we've recognized what has been most effective, the community has to repeat these successes at hyperscale, and the talk gives recommendations for doing so.
The secure enclave processor (SEP) was introduced by Apple as part of the A7 SOC with the release of the iPhone 5S, most notably to support their fingerprint technology, Touch ID. SEP is designed as a security circuit configured to perform secure services for the rest of the SOC, with no direct access from the main processor. In fact, the secure enclave processor runs its own fully functional operating system - dubbed SEPOS - with its own kernel, drivers, services, and applications. This isolated hardware design prevents an attacker from easily recovering sensitive data (such as fingerprint information and cryptographic keys) from an otherwise fully compromised device.
Although almost three years have passed since its introduction, little is known about the inner workings of the SEP and its applications. The lack of public scrutiny in this space has consequently led to a number of misconceptions and false claims about the SEP.
In this presentation, we aim to shed some light on the secure enclave processor and SEPOS. In particular, we look at the hardware design and boot process of the secure enclave processor, as well as the SEPOS architecture itself. We also detail how the iOS kernel and the SEP exchange data using an elaborate mailbox mechanism, and how this data is handled by SEPOS and relayed to its services and applications. Last, but not least, we evaluate the SEP attack surface and highlight some of the findings of our research, including potential attack vectors.
Organizations often scale at a faster pace than their security teams. Therefore, security teams need to deploy automation that can scale their processes. When it comes to your organization, what criteria should decide the best approach to security automation? Are there simpler alternatives to building a complex, custom-built automation environment? Where do you deploy? Which tools do you need? How do you ensure that your implementation will effectively enable teams rather than just creating false positives at scale? This presentation will discuss criteria for designing and evaluating security automation tools for your organization. The goal is to provide audience members with effective small-scale and large-scale automation techniques for securing their environments.
With the proliferation of portable computing systems such as tablets, smartphones and Internet of Things (IoT) devices, ordinary users face an increasing burden to properly configure those devices so that they can work together. In response to this usability challenge, major device manufacturers and software vendors (e.g., Apple, Microsoft, Hewlett-Packard) tend to build their systems in a "plug-and-play" fashion, using techniques dubbed zero-configuration (ZeroConf). Such ZeroConf services are characterized by automatic IP selection, host name resolution and target service discovery. As the major proponent of ZeroConf techniques, Apple has adopted them in various frameworks and system services on iOS and OS X to minimize user involvement in system setup. However, when the design pendulum swings toward usability, concerns arise over whether the system has been adequately protected. In this presentation, we will report the first systematic study on the security implications of these ZeroConf techniques on Apple systems.
Our research brings to light a disturbing lack of security consideration in these systems' designs: major ZeroConf frameworks on the Apple platforms, including Multipeer Connectivity and Bonjour, are mostly unprotected, and system services such as printer discovery and AirDrop turn out to be completely vulnerable to impersonation or Man-in-the-Middle (MitM) attacks, even though attempts have been made to protect them against such threats. The consequences are serious, allowing a malicious device to steal documents to be printed out by other devices, or files transferred between other devices. Most importantly, our study highlights the fundamental security challenges underlying ZeroConf techniques. Some of the vulnerabilities remained unfixed as of this submission, although we reported them to Apple over half a year ago. We will introduce ZeroConf techniques and publish technical details of our attacks on Apple's ZeroConf techniques. Taking AirDrop, Bonjour and Multipeer Connectivity as examples, we will show the vulnerabilities in their design and implementation and how we hacked these ZeroConf frameworks and system services to perform MitM attacks. We will also show that some of the vulnerabilities stem from TLS's inability to secure device-to-device communication in the ZeroConf scenario, which is a novel discovery and contributes to the state of the art.
At every Black Hat you will inevitably hear hackers boasting that they can break into any company by dropping a malicious USB drive in the company's parking lot. This anecdote has even entered mainstream culture and was prominently featured in the Mr. Robot TV series. However, despite its popularity, there has been no rigorous study of whether the attack works or is merely an urban legend. To answer this burning question and assess the actual threat posed by malicious USB drives, we dropped nearly 300 USB sticks on the University of Illinois Urbana-Champaign campus and measured who plugged in the drives. And oh boy, how effective that was! Of the drives we dropped, 98% were picked up, and for 48% of them someone not only plugged in the drive but also clicked on files. Join us for this talk if you are interested in physical security and want to learn more about the effectiveness of arguably the best-known anecdote in our community. We will provide an in-depth analysis of which factors influence users to pick up a drive and why users plug them in, and we will demo a new tool that can help mitigate USB attacks.
This research focuses on determining the practical exploitability of software issues by means of crash analysis. The goal was not to automatically generate exploits, nor even to fully automate the entire process of crash analysis, but to provide a holistic, feedback-oriented approach that augments a researcher's efforts in triaging the exploitability and impact of a program crash (or fault). The result is a semi-automated crash analysis framework that can speed up the work of an exploit writer (analyst). Fuzzing, a powerful method for vulnerability discovery, keeps getting more popular across all segments of the industry, from developers to bug hunters. With fuzzing frameworks becoming more sophisticated (and intelligent), the task facing product security teams and exploit analysts of triaging the constant influx of bug reports and associated crashes received from external researchers has grown dramatically. Exploit writers are also facing new challenges: with the advance of modern protection mechanisms, bug bounties and high prices for vulnerabilities, their time to analyze a potential issue and write a working exploit is shrinking.
Given the need to improve the existing tools and methodologies in the field of program crash analysis, our research speeds up the handling of a vast corpus of crashes. We discuss existing problems and ideas, and present our approach, which is in essence a combination of backward and forward taint-propagation systems. The idea is to leverage both approaches and integrate them into one single framework that provides, at the moment of a crash, a mapping of the input areas that influence the crash situation and, from the crash on, an analysis of the potential capabilities for achieving code execution. We discuss the concepts and the implementation of two functional tools developed by the authors (one of which was previously released) and the benefits of integrating them. Finally, we demonstrate the use of the integrated tool (DPTrace, to be released as open source at Black Hat) on public vulnerabilities (zero-days at the time of their release), including a few that the authors themselves discovered, analyzed/exploited and reported.
With new Drone technologies appearing in the consumer space daily, Industrial Plant operators are being forced to rethink their most fundamental assumptions about Industrial Wireless and Cyber-Physical security. This presentation will cover Electronic Threats, Electronic Defensive measures, Recent Electronic jamming incidents, Latest Drone Threats and capabilities, defensive planning, and Electronic Attack Threats with Drones as delivery platform.
The security community knows the weak link is the human factor - from the project manager deciding that "security costs too much," to operations staff bypassing their own company's security measures, to the end user believing that nobody will ever guess that he is using his cat's name as a password, or a developer not following best practices.
We all arrive at the same conclusion - we need to train people on what is at stake in computer security. In the author's experience, standard security training focuses on the technical context (what a password is, how a computer works, etc.) and tends to bore or scare a neophyte audience.
This briefing will propose a new way to train a neophyte audience in the basic principles of computer security. The training is built around a role-playing game consisting of attacking and defending a building. A debriefing after the game highlights all the similarities between the game and real computer security stakes. The presentation will focus on the main features of the training, and a white paper explaining how to conduct such a training will be available.
Messages containing links to malware-infected websites represent a serious threat. Despite the numerous user education efforts, people still click on suspicious links and attachments, and their motivations for clicking or not clicking remain hidden. We argue that knowing how people reason about their clicking behavior can help the defenders in devising more effective protection mechanisms. To this end, we report the results of two user studies where we sent to over 1600 university students an email or a Facebook message with a link from a non-existing person, claiming that the link leads to the pictures from the party last week. When clicked, the corresponding webpage showed the "access denied" message. We registered the click rates, and later sent to the participants a questionnaire that first assessed their security awareness, and then asked them about the reasons for their clicking behavior.
When addressed by first name, 56% of email and 38% of Facebook recipients clicked. When not addressed by first name, 20% of email and 42.5% of Facebook recipients clicked. Respondents of the survey reported high awareness of the fact that clicking on a link can have bad consequences (78%). However, statistical analysis showed that this was not connected to their reported clicking behavior. By far the most frequent reason for clicking was curiosity about the content of the pictures (34%), followed by the explanations that the content or context of the message fits the current life situation of the person (27%), such as actually having been at a party with unknown people last week. Moreover, 16% thought that they know the sender. The most frequent reason for not clicking was unknown sender (51%), followed by the explanation that the message does not fit the context of the user (36%).
Therefore, it should be possible to make virtually any person click on a link, as any person will be curious about something, or interested in some topic, or find the message plausible because they know the sender, or because it fits their expectations (context). Expecting from the users error-free decision making under these circumstances seems to be highly unrealistic, even if they are provided with effective awareness training.
Moreover, while sending employees fake spear phishing messages from spoofed colleagues and bosses may increase their security awareness, it is also quite likely to have negative consequences in an organization. People's work effectiveness may decrease, as they will have to be suspicious of practically every message they receive. This may also seriously hamper social relationships within the organization, promoting the atmosphere of distrust. Thus, organizations need to carefully assess all pros and cons of increasing security awareness against spear phishing. In the long run, relying on technical in-depth defense may be a better solution, and more research and evidence is needed to determine the feasible level of defense that the non-expert users are able to achieve through security education and training.
Bluetooth Low Energy is probably the most thriving technology implemented recently in all kinds of IoT devices: gadgets, wearables, smart homes, medical equipment and even banking tokens. The BLE specification assures secure connections through link-layer encryption, device whitelisting and bonding - mechanisms not without flaws, although that's another story we are already aware of. A surprising number of devices do not (or simply cannot, because of the usage scenario) utilize these mechanisms. Security features such as authentication are, in fact, provided at the higher "application" (GATT protocol) layer of the data exchanged between the "master" (usually a mobile phone) and the peripheral device. The connection from the "master" in such cases is initiated by scanning for a specific broadcast signal, which by design can be trivially spoofed. And guess what - the device's GATT internals (so-called "services" and "characteristics") can also be easily cloned.
Using a few simple tricks, we can ensure the victim connects to our impersonator device instead of the original one, and then just proxy the traffic - without the consent of the mobile app or device. And here it finally becomes interesting - just imagine how many attacks you might be able to perform with the ability to actively intercept BLE communication! Based on several examples, I will demonstrate common exploitable flaws, including improper authentication, static passwords, not-so-random PRNGs, excessive services and bad assumptions - which allow you to take control of smart locks, disrupt a smart home, and even get a free lunch. I will also suggest best practices to mitigate the attacks. Ladies and gentlemen - I give you the BLE MITM proxy: a free open-source tool which opens a whole new chapter for your IoT device exploitation, reversing and debugging. Run it on a portable Raspberry Pi, carry it around BLE-packed premises, share your experience and contribute to the code.
My evil plot began by making small but seemingly helpful contributions to the GoodFET project, a line of code here, a simple add-on board there. Soon I was answering the occasional question on IRC or the mailing list, and I was in: commit rights!
I had chosen my prey carefully. GoodFET, the preferred open source tool of discriminating hardware hackers around the world, consisted of too many disparate hardware designs. It was full of terrific ideas and PoCs, but it was becoming unmaintainable. The Facedancer variant alone had at least three different and incompatible code bases! The hardware designs were easy to build one at a time but needlessly costly for volume manufacturing. The project was ripe for a takeover.
I struck when Travis Goodspeed was most vulnerable, his faculties diminished by the hordes of Las Vegas. He accepted my $5. GoodFET was mine!
With GoodFET in my control I moved quickly to replace the entire project with something superior, something greater! Today I unleash GreatFET!
Over the past year I have worked at understanding and breaking the new methods that ATM manufacturers have implemented to produce "next generation" secure ATM systems. This includes bypassing the anti-skimming/anti-shimming methods introduced in the latest generation of ATMs, along with long-range NFC attacks that allow real-time card communication from over 400 miles away. This talk will demonstrate how a $2,000 investment can perform unattended "cash outs," touching also on past failures in EMV implementations and how credit card data of the future will most likely be sold with the new EMV data - with a short life span. This talk will include a demonstration of "La-Cara," an automated cash-out machine that works on current EMV and NFC ATMs. "La-Cara" is an entire fascia placed on the machine to hide the auto PIN keyboard and flashable EMV card system that silently withdraws money from harvested card data. This system can cash out around $20,000-$50,000 in 15 minutes. With these methods revealed, we will be able to protect against similar types of attacks.
A recent security review by David Litchfield of Oracle's eBusiness Suite (fully patched) revealed it is vulnerable to a number of (unauthenticated) remote code execution flaws, a slew of SQL injection vulnerabilities and Cross Site Scripting bugs. Used by large corporations across the globe the question becomes how does one secure this product given its weaknesses. This talk will examine those weakness with demonstration exploits then look at how one can protect their systems against these attacks.
Incident response procedures differ in the cloud versus traditional, on-premise environments. The cloud offers the ability to respond to an incident by programmatically collecting evidence and quarantining instances, but with this programmatic ability comes the risk of a compromised API key. That risk can be mitigated, but proper configuration and monitoring must be in place.
The talk discusses the paradigm of Incident Response in the cloud and introduces tools to automate the collection of forensic evidence of a compromised host. It highlights the need to properly configure an AWS environment and provides a tool to aid the configuration process.
Cloud IR: How Is It Different?
Incident response in the cloud is performed differently than in on-premise systems. Specifically, in a cloud environment you cannot walk up to the physical asset, clone the drive with a write-blocker, or perform any other action that requires hands-on time with the system in question. Incident response best practices advise following predefined, practiced procedures when dealing with a security incident, but organizations moving infrastructure to the cloud may fail to realize the procedural differences in obtaining forensic evidence. Furthermore, while cloud providers produce documents on handling incident response in the cloud, these documents fail to address newly released features or services that can aid incident response or help harden cloud infrastructure. (1.)
A survey of AWS facilities for automation around IR
The same features in cloud platforms that create the ability to globally deploy workloads in the blink of an eye can also ease incident handling. An AWS user may establish API keys to use the AWS SDK to programmatically add or remove resources in an environment, scaling on demand. A savvy incident responder can use the same AWS SDK, or the AWS command-line tools, to leverage cloud services to facilitate the collection of evidence. For example, using the AWS command-line tools or the AWS SDK, a user can programmatically image the disk of a compromised machine with a single call. However, the power of the AWS SDK introduces a new threat in the event of an API key compromise.
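As an illustration of that single call, a minimal boto3 sketch (the function and both IDs are hypothetical; it is defined but not executed here, and assumes boto3 is installed and AWS credentials are configured) might look like:

```python
def snapshot_for_evidence(volume_id: str, case_id: str):
    """Snapshot an EBS volume of a compromised instance as evidence.

    Sketch only: assumes boto3 is installed and AWS credentials are
    configured; volume_id and case_id are hypothetical examples.
    """
    import boto3  # deferred import so the sketch loads without boto3
    ec2 = boto3.client("ec2")
    # One API call images the disk; the resulting snapshot can then be
    # shared with a forensic examiner account.
    return ec2.create_snapshot(
        VolumeId=volume_id,
        Description="IR evidence for case %s" % case_id,
    )
```

A responder would call, for example, `snapshot_for_evidence("vol-0abc...", "IR-001")` (IDs hypothetical).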
Increased Attack Surface via Convenience (walking through some compromise scenarios to illustrate)
There are many stories of users accidentally uploading their AWS keys to GitHub or another sharing service and then having to fight to regain control of the AWS account while their bill skyrockets. (2. 3.) And while these stories are sensational, they are preventable by placing limits on a cloud account directly. More concerning is the risk of a compromised key being used to access private data. A compromised API key without restrictions could access managed database, storage, or code repository services, to name a few. (4.) While the API key itself may not be used to access a targeted box, it is possible to use that key to clone a targeted box, and relaunch it with an attacker's SSH key, giving the attacker full access to the newly instantiated clone. While the consequences of a compromised API key can be dire, the risks can be substantially mitigated with proper configuration and monitoring.
Hardening of AWS Infrastructure
AWS environments can be hardened by following traditional security best practices and leveraging AWS services. Services like CloudTrail and Config should be used to monitor and configure an AWS environment. CloudTrail provides logging of AWS API invocations tied to a specific API key. AWS Config provides historical insight into the configuration of AWS resources, including users and the permissions granted in their policies.
API keys associated with AWS accounts should be delegated according to least privilege and therefore carry as few permissions in their policies as possible. Furthermore, API keys should be scoped to restrict access to only the resources they need. Managing these policies is made easier by the group and role constructs provided by AWS IAM, but it still leaves the user having to understand each of the 195 policies currently recognized by IAM.
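For illustration, a least-privilege policy of the following shape (the account ID and source IP range are documentation-style placeholders, not values from this talk) scopes a key down to exactly the snapshot call it needs, from a known network:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowForensicSnapshotsOnly",
      "Effect": "Allow",
      "Action": "ec2:CreateSnapshot",
      "Resource": [
        "arn:aws:ec2:us-east-1:111122223333:volume/*",
        "arn:aws:ec2:us-east-1::snapshot/*"
      ],
      "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}}
    }
  ]
}
```

A key carrying only this policy is of little use to an attacker who exfiltrates it, since it can neither read data services nor launch instances.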
Introduction of Tools
We present custom tooling so the entire incident response process can be automated based on certain triggers within the AWS account. With very little configuration, users can detect a security incident, acquire memory, take snapshots of disk images, quarantine the instance, and have the evidence presented to an examiner workstation, all in the time it takes to get a cup of coffee.
Additional tooling is presented to aid in the recovery of an AWS account should an AWS key be compromised. The tool attempts to rotate compromised keys, identify and remove rogue EC2 instances, and produce a report with next steps for the user.
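A hedged sketch of the key-rotation step (boto3 IAM API; the function name and arguments are hypothetical, and the function is defined but not executed here):

```python
def quarantine_access_key(user_name: str, access_key_id: str):
    """Deactivate a compromised IAM access key and issue a replacement.

    Sketch only: assumes boto3 is installed and valid credentials exist;
    arguments are hypothetical. Deactivating (rather than deleting) the
    key preserves it for the incident timeline while cutting off the
    attacker.
    """
    import boto3  # deferred import: defined here, not executed
    iam = boto3.client("iam")
    iam.update_access_key(
        UserName=user_name, AccessKeyId=access_key_id, Status="Inactive"
    )
    return iam.create_access_key(UserName=user_name)["AccessKey"]
```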
Finally, we present a tool that examines an existing AWS environment and aids in configuring that environment to a hardened state. The tool recommends services to enable, permissions to remove from user accounts, and metrics to collect.
We discuss Incident Response in the cloud and introduce tools to automate the collection of forensic evidence of a compromised host. We highlight the need to properly configure an AWS environment and provide tools to aid the configuration process.
1. AWS Security Resources. N.p., n.d. Web. 10 Apr. 2016.
2. Example AWS Key Compromises. Ed. Soulskill. N.p., n.d. Web. 10 Apr. 2016.
3. IT News Article on AWS Keys. N.p., n.d. Web. 10 Apr. 2016.
4. AWS Console Breach CloudSpaces. N.p., n.d. Web. 10 Apr. 2016.
Over the last few years, a worrying number of attacks against SSL/TLS and other secure channels have been discovered. Fortunately, at least from a defender's perspective, these attacks require an adversary capable of observing or manipulating network traffic. This has prevented wide and easy exploitation of these vulnerabilities. In contrast, we introduce HEIST, a set of techniques that allows us to carry out attacks against SSL/TLS purely in the browser. More generally, and surprisingly, with HEIST it becomes possible to exploit certain flaws in network protocols without having to sniff actual traffic. HEIST abuses weaknesses and subtleties in the browser, and the underlying HTTP, SSL/TLS, and TCP layers. Most importantly, we discover a side-channel attack that leaks the exact size of any cross-origin response. This side channel abuses the way responses are sent at the TCP level. Combined with the fact that SSL/TLS lacks length-hiding capabilities, HEIST can directly infer the length of the plaintext message. Concretely, this means that compression-based attacks such as CRIME and BREACH can now be performed purely in the browser, by any malicious website or script, without requiring network access. Moreover, we also show that our length-exposing attacks can be used to obtain sensitive information from unwitting victims by abusing services on popular websites. Finally, we explore the reach and feasibility of exploiting HEIST. We show that attacks can be performed on virtually every web service, even when HTTP/2 is used. In fact, HTTP/2 allows for more damaging attack techniques, further increasing the impact of HEIST. In short, HEIST is a set of novel attack techniques that brings network-level attacks to the browser, posing an imminent threat to our online security and privacy.
What if we took the underlying technical elements of Linux containers and used them for evil? The result: a new kind of rootkit, which is even able to infect and persist in systems with UEFI Secure Boot enabled, thanks to the way almost every Linux system boots. This works without a malicious kernel module, and therefore works even when kernel module signing is used to prevent loading of unsigned kernel modules. The infected system has a nearly invisible backdoor that can be remote-controlled via a covert network channel.
Hope is not lost, however! Come to the talk and see how the risk can be eliminated/mitigated. While this may poke a stick in the eye of the current state of boot security, we can fix it!
The widespread demand for online privacy, also fueled by widely-publicized demonstrations of session hijacking attacks against popular websites (see Firesheep), has spearheaded the increasing deployment of HTTPS. However, many websites still avoid ubiquitous encryption due to performance or compatibility issues. The prevailing approach in these cases is to force critical functionality and sensitive data access over encrypted connections, while allowing more innocuous functionality to be accessed over HTTP. In practice, this approach is prone to flaws that can expose sensitive information or functionality to third parties. In this work, we conduct an in-depth assessment of a diverse set of major websites and explore what functionality and information is exposed to attackers that have hijacked a user's HTTP cookies. We identify a recurring pattern across websites with partially deployed HTTPS: service personalization inadvertently results in the exposure of private information. The separation of functionality across multiple cookies with different scopes and inter-dependencies further complicates matters, as imprecise access control renders restricted account functionality accessible to non-session cookies. Our cookie hijacking study reveals a number of severe flaws: attackers can obtain the user's home and work address and visited websites from Google; Bing and Baidu expose the user's complete search history; and Yahoo allows attackers to extract the contact list and send emails from the user's account. Furthermore, e-commerce vendors such as Amazon and eBay expose the user's purchase history (partial and full, respectively), and almost every website exposes the user's name and email address. Ad networks like DoubleClick can also reveal pages the user has visited. To fully evaluate the practicality and extent of cookie hijacking, we explore multiple aspects of the online ecosystem, including mobile apps, browser security mechanisms, extensions and search bars.
To estimate the extent of the threat, we run IRB-approved measurements on a subset of our university's public wireless network for 30 days, and detect over 282K accounts exposing the cookies required for our hijacking attacks. We also explore how users can protect themselves and find that, while mechanisms such as the EFF's HTTPS Everywhere extension can reduce the attack surface, HTTP cookies are still regularly exposed. The privacy implications of these attacks become even more alarming when considering how they can be used to deanonymize Tor users. Our measurements suggest that a significant portion of Tor users may currently be vulnerable to cookie hijacking.
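For context on the defense side, the standard mitigation for these exposures is to mark session cookies Secure (and HttpOnly) so the browser never sends them over plain HTTP. With Python's standard library, setting the flags looks like this (the cookie name and value are illustrative):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "opaque-token"     # value is illustrative
cookie["session"]["secure"] = True     # only ever sent over HTTPS
cookie["session"]["httponly"] = True   # not readable from page scripts
header = cookie.output()
print(header)  # a Set-Cookie header carrying the Secure and HttpOnly flags
```

Partial HTTPS deployments often cannot set Secure on personalization cookies, which is exactly the gap the attacks above exploit.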
The meteoric rise of SPDY, HTTP/2, and QUIC has gone largely unremarked upon by most of the security field. QUIC is an application-layer UDP-based protocol that multiplexes connections between endpoints at the application level, rather than the kernel level. HTTP/2 (H2) is a successor to SPDY, and multiplexes different HTTP streams within a single connection. More than 10% of the top 1 million websites are already using some of these technologies, including many of the 10 highest-traffic sites. Whether you multiplex out across connections with QUIC, or multiplex into fewer connections with HTTP/2, the world has changed. We have a strong sensation of déjà vu with this work and our 2014 Black Hat USA MPTCP research. We find ourselves discussing a similar situation in new protocols, with technology stacks evolving faster than ever before, and network security is largely unaware of the peril already upon it. This talk briefly introduces QUIC and HTTP/2, covers multiplexing attacks beyond MPTCP, discusses how you can use these techniques over QUIC and within HTTP/2, and discusses how to make sense of and defend against H2/QUIC traffic on your network. We will also demonstrate, and release, some tools with these techniques incorporated.
A decompression bomb attack is relatively simple to perform --- but can be completely devastating to developers who have not taken the time to properly guard their applications against this type of denial of service. The decompression bomb is not a new attack - it's been around since at least 1996 - but unfortunately such bombs are still horrifyingly common. The stereotypical bomb is the zip bomb, but in reality nearly any compression algorithm can provide fruit for this attack (images, HTTP streams, etc.). Which algorithms have the highest compression ratios, the sloppiest parsers, and make for the best bomb candidates? This talk is about an ongoing project to answer that question. In addition to the compression algorithm audit, this research is generating a vast library of tools ("bombs") that can be used by security researchers and developers to test for this vulnerability in a wide variety of applications/protocols. These bombs are being released under an open-source license.
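The core of the attack, and the guard against it, can be sketched with Python's zlib: a bounded decompressor refuses to expand past a caller-chosen output cap (the 1 MB cap here is an arbitrary example, not a recommendation):

```python
import zlib

def safe_decompress(data: bytes, max_output: int = 1 << 20) -> bytes:
    """Decompress, refusing to expand beyond max_output bytes."""
    d = zlib.decompressobj()
    out = d.decompress(data, max_output)
    if d.unconsumed_tail:  # input left over: the output cap was hit
        raise ValueError("possible decompression bomb: output cap exceeded")
    return out

# A toy bomb: 10 MB of zeros shrinks to roughly 10 KB of zlib data,
# so naive decompression amplifies the attacker's input ~1000x.
bomb = zlib.compress(b"\x00" * (10 * 1024 * 1024))
print("compressed size:", len(bomb))

try:
    safe_decompress(bomb, max_output=1 << 20)  # 1 MB cap
except ValueError as exc:
    print("blocked:", exc)
```

Real bombs use formats with far higher ratios (and nested layers), but the defense is the same: never trust the declared size, and bound the output, not the input.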
The Internet of Things is becoming a reality, and more and more devices are being introduced into the market every day. With this, the demand for technology that would ease device management, improve device security, and facilitate data analytics increases as well.
One such technology is Windows 10 IoT Core, Microsoft's operating system aimed at small footprint, low cost devices. It offers device servicing and manageability, enterprise grade security, and - combined with Microsoft's Azure platform - data analytics in the cloud. Given these features, Microsoft Windows 10 IoT Core will likely play a significant role in the future of IoT. As such, understanding how this operating system works on a deep level is becoming important. Methods and techniques that would aid in assessing its security are also becoming essential.
In this talk I will first discuss the internals of the OS, including the security features and mitigations that it shares with the desktop edition. I will then enumerate the attack surface of a device running Windows 10 IoT Core as well as its potential susceptibility to malware. I will also talk about methods to assess the security of devices running Windows 10 IoT Core such as static/dynamic reverse engineering and fuzzing. I will end the talk with some recommendations on how to secure a Windows 10 IoT Core device.
Today's software needs to isolate not only processes but the many components *within* a process from each other. Process-level isolation via jails, sandboxes, VMs, or hypervisors is finally becoming mainstream, but it misses an important point about modern software: its growing number of libraries that are all loaded into the same address space, and may all interact with complex inputs by way of vulnerable parsers. A process, even isolated, is as weak as the weakest of its components, but is as valuable as the most sensitive data it holds. Heartbleed was a perfect example of this: a faulty parser in a library could read absolutely everything in memory; there are many others less famous but no better. The biggest challenge of making intra-process memory protection practical is that it cannot require major changes to how software is written. A practical granular memory protection scheme must work with the existing C/C++ build chains and must not change the ABI. Further, it cannot rely on concepts that aren't already intuitively clear to C/C++ programmers. Many academic proposals for more granular memory access control stopped short of this. They disregard the glue that keeps the development process and runtime together: the ABI.
We demonstrate ELFbac, a system that uses the Linux ELF ABI to express access control policies between a program's components, such as libraries, and requires no changes to the GNU build chain. It enforces these policies by using a modified Linux loader and the Linux virtual memory system. ELFbac policies operate on the level of ELF object file sections. Custom data and code units can be created with existing GCC C/C++ attributes with a one-line annotation per unit; they are no more complex than C's static scoping. We have developed prototypes for ARM and x86. We used our ARM prototype to protect a validating proxy firewall for DNP3, a popular ICS protocol, and our x86 one to write a basic policy for Nginx. We will also demonstrate a policy for protecting OpenSSH.
DDoS attack usage has been accelerating, in terms of both attack volume and frequency. Such attacks present a major threat to enterprises worldwide. Presenters will discuss a number of novel techniques utilized by law enforcement and the private sector to measure, study, and attribute attacks originating from sources such as embedded-device botnets and booter/stresser services. Presenters will discuss the usage of honeypots to gather historical attack details, as well as best practices for conducting live DDoS attack testing. Representative PCAPs will be shown, dissected, and explained. Finally, presenters will provide examples of where these services are offered for sale, how they are purchased, and the individuals who operate them.
Over the past decade, the Islamic Republic of Iran has been targeted by continual intrusion campaigns from foreign actors that sought access to the country's nuclear facilities, economic infrastructure, military apparatus, and governmental institutions for the purpose of espionage and coercive diplomacy. Similarly, since the propagandistic defacements of international communications platforms and political dissident sites conducted by an organization describing itself as the "Iranian Cyber Army" beginning in late 2009, Iranian actors have been attributed to recurrent campaigns of intrusions and disruptions against private companies, foreign government entities, domestic opposition, regional adversaries and international critics. The intent of these CNO activities is not always discernible from the tactics used or the data accessed, as the end implications of the disclosure of particular information are often distant and concealed. Where such intent is made evident, the reasons for Iranian intrusion campaigns range from retaliatory campaigns against adversaries, as a result of identifiable grievances, to surveillance of domestic opposition in support of the Islamic Republic establishment. Iranian intrusion campaigns have also reflected an interest in internal security operations against active political movements that have historically advocated for the secession of ethnic minority provinces or the overthrow of the political establishment through violence. However, Iranian intrusion sets appear to be primarily interested in a broader field of challenges to the political and religious hegemony of the Islamic Republic. Previous reports on Iranian campaigns have referred to the targeting of Iranian dissidents. However, in practice those targeted range from reformists operating within the establishment from inside of Iran to former political prisoners forced out of the country.
Across the records of hundreds of intrusion attempts from campaigns conducted by distinct sets of actors, clear patterns emerge in the types of individuals and organizations targeted by Iranian internal security operations: high-profile individuals and organizations, such as journalists, human rights advocates or political figures, with extensive relationships and networks inside of Iran; members of the diplomatic establishment of Iran, and former governmental officials under previous administrations; adherents to non-Shia religions, participants in ethnic rights movements, or members of anti-Islamic Republic political organizations; academics or public policy organizations critical of the Iranian government; cultural figures that promote values contrary to the interpretation of Islamic values promoted by the establishment; organizations fostering international collaboration and connections with the current Iranian administration; and international organizations conducting political programmes focused on Iran through funding by governmental agencies. In this presentation we will analyze in depth the results of several years of research and investigation into the intrusion activities of Iranian threat actors, particularly those engaged in attacks against members of civil society.
An assembler is an application that compiles a string of assembly code and returns instruction encodings. An assembler framework allows us to build new tools, and is a fundamental component in the Reverse Engineering (RE) toolset. However, a good assembler framework has been sorely missed since the ice age! Indeed, there is no single multi-architecture, multi-platform and open source framework available, and the whole RE community is badly suffering from this lingering issue.
We have decided to step up again to solve this challenge once and for all. We built Keystone, an assembler engine with unparalleled features:
- Multi-architecture, with support for Arm, Arm64 (AArch64/Armv8), Hexagon, Mips, PowerPC, Sparc, SystemZ, & X86 (including 16/32/64-bit).
- Clean/simple/lightweight/intuitive architecture-neutral API.
- Implemented in C/C++ languages, with bindings for Python, NodeJS, Ruby, Go & Rust available.
- Native support for Windows & *nix (with Mac OS X, Linux, *BSD & Solaris confirmed).
- Thread-safe by design.
- Open source.
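As a sketch of what driving such an engine looks like from the Python binding (constant and method names per the Keystone documentation; the wrapper function is our own, defined but not executed here):

```python
def assemble_x64(code: str) -> bytes:
    """Assemble a string of x86-64 assembly into its instruction encoding.

    Sketch only: assumes the keystone-engine Python binding is installed;
    the function is defined here, not executed.
    """
    from keystone import Ks, KS_ARCH_X86, KS_MODE_64  # deferred import
    ks = Ks(KS_ARCH_X86, KS_MODE_64)
    encoding, count = ks.asm(code)  # (list of encoded bytes, #instructions)
    return bytes(encoding)
```

For example, `assemble_x64("inc rax; ret")` should return the encoding `b'\x48\xff\xc0\xc3'`.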
This talk will introduce some existing assembler frameworks, go into the details of their design/implementation, and explain their current issues. Next, we will present the architecture of Keystone and the challenges of designing and implementing it. The audience will understand the advantages of our engine and see why its future is assured, so that Keystone will keep getting better and stronger and become the ultimate assembler engine of choice for the security community.
Keystone aims to lay the groundwork for innovative work and open up new opportunities for the future of security research and development. To conclude the talk, some new advanced RE tools built on top of Keystone will be introduced to demonstrate its power.
Keystone has a homepage at http://www.keystone-engine.org. Full source code of our engine will be released at Black Hat USA 2016.
The prevalence of human interactive components of serious system breaches continues to be a problem for every organization. Humans are the biggest vulnerability in any security system; helping people identify social engineering attempts over the phone will be cheaper and more effective than yet another technological implementation. At minimum it will add an important and necessary layer to defense in depth.
Forensic linguistics is the study of language as evidence for the law. It is a relatively new field and has not previously been applied to cybersecurity. Linguistic analysis uncovers several features of language interaction in a limited data set (recorded IRS phone scammers) that begin to answer how forensic linguistics could assist in cybersecurity defense.
This presentation will briefly introduce and explain polar tag questions, topic control, question deferral, and irregular narrative constructions in IRS scam phone calls, and offer some starting points for identifying such linguistic properties during the course of a phone call to help improve defense at the human level. We think this is only the beginning of applying forensic linguistics to cybersecurity.
Many industries provide consumers with data about the quality, content, and cost of ownership of products, but the software industry leaves consumers with very little data to act upon. In fact, when it comes to how secure or weak a product is, there is no meaningful consumer-facing data. There has long been a call for the establishment of an independent organization to address this need.
Last year, Mudge (from DARPA, Google, and L0pht fame) announced that after receiving a phone call from the White House he was leaving his senior position inside Google to create a non-profit organization to address this issue. This effort, known as CITL, is akin to Consumer Reports in its methodologies. While the media has dubbed it a "CyberUL", there is no focus on certifications or seals of approval, and no opaque evaluation metrics. Rather, like Consumer Reports, the goal is to evaluate software according to metrics and measurements that allow quantitative comparison and evaluation by anyone from a layperson, CFO, to security expert.
How? A wide range of heuristics that attackers use to identify which targets are hard or soft against new exploitation has been codified, refined, and enhanced. Some of these techniques are quite straightforward and even broadly known, while others are esoteric tradecraft. To date, no one has applied all of these metrics uniformly across an entire software ecosystem and shared the results. For the first time, a peek at the Cyber Independent Testing Lab's metrics, methodologies, and preliminary results from assessing the software quality and inherent vulnerability in over 100,000 binary applications on Windows, Linux, and OS X will be revealed. All accomplished with binaries only.
Sometimes the more secure product is actually the cheaper one, and quite often the security product is the most vulnerable. There are plenty of surprises like these that are finally revealed through quantified measurements. With this information, organizations and consumers can finally make informed purchasing decisions when it comes to the security of their products, and measurably realize more hardened environments. Insurance groups are already engaging CITL, as are organizations focused on consumer safety. Vendors will see how much better or worse their products are in comparison to their competitors. Even exploit developers have demonstrated that these results enable bug-bounty arbitrage.
That recommendation you made to your family members last holiday about which web browser they should use to stay safe (or that large purchase you made for your industrial control systems)? Well, you can finally see if you chose a hard or soft target… with the data to back it up.
The relocation of systems and services into cloud environments is on the rise. Because of this trend, users lose direct control over their machines and depend on the services offered by cloud providers. In the field of digital forensics especially, these services are very rudimentary. The possibilities for users to analyze their virtual machines with forensic methods are very limited. In the research underlying this talk, a practical approach has been developed that gives the user additional capabilities in the field of forensic investigations. The solution focuses on a memory forensic service offering. To reach this goal, a management solution for cloud environments has been extended with memory forensic services. Self-developed memory forensic services, which are installed on each cloud node and managed through the cloud management component, are the basis for this solution. Forensic data is gained via virtual machine introspection techniques. Compared to other approaches, it is possible to get trustworthy data without influencing the running system. Additionally, a general overview of the underlying technologies is provided and the pros and cons are discussed. The solution approach is discussed in a generic way and practically implemented in a prototype. In this prototype, OpenNebula is used for managing the cloud infrastructure, in combination with Xen as the virtualization component, LibVMI as the virtual machine introspection library and Volatility as the forensic tool.
Our work rebuilds the obfuscators for six notorious exploit kit families (Angler, Nuclear, Rig, Magnitude, Neutrino, SweetOrange). We will discuss our design for implementing the obfuscator used by each exploit kit family, and evaluate how similar our obfuscator is to the real one. We would also like to open-source our obfuscators to benefit research that aims to provide better protection of the cyber-world. We performed a series of experiments based on our obfuscators. With the obfuscator in hand, we are able to generate more samples than we have ever observed, even ones that haven't been created by the real exploit kits. We also simulate the evolution of the obfuscator in each exploit kit family by building each new version upon the previous one. We derived some patterns in how obfuscators evolve and attempt to predict what the next obfuscator variation could be. We also noticed that the current variation-naming convention may not properly reflect variations of an exploit kit. Currently, people name a new variation of an unknown sample by checking whether it shares a similar structure with existing samples. However, our experience shows that even a minor configuration-file change in the obfuscator can significantly change the obfuscated page. Therefore, we propose using the actual change to the obfuscator as the evidence for naming a new variation. We also evaluate how strongly the obfuscator amplifies its own changes into changes in the obfuscated page.
We investigate nonce-reuse issues with the Galois/Counter Mode (GCM) algorithm as used in TLS. Nonce reuse in GCM allows an attacker to recover the authentication key and forge messages as described by Joux. With an Internet-wide scan we identified over 70,000 HTTPS servers that are at risk of nonce reuse. We also identified 184 HTTPS servers repeating nonces directly in a short connection. Affected servers include large corporations, financial institutions, and a credit card company. We implement a proof of concept attack allowing us to violate the authenticity of affected HTTPS connections and inject content.
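The confidentiality half of nonce reuse can be illustrated with a standard-library stand-in for GCM's CTR-mode keystream (SHA-256 in place of AES, purely for demonstration; Joux's authentication-key recovery is a separate, stronger consequence): with a repeated nonce, the keystream cancels out of the XOR of two ciphertexts.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Counter-mode keystream stand-in (SHA-256 here; GCM uses AES-CTR)."""
    blocks, counter = [], 0
    while sum(len(b) for b in blocks) < length:
        blocks.append(
            hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        )
        counter += 1
    return b"".join(blocks)[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"long-term-key"
nonce = b"repeated-nonce"                # the bug: same nonce used twice
pt1 = b"transfer $100 to alice"
pt2 = b"transfer $999 to carol"
ct1 = xor(pt1, keystream(key, nonce, len(pt1)))
ct2 = xor(pt2, keystream(key, nonce, len(pt2)))

# The shared keystream cancels: an eavesdropper learns pt1 XOR pt2,
# and any known plaintext immediately reveals the other message.
assert xor(ct1, ct2) == xor(pt1, pt2)
assert xor(xor(ct1, ct2), pt1) == pt2
```

In TLS-GCM the explicit per-record nonce makes exactly this repetition observable on the wire, which is how the vulnerable servers above were identified.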
Documents containing executable files are often used in targeted email attacks in Japan. We examine various document formats (Rich Text Format, Compound File Binary and Portable Document Format) for files used in targeted attacks from 2009 to 2012 in Japan. Almost all the examined document files contain executable files that ignore the document file format specifications. Therefore, we focus on deviations from file format specifications and examine stealth techniques for hiding executable files. We classify eight anomalous structures and create a tool named o-checker to detect them. O-checker detects 96.1% of the malicious files used in targeted email attacks in 2013 and 2014. There are far fewer stealth techniques than vulnerabilities of document processors. Additionally, document file formats are more stable than document processors themselves. Accordingly, we assert that o-checker can continue detecting malware with a high detection rate for long periods.
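A much-simplified stand-in for this kind of detection (o-checker's real checks are based on deviations from format specifications; this sketch only scans for a plausible embedded MZ/PE header) can be written as:

```python
import struct

def find_embedded_executables(data: bytes) -> list:
    """Return offsets of byte patterns that look like embedded PE files.

    Simplified heuristic: an 'MZ' header whose e_lfanew field points at
    a 'PE\\x00\\x00' signature inside the document.
    """
    hits = []
    i = data.find(b"MZ")
    while i != -1:
        if len(data) >= i + 0x40:
            (e_lfanew,) = struct.unpack_from("<I", data, i + 0x3C)
            pe = i + e_lfanew
            if 0 < e_lfanew < 0x1000 and data[pe:pe + 4] == b"PE\x00\x00":
                hits.append(i)
        i = data.find(b"MZ", i + 1)
    return hits

# A fake RTF document with a minimal PE header smuggled after the text.
stub = bytearray(0x100)
stub[0:2] = b"MZ"
stub[0x3C:0x40] = (0x80).to_bytes(4, "little")   # e_lfanew -> 0x80
stub[0x80:0x84] = b"PE\x00\x00"
doc = b"{\\rtf1 hello}" + bytes(stub)
print(find_embedded_executables(doc))  # → [13]
```

Signature scanning alone is easy to evade with encoding, which is why o-checker's spec-deviation approach (flagging bytes a valid document has no reason to contain) is more durable.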
Open source software (OSS) usage is on the rise, and it also continues to be a major source of risk for companies. OSS and third-party code may be inexpensive to use for building products, but they come with significant liability and maintenance costs. Even after high-profile vulnerabilities in OpenSSL and other critical libraries, tracking and understanding exposure continues to challenge even the most mature enterprise companies. It doesn't matter if you are a software vendor or not; development and the use of OSS in your organization is most likely significant. It also doesn't matter if you have been developing software for years or are just getting started, or whether you have one product or one hundred: it can feel nearly impossible to keep up with OSS vulnerabilities or, more importantly, to ensure they are properly mitigated.
This presentation looks at the real risk of using OSS and the best way to manage its use within your organization, and more specifically within the product development lifecycle. We will examine all the current hype around OSS and separate out what the real risks are and what organizations should be most concerned about. We explore the true cost of using OSS and review the various factors that can be used to evaluate whether a particular product or library should be used at your organization, including vulnerability metrics such as time to patch. Getting your head wrapped around the issues and the need to improve OSS security is challenging, and taking action at your organization can feel impossible. This presentation provides several real-world examples that have been successful, including a case study of a single third-party library's vulnerability across several products, which shows why investigating actual impact on your different products yields valuable intelligence. We will share lessons from the incident response function and explain why understanding the vulnerabilities in your current software can give you valuable insight into creating smarter products and avoiding maintenance costs. Finally, we will introduce a customized OSS maturity model and walk through the stages of maturity for organizations developing software, with regard to how they prioritize and internalize the risk presented by OSS.
The Xen Project has been a widely used virtualization platform powering some of the largest clouds in production today.
Sitting directly on the hardware below any operating systems, the Xen hypervisor is responsible for the management of CPU/MMU and guest operating systems.
Guest operating systems can be configured to run in PV mode using paravirtualization technologies or in HVM mode using hardware-assisted virtualization technologies.
Compared to HVM mode, a PV-mode guest OS kernel is aware of the hypervisor and works with it via hypervisor interfaces called hypercalls. When performing privileged operations, a PV-mode guest OS submits requests via hypercalls, and the hypervisor performs these operations on its behalf after verifying the requests.
Inspired by Ouroboros, an ancient symbol with a snake bitting its tail, our team has found a critical verification bypass bug in Xen hypervisor and that will be used to tear the hypervisor a hole. With sepecific exploition vectors and payloads, malicious PV guest OS could control not only the hypervisor but also all other guest operating systems running on current platform.
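The verification step at stake can be pictured with a minimal, purely illustrative sketch: a hypervisor-side check that a PV guest's page-table update request does not grant a writable mapping of hypervisor memory. All names and constants here (`struct mmu_update_req`, `HYPERVISOR_MFN_LIMIT`, `PTE_WRITABLE`) are invented for illustration and do not reflect Xen's actual code; the bug described in the talk is a bypass of checks of this kind.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy model: machine frames below HYPERVISOR_MFN_LIMIT belong to the
 * hypervisor. A PV guest's page-table update request must never be
 * allowed to map them writably. All names here are illustrative. */
#define HYPERVISOR_MFN_LIMIT 0x1000u
#define PTE_WRITABLE (1u << 1)

struct mmu_update_req {
    uint64_t mfn;   /* machine frame the guest wants to map */
    uint32_t flags; /* requested page-table entry flags     */
};

/* The verification a PV hypercall handler must perform: reject any
 * request that would give the guest a writable mapping of hypervisor
 * memory. Bypassing a check like this hands the guest the host. */
bool verify_mmu_update(const struct mmu_update_req *req)
{
    if (req->mfn < HYPERVISOR_MFN_LIMIT && (req->flags & PTE_WRITABLE))
        return false; /* would map hypervisor memory writably */
    return true;
}
```

A real handler validates many more properties (frame ownership, type counts, recursive mappings); the point is only that a single missed case in such a verifier collapses the PV isolation boundary.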
Memory deduplication, a well-known technique to reduce the memory footprint across virtual machines, is now also a default-on feature inside the Windows 10 operating system. Deduplication maps multiple identical copies of a physical page onto a single shared copy with copy-on-write semantics. As a result, a write to such a shared page triggers a page fault and is thus measurably slower than a write to a normal page.
Pangu 9, the first (and only) untethered jailbreak tool for iOS 9, exploited a sequence of vulnerabilities in iOS userland to achieve arbitrary code execution in the kernel and a persistent code signing bypass. Although these vulnerabilities were fixed in iOS 9.2, no details were disclosed. This talk will reveal the internals of Pangu 9. Specifically, this talk will first present a logical error in a system service that is exploitable by any container app through XPC communication to gain arbitrary file read/write as mobile. Next, this talk will explain how Pangu 9 gains arbitrary code execution outside the sandbox through the system debugging feature. This talk will then elaborate on a vulnerability in the process of loading the dyld_shared_cache file that enables Pangu 9 to achieve a persistent code signing bypass. Finally, this talk will present a vulnerability in the backup-restore process that allows apps signed by a revoked enterprise certificate to execute without the user's explicit approval of the certificate.
Nowadays malware authors employ multiple obfuscation and packing techniques to hinder reverse engineering and bypass anti-virus (AV) signature-based analysis. This is a significant threat to end users' PCs, since it voids part of the AV analysis, and it is also a problem for professional reverse engineers, who have to invest a lot of time to unpack and study a single packed malware sample. The problem of unpacking is well studied in the literature, and several works have been proposed both for enhancing end-user protection and for supporting malware analysts in their work. Different approaches exist for building a generic unpacker: debuggers, kernel modules, hypervisor modules, and dynamic binary instrumentation (DBI). In this thesis we explore the possibility of exploiting a DBI framework, since it provides functionality that is very useful during analysis: it allows instruction-level inspection and modification through high-level APIs, giving the analyst full control of the program being instrumented. Our system can extract and reconstruct the original program from a packed version of it, helping and speeding up the analysis of an obfuscated binary. Packers employ different techniques with various levels of complexity, but all of them must share one common behavior during run-time unpacking: they have to write new code to memory and eventually execute it. Starting from this, we have designed a generic unpacking algorithm that can correctly detect this behavior and defeat the most popular packing techniques. Not only can the packing strategy differ greatly, but the obfuscation can be increased by hiding the functions imported by the program, which are usually a valuable source of information during reverse engineering. These are known in the literature as Import Address Table (IAT) obfuscation techniques.
Our tool tries to reconstruct a working PE from its packed version, taking care of modern packing techniques such as unpacking into dynamically allocated memory areas, and tries to defeat the most commonly used IAT obfuscation techniques.
In order to validate our work, we conducted two experiments. The first demonstrates the generality of our unpacking process with respect to fifteen different packers. The second demonstrates the effectiveness of our system against malware samples packed with both known and unknown packers. Our system was able to reconstruct a working unpacked binary for 63% of the collected samples. When it is not possible to reconstruct a fully working PE, we provide all the memory dumps representing the unpacked program, along with a log of the unpacking process, which can be very useful to a malware analyst in speeding up his work, as it was for us during the development of this tool. The source code of our tool can be found at https://github.com/Seba0691/PINdemonium.
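The write-then-execute behavior that the generic unpacking algorithm relies on can be sketched as a small piece of bookkeeping logic; a DBI tool would invoke hooks like these on every memory write and control transfer. This is an illustrative model with a fixed-size range table, not PINdemonium's actual code.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Core idea: record every address range the instrumented program writes,
 * and flag the moment control transfers into a previously written range.
 * That write-then-execute pattern is the one behavior all run-time
 * packers share. */
#define MAX_RANGES 128

struct written { uint64_t lo, hi; };
static struct written ranges[MAX_RANGES];
static size_t nranges;

/* Called by the instrumentation on every memory write. */
void on_write(uint64_t addr, uint64_t len)
{
    if (nranges < MAX_RANGES) {
        ranges[nranges].lo = addr;
        ranges[nranges].hi = addr + len;
        nranges++;
    }
}

/* Called on every control transfer; true means previously written code
 * is now executing, i.e. the point to dump memory and rebuild the PE. */
bool on_execute(uint64_t pc)
{
    for (size_t i = 0; i < nranges; i++)
        if (pc >= ranges[i].lo && pc < ranges[i].hi)
            return true;
    return false;
}
```

A production unpacker would coalesce ranges, handle multiple unpacking layers, and clear entries when memory is freed; the detection condition itself stays this simple.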
We will present and demonstrate the first PLC-only worm. Our PLC worm will scan for and compromise Siemens Simatic S7-1200 v1-v3 PLCs without any external support. No PCs or additional hardware are required. The worm is fully self-contained and "lives" only on the PLC. The Siemens Simatic PLCs are managed using a proprietary Siemens protocol. Using this protocol, the PLC may be stopped and started, and diagnostic information may be read. Furthermore, this protocol is used to upload and download user programs to the PLC. The older S7-300 and S7-400 PLCs are supported by several open-source solutions implementing the protocols used on these older PLCs. With the introduction of the S7-1200, the protocol was replaced by a new version. We inspected the protocol on the S7-1200v3 and implemented it ourselves. We are now able to install and extract any user program on these PLCs currently sold by Siemens. The current versions, S7-1200v4 and S7-1500, again changed the protocol and are not susceptible to the attack.
Based on this work, we developed a PLC program that scans a local network for other S7-1200v3 PLCs. Once these are found, the program compromises them by uploading itself to the devices. The already installed user software is not removed and keeps running on the PLC; our malware attaches itself to the original software and runs in parallel with the original user program, so the operator does not notice any changed behavior. We developed the first PLC-only worm: it is written entirely in the programming language SCL and does not need any additional support. For the remote administration of the compromised PLCs, we implemented a Command&Control server. Infected PLCs automatically contact the C&C server and may be remotely controlled over this connection. Using this connection, we can manipulate any physical input or output of the PLC. An additional proxy function enables us to access further systems through a tunnel. Lastly, Stop mode may be initiated through the C&C connection, requiring a cold restart of the PLC by disconnecting the power supply. We will demonstrate the attack during the talk.
Our worm resists detection and analysis. If the operator connects to the PLC using the programming software TIA Portal 11, the operator may notice unnamed additional function blocks, but when accessing these blocks the TIA Portal crashes, preventing forensic analysis. The infection of the PLC takes roughly 10 seconds. While the infection is in progress, the PLC is in Stop mode. As soon as the infection has succeeded, the PLC undergoes a warm restart and the worm runs alongside the original user program. Our worm requires 38.5 KB of RAM and 216.6 KB of persistent memory. If the PLC does not offer the memory required by the original user software plus our worm, the worm may overwrite the original user program. Depending on the actual S7-1200 model in use, different setups may be required.
Model    RAM (worm)     Persistent memory (worm)
S7-1211   50 KB (77%)   1 MB (21%)
S7-1212   75 KB (51%)   1 MB (5%)
S7-1214  100 KB (38%)   4 MB (5%)
S7-1215  125 KB (30%)   4 MB (5%)
S7-1217  150 KB (25%)   4 MB (5%)
A critical requirement for the execution of a PLC program is the cycle time for one full cycle of the user program. Our malware requires 7 ms per cycle, just 4.7% of the maximum cycle time configured by default on the PLC models we inspected, so the original user program still has plenty of time to run. By default, all Siemens Simatic S7-1200 v1-v3 PLCs are susceptible to this attack: the PLC user programs may be uploaded and downloaded without any restriction. The Siemens Simatic PLCs support several protection mechanisms; we will explain these mechanisms and their effect on the attack.
With the introduction of the S7-1200v4, Siemens again introduced a new protocol; these PLCs are not susceptible to the attack. The built-in copy protection restricts the user program to run only on a subset of PLCs with specific serial numbers. This protection is implemented only within the programming software (Siemens Simatic TIA Portal) used to install the software: using our own implementation, we can upload and download user programs to any PLC despite this feature. The whole protection is implemented on the client. This is the first time this has been publicly shown. The built-in know-how protection forbids modifications of the user program on the PLC and prevents extraction of the user program from the PLC. Again, this protection is implemented only in the programming software. Our own implementation can extract the user program, display the source code, modify the program, and reinstall the modified program, so this feature does not offer the protection advertised. This, too, is shown publicly for the first time. The built-in access protection does prevent the attack we will demonstrate. While we present an attack via the Ethernet interface, installation of the user program can also happen via the field bus interface; using it, even PLCs not connected to the Ethernet network may be compromised. Once the first PLC is infected over Ethernet, all other PLCs connected by the field bus can be compromised as well. This talk emphasizes the significance of the built-in protection features in modern PLCs and their correct deployment by the user.
Messaging can be found everywhere. It's used by your favourite mobile messenger as well as in your bank's backend systems. Message brokers such as Pivotal's RabbitMQ, IBM's WebSphere MQ, and others often form a key component of a modern backend architecture. Furthermore, various messaging standards are in place, such as AMQP, MQTT, and STOMP. In the Java world, it is not widely known that messaging relies heavily on Java serialization. Recent advances in the exploitation of Java deserialization vulnerabilities can be applied to attack applications that use Java messaging. This talk will show the attack surface of various Java messaging API implementations and their deserialization vulnerabilities. Last but not least, the Java Messaging Exploitation Tool (JMET) will be presented to help you identify and exploit message-consuming systems like a boss.
They always taught us that the only thing that can be pulled out of an SSL/TLS session using strong authentication and the latest Perfect Forward Secrecy ciphersuites is the public key of the certificate exchanged during the handshake: an insufficient condition for mounting a MiTM attack without raising alarms about the validity of the TLS connection and the certificate itself. However, this is not always true. In certain circumstances it is possible to derive the server's private key regardless of the size of the modulus; even RSA keys of 4096 bits can be factored at the cost of a few CPU cycles and modest computational resources. All that is needed is the generation of a faulty digital signature by the server, an event that can be observed under certain conditions such as CPU overheating, RAM errors, or other hardware faults. Because of these premises, devices like firewalls, switches, routers, and other embedded appliances are more exposed than traditional IT servers or clients. During the talk, the author will explain the theory behind the attack, how common the factors are that make it possible, and his custom practical implementation of the technique. At the end, a proof-of-concept, able to work both in passive mode (i.e., only by sniffing the network traffic) and in active mode (namely, by participating directly in the establishment of TLS handshakes), will be released.
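The underlying math is the classic RSA-CRT fault attack: a single signature that is correct modulo p but faulty modulo q lets anyone factor n with one gcd computation, since p divides sf^e - m while q does not. A toy sketch with textbook-sized parameters follows (the attack in the talk targets real 4096-bit keys; the tiny numbers here are only for readability):

```c
#include <assert.h>
#include <stdint.h>

/* Square-and-multiply modular exponentiation (fits in 64 bits for the
 * toy modulus used below). */
uint64_t modpow(uint64_t b, uint64_t e, uint64_t m)
{
    uint64_t r = 1;
    b %= m;
    while (e) {
        if (e & 1) r = r * b % m;
        b = b * b % m;
        e >>= 1;
    }
    return r;
}

uint64_t gcd64(uint64_t a, uint64_t b)
{
    while (b) { uint64_t t = a % b; a = b; b = t; }
    return a;
}

/* Recover a factor of n from a message m and a faulty signature sf:
 * if sf is correct mod p but wrong mod q, gcd(sf^e - m, n) = p. */
uint64_t factor_from_faulty_sig(uint64_t n, uint64_t e,
                                uint64_t m, uint64_t sf)
{
    uint64_t v = modpow(sf, e, n);
    uint64_t mm = m % n;
    uint64_t diff = (v >= mm) ? v - mm : v + n - mm;
    return gcd64(diff, n);
}
```

Note that the attack needs only the public key (n, e), the signed message, and one faulty signature observed on the wire, which is why passive sniffing suffices.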
Samsung announced many layers of security for its Pay app. Without storing or sharing any of the user's credit card information, Samsung Pay aims to be one of the most secure payment approaches, offering functionality and simplicity to its customers. The app is a complex mechanism with some security limitations. It uses random tokenized numbers and Magnetic Secure Transmission (MST) technology, but these do not guarantee that a token generated with Samsung Pay can only be used to make a purchase with the same Samsung device. That means an attacker could steal a token from a Samsung Pay device and use it without restrictions. Convenient for users but problematic for security, the app can also be used in airplane mode, which makes it impossible for Samsung Pay to fully control the pool of issued tokens. Even though tokens have their own restrictions, the tokenization process gets weaker after the app generates the first token for a given card. How random is a Samsung Pay tokenized number? It is necessary to understand how the tokens share similarities in the generation process, and how this affects end users' security. What are the odds of guessing the next tokenized number knowing the previous one?
Following previous presentations on the dangers penetration testers face in using current off-the-shelf tools and practices, this presentation explores how widely available learning materials used to train penetration testers lead to inadequate protection of client data and penetration testing operations. Because widely available books and other training resources target the smallest set of prerequisites, in order to attract the largest audience, many penetration testers carry the techniques used in simplified examples over to real-world tests, where the network environment can be much more dangerous. Malicious threat actors are incentivized to attack and compromise penetration testers, and given current practices, can do so easily and with dramatic impact. This presentation will include a live demonstration of techniques for hijacking a penetration tester's normal practices, as well as guidance for examining and securing your current testing procedures. Tools shown in this demonstration will be released along with the talk.
In this session we will explore why certain devices, pieces of software, or companies lead us to utter frustration while others consistently delight us and put a smile on our face. With these insights in mind, we will examine how we typically create our security processes, teams, and solutions. All too often we create something without properly understanding what our colleagues or customers are trying to achieve, only to bombard them with awareness training and policies because they "just don't get it" and because "humans are the weakest link." We will look at user-centered design methods and concepts from other disciplines like economics, psychology, and marketing that can help us build security in a truly usable way: not just our tools, but also the way we set up our teams, the way we communicate, and the way we align incentives. Every interaction with security is an opportunity to improve convenience and bring a smile to somebody's face. By understanding the impact of design, we can do a lot to improve corporate productivity and security itself.
Software Guard Extensions (SGX) is a technology available in Intel(R) CPUs released in autumn 2015. SGX allows a remote server to process a client's secret data within a software enclave that hides the secrets from the operating system, hypervisor, and even the BIOS or chipset manager, while giving cryptographic evidence to the client that the code has been executed correctly: the very definition of secure remote computation.
This talk is the first public assessment of SGX based on real SGX-enabled hardware and on Intel's software development environment. While researchers already scrutinized Intel's partial public documentation, many properties can only be verified and documented by working with the real thing: What's really in the development environment? Which components are implemented in microcode and which are in software? How can developers create secure enclaves that won't leak secrets? Can the development environment be trusted? How to debug and analyze SGX software? What crypto schemes are used in SGX critical components? How reliable are they? How safe are their implementations? Based on these newly documented aspects, we'll assess the attack surface and real risk for SGX users. We'll then present and demo proofs-of-concept of cryptographic functionalities leveraging SGX: secure remote storage and delegation (what fully homomorphic encryption promises, but is too slow to put in practice), and reencryption. We'll see how basic architectures can deliver powerful crypto functionalities with a wide range of applications. We'll release code as well as a tool to extract and verify an enclave's metadata.
In 2013, Yuval Yarom and Katrina Falkner discovered the FLUSH+RELOAD L3 cache side channel. So far it has broken numerous implementations of cryptography, including, notably, AES and ECDSA in OpenSSL and RSA in GnuPG. Given FLUSH+RELOAD's astounding success at breaking cryptography, we're led to wonder if it can be applied more broadly, to leak useful information out of regular applications like text editors and web browsers whose main functions are not cryptography.
In this talk, I'll briefly describe how the FLUSH+RELOAD attack works, and how it can be used to build input distinguishing attacks. In particular, I'll demonstrate how when the user Alice browses around the top 100 Wikipedia pages, the user Bob can spy on which of those pages she's visiting.
This isn't an earth-shattering attack, but as the code I'm releasing shows, it can be implemented reliably. My goal is to convince the community that side channels, FLUSH+RELOAD in particular, are useful for more than just breaking cryptography. The code I'm releasing is a starting point for developing better attacks. If you have access to a vulnerable CPU running a suitable OS, you should be able to reproduce the attack within minutes after watching the talk and downloading the code.
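For reference, the core FLUSH+RELOAD primitive is just a cache-line flush followed by a timed reload. A minimal x86-64 sketch using compiler intrinsics (GCC/Clang); the threshold separating a "fast" (victim-touched) from a "slow" (still-flushed) reload is machine-specific and not shown:

```c
#include <assert.h>
#include <stdint.h>
#include <x86intrin.h>

/* Evict one cache line so a later reload timing reveals whether the
 * victim accessed it in the meantime. */
void flush_line(const void *addr)
{
    _mm_mfence();
    _mm_clflush(addr);
    _mm_mfence();
}

/* Return the access latency of addr in TSC cycles. A reload that hits
 * in cache (victim touched the line) is much faster than one that must
 * go to DRAM (line still flushed). */
uint64_t reload_cycles(const void *addr)
{
    unsigned aux;
    _mm_mfence();
    uint64_t t0 = __rdtscp(&aux);
    (void)*(volatile const uint8_t *)addr; /* the timed load */
    uint64_t t1 = __rdtscp(&aux);
    _mm_lfence();
    return t1 - t0;
}
```

The attack loop is then: flush a line in a shared page (e.g. a shared library mapped by both attacker and victim), wait, reload and time, repeat; the pattern of fast reloads traces the victim's code or data accesses.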
Apple graphics, both the userland and the kernel components, are reachable from most sandboxed applications, including browsers, where an attack can be launched first remotely and then escalated to obtain root privileges. On OS X, the userland graphics component runs in the WindowServer process, while the kernel component includes IOKit user clients created by the IOAccelerator IOService. Similar components exist on iOS as well; together they are the counterpart of "Win32k.sys" on Windows. In the past few years, many of these interfaces have been neglected by security researchers because some of them are not explicitly defined in the sandbox profile, yet our research reveals not only that they can be opened from a restrictive sandboxed context, but also that several of them were never designed to be called, exposing a large attack surface to an adversary. On the other hand, due to their complexity and various factors (such as being mostly closed source), Apple graphics internals are not well documented by either Apple or the security community. This leaves large pieces of code poorly analyzed, including substantial functionality behind hidden interfaces with no necessary checks in place, even in fundamental components. Furthermore, there are specific exploitation techniques in Apple graphics that enable you to complete the full exploit chain from inside the sandbox and gain unrestricted access. We call this "graphic-style" exploitation.
In the first part of the talk, we introduce the userland Apple graphics component, WindowServer. We start with an overview of WindowServer internals and its MIG interfaces, along with "hello world" sample code. After that, we explain three bugs representing three typical security flaws: a design-related logic issue (CVE-2014-1314), which we used at Pwn2Own 2014; a logic vulnerability within hidden interfaces; and the memory corruption issue we used at Pwn2Own 2016. Last but not least, we talk about the "graphic-style" approach to exploiting a single memory corruption bug and elevating from the WindowServer context to root.
The second part covers the kernel attack surface. We will show vulnerabilities residing in the closed-source core graphics pipeline components of all Apple graphics drivers, including the newest chipsets, analyze the root causes, and explain how to use our "graphic-style" exploitation technique to obtain root on OS X El Capitan at Pwn2Own 2016. This code, mostly related to rendering algorithms, by its nature lies deep in the driver's core stack, requires considerable graphics programming background to understand and audit, and is overlooked by security researchers. As it is fundamental to Apple's rendering engine, it hasn't changed for years, and similar issues still exist in this blue ocean. We'll also present a new way of kernel heap spraying, with fewer side effects and more controllable content than previously known methods. The talk concludes with two live demos of remotely gaining root through a chain of exploits on OS X El Capitan: the first by exploiting userland graphics, the second by exploiting kernel graphics.
In this work we present a massive, large-scale survey of Internet traffic that studies the practice of false content injection on the web. We examined more than 1.5 petabits of data from over 1.5 million distinct IP addresses. Earlier this year we showed that false content injection is practiced by network operators for commercial purposes: these operators inject advertisements and malware into webpages viewed by potentially ALL users on the Internet.
In this presentation we recap the injections we discovered earlier this year and show them in detail. Additionally, we show new types of non-commercial injections, identify the injectors behind them, and discuss their modi operandi. Finally, we present a detailed analysis of a targeted injection attack against an American website.
The attacks we discovered are done using out-of-band TCP injection of false packets (rather than in-band alteration of the original packets). This is what actually allowed us to detect the injection events in the first place. We also present a novel client-side tool to mitigate such attacks that has minimal performance impact.
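The detection idea exploits exactly the injector's out-of-band limitation: the forged segment races the genuine one, so a monitor eventually sees two TCP segments claiming the same sequence range with different contents. A minimal sketch of that check (field and function names are illustrative, not from the actual tool):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* A captured TCP segment, reduced to the fields the check needs. */
struct segment {
    uint32_t seq;           /* TCP sequence number of the first byte */
    const uint8_t *payload;
    size_t len;
};

/* Flag a pair of segments as a likely out-of-band injection: same
 * sequence position in the stream, but different contents. An in-path
 * attacker altering packets would never produce such a race. */
bool looks_injected(const struct segment *a, const struct segment *b)
{
    if (a->seq != b->seq)
        return false;            /* not competing for the same data   */
    if (a->len != b->len)
        return true;             /* same position, different content  */
    return memcmp(a->payload, b->payload, a->len) != 0;
}
```

A real monitor would also match on connection 4-tuple, tolerate partial overlaps of sequence ranges, and ignore benign retransmissions (which carry identical payloads and are correctly passed by this check).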
Information security is ever evolving, and Android's security posture is no different. Android users face threats from a variety of sources, from the mundane to the extraordinary. Lost and stolen devices, malware attacks, rooting vulnerabilities, malicious websites, and nation-state attackers are all within the Android threat model, and something the Android security team deals with daily. In this talk, we will cover the threats facing Android users, using both specific examples from previous Black Hat conferences and published research, as well as previously unpublished threats. For each threat, we will go into the specific technical controls that contain the vulnerability, as well as newly added Android N security features that defend against future unknown vulnerabilities. Finally, we'll discuss where we could go from here to make Android, and the entire computing industry, safer.
Adobe Flash is one of the battlegrounds of exploit and mitigation methods. As most Flash exploits demonstrate native memory-layer exploitation techniques, it is valuable to understand the memory layout and behavior of Adobe Flash Player. We developed fine-grained debugging tactics for observing memory exploitation techniques and ways to interpret them effectively. This helps defenders quickly understand new exploitation techniques being used against current targets, and informs decisions about which areas defenders should focus on for mitigations and code fixes. Adobe Flash Player was one of the major attack targets in 2015: we observed at least 17 effective zero-day or one-day attacks in the wild. Flash is not just used by exploit kits like Angler; it has also been commonly used in advanced persistent threat (APT) attacks. The bug classes range from simple heap overflows and uninitialized memory to type confusion and use-after-free. At Microsoft, understanding in-the-wild exploits is a continuous process, and Flash exploits are among the hardest to reverse-engineer. They often involve multi-layer obfuscation and, by default, are highly obfuscated with non-decompilable code. The challenge with Flash exploits comes from the lack of tools for static and dynamic analysis. Exploits are written in the ActionScript programming language and obfuscated at the bytecode level using commercial-grade obfuscation tools. Understanding highly obfuscated logic and non-decompilable AVM bytecode is a big challenge. In particular, the lack of usable debuggers for the Flash file itself is a huge hurdle for exploit reverse engineers; it is like debugging PE binaries without WinDbg or OllyDbg. The researcher's options are highly limited.
With this presentation, I want to deliver two things: 1. The tactics and debugging techniques that can be used to reverse engineer exploits, including how to use existing toolsets and combine them effectively. 2. Detailed exploit-code reverse engineering examples that help you understand the current and past state of the attack-and-mitigation war. You might have heard of Vector corruption, ByteArray corruption, and other JIT manipulation techniques; we will discuss the technical details of how exploits use these and how the vendor has defended against them.
Microsoft's Component Object Model (COM) is a technology providing a binary programming interface for Windows programs. Despite its age, it still forms the internal foundation of many newer Microsoft technologies such as .NET. However, over the course of more than twenty years of development, the inevitable pressure to retain backwards compatibility has turned the COM runtime into an obscure beast. These days, many COM interfaces exist that mirror almost exactly the functionality provided by common Windows APIs. Malware authors can easily execute almost any operation (creating files, starting new processes, etc.) using only COM calls. Dynamic malware analyzers must deal with this accordingly, without getting lost in the shadowy depths of the COM runtime.
The talk presents various aspects of automated dynamic COM malware analysis and shows which approaches are actually practical and which ones are hopeless from the beginning. We show how COM interfaces are already actively used by malware in the wild. Our data, retrieved from various sample sharing programs, indicates that COM use is widespread and not limited to sophisticated attacks. It can be used to create arbitrary files, access the registry, control the Windows Firewall, tap into audio interfaces, and much more; the possibilities are endless. Furthermore, many script engines such as VBScript or JScript use COM underneath, and if such samples are analyzed, this must be dealt with appropriately. Unfortunately, many existing dynamic analysis solutions fail to monitor COM correctly, which makes it easy for malware to evade many common sandboxes. One essential problem is that COM classes can be implemented in various places: in the calling program itself, in other processes on the same machine, or even in remote processes on different machines using DCOM. This requires catching and processing COM calls at the very first API layer, not later on. Due to the myriad of COM calls in question, hooking-based solutions quickly hit a wall. The popular workaround is to hook the API layers behind COM (such as NTDLL). Since COM calls can execute in remote processes and rely heavily on data marshalling, this approach works only for simple COM interfaces. Furthermore, it requires filtering out irrelevant API calls from OS libraries, which poses many problems of its own. Last but not least, hooking COM calls (or API calls in general) makes it easy for malware to detect that it is running in a sandbox.
We show how transition-based monitoring can be used to monitor all COM calls at the first interface layer. This requires additional effort in parsing the numerous different formats COM uses to encode function call parameters. We show what obstacles are to be expected and how to deal with them accordingly. This generic approach yields a detailed list of all COM calls executed by malware with all their parameters. In addition, malware cannot evade the analysis since transitions are detected transparently in a hypervisor. Not a single bit has to be modified in the analysis environment.
Initially known as "Project Astoria" and delivered in beta builds of Windows 10 Threshold 2 for Mobile, Microsoft implemented a full-blown Linux 3.4 kernel in the core of the Windows operating system, including full support for VFS, BSD sockets, ptrace, and a bona fide ELF loader. After a short cancellation, it's back and improved in Windows 10 Anniversary Update ("Redstone"), under the guise of Bash shell interoperability. This new kernel and related components can run 100% native, unmodified Linux binaries, meaning that NT can now execute Linux system calls, schedule thread groups, fork processes, and access the VDSO!
As it's implemented as a full-blown, built-in, loaded-by-default Ring 0 driver with kernel privileges, this is not a mere wrapper library or user-mode system call converter like the POSIX subsystem of yore. The very thought of an alternate virtual file system layer, networking stack, memory and process management logic, and a complicated ELF parser and loader in the kernel should tantalize exploit writers: why choose from the attack surface of a single kernel when there are now two?
But it's not just about the attack surface - what effects does this have on security software? Do these frankenLinux processes show up in Procmon or other security drivers? Do they have PEBs and TEBs? Is there even an EPROCESS? And can a Windows machine, and the kernel, now be attacked by Linux/Android malware? How are Linux system calls implemented and intercepted?
As usual, we'll take a look at the internals of this entirely new paradigm shift in the Windows OS, and touch the boundaries of the undocumented and unsupported to discover interesting design flaws and abusable assumptions, which lead to a wealth of new security challenges on Windows 10 Anniversary Update ("Redstone") machines.
An Evil Maid attack is a security exploit targeting a computing device that has been left unattended; it is characterized by the attacker's ability to physically access the target multiple times without the owner's knowledge. At Black Hat Europe 2015, Ian Haken, in his talk "Bypassing Local Windows Authentication to Defeat Full Disk Encryption," demonstrated a smart Evil Maid attack that allows the attacker to bypass BitLocker disk encryption in an enterprise domain environment. The attacker can do so by connecting the unattended computer to a rogue Domain Controller and abusing a client-side authentication vulnerability. As a result, Microsoft released a patch to fix this vulnerability and mitigate the attack. While it is a clever attack, the physical access requirement seems prohibitive and would prevent it from being used in most APT campaigns; as a result, defenders might not correctly prioritize the importance of patching it.
In our talk, we reveal the "Remote Malicious Butler" attack, which shows how attackers can perform such an attack remotely to take complete control over the remote computer. We will dive into the technical details of the attack, including the rogue Domain Controller, the client-side vulnerability, and the Kerberos authentication protocol network traffic that ties them together. We will explore some other attack avenues, all leveraging the rogue Domain Controller concept, and conclude with an analysis of some practical, generic detection and prevention methods against rogue Domain Controllers.
Power line communication (PLC) is a communication technology that uses the power line as the communication medium. The field divides into two sub-fields: narrow-band PLC and wide-band PLC. In narrow-band PLC there are two very important standards, PRIME and G3. Both standards are widely used in automatic meter reading (AMR) and electric monitoring systems, and this has led to rising threats to AMR system security and electric safety. This talk will cover how to get at the PLC data stream in a communication system using the G3 or PRIME standard, and how to detect attacks on the network. We will focus on how to identify which standard a system is using and how to sniff the PLC data at the physical level.
Embedded, IoT, and ICS devices tend to be things we can pick up, see, and touch. They're designed for nontechnical users who think of them as immutable hardware devices. Even software security experts, at some point, consider hardware attacks out of scope. Thankfully, even though a handful of hardware manufacturers are making some basic efforts to harden devices, there are still plenty of cheap and easy ways to subvert hardware. The leaked ANT catalog validated that these cheap hardware attacks are worthwhile. The projects of the NSA Playset have explored what's possible in terms of cheap and easy DIY hardware implants, so I've continued to apply those same techniques to more embedded devices and industrial control systems. I'll show off a handful of simple hardware implants that can: 1) blindly escalate privilege using JTAG; 2) patch kernels via direct memory access on an embedded device without JTAG; 3) enable wireless control of the inputs and outputs of an off-the-shelf PLC; 4) hot-plug a malicious expansion module onto another PLC without even taking the system offline; and 5) subvert a system via a malicious display adapter. Some of these are new applications of previously published implants - others are brand new.
I'll conclude with some potential design decisions that could reduce vulnerability to implants, as well as ways of protecting existing hardware systems from tampering.
Adobe Flash continues to be a popular target for attackers in the wild. As an increasing number of bug fixes and mitigations are implemented, increasingly complex vulnerabilities and exploits are coming to light. This talk describes notable vulnerabilities and exploits that have been discovered in Flash in the past year.
It will start with an overview of the attack surface of Flash, and then discuss how the most common types of vulnerabilities work. It will then review the year's bugs, exploits, and mitigations, and end with a discussion of the future of Flash attacks: likely areas for new bugs, and the impact of existing mitigations.
Cross-site search (XS-search) is a practical timing side-channel attack that allows the extraction of sensitive information from web services. The attack exploits inflation techniques to efficiently distinguish between search requests that yield results and requests that do not. This work focuses on the response inflation technique, which increases the size of the response; as the difference in the sizes of the responses increases, it becomes easier to distinguish between them. We begin with the browser-based XS-search attack and demonstrate its use in extracting users' private data from Gmail and Facebook. The browser-based XS-search attack exploits the differences in the sizes of HTTP responses and works even when significant inflation of the response is impossible; this part also involves algorithmic improvements over previous work. When there is no leakage of information via the timing side channel, it is possible to use second-order (SO) XS-search, a novel type of attack that allows the attacker to significantly increase the difference in the sizes of the responses by planting a maliciously crafted record into the storage. SO XS-search attacks can be used to extract sensitive information such as the email content of Gmail and Yahoo! users and the search history of Bing users.
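The distinguishing step described above can be reduced to a toy model: the attacker observes only the response size (or a timing proxy for it), and inflation widens the gap between "hit" and "miss" responses. Everything below - the markup, sizes, and threshold - is invented for illustration and is not taken from the actual research.

```python
# Toy model of the XS-search size side channel (illustrative only;
# response bodies and the threshold are fabricated).

def simulate_response(query_has_results: bool, inflation: int = 0) -> bytes:
    """Return a fake HTTP response body. A 'hit' includes result
    markup; 'inflation' models attacker-planted records that pad
    only the hit case (the second-order idea)."""
    base = b"<html><body>No results</body></html>"
    hit = (b"<html><body><div class='result'>"
           + b"X" * (50 + inflation)
           + b"</div></body></html>")
    return hit if query_has_results else base

def attacker_guess(size: int, threshold: int = 60) -> bool:
    # The attacker never sees the content - only the size.
    return size > threshold

miss = len(simulate_response(False))
hit_plain = len(simulate_response(True))
hit_inflated = len(simulate_response(True, inflation=5000))

# Without inflation the gap is small; a planted record widens it.
assert attacker_guess(hit_inflated) and not attacker_guess(miss)
print(miss, hit_plain, hit_inflated)
```

In the real attack the size difference is inferred indirectly, through timing of cross-origin requests, which is why widening the gap matters.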
To defeat your adversaries, it is crucial to understand how they operate and to develop a comprehensive view of their playing field. In this talk, we describe a holistic and scalable approach to investigating and combating cybercrime. Our strategy focuses on two perspectives: the network attack surface and the actors. The network attack surface exploited by malware manifests itself through various aspects such as hosting IP space, DNS traffic, open ports, BGP announcements, ASN peerings, and SSL certificates. The actors' view tracks trends, motivations, and TTPs of cyber criminals by infiltrating and maintaining access to closed underground forums where threat actors collaborate to plan cyber attacks. Crimeware campaigns nowadays rely heavily on bulletproof hosting for scalable deployment. We distinguish two types of such hosting infrastructure: the first consists of a large number of geographically scattered infected residential hosts that are leveraged to build a fast-flux proxy network. This network is a hosting-as-a-service platform for various malware and ransomware C2, phishing, carding, and botnet panels. The second consists of dedicated servers acquired from rogue hosting companies or large abused hosting providers for the purpose of hosting exploit kits, phishing, malware C2, and other gray content. We start by using DNS traffic analysis and passive DNS mining algorithms to detect malware domains at scale. After we identify the hosting IPs of these domains, we will demonstrate novel methods using DNS PTR data to further map out the entire IP space of the bulletproof hosters serving these attacks. In the case of fast-flux proxy networks, we leverage SSL data to map out larger sets of compromised hosts. Concurrently, we investigate underground forums for emerging signals about bulletproof hosters just about to be employed for malware campaigns.
The talk describes how to proactively bridge the gap between the actors and network views by identifying the IP space of the mentioned hosters given very few initial indicators and predictively block it. This is made possible thanks to the deployment at large scale of DNS PTR, SSL, and HTTP data provided by Project Sonar datasets and our own scanning of certain IP regions. It is undoubtedly a serious challenge facing security researchers to devise means to quickly index and search through vast quantities of security related log data. Therefore, we will also describe the backend architecture, based on HBase and ElasticSearch, that we use to index global Internet metadata so it is easily searchable and retrievable. Join us in this talk to learn about effective methods to investigate malware from both network and actors' perspectives and hear about our experience on how to deploy and mine large scale Internet data to support threat research.
Health Level 7 (HL7) refers to a set of international standards for the transfer of clinical and administrative data between software applications used by various healthcare providers. Healthcare provider organizations typically have many different computer systems, used for everything from billing records to patient tracking. All of these systems should communicate with each other (or "interface") when they receive new information or when they wish to retrieve information, but not all do so. The HL7 2.x protocol was designed with certain assumptions in mind, among them: a closed network, no malicious intent by the devices, and a completely reliable operating environment. The number of devices using HL7 2.x is huge (currently, the HL7 v2.x messaging standard is supported by every major medical information systems vendor in the world), yet a secure implementation standard or guide still needs to be worked out. Over time I have observed that hospitals and vendors do not fully understand the risks to their infrastructure, and that vendors need to make changes to their software and hardware to make their devices more resilient to attack.
The talk will cover HL7 2.x messages, their significance, the information they carry, and the impact of gaining access to them. We will look at scenarios of gaining patient information, fingerprinting architecture, examining and changing diagnoses, gaining access to non-prescribed drugs or changing medication, and possible financial scams. This talk will also cover how to pen-test medical systems running HL7 interfaces (EMR software, patient monitors, X-ray machines, etc.), discovering common flaws and attack surfaces on devices that use HL7 2.x messages, and testing machine interfaces and the connected environment.
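For context on the pipe-delimited messages discussed above, here is a minimal, fabricated HL7 v2.x message and a naive parser. The segment names (MSH, PID, PV1) and separators are standard, but every field value below is invented, and a real parser must also handle the component (^), repetition (~), and escape separators.

```python
# A fabricated HL7 v2.3 ADT (admit/discharge/transfer) message.
# Segments are separated by <CR>; fields within a segment by '|'.
msg = "\r".join([
    r"MSH|^~\&|EMR|HOSP|LAB|HOSP|20160801120000||ADT^A01|MSG00001|P|2.3",
    "PID|1||123456^^^HOSP^MR||DOE^JOHN||19700101|M",
    "PV1|1|I|ICU^2^1",
])

def parse_hl7(raw: str) -> dict:
    """Naive parse: split segments on <CR> and fields on '|'.
    Components (^) inside fields are left unsplit here."""
    return {seg.split("|")[0]: seg.split("|") for seg in raw.split("\r")}

segments = parse_hl7(msg)
patient_name = segments["PID"][5]   # PID-5: patient name
print(patient_name)                 # DOE^JOHN
```

Because HL7 2.x messages are plain text with no built-in authentication or encryption, anyone who can read the interface traffic can extract fields like PID-5 just this easily - which is the core of the risk described above.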
Security breaches never happen exactly the way you expected or planned for. Yet an organization's infrastructure should be able to withstand a breach of its perimeter security layer, and also handle the infection of internal servers. The security testing toolset available to security professionals today consists mainly of penetration testing and vulnerability scanners. These tools were designed for traditional, relatively static networks and can no longer address all the possible vulnerabilities of today's dynamic and hybrid networks. While there is no replacement for a highly skilled human pen tester, penetration tests are limited to specific parts of a network, are expensive, and may become obsolete within months. Automatic vulnerability scanners have limited accessibility and cannot simulate today's advanced lateral-movement attack methods. The result is network blind spots, which is where security threats often arise. This calls for a new approach to testing network security resilience. An ideal tool would be easy to use, budget-conscious, autonomous, and scalable.
We propose using the Infection Monkey, a new open source cyber security testing tool designed to thoroughly test a network from an attacker's point of view. Our tool draws its inspiration from Netflix's Chaos Monkey, released in 2011, which was designed to randomly delete servers in Netflix's infrastructure to test a service's ability to withstand server failures. We think a similar approach applies to network security - "infecting" your network to test your defensive capabilities - so we have leveraged Netflix's Chaos Monkey concept to address the challenges of the network defense community. The Infection Monkey spins up an infected virtual machine inside random parts of your data center to test for potential security failures. By "inside," we mean behind the firewall and any other perimeter defense you are deploying for your computing infrastructure. By equipping the monkey with advanced exploitation abilities (without destructive payloads), it can spread to any vulnerable machine within reach. Along with the ability to spread onwards from its victims, the monkey can detect surprising weak spots throughout the network.
In our talk we will show how our Infection Monkey uncovers blind spots and argue that ongoing network-wide security testing adds strong capabilities to the security team. We will focus on vulnerabilities that up until now have stayed in the industry's 'collective blind spot'. The security community can greatly benefit from a disruptive, modern tool that helps verify security solution deployments and shed light on the weaker parts of the security chain.
The Cyber Kill Chain model provides a framework for understanding how an adversary breaches the perimeter to gain access to systems on the internal network. However, this model is incomplete and can lead to over-focusing on perimeter security, to the detriment of internal security controls. In this presentation, we'll explore an expanded model including the Internal Kill Chain and the Target Manipulation Kill Chain.
We'll review what actions are taken in each phase, and what's necessary for the adversary to move from one phase to the next. We'll discuss multiple types of controls that you can implement today in your enterprise to frustrate the adversary's plan at each stage, to avoid needing to declare "game over" just because an adversary has gained access to the internal network. The primary limiting factor of the traditional Cyber Kill Chain is that it ends with Stage 7: Actions on Objectives, conveying that once the adversary reaches this stage and has access to a system on the internal network, the defending victim has already lost. In reality, there should be multiple layers of security zones on the internal network, to protect the most critical assets. The adversary often has to move through numerous additional phases in order to access and manipulate specific systems to achieve his objective. By increasing the time and effort required to move through these stages, we decrease the likelihood of the adversary causing material damage to the enterprise.
Microsoft's Enhanced Mitigation Experience Toolkit (EMET) is a project that adds security mitigations to user-mode programs beyond those built in to the operating system. It runs inside "protected" programs as a Dynamic Link Library (DLL) and makes various changes in order to make software exploitation expensive. If an attacker can bypass EMET with significantly less work, then it defeats EMET's purpose of increasing the cost of exploit development. In this briefing we discuss the protections offered by EMET, how each of them can individually be evaded by working around the validation code, and then a generic disabling method that applies to multiple endpoint products and sandboxing agents relying on injecting their Dynamic Link Library into host processes in order to protect them. It should be noted that Microsoft issued a patch to address this very issue in EMET 5.5 in February 2016. EMET was designed to raise the cost of exploit development, not to be a "foolproof" exploit mitigation solution. Consequently, it is no surprise that attackers who have read/write capabilities within the process space of a protected program can bypass EMET by systematically defeating its mitigations. As long as the protection shares the attacker's address space, a complete defensive solution cannot prevent exploitation.
The talk will focus on how easy it is to defeat EMET or any other such agent: how secure is any endpoint exploit prevention/detection solution that relies on same-address-space validations, and how can it be defeated with its own checks or by circumventing and evading its validation? Moreover, it will also cover targeted EMET evasion, i.e., when the attacker knows EMET is installed on the victim machine. The methods applied to EMET can be applied to other enterprise products, and were tested on many during our research.
Typically, hackers focus on software bugs to find vulnerabilities in the trust model of computers. In this talk, however, we'll focus on how the microarchitectural design of computers enables an attacker to breach trust boundaries. Specifically, we'll focus on how an attacker with no special privileges can gain insights into the kernel, and how these insights can enable further breaches of security. We will focus on the x86-64 architecture, but round off with comments on how our research touches on ARM processors. Unlike software bugs, microarchitectural design issues apply across operating systems and are independent of easily fixable software bugs. In modern operating systems the security model is enforced by the kernel. The kernel itself runs in a processor-supported and protected state often called supervisor or kernel mode; thus the kernel itself is protected from introspection and attack by hardware. We will present a method that allows for fast and reliable introspection into the memory hierarchy of the kernel based on undocumented CPU behavior, and show how attackers could make use of this information to mount attacks on the kernel and consequently on the entire security model of modern computers.

Making a map of memory and breaking KASLR: Modern operating systems use a number of methods to prevent an attacker from running unauthorized code in kernel mode. They range from requiring user privileges to load drivers, over driver signing, to hardware-enabled features preventing execution in memory marked as data, such as DEP (Data Execution Prevention) or, more recently, SMEP, which prevents execution of user-allocated code with kernel-level privileges. Commonly used bypasses either modify page tables or use so-called code reuse attacks; either way, an attacker needs to know where the code or the page tables are located.
To further complicate an attack, modern operating systems are equipped with Kernel Address Space Layout Randomization (KASLR), which randomizes the location of important system memory.
We'll present a fast and reliable method to map where the kernel has mapped pages in the kernel-mode area. Further, we'll present a method for locating specific kernel modules, thus bypassing KASLR and paving the way for classic privilege elevation attacks. Neither method requires any special privileges, and they even run from a sandboxed environment. Also relevant is that our methods are more flexible than traditional software information leaks, since they leak information on the entire memory hierarchy. The core idea of the work is that the prefetch instruction leaks information about the caches involved in translating a virtual address into a physical address. Also significant is that the prefetch instruction is unprivileged and does not cause exceptions, nor does it perform any privilege verification; thus it can be used on any address in the address space.

Physical-to-virtual address conversion: A number of microarchitectural attacks are possible on modern computers. Rowhammer is probably the most famous of these, but attack methodologies such as cache side-channel attacks have also proven able to exfiltrate private data, such as private keys, across trust boundaries. These two attack methodologies have in common that they require information about how virtual memory is mapped to physical memory. Both have thus far either used "/proc/PID/pagemap", which is now accessible only with administrator privileges, or relied on approximations. We will discuss a method by which an unprivileged user is able to reconstruct this mapping. This goes a long way towards making the Rowhammer attack a practical attack vector and can be a valuable aid in cache side-channel attacks. Again we use the prefetch instruction's lack of privilege checking, but instead of using the timing it leaks, we now use the instruction's ability to load CPU caches, and the fact that the timing of memory access instructions depends heavily on cache state.
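The probing logic described above can be sketched as a toy simulation. The real attack times the unprivileged x86 PREFETCH instruction in CPU cycles, which Python cannot do, so the "cycle counts" and addresses below are fabricated stand-ins that only model the inference: mapped pages become cheap to translate once their translation is cached, while unmapped pages never do.

```python
# Toy model of the prefetch side channel's inference step
# (purely illustrative; no real timing or real addresses).

KERNEL_MAPPED = {0xFFFF0000, 0xFFFF1000}   # pretend page-table state
tlb_cost = {}                               # simulated translation caches

def prefetch_cost(addr: int) -> int:
    """Return a pretend cycle count: a mapped address is cheap once
    its translation is cached; an unmapped one always pays the full
    page-walk cost in this toy model."""
    if addr in KERNEL_MAPPED:
        cost = tlb_cost.get(addr, 300)      # first touch: page walk
        tlb_cost[addr] = 100                # later touches: cached
        return cost
    return 300

# Probe each candidate page twice; only mapped pages speed up.
mapped = [a for a in (0xFFFF0000, 0xFFFF2000)
          if (prefetch_cost(a), prefetch_cost(a))[1] < 300]
print([hex(a) for a in mapped])             # ['0xffff0000']
```

The attacker's position is exactly this: no privileges, no exception on bad addresses, and a timing difference that reveals which kernel pages exist.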
Bonus material: We will briefly discuss the attack vector's relevance on ARM platforms and its potential impact on hypervisor environments. Finally, we will briefly outline a possible defense.
Many web applications allow users to upload video - video/image hosting services, cloud storage, social networks, instant messengers, etc. Typically, developers want to convert user-uploaded files into formats supported by all clients. The number of input formats is very large, so developers use third-party tools/libraries for video encoding. The most common solution in this area is ffmpeg and its forks. ffmpeg by default supports many different formats, including playlists (files with a set of links to other files). In this briefing, we will examine exploitation of SSRF in HLS (m3u8) playlist processing. Video processing is frequently done in clouds, which by design are more exposed to SSRF attacks, and playlists support many different protocols (http, file, tcp, udp, gopher, ...), so SSRF in playlist processing can be very critical and even lead to full service takeover.
We will show how implementation details of HLS playlist processing in ffmpeg allow reading files from the video conversion server, with and without network support. We will show how SSRF in a video converter can give full access to a cloud-based service such as one built on Amazon AWS. We will also present our tool for the detection and exploitation of this vulnerability, and show a truly "viral" video that could perform successful attacks on Facebook, Telegram, Microsoft Azure, Flickr, one of Twitter's services, Imgur, and others.
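To make the playlist mechanism concrete: an HLS (m3u8) playlist is plain text whose segment entries are URLs that the player - or a server-side converter - will fetch. The fragment below is a fabricated illustration of the idea, pointing a segment entry at the standard AWS instance metadata endpoint; the payloads in the actual research differ and are more involved.

```
#EXTM3U
#EXTINF:10.0,
http://169.254.169.254/latest/meta-data/iam/security-credentials/
#EXT-X-ENDLIST
```

A converter that blindly fetches each entry issues that request from inside the cloud environment, which is the SSRF primitive this class of attack builds on.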
Larger organisations are using VoIP within their commercial services and corporate communications, and the take-up of cloud-based Unified Communications (UC) solutions is rising every day. However, response teams and security testers have limited knowledge of VoIP attack surfaces and threats in the wild. Due to this lack of understanding of modern UC security requirements, numerous service providers, larger organisations, and subscribers are leaving themselves susceptible to attack. Current threat actors are repurposing this exposed infrastructure for botnets, toll fraud, etc.
The talk aims to arm response and security testing teams with knowledge of cutting-edge attacks, tools, and vulnerabilities for VoIP networks. Some of the headlines are: attacking cloud-based VoIP solutions to jailbreak tenant environments; discovering critical security vulnerabilities in the VoIP products of major vendors; exploiting harder-to-fix VoIP protocol and service vulnerabilities; testing the security of IP Multimedia Subsystem (IMS) services; and understanding the toolset developed by the author to discover previously unknown vulnerabilities and to develop custom attacks. In addition, the business impact of these attacks will be explained for various implementations, such as cloud UC services, commercial services, service provider networks, and corporate communications. Through the demonstrations, the audience will understand how they can secure and test their communication infrastructure and services. The talk will also be accompanied by newer versions of Viproy and Viproxy, developed by the author to operate the attack demonstrations.
Detected breaches are often classified by security operation centers and incident response teams as either "targeted" or "untargeted." This quick classification of a breach as "untargeted," and the subsequent de-prioritization for remediation, often misses a re-classification and upgrade process several attack groups have been conducting. As part of this process, assets compromised in broad, untargeted "commodity" malware campaigns are re-classified based on the organizational network they're part of, to determine their potential value in the market. The higher-value ones are upgraded and taken out of the "commodity" campaign to prepare them for sale to buyers planning a targeted attack. Organizations overlooking this often miss the opportunity to eliminate the threat prior to its escalation.
This session will cover the analysis of endpoint and network data captured during these re-classification operations, demonstrating the techniques and procedures used by some of the attack groups as they migrate compromised endpoints from the "commodity" threat platform to the valuable-target's platform. What measures can be taken to detect that a commodity threat is going through a migration process? How can this be leveraged to increase the efficiency of the incident response process?
Historically, machine learning for information security has prioritized defense: think intrusion detection systems, malware classification and botnet traffic identification. Offense can benefit from data just as well. Social networks, especially Twitter with its access to extensive personal data, bot-friendly API, colloquial syntax and prevalence of shortened links, are the perfect venues for spreading machine-generated malicious content.
We present a recurrent neural network that learns to tweet phishing posts targeting specific users. The model is trained using spear phishing pen-testing data, and in order to make a click-through more likely, it is dynamically seeded with topics extracted from timeline posts of both the target and the users they retweet or follow. We augment the model with clustering to identify high value targets based on their level of social engagement such as their number of followers and retweets, and measure success using click-rates of IP-tracked links. Taken together, these techniques enable the world's first automated end-to-end spear phishing campaign generator for Twitter.
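As a much-simplified stand-in for the seeding step described above (word frequency instead of a neural model), the sketch below extracts candidate topics from a target's timeline; the tweets and stopword list are fabricated for illustration.

```python
# Simplified "topic seeding": pull the most frequent non-stopword
# terms from a target's recent posts (fabricated data, naive method).
from collections import Counter
import re

timeline = [
    "Great thread on election security and voting machines",
    "Voting machines are shockingly easy to tamper with",
    "Election night coverage starts at 8",
]

STOPWORDS = {"on", "and", "are", "to", "with", "at", "a", "the", "is"}

def seed_topics(tweets, k=2):
    words = Counter(
        w for t in tweets
        for w in re.findall(r"[a-z]+", t.lower())
        if w not in STOPWORDS
    )
    return [w for w, _ in words.most_common(k)]

topics = seed_topics(timeline)
print(topics)
```

In the actual system this seed feeds a recurrent neural network that generates the phishing post itself; here the point is only that a target's own timeline supplies the lure's subject matter.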
The presentation will highlight the core of a Web Application Firewall (WAF): its detection logic, with an emphasis on the regular-expression detection mechanism. The security of six trending open source WAFs (OWASP CRS 2 and 3 for ModSecurity, Comodo WAF, PHPIDS, QuickDefense, and libinjection) will be called into question.
A Static Application Security Testing (SAST) tool for regular-expression analysis will be released, which aims to find security flaws in the cunning syntax of regular expressions. Using the proposed "regex security cheatsheet," rules from popular WAFs will be examined, and logical flaws in their regular expressions will be demonstrated by applying the author's bug-hunting experience and best practices. Using advanced fuzz-testing techniques, we will discover vectors for Cross-Site Scripting and SQL injection attacks (MySQL, MSSQL, Oracle) that the regexps' primary logic does not expect. Attack vectors obtained from the fuzz-testing framework will be clustered and represented via look-up tables. Such tables can be used by both attackers and defenders to understand the purpose of the characters in various parts of an attack vector that are allowed by the relevant browsers or databases.
More than 15 new bypass vectors will be described, with an indication of over 300 potential weaknesses in the regular-expression detection logic of WAFs.
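The class of flaw at issue can be shown with a deliberately naive, invented rule (not taken from any of the WAFs above): a regex keyed to one attack shape simply never fires on a functionally equivalent vector.

```python
# Hypothetical WAF-style rule and a bypass of it. The rule below is
# invented for illustration; real WAF rules are more elaborate but
# can fail in exactly this way.
import re

naive_rule = re.compile(r"<script[^>]*>", re.IGNORECASE)

blocked = "<script>alert(1)</script>"
bypass  = "<svg onload=alert(1)>"   # equivalent XSS, no <script> tag

assert naive_rule.search(blocked) is not None   # rule fires
assert naive_rule.search(bypass) is None        # rule is blind here
print("bypass detected:", bool(naive_rule.search(bypass)))
```

Fuzzing automates the search for such blind spots: generate candidate vectors the browser or database accepts, then keep the ones the rule's syntax cannot reach.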
Digital Forensics and Incident Response (DFIR) for IT systems has been around quite a while, but what about Industrial Control Systems (ICS)? This talk will explore the basics of DFIR for embedded devices used in critical infrastructure such as Programmable Logic Controllers (PLCs), Remote Terminal Units (RTUs), and controllers. If these are compromised or even have a misoperation, we will show what files, firmware, memory dumps, physical conditions, and other data can be analyzed in embedded systems to determine the root cause.
This talk will show examples of what and how to collect forensics data from two popular RTUs that are used in Electric Substations: the General Electric D20MX and the Schweitzer Engineering Labs SEL-3530 RTAC.
This talk will not cover Windows or *nix-based devices such as Human Machine Interfaces (HMIs) or gateways.
Targeted malware campaigns against activists, lawyers, and journalists are becoming extremely commonplace. These attacks range in sophistication from simple spear-phishing campaigns using off-the-shelf malware to APT-level attacks employing exploits, large budgets, and increasingly sophisticated techniques. Activists, lawyers, and journalists are, for the most part, completely unprepared to deal with cyber attacks; most of them don't even have a single security professional on staff. In this session, Eva Galperin and Cooper Quintin of the Electronic Frontier Foundation will discuss the technical and operational details of malware campaigns against activists, journalists, and lawyers around the world, including EFF. They will also present brand new research about a threat actor targeting lawyers and activists in Europe and the post-Soviet states. With targeted malware campaigns, governments have a powerful tool to suppress and silence dissent. As security professionals, we are in a unique position to help in this fight.
What kind of surveillance assistance can the U.S. government force companies to provide? This issue has entered the public consciousness due to the FBI's demand in February that Apple write software to help it access the San Bernardino shooter's encrypted iPhone. Technical assistance orders can go beyond the usual government requests for user data, requiring a company to actively participate in the government's monitoring of the targeted user(s). Companies that take seriously the task of securing their users' information and communications must be prepared to respond to demands to disclose, proactively begin storing, or decrypt user data; write custom code; allow the installation of government equipment on their systems; or hand over encryption keys. Advance preparation for handling technical assistance demands is especially important now since the U.S. Department of Justice has been so aggressive with companies that resist broad or novel surveillance orders. In the "Apple vs. FBI" case, America's richest company faced a motion for contempt of court and derisive rhetoric from U.S. officials before it enlisted the nation's top lawyers in its defense and ultimately fought off the case. In stark contrast, encrypted e-mail provider Lavabit unsuccessfully opposed multiple court orders to compel it to decrypt and give law enforcement the e-mails of its most famous customer, Edward Snowden, and even to hand over its private encryption keys. The Fourth Circuit Court of Appeals did not look kindly on Lavabit, which lost its legal battle and shuttered its operations after its legal defeat. In 2007, Yahoo! unsuccessfully battled warrantless wiretapping in secret before the Foreign Intelligence Surveillance Court. The price for seeking to protect its users' Fourth Amendment rights? DOJ argued that Yahoo! should be fined $250,000 a day for non-compliance while the litigation was pending.
This talk, given by two Crypto Policy Project attorneys from Stanford Law School's Center for Internet and Society, will teach an enterprise audience what they need to know about technical-assistance orders by U.S. law enforcement, so that they can handle demands effectively even if they do not have Apple-level resources. We'll go over what sorts of assistance law enforcement may demand you provide (and has demanded of companies in the past), whether they have authority to require such assistance and under what law(s), and a company's options in response.
Continuous improvements have been made to Windows and other Microsoft products over the past decade that have made it more difficult and costly to exploit software vulnerabilities. The various mitigation technologies that have been created as a result have played a key role in helping to keep people safe online even as the number of vulnerabilities that are found and fixed each year has increased. In this presentation, we'll describe some of the new ways that Microsoft is tackling software security and some of the new mitigation improvements that have been made to Windows 10 as a result. This talk will cover a new data driven approach to software security at Microsoft. This approach involves proactive monitoring and analysis of exploits found in-the-wild to better understand the types of vulnerabilities that are being exploited and exploitation techniques being used. This category of analysis and insight has driven a series of mitigation improvements that has broken widely used exploitation techniques and in some cases virtually eliminated entire classes of vulnerabilities.
In this presentation, we'll share more details on how this analysis is performed at Microsoft, how it has helped drive improvements, and how we have measured the success of those improvements. This presentation will also describe Microsoft's unique proactive approach to software security assurance which embraces offensive security research and extends traditional "red team" operations into the software security world. This approach replaces traditional software security design and implementation reviews with a true end-to-end simulation of attacks in the wild by spanning vulnerability discovery, exploit development, and mitigation bypass identification. This approach enables Microsoft to concretely evaluate the effectiveness of mitigations, identify gaps in protection, and provide concrete metrics on the cost and resources required to develop an exploit in a given scenario. In other words, this provides concrete data to help Microsoft be proactive about making holistic platform security improvements rather than simply waiting and reacting to what we see attackers do in-the-wild. In order to help drive these points home, this presentation will describe a number of mitigation improvements that have been made in Windows 10 and the upcoming Windows 10 Anniversary Update. We will show how these improvements were supported by the above methods and what impact we expect these improvements to have going forward. This portion of the presentation can be seen as a follow-on to our "Exploit Mitigation Improvements in Windows 8" presentation which was given at Black Hat USA 2012.
Introduced in Windows 10, Segment Heap is the native heap used in Windows app (formerly called Modern/Metro app) processes and certain system processes. This heap is an addition to the well-researched and widely documented NT heap that is still used in traditional application processes and in certain types of allocations in Windows app processes.
One important aspect of the Segment Heap is that it is enabled for Microsoft Edge which means that components/dependencies running in Edge that do not use a custom heap manager will use the Segment Heap. Therefore, reliably exploiting memory corruption vulnerabilities in these Edge components/dependencies would require some level of understanding of the Segment Heap.
In this presentation, I'll discuss the data structures, algorithms and security mechanisms of the Segment Heap. Knowledge of the Segment Heap is also applied by discussing and demonstrating how a memory corruption vulnerability in the Microsoft WinRT PDF library (CVE-2016-0117) is used to create a reliable write primitive in the context of the Edge content process.
Instead of simply emulating old and slow hardware, modern hypervisors use paravirtualized devices to provide guests access to virtual hardware. Bugs in the privileged backend components can allow an attacker to break out of a guest, making them quite an interesting target.
In this talk, I'll present the results of my research on the security of these backend components and discuss Xenpwn, a hypervisor based memory access tracing tool used to discover multiple critical vulnerabilities in paravirtualized drivers of the Xen hypervisor.
If you like virtualization security, race conditions, vulnerabilities introduced by compiler optimizations or are a big fan of Bochspwn, this is the right talk for you. | <urn:uuid:7941e827-3f95-44ac-9dd1-556ae46d7dc9> | CC-MAIN-2017-04 | https://www.blackhat.com/us-16/briefings.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00244-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936206 | 35,459 | 2.515625 | 3 |
Government regulations designed to divert electronics waste from landfills have taken an interesting and unintended turn.
“Take-back” programs promoted by state and federal environmental advocates have been successful in diverting more than 290,000 tons of e-waste away from landfills and toward responsible recyclers each year.
However, these programs have also promoted electronics hoarding. Demand for the glass from used electronics is at an all-time low, which leads companies to avoid the process and expense of partnering with a certified electronics recycler like HOBI International.
The “glass tsunami” or stockpile of roughly 660 million pounds of glass being stored in warehouses across the country will cost anywhere between $85 million to $360 million to responsibly recycle.
Twenty-two states have already enacted laws that make electronics manufacturers such as Sony, Toshiba and Apple financially responsible for recycling their old equipment. Without proper regulatory standards from government overseers, however, fraud has been a huge issue.

"Paper transactions" is a quietly acknowledged industry term for recyclers buying fraudulent paperwork that claims e-waste they never actually collected.

The federal government, one of the largest producers of e-waste, has strengthened its oversight of e-waste under the Obama Administration. Unfortunately, federal agencies have been ineffective at tracking their e-waste, and large amounts continue to be disposed of through auction and online outlets.
“In these auctions, the waste is often sold to a first layer of contractors who promise to handle it appropriately, only to have the most toxic portion subsequently sold to subcontractors who move it around as they wish,” according to The New York Times.
Simply put, companies see more profit in recycling computers, cellphones and printers because they contain more precious metals. Electronics like televisions and old monitors are not as profitable and can slip through the cracks by being hoarded in warehouses, dumped in landfills or shipped abroad.

Some companies are even refusing to accept CRTs and TVs for recycling.

It is a very real possibility that some small-time recyclers who are in over their heads with the amount of e-waste they have stored will abandon the stash due to insufficient profits.

In hopes of diminishing this problem while keeping the waste stream free of e-waste, a number of states have provided recyclers with payments for proper disposal, and electronics companies have pitched in with assistance. A number of electronics recyclers have also developed innovative technology to clean lead from the tube glass.
LIVERMORE, Calif., Dec. 22 — Sandia National Laboratories has formed an industry-funded Spray Combustion Consortium to better understand fuel injection by developing modeling tools. Control of fuel sprays is key to the development of clean, affordable fuel-efficient engines.
Intended for industry, software vendors and national laboratories, the consortium provides a direct path from fundamental research to validated engineering models ultimately used in combustion engine design. The three-year consortium agreement builds on Department of Energy (DOE) research projects to develop predictive engine fuel injector nozzle flow models and methods and couple them to spray development outside the nozzle.
Consortium participants include Sandia and Argonne national laboratories, the University of Massachusetts at Amherst, Toyota Motor Corp., Renault, Convergent Science, Cummins, Hino Motors, Isuzu and Ford Motor Co. Data, understanding of the critical physical processes involved and initial computer model formulations are being developed and provided to all participants.
Sandia researcher Lyle Pickett, who serves as Sandia’s lead for the consortium, said predictive spray modeling is critical in the development of advanced engines.
“Most pathways to higher engine efficiency rely on fuel injection directly into the engine cylinder,” Pickett said. “While industry is moving toward improved direct-injection strategies, they often encounter uncertainties associated with fuel injection equipment and in-cylinder mixing driven by fuel sprays. Characterizing fuel injector performance for all operating conditions becomes a time-consuming and expensive proposition that seriously hinders engine development.”
Industry has consequently identified predictive models for fuel sprays as a high research priority supporting the development and optimization of higher-efficiency engines. Sprays affect fuel-air mixing, combustion and emission formation processes in the engine cylinder; understanding and modeling the spray requires detailed knowledge about flow within the fuel injector nozzle as well as the dispersion of liquid outside of the nozzle. However, nozzle flow processes are poorly understood and quantitative data for model development and validation are extremely sparse.
“The Office of Energy Efficiency and Renewable Energy Vehicle Technologies Office supports the unique research facility utilized by the consortium to elucidate sprays and also supports scientists at Sandia in performing experiments and developing predictive models that will enable industry to bring more efficient engines to market,” said Gurpreet Singh, program manager at the DOE’s Vehicle Technologies Office.
Performing experiments to measure, simulate, model
Consortium participants already are conducting several experiments using different nozzle shapes, transparent and metal nozzles and gasoline and diesel type fuels. The experiments provide quantitative data and a better understanding of the critical physics of internal nozzle flows, using advanced techniques like high-speed optical microscopy, X-ray radiography and phase-contrast imaging.
The experiments and detailed simulations of the internal flow, cavitation, flash-boiling and liquid breakup processes are used as validation information for engineering-level modeling that is ultimately used by software vendors and industry for the design and control of fuel injection equipment.
The goals of the research are to reveal the physics that are general to all injectors and to develop predictive spray models that will ultimately be used for combustion design.
“Predictive spray modeling is a critical part of achieving accurate simulations of direct injection engines,” said Kelly Senecal, co-founder of Convergent Science. “As a software vendor specializing in computational fluid dynamics of reactive flows, the knowledge gained from the data produced by the consortium is invaluable to our future code-development efforts.”
Industry-government cooperation to deliver results
Consortium participants meet on a quarterly basis where information is shared and updates are provided.
“The consortium addresses a critical need impacting the design and optimization of direct injection engines,” Pickett said. “The deliverables of the consortium will offer a distinct competitive advantage to both engine companies and software vendors.”
Source: Sandia National Laboratories | <urn:uuid:33418a06-db8e-46b5-9ad5-ac160a517566> | CC-MAIN-2017-04 | https://www.hpcwire.com/off-the-wire/sandia-national-laboratories-forms-new-consortium/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00392-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.910325 | 806 | 2.625 | 3 |
The Internet of Things Means Business

By Samuel Greengard | Posted 2014-02-19
Connected devices and networked machines are enabling industry and government to collect information and act on it in ways that will redefine IT and business.
One of the remarkable yet sobering realities of business is how quickly the world has become networked—and interconnected. Over the last decade, wireless technologies have extended the reach of computers to almost every corner of the planet and have left no enterprise untouched. A spate of machines—including desktop computers, laptops, tablets and smartphones—makes data available and accessible in real time.
As people and computers become inextricably linked through these technologies, the next phase of computing is already taking shape: connected objects that are part of the Internet of things (IoT). Increasingly sophisticated devices and apps—along with sensors embedded into everything from ordinary household objects to industrial machinery—are enabling businesses, government agencies and other institutions to collect information and act on it in ways that promise to redefine technology and business.
"We are witnessing a very fast transition to embedded intelligence in all sorts of items," explains Chris Curran, chief technologist at consulting firm PwC. "The ability to extract data from a wide array of objects and devices helps businesses analyze things and gain far better insights. Instead of making educated guesses, it's possible to tap into data and analytics in order to understand patterns, trends and behavior in a more thorough and comprehensive way."
The Internet of things (also called the Internet of everything and, in the business world, the Industrial Internet) is expanding at a rapid clip. Cisco Systems predicts that the number of Internet-connected devices will reach 25 billion by 2015 and top 50 billion by 2020. The firm also forecasts that 99 percent of physical objects will eventually become part of a network through technologies such as cellular, WiFi, Bluetooth, RFID and Near Field Communication (NFC).
Almost anything and everything—from trees, milk cartons and medical equipment to roads, bridges, vehicles and power generators—can be equipped with sensors that collect and transmit data about consumption, usage patterns, location and more. The IoT beckons with the promise of helping manufacturers, health care providers, the military, retailers and others better track equipment, supplies and people; ratchet up marketing; understand behavior; build smarter vehicles; detect machine failures and, quite simply, understand the world in ways that weren't possible only a few years ago.
"What's special about connected devices is that they can continuously report about usage, operating behavior, conditions and other information," says Vijitha Kaduwela, founder and CEO of consulting firm Kavi Associates. "They generate a lot of data that can be analyzed and acted upon."
He points out that there are a huge number of potential sources of data and enormous diversity in the type of data that organizations can collect. "In terms of business value, this data is very complementary to the data generated by operations, sales, marketing, finance and other departments in the enterprise," Kaduwela explains.
Embracing the Opportunities
Businesses are beginning to spot the opportunities offered by the IoT. For example, Hawaiian Legacy Hardwoods, a Honolulu-based lumber investment and ecotourism firm that grows Koa hardwood trees, has planted more than 225,000 trees over the last four years. It allows 25 percent of the trees to be harvested and protects the remaining 75 percent through a permanent reforestation program.
The organization maintains a database that contains detailed information about each tree. In order to track its inventory, the firm has tagged every tree with passive RFID tags that contain GPS coordinates, information about seed stock and fertilization schedules. "We can keep track of pretty much any event involving the tree," says CIO William Gilliam. "We simply scan it, record it and log it." | <urn:uuid:183fddb6-ff89-46f9-8fe1-71b6ac4bf58e> | CC-MAIN-2017-04 | http://www.baselinemag.com/innovation/the-internet-of-things-means-business.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280763.38/warc/CC-MAIN-20170116095120-00300-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942171 | 790 | 2.625 | 3 |
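The scan-record-log workflow Gilliam describes can be sketched as an event log keyed by tag ID. All tag IDs, coordinates and field names below are invented for illustration, not taken from Hawaiian Legacy Hardwoods' actual system:

```python
from datetime import datetime, timezone

# Hypothetical tree inventory: each RFID tag ID maps to static metadata
# (GPS coordinates, seed stock) plus a growing log of scan events.
inventory = {
    "TAG-000184": {
        "gps": (19.8968, -155.5828),   # invented coordinates
        "seed_stock": "Koa lot 42",
        "events": [],
    }
}

def log_scan(tag_id, event):
    """Append a timestamped event to a tree's record."""
    record = inventory[tag_id]
    record["events"].append({
        "time": datetime.now(timezone.utc).isoformat(),
        "event": event,
    })

log_scan("TAG-000184", "fertilized")
log_scan("TAG-000184", "height measured: 2.3 m")
print(len(inventory["TAG-000184"]["events"]))  # 2
```

In a real deployment the dictionary would be a database table, but the scan-then-append pattern is the same.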
What are “cookies”?
“Cookies” (also known as HTTP cookies, web cookies or browser cookies) are simply small pieces of data, which are stored as text files on your computer, whenever you visit certain websites. Their typical purpose is to help sites remember particular actions you may have done there in the past. For example, cookies may track when you have logged into a site, visited certain pages or clicked certain buttons.
RX websites use cookies to:

- Remember when you have logged into a site.
- Remember your user preferences, searches and favourites.
- Track your usage of a site, via Google Analytics©.
- Track the success of our marketing campaigns.

Additionally, RX websites have a small set of carefully selected third-party providers, whose cookies:
- Target more relevant advertisements to you (DoubleClick™)
- Enable social media sharing (AddThis©, Facebook©, Twitter©, YouTube©)
You can view a full list of all cookies used by RX on this website in the “What cookies do we use and why?” section.
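Mechanically, a cookie is just a named value plus a few attributes sent in an HTTP header. A quick illustration using Python's standard library — the header below mimics the ISM.Cookies entry listed later on this page, but the exact attributes are an assumption:

```python
from http.cookies import SimpleCookie

# Parse a Set-Cookie style header like the one that records your
# dismissal of the "Privacy and Cookies" message (attributes assumed).
cookie = SimpleCookie()
cookie.load('ISM.Cookies=dismissed; Max-Age=63072000; Path=/')

morsel = cookie['ISM.Cookies']
print(morsel.value)        # "dismissed" - the stored data
print(morsel['max-age'])   # "63072000" seconds, roughly 24 months
```

Nothing in a cookie is executable; the browser simply sends the name/value pair back on later requests to the same site.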
Are cookies harmful?
No. Cookies are simply small text files; they are not programs, cannot execute code and cannot carry viruses. Despite this, if you do wish to disable or remove cookies, please see the "Help" section of your browser or mobile device. Each browser or device handles the management of cookies differently, so you will need to refer to the appropriate "Help" documentation. However, as mentioned, please be aware that cookies are essential for certain features of an RX site to work properly.
Why are we telling you this?
What cookies do we use and why?
The following shows the full list of platform cookies, used throughout this RX website.
| Cookie | Provider | Expiry | Purpose |
| --- | --- | --- | --- |
| ASP.NET_SessionId | Infosecurity Magazine | Session | This cookie is necessary for us to identify if you are logged in to the website or not and perform other essential site functions. The cookie doesn't store any personal information and is deleted when you finish browsing the website. |
| ISM.ScreenSize | Infosecurity Magazine | Session | This cookie allows the website to store your screen size so we can optimise the delivery of images to your device. |
| ISM.Cookies | Infosecurity Magazine | 24 months | This cookie is set when you dismiss the "Privacy and Cookies" message, displayed at the bottom of the Infosecurity Magazine website. Once set, it ensures you will not be shown the message again. |
| _ga | Google Analytics | 18 months | Google Analytics© is an analytics solution, which provides information about your activity on the Infosecurity Magazine website. This helps us to understand what works on the site and better tailor it to your needs. |
| _utma | Google Analytics | 18 months | Another Google Analytics© cookie, which provides information about your activity on an RX website. As with WebAbacus, this helps us to understand what works on the site and better tailor it to your needs. This cookie is used to determine unique visitors to an RX website. It is updated with each page view. |
| _utmb | Google Analytics | 30 minutes | Another Google Analytics© cookie. It is used to establish a user session with an RX website. |
| _utmc | Google Analytics | None | Another Google Analytics© cookie. It determines whether or not a new session has been created. |
| _utmz | Google Analytics | 6 months | Another Google Analytics© cookie. It is used to identify how you arrived at the site, whether via a direct method, a referring link, a website search or a campaign, such as an advertisement or email link. This cookie is used to calculate search engine traffic, advertisement campaigns and page navigation. It is updated with each page view. |
| _gads | DoubleClick | 18 months | This cookie is used to improve our advertising. Some common applications are to target advertising based on what's relevant to you, to improve reporting on campaign performance and to avoid showing ads you have already seen. |
| _atuvc | AddThis | Session | This cookie, provided by Clearspring Technologies Inc., is used to provide you with the option to share content to your favourite social networks. AddThis collects basic information on how you use the service but any data is always anonymous. |
Putting the Laws into Practice

By Baselinemag | Posted 2008-04-30
Virtualization technology can deliver cost savings and improve IT performance, but it also introduces new security concerns. In this summary of a Burton Group report, security expert Pete Lindstrom examines the security considerations unique to virtualized IT environments.
The answer to the question of security rarely has an absolute value. Instead, it is a matter of degrees. For most enterprises, the decision is not whether to virtualize, because virtualization is here now. The decision involves determining where and when to apply controls that are sufficient in the environment based on risk tolerance. Ultimately, whether virtualization is bane or boon for security depends on how the systems are configured, deployed and managed.
To manage these new security concerns, it’s important to understand the underpinnings of today’s virtual systems.
The primary components of a virtual environment are:
- Virtual Machines and their accompanying guest operating systems: These are the core components of the virtual architecture.
- Virtual Machine Monitor (VMM): The software component responsible for managing interactions between the VM and the physical system.
- Hypervisor and/or host operating system: The software that handles kernel operations.
A virtualized environment consists of a VMM and one or more VMs. The VMs and VMM interact with either a hypervisor or a host operating system to access hardware, local I/O and networking resources. In addition to these components, virtualization architectures leverage virtual networking, virtual storage and terminal service capabilities to complete their architectures.
This minimum set of components makes up virtual environments in several distinct ways:
- Type 1 Virtual Environments are considered full virtualization environments and have VMs running on a hypervisor that interacts with the hardware.
- Type 2 Virtual Environments also are considered full virtualization environments, but work with a host operating system instead of a hypervisor (though sometimes the VMM is called a hypervisor).
- Paravirtualized environments make performance gains by eliminating some of the emulation that occurs in full virtualization environments.
- Other designations include hybrid virtual machines (HVMs) and hardware-assisted techniques.
From a security perspective, the most important thing to remember is that there is a more significant impact in a Type 2 environment where a host operating system with user applications and interfaces is running outside of a VM at a level lower than the other VMs. Because of the architecture, the Type 2 environment increases risk through its incorporation of potential attacks against the host operating system. For example, a laptop running VMware with a Linux VM on a Windows XP system inherits the attack surface of both operating systems, plus the virtualization code of the VMM. | <urn:uuid:333e1242-e426-4c00-a068-7bc832ed2b97> | CC-MAIN-2017-04 | http://www.baselinemag.com/c/a/Security/5-Laws-of-Virtualization-Security/2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00510-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.906575 | 566 | 2.53125 | 3 |
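One way to picture why Type 2 raises risk: the reachable attack surface is roughly the union of every software layer's exposures, and a Type 2 stack adds a full host operating system (with user applications) underneath the VMs. A toy sketch, in which all component and surface names are invented for illustration:

```python
# Toy model: the total attack surface of a virtualization stack is the
# union of the surfaces of every layer an attacker can reach.
# Component and surface names are illustrative, not real vulnerability data.
type1_stack = {
    "hypervisor": {"hypercall interface", "device emulation"},
    "guest_os":   {"network services", "local privilege escalation"},
}
type2_stack = {
    "host_os":    {"network services", "local privilege escalation",
                   "user applications"},
    "vmm":        {"device emulation", "guest-to-host escapes"},
    "guest_os":   {"network services", "local privilege escalation"},
}

def attack_surface(stack):
    """Union of every layer's exposed surface."""
    return set().union(*stack.values())

print(len(attack_surface(type1_stack)))  # 4
print(len(attack_surface(type2_stack)))  # 5 - Type 2 adds the host OS exposures
```

The laptop example above fits this model: the Windows XP host contributes its entire surface on top of the Linux guest's and the VMM's.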
Windows 8 allows multiple users to share the same computer using different accounts. This allows each user to have their own location where they can store personal information such as documents, pictures, videos, saved games, and other files so that they are not mixed in with the files of other users on the same computer. Having multiple accounts also plays a strong role in Windows Security. It is advised that each account on the computer be set up as a Standard User, which has limited permissions, so that it is harder for malware to infect the computer. You should then create a separate account that will be for the Administrator of the computer. This account, though, would only be used to administer the computer as necessary and would not be used as a day-to-day account. Using this security plan significantly reduces the chance of your computer becoming infected.
Windows 8 also introduces the ability to create and login as a Local account or a Microsoft account. A Local account is an account that is local to your computer and is not integrated into any of Microsoft's online services. This account is the same as what was used in previous Windows versions. A Microsoft account, which was previously known as a Windows Live ID, is an online account that you register with Microsoft and that allows you to integrate all of Microsoft's online services into Windows 8. These services include the Windows Store, SkyDrive, Calendar, Hotmail, and the ability to sync your account settings and preferences to other Windows 8 machines you may use. Ultimately, there is no wrong choice when selecting the type of account to use as you have the ability to switch between a Microsoft account and a local account at any time.
How to create a user in Windows
To create a new user in Windows, please make sure you are logged in with an account that has Administrator privileges. Now, go to the Windows 8 Start Screen and type Add User. When the search results appear click on the Settings category as shown below.
Now click on the option labeled Give other users access to this computer, which will open the User Settings screen.
Scroll down and click on the Add User option as shown above. You will now be at a screen prompting you to enter the user's email address.
By default, the above screen prompts you to enter an email so that you create a Microsoft account. If you wish to create a Microsoft account, enter your email address and click on the Next button. If the email address is not an existing Microsoft account you will be prompted to register one. When the registration process is completed, Microsoft will send an email to the inputted email address. In this email will be a link that you need to click on in order to verify that you want this Microsoft account used on this computer.
If you do not wish to use a Microsoft account, you should instead click on the Sign in without a Microsoft account option in the screen above. You will be brought to a screen where Windows will ask again if you are sure you wish to make a Local account. Click on the Local account button and this will bring you to a new screen where you need to input the information you wish to use for the Local account. At this screen you need to fill in the desired user name, password, and a hint that will be used to help you remember your password. When you are done filling in the information, please click on the Next button.
Your account should now be created and you will see a confirmation screen similar to the one below.
If the new account belongs to a child and you wish to enable Family Safety, please place a check mark in the checkbox and click on the Finish button.
Your new account has now been created.
When it comes to understanding bipolar and mental disorders, a wealth of relevant data can be obtained from a smartphone, as it’s what most people interact with constantly on a daily basis.
A new study from the Center for Research and Telecommunication Experimentation for Networked Communities (CREATE-NET) in Trento, Italy, explores how analysing data on a bipolar patient's daily activity, collected from smartphone sources such as GPS sensors and voice calls, could help predict the onset of an episode.
Venet Osmani, from CREATE-NET, who conducted the study, monitored 12 patients’ daily activities, each for 12 weeks on average. He collected more than 1000 days of smartphone sensor data. The study was approved by the ethics board of the Innsbruck University Hospital in Austria.
“Behavioural data will have a significant impact on our understanding of mental disorders. Because symptoms of most mental disorders are manifested as changes in an individual’s behaviour, analysing such changes could lead to a better understanding of these types of diseases and possible treatments,” he wrote in his research paper.
“Detecting patients’ change of state (which can indicate onset of an episode) can lead to a visit to the clinic and allow early intervention.”
Osmani said current clinical rating scales for diagnosing bipolar disorders - such as Hamilton Depression Rating Scale (HAMD) and Bipolar Spectrum Diagnostic Scale (BSDS) - have their drawbacks as they tend to be subjective. The study aims to address this issue through more objective data analysis to better understand bipolar disorders and the behaviour changes that lead to episodes.
Patients in the study underwent a psychiatric mental state examination every three weeks, including one at the beginning and one at the end. This was used for ‘ground truth’ data to compare the predicted result against the actual situation. The predictive models were assessed against whether the examination gave a patient a score that states either an episode of severe depression, an episode of severe mania, or moderate and mild conditions.
“We chose a period of seven days before and two days after the mental state examination for the sensor data. This was based on the assumptions elicited from discussions with the psychiatrists that state changes are gradual and the probability of a major change within a few days is low.”
Osmani first looked at the correlation between a patient’s activity and their mental state using data collected from a smartphone accelerometer. Dividing the day into morning, afternoon, evening and night, he calculated an activity score for each part of the day. A strong correlation was established between a patient’s activity and their mental state at particular parts of the day.
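That scoring-and-correlation step can be sketched as follows. All numbers below are invented; the study's actual data and scoring formula are not given in this article:

```python
import numpy as np

# Synthetic accelerometer magnitudes sampled over one day, split into
# morning / afternoon / evening / night bins (values are made up).
day_parts = {
    "morning":   [0.9, 1.2, 1.1, 0.8],
    "afternoon": [1.5, 1.7, 1.4, 1.6],
    "evening":   [1.0, 0.9, 1.1, 1.2],
    "night":     [0.1, 0.2, 0.1, 0.1],
}
activity_scores = {part: float(np.mean(v)) for part, v in day_parts.items()}

# Correlating one part-of-day's score against a clinician-rated mood
# scale across many days could then use e.g. a Pearson coefficient:
scores  = np.array([1.0, 1.4, 0.6, 0.3, 1.2])  # one part-of-day, 5 days
ratings = np.array([2, 3, 1, 0, 3])            # hypothetical mood ratings
r = np.corrcoef(scores, ratings)[0, 1]
print(f"Pearson r = {r:.2f}")  # about 0.99 for this made-up data
```

A strong positive or negative coefficient for a given part of the day is the kind of relationship the study reports between activity and mental state.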
GPS data were used to train a Naïve Bayes classification model to predict a patient's mental state, which achieved 81 per cent mean accuracy when tested against the ground truth. Osmani also tried k-Nearest Neighbour, a J48 decision tree and a conjunctive rule learner, all of which yielded similar accuracy results.
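The article does not detail the study's exact features, but the classification step can be sketched with a hand-rolled Gaussian Naïve Bayes, which models each behavioural feature as normally distributed within a mental state and picks the most probable state. All data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily features: [distance travelled (km), total call time (min)].
# Episode days (label 1) are generated with lower activity on average;
# the numbers are invented, not the study's data.
stable  = rng.normal([12.0, 30.0], [3.0, 8.0], size=(200, 2))
episode = rng.normal([ 3.0, 10.0], [2.0, 5.0], size=(200, 2))
X = np.vstack([stable, episode])
y = np.array([0] * 200 + [1] * 200)

def fit(X, y):
    """Per class: feature means, feature variances, and class prior."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0), len(Xc) / len(X))
    return params

def predict(params, X):
    """Pick the class with the highest log-posterior per sample."""
    scores = []
    for mu, var, prior in params.values():
        log_lik = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var)
        scores.append(log_lik.sum(axis=1) + np.log(prior))
    return np.array(list(params))[np.argmax(scores, axis=0)]

params = fit(X, y)
accuracy = (predict(params, X) == y).mean()
print(f"training accuracy: {accuracy:.2f}")  # near 1.0 on this easy data
```

scikit-learn's `GaussianNB` implements the same idea behind a fit/predict API; the study's harder problem is that real behavioural features are far less cleanly separated than this toy example.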
Osmani then included voice phone call patterns and sound analysis, as bipolar patients experience voice changes during an episode. By fusing all the sensor data together and considering all disease-relevant aspects of behaviour, he was able to achieve a precision of 97.19 per cent (the percentage of positive predictions that were correct) and a recall of 97.36 per cent (how often the model found the actual positives).
“We plan to investigate whether similar results can be obtained with a higher number of patients, monitored over a longer period of time," said Osmani.
The research team has begun the initial steps in this direction through the Nympha-MD (www.nympha-md-project.eu), a follow-up European project.
Join the CIO Australia group on LinkedIn. The group is open to CIOs, IT Directors, COOs, CTOs and senior IT managers. | <urn:uuid:bc980f58-2390-4eae-b7fb-859acd47eac1> | CC-MAIN-2017-04 | http://www.cio.com.au/article/586712/smartphone-sensor-data-help-fight-bipolar/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00173-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944728 | 813 | 2.9375 | 3 |
Definition: A binary relation R for which a R a for all a.
See also symmetric, transitive, irreflexive.
Note: The relation "equals" is reflexive since everything is equal to itself. However the relation "less than" is not reflexive. The relation "likes" is not reflexive either since some people do not like themselves.
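When a relation is stored as a set of ordered pairs, the definition translates directly into a membership check (the relations below are invented examples in the spirit of the entry's note):

```python
def is_reflexive(relation, domain):
    """A relation R on a set is reflexive iff (a, a) is in R for every a."""
    return all((a, a) in relation for a in domain)

people = {"ann", "bob"}
equals = {(a, a) for a in people}          # everything equals itself
likes = {("ann", "ann"), ("ann", "bob")}   # bob does not like himself

print(is_reflexive(equals, people))  # True
print(is_reflexive(likes, people))   # False
```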
Entry modified 17 December 2004.
Cite this as:
Paul E. Black, "reflexive", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/reflexive.html | <urn:uuid:72edb626-21a8-4fd4-a8ec-0d68e44d9607> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/reflexive.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00567-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.898754 | 195 | 3.15625 | 3 |
Researchers in Japan are proposing an interesting way to get rid of space debris: mount a laser on the International Space Station and zap the debris with its beam.
Land-based lasers have been proposed for such a task in the past, but researchers at Japan's largest research institution, RIKEN, want to combine a super-wide field-of-view telescope developed by RIKEN, which would detect objects, with a recently developed high-efficiency laser system, known as CAN, that could track space debris and remove it from orbit.
The RIKEN EUSO telescope, which will be used to find debris, was originally planned to detect ultraviolet light emitted from air showers produced by ultra-high energy cosmic rays entering the atmosphere at night. “We realized that we could put it to another use,” says Toshikazu Ebisuzaki, who coauthored a paper on the proposal. “During twilight, thanks to EUSO’s wide field of view and powerful optics, we could adapt it to the new mission of detecting high-velocity debris in orbit near the ISS.”
The CAN laser was originally developed to power particle accelerators. It consists of bundles of optical fibers that act in concert to efficiently produce powerful laser pulses. Combining these two instruments will produce a system capable of tracking down and deorbiting the most dangerous space debris, around one centimeter in size. The intense laser beam focused on the debris will produce high-velocity plasma ablation, and the reaction force will reduce the debris's orbital velocity, leading to its reentry into the Earth's atmosphere, the researchers stated.
The group plans to deploy a small proof-of-concept experiment on the ISS, with a small, 20-centimeter version of the EUSO telescope and a laser with 100 fibers. “If that goes well, we plan to install a full-scale version on the ISS, incorporating a three-meter telescope and a laser with 10,000 fibers, giving it the ability to deorbit debris with a range of approximately 100 kilometers. Looking further to the future, we could create a free-flyer mission and put it into a polar orbit at an altitude near 800 kilometers, where the greatest concentration of debris is found.” said Ebisuzaki in a statement.
According to Ebisuzaki, “Our proposal is radically different from the more conventional approach that is ground based, and we believe it is a more manageable approach that will be accurate, fast, and cheap. We may finally have a way to stop the headache of rapidly growing space debris that endangers space activities. We believe that this dedicated system could remove most of the centimeter-sized debris within five years of operation.”
At the current density of debris, there will be an in-orbit collision about every five years. The researchers went on to say that about 10 to 15 large objects, or about seven tons of debris, need to be removed from space each year to reduce the risk of collisions and damage to other spacecraft, according to research presented at the 6th European Conference on Space Debris in 2014.
Space debris consists of human-made objects in Earth's orbit that no longer have a useful purpose, such as pieces of launched spacecraft. It is estimated that up to 600,000 objects larger than 1 centimeter, and at least 16,000 larger than 10 cm, orbit Earth. An object larger than 1 cm hitting a satellite would damage or destroy sub-systems or instruments on board, and a collision with an object larger than 10 cm would destroy the satellite, according to Commission figures. The number of objects larger than 1 cm is expected to reach around 1 million by 2020.
In recent years, new fiber optic communication technologies have continued to emerge, significantly improving communication capacity and expanding the scope of application of fiber optic communication.
1. Common optical fiber
Ordinary single-mode fiber (G.652) is the most commonly used optical fiber. As optical communication systems developed, repeater spacing and single-wavelength channel capacity increased, and the performance of G.652.A fiber could be further optimized. However, its low attenuation in the 1550 nm region cannot be fully exploited, because the fiber's minimum attenuation coefficient and its zero-dispersion point do not lie in the same wavelength region. The cutoff-wavelength-shifted single-mode fiber specified in ITU-T G.654 and the dispersion-shifted single-mode fiber specified in ITU-T G.653 were defined to achieve such improvements.
2. Core network fiber optic cable
China has fully adopted fiber optic cable on its trunk lines (including national, provincial, and regional trunks). Multimode fiber has been phased out; all trunks use single-mode fiber, including G.652 and G.655 fiber. G.653 fiber was once used in China but will see no further deployment. G.654 fiber is not used in China's terrestrial cables because it cannot significantly increase system capacity. Trunk cables use discrete fibers rather than optical fiber ribbon. Trunk cable is mainly used outdoors; the tight-buffered and skeleton (slotted-core) constructions used previously in these cables have been discontinued.
3. Access network fiber optic cable
Access network cables cover short distances, with many branches and frequent star-topology insertion points, and the fiber count is usually increased to raise network capacity. Especially in local ducts of limited diameter, it is very important to increase the fiber count and packing density of the cable while reducing its diameter and weight. The access network uses G.652 ordinary single-mode fiber and G.652.C low-water-peak single-mode fiber. Low-water-peak single-mode fiber, suitable for dense wavelength division multiplexing (DWDM), has seen a small amount of use in China.
4. Indoor fiber optic cable
Indoor fiber optic cable often needs to carry voice, data, and video signals at the same time, and may also be used for telemetry and sensing. In the author's view, the International Electrotechnical Commission (IEC) classification of indoor fiber optic cable should include at least two major parts: central-office cable and premises (integrated wiring) cable. Central-office cable is laid in the central office or other telecommunications rooms, installed in a closely ordered manner with a relatively fixed position. Premises wiring cable is installed indoors at the user end and is mainly handled by users, so its vulnerability must be considered more strictly than that of central-office cable.
5. Communications cable in the power lines
Because optical fiber itself is a dielectric, a fiber optic cable can be made all-dielectric, completely metal-free. Such an all-dielectric cable is an ideal communication line for the power system. Two all-dielectric cable structures are used for laying along power mast routes: the all-dielectric self-supporting (ADSS) structure, and a wrapped structure wound around the overhead line. Because ADSS cable can be laid separately, it has been widely used in the upgrading of China's power transmission system, and it is currently a product in high domestic demand.
Real-time voice and video communication can be achieved with the internet standard SIP (Session Initiation Protocol), which was developed by the IETF (Internet Engineering Task Force) and published as RFC 3261. SIP is a signaling protocol used to establish voice and video calls: within an IP network, one or more participants can create, modify, or end sessions with it. As one of the core Voice over IP protocols, it is central to explaining how VoIP works. Before going further, it helps to understand the term "session" in a communication network. In the simplest case, a session is a straightforward two-way phone call; a multimedia conference session, however, can involve many participants.
The Session Initiation Protocol working group is currently responsible for improving and maintaining the standard, preserving its core function of initiating interactive communication sessions for users. The group also works to keep the model's basic architecture simple by reusing existing internet protocols.
The peer-to-peer protocol SIP requires only a simple but scalable central network (one whose ability to handle a growing workload scales gracefully), with intelligence distributed to the network edge and embedded in the endpoints (the terminating hardware or software devices). SIP is designed for a wide range of services: internet conferencing, instant messaging, IP telephony, presence, voice contact, video communication, data collaboration, live gaming, application sharing, and more. SIP plays the same role for real-time unicast and multicast communication that HTTP plays for the web.
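To give a feel for the signaling, the snippet below assembles a minimal SIP INVITE request, the message that starts a session. The header values (branch, tag, addresses) are illustrative placeholders, and this is a sketch of the RFC 3261 message shape, not a complete implementation:

```python
def sip_invite(caller, callee, host, call_id, cseq=1):
    """Assemble a minimal, illustrative SIP INVITE request as text."""
    lines = [
        f"INVITE sip:{callee} SIP/2.0",
        f"Via: SIP/2.0/UDP {host};branch=z9hG4bK776asdhds",
        f"From: <sip:{caller}>;tag=1928301774",
        f"To: <sip:{callee}>",
        f"Call-ID: {call_id}@{host}",
        f"CSeq: {cseq} INVITE",
        f"Contact: <sip:{caller}>",
        "Content-Length: 0",
        "",  # blank line ends the headers
        "",
    ]
    return "\r\n".join(lines)  # SIP uses CRLF line endings

msg = sip_invite("alice@client.example.org", "bob@server.example.net",
                 "client.example.org", "a84b4c76e66710")
print(msg.splitlines()[0])  # the request line
```

A real user agent would negotiate media via an SDP body and handle the 100/180/200 response sequence before the call is established.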
In addition, SIP forking refers to splitting a single SIP call across several SIP endpoints. With this powerful feature, one call can ring many endpoints simultaneously; for example, it can ring your desk phone and your Android SIP phone at the same time, with no forwarding rules required on either device. As a practical example, an office SIP device can let a secretary answer all calls to the boss's extension whenever the boss is out of the office. Such SIP telephone systems offer security, considerable cost savings, and improved user mobility and efficiency.
In short, SIP was originally developed within the IETF MMUSIC (Multiparty Multimedia Session Control) working group, but a number of standards bodies, associations, and other groups now make considerable use of it, including the IETF PINT working group, IMTC, ETSI TIPHON, and PacketCable DCS. The application-layer SIP protocol is designed to be independent of the transport layer: it can run over TCP, UDP, or SCTP, and it incorporates several features of HTTP and SMTP.
Think of all of the websites that you utilize online that require passwords to protect your sensitive information. You have a password for your online bank account, your email, your credit card accounts — the list goes on.
Out of all of those websites, however, how many unique passwords do you have? Not too many? If a hacker deciphers even one password to your account, your entire online life would be in serious jeopardy.
According to credit firm Experian, the typical online user has 26 accounts but only five passwords. Couple that with the fact that 90 percent of passwords are vulnerable to cracking, and it becomes clear that passwords alone are not effective at protecting sensitive information.
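A quick back-of-envelope calculation shows why typical passwords crack so easily. The guess rate below is an assumption chosen for illustration, and the math applies only to randomly chosen passwords (human-chosen ones are weaker still):

```python
import math

def entropy_bits(alphabet_size, length):
    """Bits of entropy for a password drawn uniformly at random."""
    return length * math.log2(alphabet_size)

# An 8-character lowercase password vs. a 12-character mixed-case+digits one.
weak = entropy_bits(26, 8)
strong = entropy_bits(62, 12)
print(f"weak: {weak:.1f} bits, strong: {strong:.1f} bits")

# Assume an offline attacker making 10 billion guesses per second.
for bits in (weak, strong):
    seconds = 2 ** bits / 1e10  # time to exhaust the whole space
    print(f"{bits:.1f} bits -> {seconds / 86400:.1f} days to exhaust")
```

The weak password's space can be exhausted in seconds at that rate, which is why reusing one such password across many accounts is so dangerous.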
Quite simply, passwords are failing to get the job done in an era where digital security is of the utmost importance. They are failing because they are easy to crack. A better solution, therefore, is to take a multilayered approach to online security providing protection beyond the initial scope of a password. A multilayered approach to online security involves implementing advanced authentication requirements to verify a user or application’s identity.
In this regard, mobile is playing a pivotal role in the multilayer approach to security at the enterprise level. One advantage of mobile is that applications can use a process called sandboxing, in which applications on a device cannot access the digital information of other applications. This is imperative when it comes to the prevention of advanced malware, as taking the completion of a transaction out of the compromised desktop channel may be the only way to defend against evolving malware threats.
Additional security involves PIN locks and embedded, transparent one-time passcodes (OTP), as well as digital certificates for mobile devices.
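One of the mechanisms mentioned, the one-time passcode, can be sketched with the HMAC-based scheme standardized in RFC 4226 (HOTP). This is a generic illustration of the algorithm, not any particular vendor's implementation:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time passcode (RFC 4226)."""
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The same secret and counter always yield the same passcode, but each
# counter value is used only once, so an intercepted code is useless later.
print(hotp(b"12345678901234567890", 0))
print(hotp(b"12345678901234567890", 1))
```

Time-based OTP (TOTP) is the same construction with the counter derived from the current time, which is what most authenticator apps use.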
Believe it or not, authenticating an identity is much easier to accomplish on a mobile device than on a desktop or laptop computer because these traditional computer platforms were designed to share device memory as a basis for architecture, unlike sandboxed mobile applications.
While 71 percent of IT executives still believe that the traditional desktop or laptop computer is more secure than a mobile device, the reality is that mobile devices are in fact more secure. That is why 65 percent of organizations are placing mobile security as a critical priority moving forward.
Once you understand how comprehensive a multilayered mobile approach is to overall security, a basic password for an important account seems about as safe as locking a bicycle with a rope. While passwords are still the status quo for consumers, organizations looking for advanced security measurements should seriously consider the comprehensive security benefits that mobile technology currently affords. | <urn:uuid:7000e503-00e1-4a74-b5d7-90df5f0a6f07> | CC-MAIN-2017-04 | https://www.entrust.com/passwords-weak-todays-digital-world/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00319-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950053 | 521 | 2.734375 | 3 |
A supercomputer running in China's National Supercomputer Center has taken over the top spot in a closely watched ranking of the world's most powerful supercomputers.
China now operates 42 of the Top 500 supercomputers in the world, confirming the rise of China in the supercomputing realm and putting them second only to the U.S., according to the ranking. Another Chinese system, which had been ranked No. 2, is the third most powerful supercomputer in the world.
The supercomputer that previously held the No. 1 position, a Cray XT5 "Jaguar" system running at the U.S. Department of Energy's (DOE) Oak Ridge Leadership Computing Facility in Tennessee, is now No. 2.
The ranking of the Top 500 most powerful supercomputers in the world is compiled by Hans Meuer of the University of Mannheim in Germany, Erich Strohmaier and Horst Simon of NERSC/Lawrence Berkeley National Laboratory, and Jack Dongarra of the University of Tennessee in Knoxville. The most recent list, the 36th edition, was released late last week.
The Chinese Tianhe-1A system running in China's National Supercomputer Center in Tianjin has achieved a peak performance of 2.57 petaflops per second, according to the ranking. (A petaflop is one quadrillion calculations per second.) The report said there had been rumors the Tianhe-1A could take over the top spot and the system was the subject of a New York Times story last month.
A Chinese system called Nebulae, located at the National Supercomputing Centre in Shenzen, is the third most powerful supercomputer in the world with a performance of 1.27 petaflops per second.
Is Your School Falling Behind The IoT Curve? [Infographic]
“Smart Schools are a great way to engage students at their own level of comfort.”
“My desire for our school is to implement a smart school within the next 5 years. :)”
The comments above reflect the sentiments of the over 600 K-12 and higher education IT managers participating in our survey about smart school technology, a concept similar to the smart city and the smart hospital. In fact, 46% of those surveyed believe smart schools will have a major impact over the next one to two years. The benefits cited include increasing student engagement, taking advantage of mobile learning, enabling more personalized education, improving efficiency, and reducing costs.
What Exactly Is An Internet Of Things Smart School?
The word smart implies an intelligence and awareness, as well as an ability to learn and transform. Smart schools have an infrastructure that enables them to grow, adapt and progress as important environments for learning. Today’s smart school utilizes Internet of Things devices that communicate their status via Wi-Fi. While this can include interactive smart boards, the scope of smart schools reaches far beyond these boards to include iBeacons, wearables, sensors throughout the school, eBooks and tablets, collaborative classrooms, smart lighting and HVAC, and video/motion trackers. Our survey found growing use of robots, augmented reality, facial recognition, parking sensors, attendance tracking, and 3-D printers. These devices provide extensive data for both real-time and subsequent analysis.
Implementation Concerns and Drawbacks
As with all advancements, implementing the devices that enable the smart school brings along a set of concerns to be addressed. Security was cited by just over 50% of the respondents as a potential issue. Others worry about privacy, interoperability, and added expenses. Manageability will be a concern until someone comes up with a single, consistent dashboard to control all the currently-disparate devices and systems throughout the schools.
Reliable Wi-Fi leads the list of most important success factors in implementing smart school technology. Not surprisingly, teacher development, well-designed learning environments, and ensuring that students have appropriate devices are also on the list of important requirements for success.
The Importance of Planning
How new smart school technology is introduced into the school is vitally important. To be successful, this requires an education vision; understanding how the technology improves education. Effective technology roll-outs require user training; adequate infrastructure, especially sufficient Wi-Fi coverage and bandwidth; and coordinated timing. As one IT manager commented, “Getting the teachers and staff on board can sometimes be more challenging than getting the new technology implemented.”
Examples of Smart School Devices In Use By Schools Surveyed:
Cameras and video
Student ID cards
School bus tracking
Smart HVAC system
Supply inventory tracking
Tablets and eBooks
Electric lighting/ maintenance
Athletic bands or wearables
Motion sensing and tracking devices
Airplay and Smart TV Devices
Wireless door locks
Adaptive learning systems
Virtual and augmented reality
Facial recognition systems
Survey size: 612 completed surveys | <urn:uuid:1411cc5b-eefd-437c-9499-5f683218ba97> | CC-MAIN-2017-04 | http://www.extremenetworks.com/mobility-is-driving-the-internet-of-things-smart-school-infographic-2/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00439-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.922953 | 648 | 2.59375 | 3 |
Using Sites in Windows 2000
When dealing with a large enterprise-level Active Directory structure, one of the more important concepts is replication. Replication is the process of sharing Active Directory updates between domain controllers. Many challenges are involved in replicating database changes across a large enterprise. To make this process easier, Windows 2000 uses an organizational structure called a site. In this article, I'll discuss the ways that sites are used within Windows 2000.
What's a Site?
If you're familiar with Exchange 5.5, then you're probably already familiar with the idea of sites. The main difference between Exchange and Windows sites is that whereas an Exchange site consists of a group of mail servers, a Windows 2000 site is made up of a group of domain controllers. Unlike Windows NT version 4.0, Windows 2000 uses what's known as a multimaster domain model. This means that rather than making all administrative changes directly to a primary domain controller and replicating them out, administrative changes can be made to any domain controller. These changes are then replicated to each domain controller.
The Site Model
The multimaster domain model can be a bit chaotic. Imagine a large network with dozens of domain controllers that are constantly trying to replicate changes to each other, and you'll understand how quickly the network could be flooded with replication traffic. To help reduce this constant bombardment, Microsoft implemented the site model. The site model groups domain controllers that are members of the same domain and that are connected by high-speed, low-cost links. Dividing the domain controllers in this way eases the strain caused by replication.
For example, suppose that your domain consists of three domain controllers. Now, imagine that an Ethernet segment connects two of those domain controllers to each other, and the third connects via a dedicated ISDN line. Needless to say, Ethernet offers a speed that's more than sufficient to sustain replication. Therefore, you'd probably want to form a site that contains the two domain controllers connected by the Ethernet segment. Doing so would allow the two domain controllers to replicate freely between each other as needed. And it makes sense because you usually don't have to worry about bogging down an Ethernet segment with replication traffic. If your network is too congested with traffic already, you can install a second network card into each server and form a dedicated segment between the servers that's used solely as a backbone for replication traffic.
Once you've established your initial site, you'll probably want to create a second site to contain the server on the other end of the dedicated ISDN line. The reason for doing so is that ISDN is a slow, and potentially expensive, medium, and you don't want to risk congesting your ISDN link with constant replication traffic. You can solve this problem with the two-site model. Servers within each site will replicate Active Directory changes with each other freely, but servers in different sites will only replicate directory information at scheduled times. You can set the replication schedule to replicate across the slow link at a time when network traffic will be minimal. In the future, as you add servers to the network, you can place them in the site to which they have the most efficient link. If a new server is connected to the rest of the domain only by slow links, you can always create another site.
So far, the model I've given you for creating sites has applied to multi-facility networks. For example, you might use this site model when most of a company's employees reside in an office building, but the network also needs to be linked to a warehouse across town. However, it sometimes makes sense to use multiple sites in one physical location. A good rule of thumb to follow is that each subnet that contains at least one domain controller should be its own site. In fact, you can actually associate individual subnets with sites within the Active Directory.
Remember that if you do decide to use a separate site for each subnet, you should plan carefully how often those sites will replicate with each other. For example, suppose that some of the users in subnet A frequently use some of the network resources found in subnet B. If replication doesn't occur frequently enough, users in subnet A might not be able to see changes made to their resources in B until several hours after the change has occurred. A guideline to follow is connection speed. Basically, if the sites are on different subnets, but those subnets are connected by low-cost, high-speed links, then there's little reason not to replicate the sites more often than you would if they were separated by a slow wide-area connection.
Creating a Site
Creating a site within Windows 2000 is a relatively simple process. First, click Start and select Programs|Administrative Tools|Active Directory Sites And Services. When you do, you'll see the Active Directory Sites and Services snap-in for Microsoft Management Console. In the column to the left, right-click on the Sites folder and select New Site from the context menu. At this point, you'll see the New Object Site dialog box. Enter the name of the site you want to create in the Name field. You should also select the site link object that you want to use for the site from the bottom portion of the dialog box. Usually, if you're establishing your first site, the only available link name will be DEFAULTSITELINK. The default site link is automatically set up to use the IP protocol.
When you install the first domain controller in a site, Windows 2000 will automatically create a site with the name Default-First- Site-Name. If you're planning to use multiple sites in your enterprise, you should definitely change this name to something more fitting to your organizational naming scheme. Even if you don't currently plan to create other sites, it isn't a bad idea to give the default site a custom name just to make your Active Directory structure a little easier to follow. Besides, you never know when you may have to create a second site.
If you do decide to rename the default site (or any other site, for that matter), go back into the Active Directory Sites and Services snap-in. In the column on the left, navigate to Active Directory Sites and Services|Sites. When you select the Sites folder, the column on the right will display all the existing sites. Right- click on the site you want to rename and select Rename from the context menu.
So far, I've shown you how to create sites; but a critical piece of the puzzle is still missing. Unless you link the sites, replication will never occur between them. Remember that as far as Windows 2000 is concerned, each site is a separate entity unless you tell it otherwise. The task of linking sites is accomplished by a mechanism known as a site link. A site link is bound to a protocol that both sites joined by the link can use to communicate. The site link itself also contains the replication schedule and various security mechanisms.
When you create your first site, Windows 2000 automatically creates one site link. This is the DEFAULTSITELINK that you saw earlier when you created the site. If you had selected this option when creating a site, this site link would be used to join the new site to any existing sites that were also set to use the link.
You can access the DEFAULTSITELINK by going into the Active Directory Sites and Services snap-in and navigating to Active Directory Sites and Services|Sites|Inter-Site Transports|IP. When you select the IP folder, the DEFAULTSITELINK will appear in the column on the right. Right-click on the DEFAULTSITELINK and select Properties from the context menu. When you do, you'll see the DEFAULTSITELINK's Properties sheet. The General tab displays which sites are linked by the site link. The tab also displays the link's cost and replication schedule. You can use the Change Schedule button to replicate the connected sites more or less often. The default replication schedule is set to replicate the connected sites every 180 minutes.
By looking through the various options found on the properties sheet, you can easily establish basic inter-site replication. However, this is just the tip of the proverbial replication iceberg. I cover site links and replication in more detail in part 2 of this series (Inter-site Replication).
Brien M. Posey is an MCSE who works as a freelance writer. His past experience includes working as the director of information systems for a national chain of health care facilities and as a network engineer for the Department of Defense. Because of the extremely high volume of e-mail that Brien receives, it's impossible for him to respond to every message, although he does read them all. | <urn:uuid:0a571504-84ed-4a34-9d5a-4f4751d1d581> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/netsysm/article.php/624401/Using-Sites-in-Windows-2000.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00007-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.935261 | 1,788 | 2.703125 | 3 |
Why Use MapReduce?
At the beginning of this article I used Henry's definition for Big Data, which means taking data and turning it into information (as a first step). Given that Big Data usually implies lots of data (but not always), this means that there could be lots of processing. You are not likely to run one application against a single set of data producing some end result. Instead, you are likely to run various different analyses against multiple data sets with various parameters and collect information (results) from each and store those in a database for further processing. This implies a large number of runs over different, potentially large, data sets, resulting in lots of results. How do you coordinate and configure all of these runs?
One way to do this is to use something called MapReduce. In general terms, MapReduce is a framework for embarrassingly parallel computations that use potentially large data sets and a large number of nodes. Ideally, it also uses data that is stored locally on a particular node where the job is being executed. The computations are embarrassingly parallel because there is no communication between them. They run independently of one another.
As the name implies, MapReduce has two steps. The first step, the "Map" step, takes the input and breaks it into smaller sub-problems and distributes them to the worker nodes. The worker nodes then send their results back to the "master" node. The second step, the "Reduce" step, takes the results from the worker nodes and combines them in some manner to create the output, which is the output for the original job.
As you can tell from the description, MapReduce deals with distributed processing for both steps, but remember, the processing is designed to be embarrassingly parallel. This is where MapReduce gets performance, performing operations in parallel. To get the most performance means that there is no communication between worker nodes, so no data is really shared between them (unlike HPC applications, which are MPI-based and can potentially share massive amounts of data).
However, there could be situations where the mapping operations spawn other mapping operations, so there is some communication between them, resulting in not so embarrassingly parallelism. Typically, these don't involve too much internode communication. In addition, parallelism can be limited by the number of worker nodes that have a copy of the data. If you have five nodes needing access to the same data file, but you have only three copies, two nodes will have to pull the data from a different worker node. This results in reduced parallelism and reduced performance. This is true for both the Map phase and the Reduce phase. On the other hand, three copies of the data allows three applications to access the same data, unlike serial applications where there is only one copy of the data.
At first glance, one would think MapReduce was fairly inefficient because it must break up the problem, distribute the problem (which may be sub-divided yet again), and then assemble all of the results from the worker node to create the final answer. That seems like a great deal of work just to set up the problem and execute it. For small problems, this is definitely true -- it's faster to execute the application on a single node than to use MapReduce.
Where MapReduce shines is in parallel operations that require a great deal of computational time on the worker nodes or the assembler nodes and for large data sets. If I haven't said it clearly enough, the "magic" of MapReduce is exploiting parallelism to improve performance.
Traditionally, databases are not necessarily designed for fault-tolerance when run in a clustered situation. If you lose a node in the cluster, then you have to stop the job, check the file system and database, then restart the database on fewer nodes and rerun the application. NoSQL databases, and most of the tools like it, were primarily designed for two things: 1) performance, particularly around data, and 2) fault-tolerance. One way that some of the tools get fault-tolerance is to use Hadoop as the underlying file system. Another way to achieve fault-tolerance is to make MapReduce fault-tolerant as well.
Remember, MapReduce breaks problems into smaller sub-problems and so on. It then takes the output from these sub-problems and assembles them into the final answer. This is using parallelism to your advantage in running your application as quickly as possible. MapReduce usually adds fault-tolerance because if a task fails for some reason, then the job scheduler can reschedule the job if the data is still available. This means MapReduce can recover from the failure of a datanode (or several) and still be able to complete the job.
Many times people think of Hadoop as a pure file system that you can use as a normal file system. However, Hadoop was designed to support MapReduce from the beginning, and it's the fundamental way of interacting with the file system. Applications that interact with Hadoop can use an API, but Hadoop is really designed to use MapReduce as the primary method of interaction. The coupling of multiple data copies with the parallelism of the MapReduce produces a very scale-out, distributed and fault-tolerant solution. Just remember that the design allows for nodes to fail without interrupting the processing. This means that you can also add datanodes to the system, and Hadoop and MapReduce will take advantage of them. | <urn:uuid:a6cc979d-297b-4be2-8ce5-3c4b366bb7e1> | CC-MAIN-2017-04 | http://www.enterprisestorageforum.com/storage-management/why-use-mapreduce.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00219-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946992 | 1,157 | 2.890625 | 3 |
How Do I Store Data on My ID Cards?
Critical visual identification information should be placed clearly on the badge. For security and privacy reasons, many organizations encode information to their cards to keep information confidential and secure. There are several types of cards available to store data. It’s best to consult an ID expert to determine the best type of card for your application.
Contact Smart Cards
Contact smart cards have an embedded integrated circuit chip that contains just memory, or memory plus a microprocessor. Memory-only chips are functionally similar to a small floppy disk. They are less expensive than microprocessor chips, but they also offer less security and should therefore not be used to store sensitive or valuable information.
Chips that contain both memory and a microprocessor are also similar to a small floppy disk, except they contain an "intelligent" controller used to securely add, delete, change, and update information contained in memory. Sophisticated microprocessor chips have state-of-the-art security features built in to protect the contents of memory from unauthorized access.
Contact smart cards must be inserted into a reader to read and store information in the chip. This type of e-card is used in a wide variety of applications including network security, vending, meal plans, loyalty, electronic cash, government IDs, campus IDs, e-commerce, health cards, and many more.
Contactless Smart Cards
Contactless smart cards are similar to contact smart cards, except contactless cards do not have to be inserted into a card acceptor device. Instead, contactless smart cards contain an embedded antenna for reading and writing information contained in the chip's memory. They need only be passed within range of a radio frequency acceptor to read and store information in the chip. The range of operation is typically from about 2.5" to 3.9" (63.5 mm to 99.06 mm) depending on the acceptor.
Contactless smart cards are used in many of the same applications as contact smart cards, especially where the added convenience and speed of not having to insert the card into a reader is desirable. There is a growing acceptance of this type of card for both physical and logical access control applications. Student identification, electronic passport, vending, parking, and tolls are common applications for contactless cards.
Proximity cards (aka, prox cards) communicate through an antenna similar to contactless smart cards except that they are read-only devices that generally have a greater range of operation. The range of operation for prox cards is typically from 2.5" to 20" (63.5 mm to 508 mm) depending on the reader.
Small amounts of information can be read with prox cards, such as an identification code that is usually verified by a remote computer; however, it is not possible to write information back to the card. Prox cards are available from several sources in both ISO thickness cards from 0.027" to 0.033" (0.6858 mm to 0.8382 mm) and "clamshell" cards from 0.060" to over 0.070" thick (1.524 mm to over 1.778 mm).
Prox cards continue to grow in popularity because of the convenience they offer in security, identification, and access control applications, especially door access where fast, hands-free operation is preferred.
Hybrid cards offer a unique solution for updating your existing badging system. Hybrid card is the term given to cards that contain two or more embedded chip technologies, such as a contactless smart chip, a contact smart chip, and/or a proximity chip—all in a single card. The contactless chip is typically used for applications demanding fast transaction times, such as mass transit. The contact chip can be used in applications requiring higher levels of security. The individual electronic components are not connected to each other even though they share space in a single card.
The combi card—also known as a dual-interface card—has one smart chip embedded in the card that can be accessed through either contact with a reader or an embedded antenna. This form of smart card is growing in popularity because it provides ease-of-use and high security in a single card product. Mass transit is one of the more popular applications for the combi card; a contact-type acceptor can be used to place a cash value in the chip's memory and the contactless interface can be used to deduct a fare from the card.
Ready to purchase cards for your printer? Shop ID cards now! | <urn:uuid:2d2e08c6-cebb-40a9-9fe6-70d65864c91e> | CC-MAIN-2017-04 | http://www.idzone.com/learning-center/articles/id-cards/data-storage.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00365-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931739 | 924 | 2.59375 | 3 |
In order to build more resilient data centers, many Cumulus Networks customers are leveraging the Linux ecosystem to run routing protocols directly to their servers. This is often referred to as routing on the host. This means running layer 3 protocols like OSPF (Open Shortest Path First) or BGP (Border Gateway Protocol) directly down to the host level, and is done in a variety of ways, by running Quagga:
- Within Linux containers (such as Docker)
- Within a VM as a virtual router on the hypervisor
- Directly on the hypervisor
- Directly on the host (such as an Ubuntu server)
Why Route on the Host?
Why do customers do this? Why should you care?
Troubleshooting layer 2 network problems in the data center has been a persistent challenge in modern networks, so expanding the layer 3 footprint further into your data center by routing on the host alleviates many issues described below.
Consider a network where layer 2 MLAG is configured between all devices. Although this is a common data center design, and can be deployed on Cumulus Linux, it suffers from a number of shortcomings.
- Traceroute is not effective, since it only shows layer 3 hops in the network; this design uses layer 2 devices only. All traceroute outputs, regardless of the path taken, only show the layer 3 exit leafs. There is no way to determine which spine is forwarding traffic.
- MAC address tables become the only way to trace down hosts. For the diagram above, to hunt down a particular host you would need to run commands to show the MAC addresses on the exit leafs, the spine switches and the leaf switches. If a host or VM migrates while troubleshooting, or a loop occurs from a misconfiguration, you may have to show the addresses multiple times.
- Duplicate MAC addresses and MAC flaps become frustratingly hard to track down. Orphan ports and dealing with MLAG and non-MLAG pairs increase network complexity. The fastest way to find a specific MAC address is to check the MAC address table of every single network switch in the data center.
- Proving load balancing is working correctly can become cumbersome. With layer 2 solutions, LACP (Link Aggregation Control Protocol) is very prevalent, so you need to have multiple bonds/Etherchannels between the switches. Performing a simple ping doesn't help because the hash remains the same for layer 2 Etherchannels, which are most commonly hashed on SRC IP, DST IP, SRC port and DST port. In the end, you need multiple streams that hash evenly across the LACP bond. This often means you must buy test tools from companies like Spirent and Ixia.
With a layer 3 design, you can run
ip route showand see all of the equal cost routes. It's possible to use tools like
scamperand see all possible ECMP routes; that is, what switches are being load balanced.
Three or More Top of Rack Switches
With solutions like Cisco's vPC (virtual Port Channel), Juniper's MC-LAG (Multi-Chassis Link Aggregation) or Arista's MLAG (Multi-chassis Link Aggregation), you gain high availability by having two active connections. Cumulus Networks has feature parity with these solutions with its own MLAG implementation.
High availability means having two or more active connections. However, with high density servers, or hyper-converged infrastructure deployments, it is common to see more than two NICs per host. By routing on the host, three or more ToR (top of rack) switches can be configured, giving much more redundancy. If one ToR fails, you only lose 1/total ToR switches, whereas with a layer 2 MLAG solution, you lose 50% of your bandwidth.
Clear Upgrade Strategy
By routing on the host, you gain two huge bonuses:
- Ability to gracefully remove a ToR switch from the fabric for maintenance
- More redudnancy by having multiple ToRs (3+)
Let's expand on these two points. With layer 2 only (like MLAG), there is no way to influence routes without being disruptive (that is, some traffic loss must occur). With OSPF and BGP, there are multiple load balanced routes via ECMP (Equal Cost Multipath) routing. Since there is routing, it is possible to change these routes dynamically.
For OSPF, you can increase the cost of all the links making the network node less preferable.
With BGP, there are multiple ways to change the routes, but the most common is prepending your BGP AS to make the switch less preferable.
Both BGP and OSPF make the ToR switch less preferable, removing it as an ECMP choice for both protocols. However, the link doesn't get turned off. Unlike layer 2, where the link must be shut down and all traffic currently being transmitted is lost, a routing solution notifies the rest of the network to no longer send traffic to this switch. By watching interface counters you can determine when traffic is no longer being sent to the device under maintenance, so you can safely remove it from the network with no impact on traffic.
Because routing on the host uses three or more ToRs, this reduces the impact of a ToR being removed from service, either due to expected maintenance or unexpected network failure. So, instead of losing 50% of bandwidth in a two ToR MLAG deployment, the bandwidth loss can be reduced to 33% with three ToRs or 25% with four.
The redundancy with layer 3 networks is tremendous. In the image above, the network on the left can still operate even if 3 out of 4 ToR switches are down. That is 4N redundancy. The best case for the network on the right is 2N redundancy, no matter what vendor you choose. Layer 3 allows applications to have much more uptime with no risk for outages.
Often when deploying a new application, server or service, there can be a delay between when the new device or service is available and when it is integrated with the network. This is typically a result of the additional configuration required to set up layer 2 high availability (HA) technologies on the upstream switches, which is often a manual process.
Using layer 3 and routing on the host eliminates this delay entirely. Tight prefix list control coupled with authentication can be leveraged on leaf and spine switches to protect the rest of the network from the downstream servers and what they are allowed to advertise into the network. Server admins can be in control of getting their service on the network within the bounds of a safe framework setup by the network team. This is similar to how service providers treat their customers today.
Similarly, when an application or service moves from one part of the network to another, the application team has the ability to advertise the newly moved application quickly to the rest of the network allowing for more agility in service location.
A service or application can be represented by a /32 IPv4 or /128 IPv6 host route. Since that application depends on that /32 or /128 being reachable, the application is dependent on the network. Usually this means the ToR or spine is advertising reachability. If the application is migrated or moved (for example, by VMware vMotion or KVM Migration), the network may need substantial reconfiguration to advertise it correctly. Usually this requires multiple steps:
- Removing the host route from the previous ToR, spine or pair of ToRs or spines so it is no longer advertised to the wrong location.
- Adding the host route to the new ToR, spine or pair of ToRs or spines so it is advertised into the routed fabric.
- Checking connectivity from the host to make sure it has reachability.
These steps are often done by different teams, which can also cause problems. When routing on the host this is done automatically by Quagga advertising, the host routes no matter where the host is plugged in.
One problem with layer 2, especially around MLAG environments, is interoperability. This means if you have 1 Cisco device and 1 Juniper device, they can't act as an MLAG pair. This causes a problem known as vendor lock-in where the customer is locked into a vendor because of propritary requirements. One huge benefit of doing layer 3 is that by using OSPF or BGP, the network is adhering to open standards that have been around a long time. OSPF and BGP interoperability is highly tested, very scalable and has a track record of success. Most networks are multi-vendor networks where they peer at layer 3. By designing the network down to the host level with layer 3, it is now possible to have multiple vendors everywhere in your network. The following diagram is perfectly acceptable in a layer 3 environment:
Host, VM and Container Mobility
When routing on the host, all VMs, containers, subnets and so forth are advertised into the fabric automatically. This means the only the subnet on the connection between the ToR and the router on the host needs to be configured on the ToR. This greatly increases host mobility by allowing minimal configuration on the ToR switch. All the ToR switch has to do is peer with the server.
If security is a concern, the host can be forced authenticate to allow BGP or OSPF adjacencies to occur. Consider the following diagram:
In the above diagram the Quagga configuration does not need to change, no matter what ToR you plug it into. The only configuration that needs to change is the subnet on swp1 and eth0 (configured under
/etc/network/interfaces, which is not shown here). This greatly reduces configuration complexity and allows for easy host mobility.
BGP Unnumbered Interfaces
Cumulus Networks enhanced Quagga with the ability to implement RFC 5549. This means that you can configure BGP unnumbered interfaces on the host. In addition to the benefits of not having to configure every subnet described above, you do not have to configure anything specific on the ToR switch at all, so you don't have to configure an IPv4 address in
/etc/network/interfaces for peering.
BGP unnumbered interfaces enables IPv6 link-local addresses to be utilized for IPv4 BGP adjacencies. Link-local addresses are automatically configured with SLAAC (StateLess Address AutoConfiguration). This address is derived from an interface's MAC address and is unique to each layer 3 adjaency. DAD (Duplicate Address Detection) keeps duplicate addresses from being configured. This means the configuration remains the same no matter where the host resides. There is no specific subnet used on the Ethernet connection between the host and the switch.
Along with implementation of RFC 5549, Quagga has a simpler configuration, allowing novice users the ability to quickly configure, understand and troubleshoot BGP configurations within the data center. The following illustration shows a single attached host using BGP unnumbered interfaces:
Why Have Networks not Done this in the Past?
If routing on the host has a lot of benefits, why has this not happened in the past?
Lack of a Fully-featured Host Routing Application
In the past, there were no enterprise grade open routing applications that could be installed easily on hosts. Cumulus Networks and many other organizations have made these open source projects robust enough to run in production for hundreds of customers. Now that applications like Quagga have reached a high level of maturity, it is only natural for them to run directly on the host as well.
Cost of Layer 3 Licensing
Many vendors have many license costs based on features. Unfortunately, vendors like Cisco, Arista and Juniper often want to charge more money for layer 3 features. This means that designing a layer 3-capable network is not as simple as just turning it on; the customer is forced to pay additional licenses to enable these features.
The licensing is often confusing (for example, "What is the upgrade path?" "Do I need additional licenses for BGP vs OSPF?" "Does scale affect my price?"), even when the cost is budgeted for. Routing is not something that should cost additional money for customers when buying a layer 3-capable switch. At Cumulus Networks our licensing model is simple, concise and publicly available. | <urn:uuid:7dd6aeb7-300c-4036-9c15-0680a92a0078> | CC-MAIN-2017-04 | https://support.cumulusnetworks.com/hc/en-us/articles/216805858-Routing-on-the-Host-An-Introduction | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282926.64/warc/CC-MAIN-20170116095122-00181-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929902 | 2,576 | 2.5625 | 3 |
Primary school kids could soon be taught about Web 2.0 applications such as Wikipedia and Twitter in the classroom if a leaked document seen by the Guardian is anything to go by.
By submitting your personal information, you agree that TechTarget and its partners may contact you regarding relevant content, products and special offers.
According to the Guardian the plans were created by Jim Rose, the former Ofsted chief who was appointed by ministers to overhaul the primary school curriculum.
Children will be expected to leave primary school familiar with blogging, podcasts, Wikipedia and Twitter.
They must also have good keyboard skills, and learn how to use a computer spellchecker.
Recent research from ntl:Telewest Business revealed that children are keen to use the internet to support their studies.
Of the children asked what Web 2.0 applications would be useful in the classroom, 44% said Wikipedia, 35% chose instant messaging,and34% selectedYouTube. But the same survey revealed that less than a fifth of teachers use Wikipedia as a resource in classrooms and only 5% useYouTube.
Stephen Beynon, managing director at ntl:Telewest Business, said this could close the gap between the tools that pupils want to see in the classroom and what teachers are actually using.
"However, the key to using Web 2.0 tools effectively is having the right infrastructure to deliver them. It is only a matter of time before social networking takes on a more extensive role in the classroom, so schools and colleges must provide sufficient bandwidth for media-rich applications." | <urn:uuid:d7faad2b-0e40-4d4e-a536-1f68f6e8a9c0> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/2240088833/Twitter-and-Wikipedia-could-be-on-school-curriculum | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00393-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958353 | 313 | 3.28125 | 3 |
Networks at War
Being something of a military buff, I’ve been following trends in warfare during the past decade with a great deal of interest. Particularly fascinating is the application of networks to the ways in which wars are fought. Here, the term “network” has a couple of meanings: the human network in which information is rapidly disseminated and acted upon, and the technologies and strategies that enable this coordination.
With regard to the latter, entire military doctrines recently have been formulated around the notion of using networks to fight wars. Enabled by increasingly sophisticated networking technologies, leaders in the U.S. Armed Forces began to develop the concept of network-centric warfare in the late 1990s. It quickly gained support in the United States and the militaries of its Western allies due to its potential to revolutionize battlefield operations.
The reason it caught on so quickly is the fact that most 20th-century wars involving developed countries were characterized by massive armies — as well as air and sea forces — clashing on battlefields that covered hundreds of square miles. Because of limited communications and logistics abilities, the results were often messy, with uncoordinated assaults that caused wanton destruction of military machines and infrastructure and unnecessary high losses in military and civilian lives.
Network-centric warfare offered something different: Precision weapons, speedy logistics, omnipresent battlefield intelligence and advanced communications rolled together in a unified network could act as a “force multiplier.” This enabled a relatively small but cohesive military unit to concentrate on strategic targets, thus minimizing devastation on not only its own side, but also its enemy’s.
Although elements of this appeared in the Gulf War in 1991, the first time it was truly applied was in 1999 during the Kosovo War, when NATO forces executed highly coordinated bombing raids on the former Federal Republic of Yugoslavia. In this conflict — which lasted about two and a half months — cruise missiles repeatedly were launched from fighter-bombers against Serbian air defense systems, tanks, artillery and other military targets.
During the course of the campaign, the decision was made to bomb targets — offices, warehouses and the like — belonging to close associates of Serbian leader Slobodan Milosevic. In other words, they literally attacked his personal network.
The war was not an unqualified success. NATO accidentally bombed the Chinese embassy in Belgrade, which sparked protests in that country. However, the operations were targeted enough to compel the Serbians to withdraw from the province of Kosovo and eventually oust Milosevic from his seat of power without a single loss of life among NATO military personnel.
However, the limitations of network-centric warfare — at least as practiced by the U.S. and its allies — became apparent just four years later in the 2003 invasion of Iraq. While the initial conventional combat went well — with minimal casualties for coalition forces as they drove to Baghdad — the subsequent occupation proved problematic despite the superior technical systems of the U.S. and its allies.
It was much more challenging to employ network-centric warfare against a motley assortment of insurgents than the official Iraqi army, for the simple reason that it’s difficult to execute a strategy that emphasizes targets against enemies that make themselves untargetable via hit-and-run guerrilla methods.
Also, the insurgency managed to develop its own form of network-based warfare that proved surprisingly effective. Essentially, it was an ad-hoc affair, with various paramilitary groups and terrorist organizations sharing tactical information freely via chat rooms, e-mail, videos and other online media.
This turn of events can be explained by a quote from U.S. Air Force Col. John Boyd, a pioneer in network-centric warfare: “Machines don’t fight wars, people do. And they use their minds.”
In other words, a network is only as good as the relationships it fosters. To use Iraq as an example, the highly sophisticated networks used by the coalition forces were great for providing a coordinated effort against an opponent that wore a uniform and fought with tanks, artillery and attack helicopters.
However, when that part of the war was over and the time came to reach out to the people of Iraq and get their support for the daunting task of putting the country back together, the coalition had no real network in place to do so. The native insurgency groups, however, had those connections, and used what technologies they could to fill that vacuum.
Boyd had another quote worth citing here: “People, ideas, hardware — in that order!”
To put it another way, technology serves people, not the other way around. This is something all networking professionals should keep in mind as they develop and maintain their solutions.
- Brian Summerfield, firstname.lastname@example.org “ | <urn:uuid:876f1dbc-ae09-40ee-92dd-0c75578d5104> | CC-MAIN-2017-04 | http://certmag.com/networks-at-war/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00025-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.973363 | 978 | 2.546875 | 3 |
Researchers from two Chinese universities have developed a solar cell that can produce electricity from light and water, enabling a solar panel that works in the sun or rain.
The scientists from Ocean University of China in Qingdao and Yunnan Normal University in Kunming developed a highly efficient and flexible dye-sensitized solar cell and then coated that cell with a one atom-thick layer of electron-enriched graphene.
The new all-weather solar cells can be excited by light on sunny days and raindrops on rainy days, yielding an optimal "solar-to-electric conversion efficiency of 6.53% under sunlight as well as a voltage of hundreds of microvolts by simulated raindrops."
"Graphene is known for its conductivity, among many other benefits. All it takes is a mere one-atom thick graphene layer for an excessive amount of electrons to move as they wish across the surface," the researchers wrote in the journal Angewandte Chemie.
"In situations where water is present, graphene binds its electrons with positively charged ions. Some of you may know this process to be called the Lewis acid-base interaction," the researchers said.
The graphene layer converts each raindrop into microamps of current and hundreds of microvolts.
With only a 6.53% light/water electricity conversion rate, the new all-weather panels are far from perfect. Today's typical solar panels have a 15% solar energy conversion rate. The best panels, which are still in laboratories, have a 22% conversion rate. So the all-weather panels are highly inefficient, and graphene is an extremely expensive material to manufacture.
However, for areas with greater rain accumulation such as the U.K., the panels do offer an alternative to solar panels that would otherwise suffer a 10% to 25% decrease in energy production.
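To see how the article's figures interact, here is a rough back-of-the-envelope sketch. The rain-loss model, and the assumption that the all-weather cell suffers no rain penalty at all, are illustrative simplifications of mine, not claims made by the researchers:

```python
# Compare effective conversion efficiency once rain losses are
# factored in. All inputs come from the figures quoted in the
# article; the simple linear derating model is an assumption.

def effective_efficiency(nominal_efficiency, rain_fraction_loss):
    """Nominal conversion efficiency discounted by the fraction of
    annual output lost to rain (0.10-0.25 per the article)."""
    return nominal_efficiency * (1 - rain_fraction_loss)

# All-weather cell: 6.53% nominal, assumed zero rain penalty.
all_weather = effective_efficiency(0.0653, 0.0)

# Typical panel: 15% nominal, worst-case 25% rain loss.
conventional = effective_efficiency(0.15, 0.25)

print(f"all-weather:  {all_weather:.4f}")   # 0.0653
print(f"conventional: {conventional:.4f}")  # 0.1125
```

Under these (crude) assumptions the conventional panel still comes out ahead even in a rainy climate, which underlines why the researchers describe the all-weather cell as promising rather than market-ready.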
"All-weather solar cells are promising in solving the energy crisis," the researchers wrote.
This story, "New solar cell turns raindrops into electricity" was originally published by Computerworld. | <urn:uuid:7ea85e22-6c50-46f6-821b-8b9ce7b81b3c> | CC-MAIN-2017-09 | http://www.itnews.com/article/3054627/sustainable-it/new-solar-cell-turns-raindrops-into-electricity.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00228-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.946328 | 416 | 3.96875 | 4 |
NASA wants a humanoid robot that can perform CPR, draw blood and operate on astronauts aboard the International Space Station or en route to Mars.
A doctor at the Houston Methodist Research Institute is working to make that happen.
The humanoid robot, Robonaut, developed by NASA, is in training at the Houston Methodist Research Institute to perform medical procedures in space -- someday. (Photo: NASA)
"We're trying to get the best care for our astronauts, who are risking their lives to push the boundaries in space," said Dr. Zsolt Garami, an instructor at the Houston Methodist Research Institute, an arm of Houston Methodist Hospital. "Our motivation was really when we saw astronauts perform ultrasounds on each other or on themselves. They just could use an extra hand.... Why not have a robot help? There's already a robot up in the space station, and he's already shown that he can switch buttons reliably. Why not make him a nurse or a physician?"
Garami is working with NASA to teach robots how to perform medical procedures. He said the robots are quick learners -- much quicker than his human students.
Robonaut, the robot Garami is working with, learned in two hours what humans take a week to learn. That hasn't been a popular observation with his colleagues.
"Robonaut is learning extremely fast," he told Computerworld. "His motions, without shaky hands, are very precise and gentle. There were no sudden motions."
The humanoid robot that Garami is working with is a twin to Robonaut 2, or R2, which was brought to the space station early in 2011.
It took about 11 years to build the 300-lb. robot, which runs on 38 PowerPC processors, including 36 embedded chips that control its joints. Each of the embedded processors communicates with the main chip in the robot.
Garami said he hasn't yet worked with Robonaut 2 on the space station, but he is confident that the space-dwelling robot won't have any trouble. His work with Robonaut on the ground has gone extremely well.
So far, the doctor has taught Robonaut to perform ultrasounds, start an IV and give medications. It also can stop bleeding and put in sutures. The robot is just about ready to perform CPR on its own.
Garami also said the robot, at some point, will be able to perform surgery. "Say you're sending two people to Mars and one has a medical emergency," he said. "One astronaut needs help but they're going to be 15 to 20 minutes with no video signal. They're left alone with no connection to Earth.... I feel Robonaut could be a partner for them, helping them."
NASA said it is working with the Houston hospital to develop Robonaut's abilities, but so far the project has just scratched the surface of what the robot may be capable of. Since researchers are still in the evaluation phase, there's no immediate plan to have Robonaut 2 try medical procedures in space.
Dr. Zsolt Garami, an instructor at the Houston Methodist Research Institute, enjoys a photo op with his cyber student, Robonaut. (Photo: NASA)
"We are really just starting to explore this capability down here on the ground, and there is a significant amount of research to do before we would be able to make the jump to space," said a NASA spokesman. "We have no real way to say how soon R2 would be able to assist with any medical procedures on the station, let alone something as complex as surgery."
While the idea of a robot performing medical procedures, even surgeries, in space is new, robots have been working in operating rooms around the world for several years.
A 2008 study by the University of Maryland School of Medicine showed that patients who underwent minimally invasive heart-bypass surgery using a robot had shorter hospital stays, faster recovery times and fewer complications than patients who had traditional surgeries. Also, the chances that their bypassed vessels would remain open were better.
Garami said he's not sure how soon he'll be able to start working to get Robonaut on the space station and up to speed with medical procedures, but he said it's more a matter of getting it scheduled. He's ready to go.
He also said medically trained robots would be a huge benefit to the military.
"In the long run, I see Robonaut acting as a nurse for the Army," Garami said. "You won't send a medic to save a guy. You'd send Robonaut to carry the soldier and provide medical assistance at the same time."
This article, "Scalpel. Check. Robot. Check. NASA Bots in Training to Operate in Space," was originally published on Computerworld.com.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter, at @sgaudin, and on Google+, or subscribe to Sharon's RSS feed.
10 Guiding Principles
The architecture for the Hippocratic database concept is to be based on 10 guiding principles: purpose specification, consent, limited collection, limited use, limited disclosure, limited retention, accuracy, safety, openness and compliance. The Hippocratic database and its components would work in the following way, according to IBM officials. First, metadata tables would be defined for each type of information collected. A Privacy Metadata Creator would generate the tables to determine who should have access to what data and how long that data should be stored. A Privacy Constraint Validator would check whether a site's privacy policies match a user's preferences, and once this is verified the data would be transmitted from the user to the database. A Data Accuracy Analyzer would then test the accuracy of the data being shared.
Once queries are submitted along with their intended purpose, an Attribute Access Control component would verify whether the query is accessing only those fields necessary for the query's purpose, and only records that match that purpose would be visible, thanks to the Record Access Control component. A Query Intrusion Detector would then run compliance tests on the results to detect any queries whose access pattern varies from the normal access pattern. In the final step, a Data Retention Manager would delete any items stored beyond the length of their intended purpose. Audit trails of queries also would be kept to allow for privacy audits and to guard the database from suspicion that it has been misused.
While IBM researchers are interested in eventually including the Hippocratic database concept in IBM's DB2 database, they also want to expand interest in the concept. Agrawal hopes the presentation of the concept will lead other vendors and university researchers to embrace and evolve it. "I wanted the database community to become cognizant of the issues," Agrawal said. "I personally think it will help if others participate in it."
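The purpose-checking idea behind those components can be sketched in a few lines. This is a toy illustration only; the metadata table, labels and function names below are hypothetical, not IBM's actual components or API:

```python
# Toy sketch of purpose-based access control and retention in the
# spirit of the Hippocratic database. All names are illustrative.
PRIVACY_METADATA = {
    "email": {"purposes": {"order-fulfillment", "newsletter"}, "retention_days": 365},
    "ssn":   {"purposes": {"credit-check"}, "retention_days": 30},
}

def attribute_access_control(fields, purpose):
    """Allow a query to touch only the fields its stated purpose permits."""
    return [f for f in fields
            if purpose in PRIVACY_METADATA.get(f, {}).get("purposes", set())]

def retention_manager(rows):
    """Delete items stored beyond the length of their intended purpose."""
    return [r for r in rows
            if r["age_days"] <= PRIVACY_METADATA[r["field"]]["retention_days"]]
```

A query submitted for the "newsletter" purpose would be stripped down to the email field, and a 60-day-old SSN record would be purged on the next retention pass.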
A busy person’s iPhone can chew through more energy in a year than a refrigerator, but that doesn’t mean it needs to belly up to the wall trough every time its battery starts running low.
Wireless power won’t be here tomorrow, but it’s coming. Energy is everywhere, in every action, most of it bleeding away to be recycled back into the endless machinations of the universe. Scientists and engineers are moving ever closer to figuring out how to harvest power from our environment, ourselves, and our devices themselves—from nanoscale pillars that could offset a device’s energy use by turning waste heat into electricity, to a spongy cell-phone case that works up a charge by sitting on a vibrating car dashboard.
This isn’t just wireless charging; this is harvesting energy from the world around us. Rather than use acres of solar panels or skyscraping wind turbines, energy harvesting engineers want to power your mobile devices from things like heat differentials, ambient vibrations, and your walk to work. That’s not just an engineering challenge, but also a design challenge. Tapping into the energy of everywhere shouldn’t add friction to the pace of modern life.
A few wireless power harvesters have already made their way to the shelves, but the biggest advances are still being worked out in the lab.
Click here for a Quartz review of some of the promising paths that engineers and designers are taking to power the mobile Web. | <urn:uuid:837a80bf-223d-471b-a74f-f3d8b5474d9f> | CC-MAIN-2017-09 | http://www.nextgov.com/mobile/2014/03/mobile-devices-future-will-get-energy-everywhere-except-wall-socket/79966/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00224-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.943576 | 306 | 2.84375 | 3 |
[ABOVE: Narrated by Apple CEO, Tim Cook, this is a commitment to corporate environmental responsibility.]
Switch it off
Apple's newly updated environmental responsibility pages tell us iMacs use 0.9 watts of electricity in sleep mode, in contrast to 35 watts used by the original iMac.
That's significant, but millions of Mac users can make a big impact by simply turning their computer off when it isn't being used.
If you leave your iMac in Sleep mode for ten hours a day, you save 9 watt-hours of electricity per day by switching it off -- that's 3,285 watt-hours (about 3.3 kWh) per year. If we assume 70 million active Mac users (and for the purposes of this illustration pretend they all run modern iMacs, which they don't) that’s a potential saving of 229,950 megawatt-hours of energy each year.
The Palo Verde plant in Arizona (the biggest such plant in the US) generates 3,937 megawatts.
If every Mac user switches their machine off when they aren't using it, the combined difference to global energy needs would be equivalent to the production of several power stations.
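The back-of-the-envelope arithmetic works out as follows. Note the per-user figure is properly expressed in watt-hours, since watts measure power rather than energy:

```python
sleep_watts = 0.9            # modern iMac draw in sleep mode
hours_asleep_per_day = 10
users = 70_000_000           # assumed number of active Mac users

# Energy saved per user per year, in watt-hours
wh_per_user_per_year = sleep_watts * hours_asleep_per_day * 365

# Total across all users, converted from Wh to MWh (1 MWh = 1,000,000 Wh)
total_mwh_per_year = users * wh_per_user_per_year / 1_000_000

print(wh_per_user_per_year)   # ~3,285 Wh, about 3.3 kWh per user
print(total_mwh_per_year)     # ~229,950 MWh across all users
```

Comparing that energy figure against a plant's capacity in megawatts is apples to oranges; spread over a year, 229,950 MWh works out to an average draw of roughly 26 MW.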
Battery powered mobile devices use much less power than PCs, but need recharging. Many iPhone, iPad and other device users leave battery chargers plugged in when they aren't in use. They use little power when left in this state, but with perhaps 1.1 billion mobile devices in use today, that small amount of wasted electricity is significant. Yes, the cost savings to you are minimal but if you multiply that small drain by a billion users then the figures add up. The electricity used annually by 170 million iPhone 5's would power all the homes in Cedar Rapids, Iowa, for a year. Not only this, but think of the money being handed over to greedy electricity firms for the convenience of leaving your charger plugged in. It's free money for them at little cost for you, but what's the global cost?
Just how long does it take you to walk to your printer, television, USB hub or external hard drive to switch it on or off? These consume power in standby mode -- not always a lot, but multiply that waste by millions of computer users and the numbers add up. Switching off electrical devices when not in use can shave a dollar or two off your utility bill, which is nice, but the consequences on global energy supply are incalculable.
- When purchasing electrical equipment check the label. Does it tell you how much electricity the device requires in normal use in a clear and intelligible way?
- Does the manufacturer offer any public statement explaining its environmental commitment?
- Does the manufacturer of the device you're considering offer recycling support?
Recycling schemes exist. Some even pay you for your old electrical devices. Don't just throw these things in the trash for an inevitable journey into landfill -- check with retailers, manufacturers and local services for recycling facilities.
[ABOVE: A partial scan of a full page Apple pro-environment ad on the back page of a newspaper this morning. "Some ideas we want every company to copy".]
Each of these steps makes little difference in isolation, but there are many millions of computer users on the planet. The potential difference to global energy demand made by millions taking these few steps is significant. Taking these simple steps will also help encourage other consumer electronics firms to do what Apple wants them to do and the Earth needs them to do: "Benchmark" its commitment to greener IT.
Got a story? Drop me a line via Twitter or in comments below and let me know. I'd like it if you chose to follow me on Twitter so I can let you know when fresh items are published here first on Computerworld. | <urn:uuid:9e5334e2-45b7-4a53-84a7-b7d23d6ced84> | CC-MAIN-2017-09 | http://www.computerworld.com/article/2476203/apple-mac/5-steps-to-save-the-planet--earth-day--apple-special.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171416.74/warc/CC-MAIN-20170219104611-00400-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.936188 | 767 | 2.890625 | 3 |
IBM and Business Analytics
IBM's history with analytics solutions goes back to 1890.
OhMyGov's Richard Hartman recently examined IBM's longstanding focus on business analytics, performance measurement and government consulting.
"IBM's history goes deep into the government, beginning with support of the U.S. Census Bureau's adoption of the Hollerith Punch Card, Tabulating Machine and Sorter in 1890 by inventor Herman Hollerith, a Census Bureau statistician," Hartman writes.
"Two of IBM's software products to help implement analytics are Cognos, a performance management solution to improve visibility into and across government agencies, and SPSS, which includes data collection, text and data mining, and advanced statistical analysis and predictive solutions," Hartman notes. | <urn:uuid:ea4e7efd-7af9-46c2-bf94-8674feb89962> | CC-MAIN-2017-09 | http://www.enterpriseappstoday.com/business-intelligence/article.php/396828/IBM-and-Business-Analytics.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171416.74/warc/CC-MAIN-20170219104611-00400-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.916766 | 156 | 3.125 | 3 |
With more than 40 confirmed cases of swine flu in the U.S. and the number expected to grow because of the virus's novelty, government agencies will be expected to respond quickly as conditions warrant. Thanks to the emergence of Web 2.0, several helpful online maps, mash-ups and wikis are available to help keep officials in-the-know. Here are a few examples, though it's nowhere near an exhaustive list:
The HealthMap is a Web Site that aggregates the occurrence of disease outbreaks and plots them on a map. The data comes from the U.S. Centers for Disease Control, Canadian Institutes of Health Research, World Health Organization, Google and other sources. The HealthMap has launched a map dedicated to swine flu.
A user claiming to be a biomedical researcher and who goes by the screen name "niman" has created a Google Map that tracks reported, suspected and confirmed Swine Flu cases.
Several user-generated wikis about swine flu are sprouting up across the Web. One example is at Wikia.
The Centers for Disease Control is maintaining a Web page with an official tally by state of confirmed swine flu cases.
Are you looking for breaking news in near real-time about the latest swine flu cases in your jurisdiction? There's perhaps no more powerful tool than Twitter Search, which is the search engine for the popular short messaging Web site. Citizens from around the world are posting thousands of updates per hour about the influenza's latest developments. It's a great example of "crowdsourcing." The Centers for Disease Control also has its own Twitter page. | <urn:uuid:3a5cfade-04cc-4ff8-afda-231890e44f21> | CC-MAIN-2017-09 | http://www.govtech.com/health/Swine-Flu-Resources-Proliferate-Across-the.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00276-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.941993 | 331 | 3.140625 | 3 |
Every government agency has to deal with managing identity, and protecting sensitive data. From passwords to employee information to agency information, securing information should be a top priority.
According to John Bennett of Oracle, 84 percent of North American enterprises suffered a security breach in the past year, which is a 17 percent increase over three years. What can be done about keeping information secure?
The most important thing in identity management is planning security policies -- having a specific plan of access (who can access what, and when/ how). Without this, the agency is setting itself up for a security breach, and that can be both costly and embarrassing.
Bennett uses this simile to help explain security: think of identity management like a Ding-Dong. The high calorie (but admittedly tasty) treat is a creamy filling, covered in a chocolate cake, sealed in a foil wrapper. The foil is like the network perimeter security; chocolate is the majority of information, which is important to the agency but not of value to hackers and identity thieves; the creamy filling is the sensitive data most coveted by identity thieves.
To protect the sensitive "creamy filling," encryption is the key. If sensitive information is not encrypted, it can be visible to hackers using hex editors. Information such as SSNs, health history or credit card numbers could all be there for the taking. If, however, it is encrypted, such information is safe from would-be identity thieves.
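For the "passwords" part of that creamy filling, the standard safeguard is to store only salted, slow hashes rather than the plaintext, so a stolen table reveals nothing directly. The sketch below uses Python's standard library to illustrate the idea; it is one small piece of an identity management policy, not a complete solution:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    # Store the salt and the digest, never the plaintext password.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, digest, iterations=200_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)
```

Because each record gets a random salt, two users with the same password produce different digests, which defeats precomputed lookup tables.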
The Internet of Things (IoT) has exploded onto the scene, bringing us smart phones that track health data, drones that monitor wildfires and onesies that alert parents when their baby starts to move. A digital system where people, objects, and networks communicate and interact in entirely new ways, IoT is viewed as a major paradigm shift that will transform how we live and work. But is IoT really all it’s cracked up to be? And how will it affect the computing landscape that makes it possible?
Not Just for Consumers Anymore
According to a report by futurist and CTO Dave Evans, an average of 127 new things are connected to the Internet every second. While IoT has initially focused on consumer applications, industrial IoT is on the way. Clean tech businesses are pulling data from connected windmills, for example, with companies using genomic sensors or aggregating dispersed data for industrial purposes. Even government is getting into the act: the Homeland Security Department announced last year that it’s exploring wearable equipment for emergency first responders.
It’s a Brave New Computing World
Behind the emergence of IoT are major structural issues about information governance – especially as apps are moved to cloud. Connected devices consume and generate information that requires backup, recovery and management. To accomplish this, IoT requires a new, holistic approach to the data residing in endpoints, data centers and the cloud. Yet unresolved questions remain about data processing, backup and intelligence.
What Should I Do with All that Data?
An enormous challenge of IoT is that many organizations jumping on the bandwagon don’t yet realize they are now Big Data companies that have to process and manage massive data sets, including personal information and passwords. Currently, CTOs allocate engineering resources to this effort, but they’ll eventually want ready-made tools to manage workflows. Unless the people and companies driving IoT can get their arms around Big Data, privacy and security concerns will thwart mass adoption.
IoT – which often collects information unbeknownst to users – raises a host of privacy, security and liability issues. Yet authors of a report issued by the Institute for Critical Infrastructure Technology (CIT) note that IoT often lacks any form of security, representing “practically an infinite attack surface” for cybercriminals. The risks are real: according to Marc Rotenberg of the Electronic Privacy Information Center, “If you think you’ve got a cybersecurity problem now, wait for the cold winter day when a hacker halfway around the world turns down the thermostat on 100,000 homes in Washington D.C.” As a result, IoT solutions will require bullet-proof functionality to protect sensitive commercial information and safeguard users’ personal data.
The Edge and the Cloud Duke it Out
Traditionally, data has been collected from endpoints and sent to the home server or data center for processing. Because this approach is ineffective for handling massive data sets, organizations are increasingly using edge computing to maintain and process IoT data locally. At the same time, it’s routine for sensor and other IoT-generated data to be moved to the cloud. Since there’s little agreement on which approach is best for processing, IoT employs a mix of edge and cloud computing, with data management and protection strategies required for both.
Global Dedupe Lends a Hand
Global dedupe has proven key for transmitting and managing IoT data. By understanding data patterns that are common to devices, global dedupe transmits only what’s uncommon, thereby reducing enterprise bandwidth transmission by about 90 percent. While global dedupe uses the GPU to achieve exceedingly fast fingerprint processing, its intelligent use of CPU cycles can also help minimize user disruption.
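The idea behind global dedupe can be sketched simply: fingerprint fixed-size chunks of data and transmit only the chunks the receiver has never seen. The chunk size and hash choice below are illustrative, not any vendor's actual implementation:

```python
import hashlib

def dedupe_transmit(payload, seen_fingerprints, chunk_size=4096):
    """Return only the chunks whose fingerprints are new to the receiver."""
    to_send = []
    for i in range(0, len(payload), chunk_size):
        chunk = payload[i:i + chunk_size]
        fingerprint = hashlib.sha256(chunk).hexdigest()
        if fingerprint not in seen_fingerprints:
            seen_fingerprints.add(fingerprint)
            to_send.append(chunk)
    return to_send

# Device telemetry is highly repetitive, so most chunks dedupe away.
seen = set()
first = dedupe_transmit(b"sensor-frame" * 10_000, seen)   # a few unique chunks
second = dedupe_transmit(b"sensor-frame" * 10_000, seen)  # nothing new to send
```

On the repeat transmission nothing at all crosses the wire, which is where the large bandwidth reductions come from.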
Data Classification Moves Front and Center
IoT data classification has become a hot topic given its ability to optimize information backups, governance and workflows. By classifying data based on variables of its choosing, organizations can optimize data management (e.g. here’s information to be disposed of versus stored and retrieved later on). Data identification is currently a manual process but this will change, as auto-classification evolves over the next few years.
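Rule-based auto-classification can start as simply as pattern matching over record contents, with human review layered on top. The labels and regular expressions below are illustrative only:

```python
import re

# Illustrative rules: checked in order, first match wins.
CLASSIFICATION_RULES = [
    ("sensitive-pii", re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),   # SSN-shaped
    ("financial",     re.compile(r"\b\d{13,16}\b")),           # card-shaped
    ("credentials",   re.compile(r"(?i)\bpassword\s*[:=]")),
]

def classify(record):
    """Tag a record so downstream policy (retention, encryption) can act on it."""
    for label, pattern in CLASSIFICATION_RULES:
        if pattern.search(record):
            return label
    return "general"   # default tier: cheapest storage, least scrutiny
```

A classifier like this lets backup and retention workflows treat "sensitive-pii" records differently from "general" ones without a human reading every record.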
IoT is here and the sooner organizations adapt to the new normal, the more likely they are to benefit from the enormous opportunities that have opened up. Fortunately, solution providers are busy developing customized tools and systems for IoT. And better yet, organizations that understand the dynamics of IoT can keep their ear to the ground and quickly embrace new approaches so they can thrive.
View the eWEEK slideshow looking at seven ways IoT will impact the computing landscape. | <urn:uuid:0ac5766b-5843-4491-8423-e6935a0246aa> | CC-MAIN-2017-09 | https://www.druva.com/blog/internet-things-arrived-let-games-begin/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00044-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.92787 | 943 | 2.828125 | 3 |
Some people frown on Pokémon Go hunts in historic areas, but a new FCC ruling could make it even more tempting to risk a glare and a wagging finger.
On Monday, the U.S. Federal Communications Commission announced a deal to make it easier for mobile operators and building owners to install cellular gear on many old buildings, including some in historic districts. Just because those structures may evoke the past doesn’t mean they can’t have the screaming 5G wireless speeds of the future.
While some see online addictions like Pokémon Go as intrusions on the spirit of historic sites, smaller cells are actually making it easier to sneak networks into places where network gear used to look out of place. Now equipment is so compact that the agency is lifting some regulations on where it can go.
The changes are the result of a deal between the FCC and two historic preservation groups. Among other things, mounting a small cell on a building more than 45 years old won’t require a historic review unless the building has been named a historic property or is in a historic district. A building inside a historic district may also get a break in some cases unless it's a National Historic Landmark. Rules have also been changed for DAS (distributed antenna systems), the linked antennas used in many buildings to boost indoor coverage.
As much as consumers like a good cell signal, some say the equipment that delivers it is unsightly. That’s why there are myriad federal, state and local rules for getting permission to put up wireless gear.
Now, the desire for more data capacity is converging with the urge to keep cell equipment out of sight. More and smaller cells can provide better performance than a few larger ones that need to cover whole neighborhoods. Separating components like antennas and base stations is also helping to make mobile networks less obvious.
Small cells are part of what will make 5G work. The next big cellular standard, due for completion by 2020, will have to serve more users with more data-hungry applications. Some 5G small cells will even use new, higher frequencies that are especially well suited to going short distances.
CTIA, the main trade group for U.S. mobile operators, wants the rules for mounting cell equipment to be even more streamlined.
“Americans will benefit tremendously from innovations like 5G and the Internet of Things, which require more small cell facilities – often the size of a pizza box – to build a denser network,” the group said. “Today’s action by the FCC recognizes the minimal impact of these facilities, but there is more work to be done.” | <urn:uuid:64b4ed2e-2433-439a-a51e-2656d03f5ebb> | CC-MAIN-2017-09 | http://www.itnews.com/article/3105548/george-washington-didnt-tweet-here-but-you-may-get-5g.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171463.89/warc/CC-MAIN-20170219104611-00096-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.957766 | 541 | 2.71875 | 3 |
Automated tiering and caching often get confused. While each vendor's technology will vary a bit automated tiering is generally seen to be a more permanent placement of data on a faster tier of storage. It also can be seen as a way to move less active data to a high capacity but more cost effective tier of storage. Caching is often seen as more temporary in nature, accelerating only the most active data and, in most cases, this approach does not move old data to a third tier of storage.
The challenge in trying to grasp these two methods is that when used with solid state their use looks similar. In the past, caching was often thought of as a very small area of memory used to accelerate disk access for a very short period of time. Often it held only the most recent minutes of accessed data. Obviously the chances of a cache miss were relatively high, which meant a performance degradation as data was retrieved from the mechanical hard disk. This led to a very narrow deployment model: either a single server or a specific application on that server.
With the falling cost of today's flash-based SSDs, a very large cache can be created and data can reside on cache for a long period of time. This of course reduces the chance of a cache miss. It also means that data can be in cache for hours, even days if the flash memory in the cache is sized large enough. Flash has allowed large caches to be deployed in a much broader fashion and across multiple servers and applications.
A big difference between cache and automated tiering is that the data in cache is always a second copy of the data that is on the hard drive. Automated tiering is an actual move of data from the hard drive. Failure of the cache rarely produces a data loss, just a performance loss since everything would need to be served from mechanical drives until the cache can be replaced.
Since the SSD tier holds potentially the only copy of data in an automated tiering system, the failure of the SSD tier can't be tolerated so these systems have to set the SSD tier in a redundant configuration by using a RAID-like data protection scheme. The overhead of that protection, RAID parity bit calculation for example, may impact performance and of course any RAID algorithm requires extra disk capacity. Having to purchase extra SSD to support a RAID-like function makes an already premium priced technology even more expensive.
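The copy-versus-move distinction is easy to see in a toy model. The dictionaries below stand in for the flash and disk tiers; real systems obviously track blocks, not keys, and this sketch is purely illustrative:

```python
class CachedStorage:
    """Caching: flash holds a second copy; disk always retains the data."""
    def __init__(self):
        self.disk, self.flash = {}, {}

    def write(self, key, value):
        self.disk[key] = value

    def read(self, key):
        if key in self.flash:          # cache hit: fast path
            return self.flash[key]
        value = self.disk[key]         # cache miss: slower mechanical path
        self.flash[key] = value        # promote a *copy* into cache
        return value

    def flash_failure(self):
        self.flash.clear()             # performance loss only, no data loss


class TieredStorage:
    """Tiering: hot data is moved, so flash may hold the only copy."""
    def __init__(self):
        self.disk, self.flash = {}, {}

    def write(self, key, value):
        self.disk[key] = value

    def promote(self, key):
        self.flash[key] = self.disk.pop(key)   # a move, not a copy

    def read(self, key):
        return self.flash[key] if key in self.flash else self.disk[key]

    def flash_failure(self):
        self.flash.clear()             # without RAID, promoted data is gone
```

After a flash failure, `CachedStorage` still serves every key from disk, while `TieredStorage` loses whatever had been promoted -- which is exactly why tiered SSDs need RAID-style protection.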
In most situations, read performance should be about the same between the two options. Mostly the efficiency of read performance is going to depend on the efficiency and customizability of the caching appliance to promote data. The goal should be to make sure the right data is in cache at the right moment in time. As we discuss in our recent article "Maximizing SSD Investment With Analytics" we believe that this is the largest opportunity for improvement in this technology. Both caching and automated tiering need to become smarter about what they cache and when.
Another area to examine with automated tiering vs. caching is write performance, which can be a clear area of distinction between the two approaches. We'll cover this in our next entry.
Follow Storage Switzerland on Twitter | <urn:uuid:831e65ab-67b2-42cd-9727-8a22b1d00044> | CC-MAIN-2017-09 | http://www.networkcomputing.com/storage/ssd-options-tier-vs-cache/1267262167?cid=sbx_bigdata_related_mostpopular_storage_virtualization_big_data&itc=sbx_bigdata_related_mostpopular_storage_virtualization_big_data&piddl_msgorder=asc | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173761.96/warc/CC-MAIN-20170219104613-00624-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.955242 | 638 | 2.59375 | 3 |
US, Honduras to test hurricane response simulation
The United States and Honduras will put a mapping tool to the test this week designed to help government and non-government organizations (NGOs) improve situational awareness, locate supplies and react quickly during a humanitarian crisis or natural disaster.
The software, called GeoSHAPE, is an open source and open standard tool that integrates emergency data from multiple formats and displays it as an Internet-based map.
Juan Hurtado, a science advisor to the U.S. Southern Command, said GeoSHAPE, “bridges geospatial information sharing gaps we witnessed during the international response to the 2010 earthquake in Haiti, providing a tool for military and civil organizations, local and international, to efficiently coordinate their activities and, in turn, save more lives.”
The tool, which has been through a two-year development effort, will be tested this week during a simulated hurricane event across Central America. The multi-organizational role players will include Honduras’ Permanent Contingency Commission, the local Red Cross, NGO Plan Internacional and U.S. Joint Task Force-Bravo.
Components of GeoSHAPE include a Web-based platform for creating and sharing geospatially tagged events and a mobile application for capturing data and photos in the field. The tools will help rescue organizations put together a picture of both the resources at hand and the extent of the damage.
The availability of hospitals, helicopter landing zones, food, water and medical supplies, as well as the deployment of rescue personnel to affected areas, is plotted on a map that authorized users can see from anywhere in the world.
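A geospatially tagged event of the kind such tools share is commonly represented as a GeoJSON Feature, the lingua franca of web mapping. The fields below are illustrative, not GeoSHAPE's actual schema:

```python
import json

def field_report(lat, lon, category, description):
    """Build a minimal GeoJSON Feature for an event to plot on a shared map."""
    return {
        "type": "Feature",
        "geometry": {
            "type": "Point",
            "coordinates": [lon, lat],   # GeoJSON order is [longitude, latitude]
        },
        "properties": {"category": category, "description": description},
    }

report = field_report(14.08, -87.21, "landing-zone",
                      "Clear field suitable for helicopter landings")
payload = json.dumps(report)             # ready to share over the web
```

Because the format is an open standard, the same payload can be rendered by many web mapping platforms without translation.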
GeoSHAPE is part of a technology project sponsored by the U.S. Department of Defense’s Office of the Deputy Assistant Secretary of Defense for Emerging Capabilities and Prototyping.
After the demonstration and evaluation in Honduras, the software will be integrated with the Pacific Disaster Center’s DisasterAWARE platform, which provides continuously updated hazard information worldwide and functions as a hub for accessing, updating and sharing relevant data before, during and after a disaster.
According to Hurtado, disaster relief and humanitarian assistance are only two potential applications for GeoSHAPE. It can also be used in situations where organizations need to share geospatial information, including peacekeeping missions and border security.
Posted by GCN Staff on Jun 10, 2014 at 9:16 AM | <urn:uuid:5414e065-aba9-49dd-982d-a660800429b8> | CC-MAIN-2017-09 | https://gcn.com/blogs/pulse/2014/06/geoshape-honduras-hurricane.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173761.96/warc/CC-MAIN-20170219104613-00624-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.928372 | 494 | 2.53125 | 3 |
Low output voltage reading on a UPS that uses a step approximated sine wave
Multimeter does not properly read the voltage of a UPS that uses a stepped approximated sine wave output while on battery. The output voltage of a 120 VAC UPS is measured between 80-90 VAC; the output voltage of a 230 VAC UPS is measured between 170-180 VAC.
J Type power conditioners
All serial ranges
There are generally two types of meters: average responding and True RMS. Average responding meters are more commonly used; True RMS meters tend to be more expensive. If a meter is not labeled "True RMS," it is most likely not a True RMS meter.
The accuracy of a non-True RMS, or average responding, meter depends on whether it is measuring the output feeding a linear load or a nonlinear load. Linear loads include, but are not limited to, devices such as incandescent light bulbs and resistive heaters. Nonlinear loads include devices like computers. When measuring the output feeding nonlinear loads, an average responding meter will typically read LOW. True RMS meters are most effective when measuring environments with harmonics. When a waveform is distorted from a standard sine wave (the fundamental), an average responding meter may produce significantly inaccurate readings. A stepped approximated sine wave appears distorted when compared to a true sine wave; therefore, a reading taken with an average responding meter will produce incorrect results.
When an average responding meter measures the stepped approximated output of an APC UPS operating on battery, it will also read low. The wave shape generated is similar to what the meter would see from a nonlinear load, so the averaging calculation the meter performs comes out low. A True RMS meter must be used to accurately measure a stepped approximated sine wave.
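The discrepancy can be illustrated numerically. The sketch below assumes an idealized stepped waveform that spends half of each cycle at ±170 V and the rest at 0 V; real UPS waveforms differ, so the exact numbers (and the `readings` helper itself) are illustrative, not a description of any particular APC model.

```python
import math

def readings(peak, duty):
    """Compare a True RMS meter and an average responding meter on a
    stepped wave that sits at +peak for duty/2 of the cycle, -peak for
    another duty/2, and at 0 V for the remainder."""
    true_rms = peak * math.sqrt(duty)      # what a True RMS meter shows
    mean_abs = peak * duty                 # average of |voltage| over a cycle
    # Average responding meters scale the rectified average by the sine
    # form factor pi / (2 * sqrt(2)) ~= 1.111, which is only correct for
    # a pure sine wave.
    avg_meter = mean_abs * math.pi / (2 * math.sqrt(2))
    return true_rms, avg_meter

rms, avg = readings(peak=170, duty=0.5)
# rms ~= 120 V (True RMS meter)
# avg ~= 94 V  (average responding meter -- reads low, as in the symptom above)
```

With these assumed parameters, the average responding meter under-reads a nominal 120 VAC output by roughly 25 V, which is consistent with the 80-90 VAC symptom described above.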
Zinc whiskers are small crystals of zinc that grow naturally from zinc-plated surfaces. They were first discovered in 1948 by Bell Labs. Zinc whiskers are most commonly found growing on the underside of wood-core raised access floor panels that have a flat steel bottom. Zinc whiskers have been found to cause equipment failures ranging from minor data corruption or anomalies to catastrophic hardware failures.
Wood-core floor panels
Wood-core raised access floor panels with a flat, steel sheet as the underside have been found to be the most common breeding ground of zinc whiskers. These panels consist of a premium-grade, high-density, resin-bonded particle board that is one inch thick and completely encased by a galvanized steel top sheet and bottom pan. The top steel piece of the floor panel is covered with a high pressure laminate (HPL). The underside steel piece of the floor panel is coated with zinc because it is a mineral that prevents rusting and oxidation. The coating process is either hot dipped galvanized (HDG) or electroplated. Only those panels that are coated by an electroplated process are capable of growing zinc whiskers.
In the past, zinc electroplated wood-core floor panels were manufactured by several companies and were extremely popular because of their economical benefits. Today, most of the manufacturers have stopped producing this type of floor panel. Unfortunately, enough time has passed that systems that were installed years ago are now experiencing zinc whisker complications.
The truth about zinc whiskers
Zinc whiskers are capable of growing 250 microns per year. They seem to have a uniform diameter of 2 microns. They are harmless if they remain attached to the floor panel. If disturbed, a zinc whisker will break free and become airborne. It will then circulate through the mission critical facility and may eventually rest inside a sensitive piece of equipment. Because zinc is a conductor of electricity, it can cause equipment failures or system resets if it rests on an exposed circuit card.
Zinc whiskers are extremely stubborn. They will grow in any environmental condition (even a vacuum). And they are so minuscule that normal dust filters that are used in mission critical facilities are ineffective.
Discovering zinc whiskers
Determining if your mission critical facility has zinc whiskers is a difficult and extremely risky task. Zinc whiskers are virtually invisible to the naked eye. To spot them, panels must be removed and observed in specific settings. But, removing floor panels can disrupt the zinc whiskers and create a more severe problem. For these reasons, Bick Group recommends that an expert in identification, containment, and clean up of zinc whiskers be brought in. But, before calling an expert, here are a few symptoms that are common to facilities with a zinc whisker problem:
» Unexplained hardware failures (particularly disk drives and power supplies)
» Unexplained data corruption problems
» Problems occurring more frequently or more severely after equipment has been moved or work has been done underneath the raised access floor system.
Although zinc whiskers pose a serious issue, the contamination is certainly manageable without causing interruption to the operation of the mission critical facility. If you are experiencing symptoms of zinc whiskers, give us a call to discuss containment practices.
Bick Group has subject matter experts in this and many other topics. Talk to our Access Floor experts by emailing: firstname.lastname@example.org | <urn:uuid:93c7b894-9ddb-40b5-abf2-a275fc31e113> | CC-MAIN-2017-09 | http://bickgroup.com/zinc-whiskers-2/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00264-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.948417 | 707 | 3.28125 | 3 |
In theory, EMV or "Chip and PIN"-enabled credit cards are more secure because they are hard to copy (what's called skimming) -- but four researchers have shown yet another way to defeat the technology.
Created by a combination of EuroPay, MasterCard and Visa, the EMV algorithm embedded on a chip within a card is designed to combat face-to-face fraud. By inserting the EMV-enabled card and typing in a password at the Point of Sale terminal, the customer is demonstrating to the merchant that he or she is authorized to use that card. EMV doesn't attempt to protect credit card data in motion or at rest at the merchant. Nor does Chip and PIN address "Card Not Present" (CNP) fraud.
The world's fastest computer runs a Chinese chip, and that fact hasn't escaped notice by the U.S. government.
So how does the U.S. government bludgeon the Chinese chip threat? A new U.S. government working group aims to encourage domestic companies to use homegrown chip technology and resist the urge to buy inexpensive Chinese semiconductors.
The White House this week established the Semiconductor Working Group, a private-public advisory group that will create policy and research guidelines for semiconductor development. The ultimate goal is to retain U.S. leadership in semiconductor technology.
Nations are waging a battle to build the world's fastest computers, and homegrown chips are at the center of that race. Supercomputers help with economic projections, weapons development, scientific simulations, and scenarios critical to national security.
Advanced semiconductors will also drive the development of self-driving cars, robots, drones, and satellites. Semiconductors are building blocks for most electronics.
"A loss of leadership in semiconductor innovation and manufacturing could have significant adverse impacts on the U.S. economy and even on national security," said John Holdren, director of the White House Office of Science and Technology Policy.
The working group is also encouraging the development of new types of computers. Companies are already developing quantum computers and chips mimicking brain functionality, which could eventually replace today's PCs and chips.
Without mentioning China, Holdren said some countries subsidize chip development and dump inferior technology on U.S. companies. That hurts the development of new semiconductor technology, he said.
"Such policies could lead to overcapacity and dumping, reduce incentives for private-sector research and development in the United States, and thereby slow the pace of semiconductor innovation and realization of the economic and security benefits that such innovation could bring," Holdren said.
The call for national cooperation in semiconductor development is timely. Researchers believe that Moore's Law, a guiding light for the semiconductor development cycle codified by Intel co-founder Gordon Moore, is coming to an end, creating an opportunity to rethink chip design.
But technological challenges are holding back chip advances. It's becoming difficult to cram more features on smaller chips, and adoption of new materials and manufacturing technologies is progressing at a slow pace.
Major chip companies IBM and Intel have their own ideas of what future PCs and servers will look like. One goal is to bring uniformity into semiconductor development so that large and small U.S. companies can benefit alike.
The U.S. government may have its own interests in mind. U.S. companies sell chip technology to China, but the government has blocked some sales in the name of national security. Last year, the U.S. government blocked Intel from selling its Xeon chips for China’s supercomputers, reflecting concerns they could be used in supercomputers related to "nuclear explosive activities."
The Semiconductor Industry Association, which boasts major chip companies as its members, applauded the formation of the working group, saying it was long overdue.
"The chip industry spawns new industries, makes existing industries more productive, and drives advances once never imagined," said John Neuffer, president and CEO of SIA.
The U.S. is creating faster supercomputers and has efforts to develop new computing technologies through the Brain Initiative, the Advanced Manufacturing Initiative, the National Nanotechnology Initiative, and Computer Science for All, which promotes technology education. Many other technologies are developed by private sector companies or academic institutions.
It's unclear how effective the advisory group will be, but it'll likely have little impact in creating new semiconductor and computing technologies, said Jim McGregor, principal analyst at Tirias Research.
This task force is being established just as U.S. President Barack Obama leaves office, and it may be scrubbed by the time the new president takes over.
Moreover, little gets done in the U.S. government, and the working group won't change the way companies like Intel and IBM develop technologies or do business, McGregor said.
The U.S. government needs to invest more heavily to assert its influence and drive technological change, McGregor said. The nominal U.S. government investments in technological development come through organizations like the National Science Foundation and the Defense Advanced Research Projects Agency.
By comparison, China, South Korea, and Taiwan have centralized organizations that help consolidate IT and drive technological education, McGregor said.
The working group should instead prioritize promoting STEM (science, technology, engineering, and math) education and retraining the workforce for homegrown semiconductor development, McGregor said. A lot of chip development for companies like Intel and AMD is exported to countries like India, China, and Israel.
Prominent members of the Semiconductor Working Group include Qualcomm chairman Paul Jacobs and former Intel CEO Paul Otellini. | <urn:uuid:da3cfa77-a558-4018-a11c-717925254c02> | CC-MAIN-2017-09 | http://www.itnews.com/article/3137466/cpu-processors/worried-about-china-the-us-pushes-for-homegrown-chip-development.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174135.70/warc/CC-MAIN-20170219104614-00016-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.944937 | 991 | 2.671875 | 3 |
In fact, SDN can be explicitly defined. There are three architectural layers to an SDN network: the physical network, the SDN applications and the SDN controller. Let's look at each.
Physical Network. The lowest layer consists of the physical devices in your network that form the foundation of all IT infrastructure. We use the term "switch" because OpenFlow changes the way Ethernet switches work. For this article, you can also consider virtual switches part of the physical infrastructure.
SDN Applications. The most visible layer in an SDN design is the applications that deliver services, such as switch/network virtualization, firewalls and flow balancers. (Note that OpenFlow-based load balancers are called flow balancers. They aren't traditional load balancers because they can't read packet contents.) These applications are similar to or the same as those in use today where the software runs on dedicated hardware. Most of the coming innovation in networking will occur in SDN applications.
SDN Controller. The SDN controller is the middleware that serves as the linchpin of the entire architecture. The controller must integrate with all the physical and virtual devices in the network. The controller abstracts the physical network devices from the SDN software that works with those devices. There is a high degree of integration between the controller and network devices. In an OpenFlow environment, the controller will use the OpenFlow protocol and the NETCONF protocol to communicate with switches. (OpenFlow is the API for sending flow data to the switch, and NETCONF is the network configuration API).
SDN: Basic Architecture
In current SDN approaches, vendors provide applications and a controller in a single product. For example, Nicira/VMware packages its applications and controller into a single proprietary application stack. Cisco will package its controller into the OnePK product by embedding the controller in IOS software on the devices. I also expect Cisco to deliver a master controller in the near future. Big Switch Networks, which recently launched the commercial version of its SDN controller, offers two applications that run on the controller: Big Virtual Switch and Big Tap.
Clearly the controller is a key element in the network architecture. It must present APIs to the applications that represents usable functions, and it's here that the battle for SDN dominance will be fiercest among the vendors.
SDN APIs: The New Battleground
An SDN architecture has two distinct networking APIs: northbound and southbound. OpenFlow is a southbound API. OpenFlow describes an industry-standard API that configures the frame-forwarding silicon in an Ethernet switch and defines the flow path through a network. In addition, the Open Networking Foundation (ONF), the standards body overseeing the OpenFlow protocol, announced an API for device configuration called OF-CONFIG. OF-CONFIG uses the NETCONF XML data format to define the language.
Cisco's OnePK is also a southbound API. There is much discussion around whether OpenFlow is enough to meet all the needs of networking, especially with regards to migrating from a packet-based network to a flow-based network. There are unresolved issues that will hinder that migration, such as the need for interoperability with existing protocols such as STP and OSPF.
Northbound and Southbound APIs
The northbound API provides a mechanism in an SDN architecture to present services or applications to the business. Each application will develop a view of the flow tables for network devices and then send requests to the controller for distribution to the network devices.
For example, a virtual switching application would build a network graph/database of all points in the network of physical and virtual switches. In a multitenant Ethernet network, the app would develop a set of flow rules that emulate Ethernet VLANs while maintaining full isolation for each tenant's flows. The flow rules would consist of values based on ingress and egress ports, plus the source and destination MAC address.
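As a rough illustration of the flow rules just described, the sketch below builds match/action entries for a single tenant keyed on ingress port and source/destination MAC, plus a catch-all drop rule for isolation. The dictionary format and the `tenant_flow_rules` helper are hypothetical, not an actual controller API such as Big Switch's or OnePK.

```python
def tenant_flow_rules(tenant_macs, port_of):
    """Emulate a VLAN for one tenant: forward traffic only between MAC
    addresses belonging to that tenant; everything else hits a low-priority
    drop rule, keeping the tenant's flows fully isolated."""
    rules = []
    for src in tenant_macs:
        for dst in tenant_macs:
            if src == dst:
                continue
            rules.append({
                "match": {"in_port": port_of[src],
                          "dl_src": src,
                          "dl_dst": dst},
                "actions": [("output", port_of[dst])],
            })
    # Catch-all rule with no actions: packets that match nothing
    # above are dropped, enforcing inter-tenant isolation.
    rules.append({"match": {}, "actions": []})
    return rules

ports = {"aa:aa:aa:aa:aa:01": 1, "aa:aa:aa:aa:aa:02": 2}
rules = tenant_flow_rules(list(ports), ports)
# two directed forwarding rules plus one drop rule
```

In a real deployment the application would hand these rules to the controller, which would translate them into OpenFlow flow-mod messages for each switch.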
The Pathway course teaches the student about the Tandem Pathway application environment. It also describes the use of Pathway tools and commands.
Personnel requiring a working knowledge of Pathway in a Tandem system
Knowledge of the Tandem operating system
- After completing this course, the student will be able to:
- Identify how to use Pathway and Pathway Tools
- Recognize Pathway commands
Tools - Pathaid, Screen COBOL, Screen COBOL Utility
Programs - Inspect
DDL (Data Definition Language)
PATHway – Management, Access
PATHmon - Parameters
Status, Info, Add, Show, Set, Delete
The Federal Communications Commission kicked off the formal process to allocate more federal spectrum for unlicensed Wi-Fi use in the 5 GHz band to support speeds in the gigabit range, a plan first announced by FCC chairman Julius Genachowski on Jan. 11.
The 35 percent increase in spectrum will not only boost Wi-Fi speeds, which today max out at 600 megabits per second, it also will give cellular carriers more bandwidth to handle customers’ increased data use, Genachowski said. Though carriers such as Verizon Wireless use their licensed spectrum to transmit data at six megabits per second, Genachowski said the carriers currently offload 33 percent of their traffic to Wi-Fi networks, and he predicted that would jump to 46 percent by 2017.
Commissioner Robert McDowell said that in 2012, mobile data traffic hit 207 petabytes per month -- the equivalent of streaming 52 million DVDs -- and predicted mobile data traffic will increase nine-fold over the next five years. He said 96 percent of mobile data traffic was carried on Wi-Fi devices at some point, which means the unlicensed spectrum is experiencing congestion, which can be relieved by new bandwidth.
The FCC plans to allocate 75 MHz of spectrum in a new unlicensed band that runs from 5850 MHz to 5925 MHz and tap another 120 MHz in the 5350 MHz-5470 MHz band used by existing Wi-Fi systems, the Defense Department, FAA and commercial systems.
Commissioner Ajit Pai said the FCC is precluded by statute from allocating new unlicensed spectrum unless federal users are protected from interference.
The FCC, in its Notice of Proposed Rulemaking on the new Wi-Fi spectrum, said FAA terminal Doppler weather radar systems operating in the 5600-5650 MHz band, used to detect cloud microbursts over airports, have already experienced interference from existing Wi-Fi systems modified to operate outside their authorized frequencies.
The Defense Department operates multiple radar systems in the new planned Wi-Fi bands; NASA operates a space radar system used to determine ocean height; and licensed domestic and international communications carriers and the Defense Department operate fixed Earth stations in the bandwidth eyed for Wi-Fi.
The FCC has proposed a number of technical solutions to prevent interference with these federal systems by Wi-Fi gear operating in the 5 GHz bands, and it seeks input from chip manufacturers on those proposals. These include software security features that prevent end users from modifying equipment for out-of-band operation; geo-location technology built into a Wi-Fi device that uses an automated database to detect the location of radars; caps on transmitter power; and a dynamic frequency selection system, in use today with current 5 GHz Wi-Fi gear, which detects radars and then selects non-interfering channels for unlicensed use.
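The dynamic frequency selection behavior described above amounts to a simple channel-vacating policy. The sketch below is a minimal illustration of that logic; the channel numbers and the `pick_channel` helper are hypothetical and do not come from any actual Wi-Fi driver or the FCC's rules.

```python
def pick_channel(available, radar_detected, current):
    """Dynamic frequency selection sketch: channels with radar
    detections are blacklisted; if the current channel is clean,
    stay on it, otherwise move to the first radar-free channel."""
    usable = [ch for ch in available if ch not in radar_detected]
    if current in usable:
        return current
    if not usable:
        raise RuntimeError("no radar-free channel available")
    return usable[0]

# Radar detected on channel 120: the device vacates it.
new = pick_channel(available=[100, 104, 120],
                   radar_detected={120},
                   current=120)
# new == 100
```

Real DFS implementations add timing requirements (channel availability checks, non-occupancy periods) on top of this basic detect-and-vacate behavior.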
Defense and Homeland Security Department drones use the 5625-5850 MHz and 5250-5475 MHz bands for command and control operations, but the FCC said existing Wi-Fi signal detection technologies may not be able to detect signals that could lead to "performance degradation" of drones.
Considering the benefits that an expanded Wi-Fi spectrum can deliver, the onus should be on federal users to support unlicensed use, Pai said. “I hope that we will consider whether Federal users should (emphasis included) alter their systems or operations to accommodate unlicensed devices in this spectrum and what solutions will work, keeping in mind the costs and benefits of all potential options.” | <urn:uuid:edba3e23-35ed-4f75-9407-26657dd9955e> | CC-MAIN-2017-09 | http://www.nextgov.com/mobile/2013/02/fcc-seeks-more-federal-spectrum-boost-wi-fi-use/61454/?oref=ng-channelriver | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00488-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.923262 | 702 | 2.578125 | 3 |
Information Security in 2020
The rise in mobility and participation in social networks, the increasing willingness to share more and more data, new technology that captures more data about data, and the growing business around Big Data all have at least one assured outcome — the need for information security.
However, the news from the digital universe is as follows:
- The proportion of data in the digital universe that requires protection is growing faster than the digital universe itself, from less than a third in 2010 to more than 40% in 2020.
- Only about half the information that needs protection has protection. That may improve slightly by 2020, as some of the better-secured information categories will grow faster than the digital universe itself, but it still means that the amount of unprotected data will grow by a factor of 26.
- Emerging markets have even less protection than mature markets.
In our annual studies, we have defined, for the sake of analysis, five levels of security that can be associated with data having some level of sensitivity:
- Privacy only — an email address on a YouTube upload
- Compliance driven — emails that might be discoverable in litigation or subject to retention rules
- Custodial — account information, a breach of which could lead to or aid in identity theft
- Confidential — information the originator wants to protect, such as trade secrets, customer lists, confidential memos, etc.
- Lockdown — information requiring the highest security, such as financial transactions, personnel files, medical records, military intelligence, etc.
The tables and charts illustrate the scope of the security challenge but not the solution. While information security technology keeps getting better, so do the skills and tools of those trying to circumvent these protections. Just follow the news on groups such as Anonymous and the discussions of cyberwarfare.
However, for enterprises and, for that matter, consumers, the issues may be more sociological or organizational than technological — data that is not backed up, two-phase security that is ignored, and corporate policies that are overlooked. Technological solutions will improve, but they will be ineffective if consumer and corporate behavior doesn’t change.
Big Data is of particular concern when it comes to information security. The lack of standards among ecommerce sites, the openness of customers, the sophistication of phishers, and the tenacity of hackers place considerable private information at risk. For example, one retailer may keep your transaction and customer profile data private while another does not, instead keeping other data hidden. Yet intersecting these seemingly disparate data sets may open up wide security holes and make public what should be private information.
There is a huge need for standardization among retail and financial Web sites as well as any other type of Web site that may save, collect, and gather private information so that individuals’ private information is kept that way. | <urn:uuid:85a64281-d822-414d-95db-882815d067d9> | CC-MAIN-2017-09 | https://www.emc.com/leadership/digital-universe/2012iview/information-security-2020.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00488-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.934616 | 589 | 2.515625 | 3 |
Key security issues still need to be addressed as IT is integrated into the nation’s electricity infrastructure, according to a recent report released by the U.S. Government Accountability Office (GAO).
If done right, proponents say the interconnected system — commonly referred to as the smart grid — would provide a range of benefits, such as provide operators with more information about the condition of the electricity system, and allow consumers to receive real-time information about pricing and demand.
However, if the IT systems are not installed correctly, the electric grid will be more vulnerable to cyber-attacks and disrupted service, according to the report.
Six key challenges persist, as identified by the GAO:
Work began years ago on security standards for the smart grid. The Energy Independence and Security Act of 2007 gave the National Institute of Standards and Technology (NIST) and Federal Energy Regulatory Commission (FERC) the responsibility of coordinating the development and adoption of smart grid guidelines and standards.
Last year, both agencies released the first round of this information for the GAO to review. After assessment and evaluation, the GAO found that the guidelines did not adequately cover potential cyber-security issues.
“While NIST largely addressed the key elements in developing its guidelines, it did not address an important element essential to securing smart grid systems and networks that NIST had planned to include. Specifically it did not address the risk of combined cyber-physical attacks,” according to the report.
NIST officials said they intend to update the guidelines to address the missing elements and have already drafted a plan to do so.
“Without it, there is increased risk that important cyber-security elements will not be addressed by entities implementing smart grid systems, thus making these systems vulnerable to attack,” according to the report.
At the same time, FERC began a process to consider an initial set of smart grid interoperability and cyber-security standards for adoption. However, FERC hasn’t developed an approach to monitor the extent to which industry follows these standards, the report said, because according to the GAO’s analysis, it has not yet determined whether or how to perform such a task.
“Without a documented approach to coordinate with state and other regulators on this issue, FERC will not be well positioned to promptly begin monitoring the results of any standards it adopts or quickly respond if gaps arise,” the report said.
The GAO recommends that NIST finalize its plan and schedule an update of its cyber-security guidelines to incorporate missing elements, and that FERC develop a coordinated approach to monitor the standards and address any gaps in compliance. Both agencies have agreed with these recommendations.
The report also stated that although challenges remain, progress has been made, such as installing smart grid modernization on homes and commercial buildings that enable communication between the utility and customer.
“Smart grid modernization is an ongoing process,” the report said, and various initiatives continue to ensure safe implementation. | <urn:uuid:2e6b13e6-e227-45c0-ba50-61cc85687421> | CC-MAIN-2017-09 | http://www.govtech.com/technology/Smart-Grid-Cyber-Security-Standards.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171932.64/warc/CC-MAIN-20170219104611-00012-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.947895 | 613 | 2.78125 | 3 |
NEW YORK -- Sixty years from now, we'll look back on today's 3D-printed tissue and organ technology and think it's as primitive as the iron lung seems to us now.
Six decades out, replacing a liver or a kidney will likely be a routine procedure that involves harvesting some of a patient's cells, growing them and then printing them across artificial scaffolding.
Dr. Anthony Atala, director of the Wake Forest Institute for Regenerative Medicine, spoke at the Inside 3D Printing Conference here about where the technology is today, and what hurdles it still must overcome.
The biggest hurdle is being able to 3D print supportive vascular structure so that tissue can receive the oxygen critical to its survival once it's implanted into a patient.
Today, scientists can replicate tissue in small amounts beginning with the simplest: flat human skin tissue. Researchers have been able to create tubular blood vessels, and even parts of hollow organs of the body, such as the bladder. But, it's the solid, hard organs, such as the liver and the kidney, that are more complex and require far more vascular support to recreate successfully.
"We're working very hard to make sure we get there someday," Atala said.
It's not as insurmountable a goal as it may seem. For one, scientists don't need to replicate an entire organ. In fact, up to 90% of an organ can fail before it seriously affects the health of a patient, Atala said.
"Imagine you're playing tennis one weekend and you get chest pains. You go and get an x-ray and they find 90% of your heart vessel is occluded. The fact is you never had chest pain when your heart vessel was 80% occluded," he said. "It's the same thing with kidneys. You don't get into kidney failure until about 90% of your kidney is gone. So you have to burn 90% of your reserve before you get in trouble."
The current strategy of bioengineers is to create enough tissue to boost an organ that's failing while not completely replacing it, Atala said.
Typically, the maximum distance between tissue and the vascular structures that support it is 3 mm. That means that for every 3 mm of tissue, physicians will have to be able to construct capillaries to support it.
Today, 3D bioprinting constructs a series of tissue held up by artificial scaffolding, like the iron beams in a building. First a layer of scaffolding is laid down, and then a layer of cells is laid down on top of that.
In the past, scientists have had to separately create the scaffolding, and then coat it with the living cells. That not only takes longer to complete, but it also places the living cells in danger of dying before they can even be implanted.
3D printing allows the scaffolding and living tissue to be printed together.
In order to construct veins and capillaries, they first print out a tubular construct made of dissolvable material; they then coat the outside of that tube with muscle cells and the inside with venous barrier cells. A heart valve can be constructed in the same way: first by using dissolvable scaffolding, followed by outer muscular cells and interior barrier cells.
Researchers at Wake Forest have even successfully laid down heart cells; the completed tissue then begins to beat.
"Liver structures we print today last 30 days. They secrete urea, albumin and they produce [stem] layers. We're reproducing embryonic development of the kidney," Atala said.
Scaling up that tissue production is the challenge today, Atala said.
3D printed faces and skulls
While printing actual tissue holds the key to someday replacing body parts, another promising area is 3D printed bones. Today, 3D printing is used to create structural support.
For example, if a patient has been in an accident and lost part of their skull, physicians can create an exact 3D virtual image of the patient's skull through CT scans and use it to print out a hard polymer section of the skull. But 3D printers have yet to tackle the printing of actual bone.
At best, 3D printers represent an evolutionary step toward more advanced cranialfacial reconstruction, according to Dr. Amir Dorafshar, co-director of the Facial Transplantation Program at Johns Hopkins University.
Still, 3D printing has advanced surgery by leaps and bounds.
Less than 10 years ago, cranialfacial surgeons would act as artists, spending hours in an operating room trying to put the jigsaw puzzle of a smashed or deformed skull back together.
Through the use of 3D imaging, surgeons can perform the surgery on a computer screen before entering the operating room, affording them an exact model to follow once the scalpel begins its cuts.
Amir said where 3D printing is also evolving craniofacial surgery in its ability to produce exact surgical guides that are placed over the section of a patient's head or face. The plastic guides allow surgeons to make their cuts without fear of mistakes.
"Today, the gold standard is still to take tissue and bone from another part of the patient's body and transplant it," Dorafshar said.
The revolution will be when physicians can print up a bioactive bone scaffolding that includes a vascular structure to supply the living bone with nutrients and oxygen. "Imagine one day when we can recreate vascularized bone," Dorafshar said.
"I can assure you people will look back on these technologies someday and say, "Boy, weren't these primitive," Atala said.
Lucas Mearian covers consumer data storage, consumerization of IT, mobile device management, renewable energy, telematics/car tech and entertainment tech for Computerworld. Follow Lucas on Twitter at @lucasmearian or subscribe to Lucas's RSS feed. His e-mail address is firstname.lastname@example.org.
Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center.
This story, "3D Printing a New Face, or Liver, Isn't That Far Off" was originally published by Computerworld. | <urn:uuid:55b11b73-4392-4838-b06b-db3748d46e9b> | CC-MAIN-2017-09 | http://www.cio.com/article/2377322/hardware/3d-printing-a-new-face--or-liver--isn-t-that-far-off.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00132-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.957068 | 1,273 | 3.46875 | 3 |
How Stuxnet changes the security game
New attacks could cross the 'air gap' to systems not connected to the Internet
- By (ISC)2 Government Advisory Council Executive Writers Bureau
- Mar 21, 2011
In November 1988, the first computer worm indiscriminately propagated through 6,000 Unix systems, or roughly 10 percent of the computer systems on the Internet. Although developed with innocuous intent, this worm had the ability to duplicate itself repeatedly in a given environment, ultimately causing the affected system to fail.
Roughly 22 years later, the Stuxnet worm emerged as a technological advancement with the potential to cause an unimaginable impact. Many have heralded it as a paradigm shift in the cyber threat landscape because of its precision targeting, as opposed to indiscriminate destruction. Instead of attacking every system it enters, Stuxnet is designed to only subvert specific Supervisory Control and Data Acquisition systems.
Stuxnet is not Superworm, researcher says
Stuxnet story is high-profile but still out of reach
Listen to an (ISC)2 podcast on Stuxnet here.
While using common operating systems and networking components, SCADA systems have traditionally been air-gapped from the Internet to pre-empt such attacks. In the case of Stuxnet, the worm was able to bridge this gap through targeting specific systems and using removable media.
Given the potential damage this malware advancement is capable of causing, security professionals need to re-evaluate their perceptions of risk and challenge their preconceived notions regarding segmented networks and critical infrastructure.
In order to stay on the cutting edge of threat advancement, security practitioners traditionally have sought the newest tools and techniques that would provide greater insight into how to build and manage secure networks. Tools employed in recent years to maintain that edge include Intrusion Prevention Systems and Data Loss Prevention suites.
However, it is important to consider that a shift in focus from basic security practices to more sophisticated implementations such as IPS or DLP can often allow the “urgent” to overshadow the “fundamental,” from both a security posture and budgetary standpoint. There is evidence that, although Stuxnet was designed to circumvent our industry’s leading security technology, it might not have been as far-reaching if certain fundamental security controls had been in place.
In order to gain a foothold in targeted networks, Stuxnet used multiple zero-day vulnerabilities and advanced targeting techniques, along with the more traditional methods such as the “USB autorun” feature, “known” and “patched” system vulnerabilities and default passwords in commercial-off-the-shelf (COTS) products.
Although simple mitigation solutions, such as a recently issued Microsoft patch, can turn off the USB autorun feature, this patch might never have been applied to the systems that were affected by Stuxnet.
Critical control systems and those that are not connected to the Internet are often left unpatched, either because it is difficult to fully test the patch and ensure that it will not affect normal operations, or, in the case of systems not connected to the Internet, because it’s assumed that those systems are not subject to outside influence and are therefore protected.
In either case, CIOs should work with their chief information security officers, system architects and operators to ensure that best practices are employed to protect their systems.
Additionally, after it was discovered that Stuxnet was exploiting default passwords, the SCADA system vendor issued guidance to its customers requesting that they refrain from changing default passwords because they were hard-coded into the products and could cause their systems to fail if changed. This recommendation contradicted the traditional practice of changing default passwords upon installation of commercial products.
Security practitioners found themselves in the challenging position of having to decide which risk was greater, knowing that either option could cause significant service outages to critical systems. The use of hard-coded passwords in network or SCADA components can harm a system’s security and may not be easily remediated.
In some instances, if a weakness of this nature is identified in a COTS product used on a network, the product vendor can update the system or develop an alternative method to meet the immediate security needs. In other cases, the "fix" is offered as part of the next system update.
The issues highlighted above represent only a few of the considerations that security practitioners should take into account in light of the Stuxnet attack. Moving into the future, organizations will need to re-evaluate baseline controls and reconsider assumptions about computers not connected to the Internet and the potential consequences of an attack that can traverse the air gap to segmented networks.
Yet another consideration that is not easily remedied is how Stuxnet undermines our underlying trust in sensors for critical systems. Once Stuxnet infects a target system and is operational, it has the ability to intercept and modify sensor signals that would otherwise notify the system operator that an error had occurred.
Without this insight into system anomalies, and with operators effectively blinded, Stuxnet is able to wreak havoc on critical control systems. If this same tactic were employed by other malware in a major enterprise, which data source could be trusted? Would intrusion detection and prevention systems be immune, or would they actually create a false sense of security?
While not comforting for officials charged with deploying and securing information systems, these are the realities that must be addressed in the wake of Stuxnet.
Members of the (ISC)2 U.S. Government Advisory Council Executive Writers Bureau include federal IT security experts from government and industry. For a full list of Bureau members, visit https://www.isc2.org/usgac-ewb. | <urn:uuid:7ea6209b-def8-4481-a5ac-b93dcc5c30fd> | CC-MAIN-2017-09 | https://gcn.com/articles/2011/03/21/commentary-stuxnet-new-threats.aspx?s=security_240311&admgarea=TC_SECCYBERSEC | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171232.43/warc/CC-MAIN-20170219104611-00008-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.953665 | 1,169 | 2.796875 | 3 |
Photon Entanglement over the Fiber-Optic Network
Quantum mechanics with its bizarre and wonderful properties—individual particles that exist in an arbitrary combination of states, entangled particles acting in concert even when separated over long distances—is usually thought of as a world separate from everyday classical physics.
Even Einstein couldn’t quite resolve entanglement with his view of the physical world and in a 1935 paper (along with Boris Podolsky and Nathan Rosen) argued that entanglement violated the locality principle (which states one physical system should have no immediate effect on another spatially separated system). But subsequent theoretical ideas and experiments have verified the existence and nonlocal behavior of entangled particles.
And in the past 20 years, the new field quantum information science has been bridging quantum physics, computer science, optical technology, and communication engineering to harness the power of quantum properties. While the initial drive came from the desire to build a quantum computer capable of vastly outperforming today’s supercomputers, more recent efforts are venturing into more immediately practical applications.
The power of quantum mechanics
Rather than utilizing ordinary bits that exist in only one of two states (0 or 1), quantum information science utilizes qubits (see sidebar) that, first, exist in a superposition of their states 0 and 1 and, second, are capable of interacting with one another. In theory, a computer based on these interacting qubits is capable of doing certain calculations much faster than an ordinary computer that is limited to operating on a bit’s single state. As more qubits are combined, more simultaneous operations are theoretically possible; and for certain calculations (such as factoring), a relatively small number of qubits could conceivably outperform ordinary computers with a million processors, an extraordinary advance in computing.
(Pioneering work in quantum computational theory was done by Peter Shor who, while at AT&T Bell Labs, devised a polynomial time algorithm for factoring large numbers on a quantum computer.)
An important feature of quantum cryptopgraphic systems is that they are impervious to eavesdropping . . .
Qubits can be combined, or entangled, because the mathematical rules of quantum mechanics allow two or more particles—atoms, photons, and ions have all been successfully entangled—to belong to a certain single joint quantum state that is not just the combination of the individual qubit states. A typical example would be a situation in which physical conservation laws constrain two qubits to have the same value so that, when measured, either both qubits are 1 or both qubits are 0.
This is true even though neither qubit has a certain value prior to any measurements, even though the result of the first reading is completely random, and even if the second qubit is removed to a remote location.
Quantum computers are proving very hard to build due to the difficulty of controlling the interactions of multiple qubits and keeping the entanglement state alive long enough to perform calculations. But proposals to employ entanglement for other applications—quantum metrology, quantum lithography—may be closer to reality. Currently the most advanced and promising of these proposals is quantum cryptography.
Harnessing entanglement for secure cryptographic schemes
Modern cryptography depends on the exchange of public and private cryptographic keys that enable two parties to encrypt information. The Achilles' heel of these systems is the safe distribution of a key without it being intercepted by third parties. The security of public-key distribution methods relies on the unproven hardness of math problems such as factoring (see article that describes how ALFP Fellow Adriana Lopez's research is addressing this issue ) using conventional (not quantum) computers. And while private shared keys do offer in principle the possibility of unconditionally secure communication, the key distribution process still does not completely avoid the possibility of eavesdropping or, in the case of physical couriers, immunity to bribes.
Quantum cryptography or, more correctly, quantum key distribution, offers a protocol of creating a pair of private keys secured by the laws of physics. Quantum key distribution does not invoke the transport of a key because, by the nature of the entangled state, the key can be created at the sender and receiver simultaneously.
By repeatedly sending and measuring photon polarization states in a clocked fashion, two users gradually build up identical strings of classical bits (or 0s and 1s) that they can use for encrypting another (parallel) data-stream between them. Since quantum states cannot be known before a measurement, the key is completely random.
An important feature of quantum cryptographic systems is that they are impervious to eavesdropping since the state of entanglement is constantly monitored. Any potential eavesdroppers would unavoidably degrade the entanglement and reveal themselves.
For the past year, AT&T Research has been studying photon entanglement distribution over optical fibers.
The first commercial devices for entangled photon systems (see NuCrypt) are already being sold. Though more research instruments than real network equipment, they are a tangible step to harnessing quantum mechanics. Similar equipment from noncommercial sources (see link) is now being used in actual testbeds (one example is the Tokyo QKD Network; for more information, see the News and Views section of the January 2011 issue of Nature Photonics).
AT&T's experiments into photon entanglement
The vast installed global fiber-optic network, consisting of over a billion meters of optical fiber cables, opens a particularly attractive opportunity for implementing quantum communications protocols that rely on the distribution of entanglement between distant parties.
Currently two major entanglement schemes have been proposed for telecom photons: polarization and time-bin entanglement. Polarization is particularly attractive because of the ease with which the polarization can be handled with standard off-the-shelf components (as a result, equipment for creating and detecting polarization-entangled photons is now commercially available).
However, there has been a long-standing concern in the community that polarization entanglement could be significantly decohered (degraded) during fiber transmission due to two polarization effects in optical fibers: polarization mode dispersion (PMD) and polarization-dependent loss (PDL).
Answers are starting to come. For the past year, AT&T Research has been conducting experiments to find out what happens to polarization entanglement over fiber optic cables with PMD and other network conditions. The equipment for these experiments has been custom-built for AT&T by NuCrypt. This project is a joint effort with two theorists: Dr. Cristian Antonelli (Università dell’Aquila), who collaborates under the auspices of AT&T Virtual University Research Initiative, and Prof. Mark Shtaif (Tel Aviv University), a former AT&T Labs researcher.
On the way to solutions, AT&T researchers are learning more about the fundamental physics of entanglement decoherence . . .
From the outside, the lab setup doesn’t look anything quantum; like ordinary pieces of network equipment, it is housed simply in several black boxes interconnected by strips of fiber and electrical cabling.
One box is an entangled photon source. It creates a pair of entangled photon qubits, separates the paired photons spectrally, and directs each one over a dedicated fiber to one of two single photon detector stations. This process is repeated in a clocked fashion and, as the resulting stream of photon pairs arrives at corresponding detectors, the quantum state of a two-photon state is analyzed using quantum tomography, which completely quantifies a quantum state. By introducing PMD in a controlled way and performing tomography for various levels of PMD in each fiber, researchers thus probe the PMD-induced degradations. While the tomography measurement itself goes quickly (usually a few minutes), the most time is consumed by setting and verifying certain fiber conditions, which are very sensitive to miniscule changes in temperatures as well as other hard-to-control factors.
The primary goal for these experiments is to fully investigate the engineering problems and corresponding solutions that need to be put in place should entanglement-based quantum protocols be someday implemented over AT&T networks. Recent experiments together with developed theory are yielding the first steps to understanding those issues. On the way to solutions, AT&T researchers are learning more about the fundamental physics of entanglement decoherence.
This work, for the first time, found that transmission of polarization entangled photons in optical fibers reveals interplay among several intriguing physical phenomena: entanglement sudden death, the existence of decoherence-free subspaces, and the loss of non-locality. To take the example of sudden death of entanglement: this concept, originally proposed to describe the entanglement dynamics in atomic physics, describes a situation in which the entangled state degrades abruptly and completely in contrast to gradual decay of a single particle state. Interestingly, when PMD is present in one fiber only, the degradation of the entanglement is always gradual. On the other hand, adding some PMD to another fiber could either reduce or increase decoherence depending on the relative orientation of two PMD elements. Sometimes in the latter case the entanglement disappears completely, which is a manifestation of the sudden death arising naturally during photon propagation in fibers.
Remarkably, the use of polarization entanglement in fibers has been debated at numerous conferences in recent years, and the quantum communication community remains split on the subject. AT&T’s work takes a first step towards solving this critical problem, and may have implications across subfield boundaries.
Already experiment results have been presented at various conferences throughout 2010, including the most selective post-deadline session at the Optical Fiber Communication Conference. Journal papers are to follow with more details. The first that appeared in print in 2011 are: Loss of polarization entanglement in a fiber-optic systems with polarization mode dispersion in one optical path (preprint), and Nonlocal PMD compensation in the transmission of non-stationary streams of polarization entangled photons (preprint), and Sudden Death of Entanglement induced by Polarization Mode Dispersion (preprint).
Possible future directions
Future research in this field should encompass the following several directions.
First, PMD in realistic fibers is frequency-dependent. This strong frequency dependence could either kill the entanglement or alternatively revive the entanglement if it is lost. Second, studies of the effect of polarization-dependent loss (PDL) are needed. While the PDL of the fibers is relatively small, a notable amount of PDL is introduced by network elements such as wavelength selective switches, optical add/drop multiplexes, and dispersion compensation devices. It would be interesting to figure out the non-trivial interplay between PDL and PMD. Finally, the other effects, such as nonlinearities from strong classical signals propagating through in the same fiber, could also play a role in entanglement decoherence.
Eventually, the potential effectiveness of, and fundamental impediments to, implementing quantum repeater technology in the fiber-optic link also will need to be explored. This technology, once available, holds the promise of truly exploiting the quantum potential of long-haul fiber optic transmission.
Quantum cryptography and communications hold great promise, but numerous effects need to be understood and various related problems are in need of solutions in the rich research area of entanglement distribution via fiber optics fibers.
What is a qubit?
A qubit (quantum bit) is a representation of a particle state, such as the spin direction of an electron or the polarization orientation of a photon.
A qubit is the quantum equivalent of a bit in ordinary computing.
But where a bit exists in one of two states (1 or 0), a qubit can exist in an arbitrary combination of both states. Physicists describe this as a coherent superposition of two states. This superposition is often represented by a point on a sphere with values 0 and 1 at the sphere poles.
Measurements play an important role in quantum physics.
Once one reads a qubit (or “performs a measurement” in quantum parlance), the qubit collapses into one of the two possible outcome states, with the probability of the particular state depending on the location of the superposition on the sphere.
The result of any particular qubit measurement always remains uncertain until the measurement is performed: quantum mechanics just predicts the probabilities of the outcomes.
What is entanglement?
Entanglement is a fundamental concept in quantum mechanics. When only two particles are entangled, a measurement performed on one is reflected in the other, even when the two are separated by large distances.
(This “spooky action at a distance” bothered Einstein who, along with Boris Pokolsky and Nathan Rose, in a 1935 paper argued that entanglement violated the locality principle, which states that changes performed on one physical system should have no immediate effect on another spatially separated system. Later experiments, however, have verified the nonlocal behavior of entangled photons.)
Researchers have learned to entangle atoms, photons, atomic ensembles, superconducting quantum interference devices, and mechanical vibrations. The majority of experiments are done with light because entangled photons are easier to create and because they preserve their entanglement better than other particles. One drawback of using photons for quantum computing is that photons fly too fast for convenient storage. However, photon speed is not a constraint but an advantage for quantum cryptography.
About the author
Dr. Misha Brodsky joined AT&T Labs in 2000. His contributions to fiber-optic communications focused on optical transmission systems and the physics of fiber propagation, most notably through his work on polarization effects in fiber-optic networks. More recently Misha has been working on quantum communications; single photon detection; where his prime research interest is in photon entanglement and entanglement decoherence mechanisms in optical fibers.
Dr. Brodsky has authored or co-authored over 70 journal and conference papers, a book chapter, and about two dozen patent applications. He is a topical editor for Optics Letters and has been active on numerous program committees for IEEE Photonics Society and OSA conferences. Dr. Brodsky holds a PhD in Physics from MIT. | <urn:uuid:b7831be1-60d3-41df-8fa2-b35f7de5ebbf> | CC-MAIN-2017-09 | http://www.research.att.com/articles/featured_stories/2010_12/201101_Entangled_photons.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171933.81/warc/CC-MAIN-20170219104611-00360-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.919217 | 2,940 | 3.390625 | 3 |
NASA AND NURTURE.
People have always entertained the idea that because signals broadcast from Earth ' such as old radio or TV shows ' are beamed into the cosmos, they could be picked up somewhere in space, where aliens might form an opinion of the human race based on 'I Love Lucy.' But accidental transmissions work the other way, too, with an assist from the Internet. A woman in Palatine, Ill., has been keeping tabs on activity aboard the shuttle Atlantis ' by watching her baby monitor. Natalie Meilinger started receiving black-and-white feeds from Atlantis early in its mission. NASA said the images weren't coming from the shuttle, but the video is available on NASA's Web site, so the monitor could be picking up a wireless feed from somewhere in the neighborhood. No secrets are being revealed, but it still makes one wonder what could accidentally leak out. VIRTUAL HELP.
Researchers at Emory University are testing a treatment method that uses virtual reality to help veterans traumatized by the Iraq war. But the new wrinkle in the study isn't the use of computer simulations. It's combining virtual reality with a drug once used to treat tuberculosis. Scientists are trying to determine if virtual reality, already used in post-traumatic stress disorder treatments, works better with d-Cycloserine, which has been shown to reduce fear, the Associated Press said. ROBO CROSSOVER.
Some people think pending advances in robotics will make it the next big thing. You would have found plenty of people who agree at the recent RoboGames, an event that made both Wired magazine's Best Ten North American Geek Fests and ESPN SportCenter's Top Ten. Check out the action at www.robogames.net. | <urn:uuid:626228ac-5a37-4b75-86c6-8be06d78260d> | CC-MAIN-2017-09 | https://gcn.com/articles/2007/06/22/technicalities_633659424664440761.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172649.58/warc/CC-MAIN-20170219104612-00536-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.947275 | 356 | 2.8125 | 3 |
The Most Common Threats
Malware, or “Malicious Software”, are types of adware or spyware programmed by attackers trying to infect your computer, steal your data, or even gain access to your network. A total of 91.9 million URLs serving malicious code were detected in the fall of 2012. Malware is a generic term for many different types of “infections” that may be downloaded to your computer.
Phishing is a scam where an attacker uses fake or partial information to try to trick someone into revealing passwords and other confidential information, typically via email or social media. LastPass helps protect you against fake-website phishing attacks by not filling your credentials when it does not see a URL or field match.
Viruses are programs that infect software on your computer. When you run this software, it causes the virus to spread throughout your computer. Basically, the virus can replicate itself and continue spreading to other computers (much like a biological virus), causing hardware and software issues.
Worms are programs that replicate and spread through a network, infecting multiple devices. Unlike a virus, a worm does not need to attach itself to an existing software. Worms cause harm to a network, while viruses cause harm to a targeted computer.
Trojans are software that “appear” genuine, and invites the user to run it, but instead, it releases a malicious load that deletes your files and harms your computer. 49% of all Kaspersky Lab threat detections in Q2 of 2012 were multi-functional Trojans.
A backdoor is a method of bypassing normal authentication to illegally gain remote access to the machine and the data on it. It can be installed to computers by Trojans or worms.
Spyware is a software that gathers a users information without their knowledge, and sends this data to third parties.
KeyLogger software captures the keystrokes entered on your computer keyboard. The keylogger software is then able to transmit these keystrokes where they can be viewed. As a prevention against keyloggers, when using a public or “untrusted” computer, LastPass offers the option to input your master password with a ‘Virtual Keyboard’, allowing you to login without using the keyboard to type your master password.
Adware are programs that send advertisements or “pop-ups,” to users based on their internet usage, which can display annoying ads or link you to more malicious software.
Scareware is malware trying to pose as a viable solution to a “fake” virus on your computer. The idea behind Scareware is to “scare” you into installing an antivirus software directly to your computer, which in reality is the virus, and then may hold your data ransom.
Rootkits modify a user’s operating system so a malware can stay hidden.
Spam are bulk emails sent without any consent from the receivers. According to the Electronic Commerce in Canada, 80% of emails sent today are spam.
Apps are a relatively new threat, but their popularity extends the risk they may pose (over 50 billion apps are available for download from the iTunes App Store). Many users believe that apps are safe because they are sold from “trusted” providers, like the iTunes App Store or Google Play Store. However, legitimate apps may be infected and sold through these locations. An example, the Dougalek malicious program, which tens of thousands of people downloaded, led to one of the biggest data breaches ever caused as a result of mobile devices. Also, free apps from unofficial providers are frequently compromised as well.
How Can You Stay Safe?
We want to emphasize that using LastPass makes you safer and that following these practices will further help to improve your security:
- Never tell your LastPass master password to anyone for any reason.
- Always use and make sure your anti-virus, anti-malware, and firewall software are up-to-date.
- Never click on any links in emails unless you specifically requested that the email be sent to you. Even then, if it seems out of character, double-check with the sender before opening a link or attachment.
- Never assume that any email you receive was actually sent by the recipient listed as the sender.
- Avoid using untrusted computers or untrusted computer networks.
- Do not trust any communications claiming to be from LastPass that reveal any personal or confidential information about you whatsoever.
- Use LastPass to automatically fill login credentials for websites you visit to avoid the risk of phishing attacks.
- Always click on the LastPass browser plugin icon to access your LastPass vault, rather than links in any suspicious emails.
- Only download apps from trusted companies, and check all permissions before completing the download.
- Use multifactor authentication for increased security.
In the end, good security is about being proactive and vigilant. What other tips and tricks would you recommend?
Have a question you’d like to see answered by the LastPass team in a blog post? Let us know in comments or send us a note at marketing[at]lastpass.com. If we choose your question, you’ll get a Tshirt! | <urn:uuid:680dccf1-f455-4e86-9261-463d5d04a5bc> | CC-MAIN-2017-09 | https://blog.lastpass.com/2013/06/common-online-threats-and-how-to-protect-yourself.html/?showComment=1370321787064 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00480-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.922197 | 1,091 | 3 | 3 |
The following will also be used as part of my day job. To ITSec types, it’s likely old hat, but it may be new to some of my readers. Some terminology in here is my-job-specific, but I did try to generalize it as much as possible.
What is it?
Firesheep is a readily-available and easily-installed add on for Firefox, a commonly-used web browser. It allows the person using it to, within certain parameters, steal web cookies used for authorization on many common websites, mostly social networking but also, most notably, Hotmail. This is a form of attack commonly known as sidejacking. Note that while this document largely talks about uw-wireless, everything about that network applies equally to any open or unsecured wireless network, such as uw-guest or those commonly found in coffee shops, restaurants, and other places offering free service to customers.
Please note that discussing a tool does not condone its use. In particular, the use of Firesheep on a campus network is a violation of the principles laid down in the Guidelines on Use of Waterloo Computing and Network Resources and, as noted by these Guidelines, is subject to discipline under the appropriate University Policy or Policies. Use of Firesheep here, or anywhere else, may furthermore violate local laws and regulations on privacy, mischief, or wiretapping.
What does it mean?
Firesheep makes it easy for anybody who can click a mouse to access these websites as if they were the person whose credentials they’ve stolen – and that person may not even be aware of the accesses.
How does it work?
The victim has to be on the same wireless access point, using the same wireless network, as the attacker; that implies a certain physical proximity. The wireless network must be open, the victim must be using the network at the same time as the attacker, and the victim must be accessing these sites (but see below: the victim may not necessarily know they were using a particular website, since embedded content can cause problems).
When Firesheep is started, the attacker’s computer starts passively watching the network for authentication credentials. When it sees such, it saves them and presents an icon for the attacker to click; this allows the attacker to easily access the website as the victim.
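To see why an open network makes this so easy, here is a toy sketch (not Firesheep itself) of pulling a session cookie out of a captured plaintext HTTP request. The hostname and cookie values below are invented for illustration:

```python
# Toy illustration of sidejacking: on an open wireless network, an ordinary
# HTTP request travels as cleartext bytes, so anyone capturing traffic can
# read the session cookie directly. Hostname and cookie values are made up.
captured_packet = (
    b"GET /newsfeed HTTP/1.1\r\n"
    b"Host: social.example.com\r\n"
    b"Cookie: session_id=8f3ab2c9d1; logged_in=true\r\n"
    b"\r\n"
)

for header in captured_packet.split(b"\r\n"):
    if header.lower().startswith(b"cookie:"):
        stolen = header.split(b":", 1)[1].strip().decode()
        print("captured credentials:", stolen)
```

Presenting that cookie back to the site is all an attacker needs to impersonate the victim, which is why TLS-encrypted connections (discussed below) defeat this attack.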
It should be noted that a user may inadvertently access a website. Many websites will have badges to follow the site’s author on Twitter, or link to them on Facebook. Sometimes the simple act of loading that website can cause your browser to send and receive authentication cookies, and therefore expose this information to the attacker.
I don’t use social networks.
Firesheep doesn’t only work on social networks, so you still may not be safe. Web services as varied as Evernote, Cisco, eBay, Amazon, and Slicehost have had Firesheep handlers written for them.
Why doesn’t the University stop it from happening?
This attack takes advantage of two weaknesses in the way victims might access vulnerable websites.
The first weakness is that open networks are precisely as the name implies. All clients transmit all data unencrypted using their wireless radio. Like any radio, anybody can listen in. This means that unless the client takes extra precautions to encrypt data, such as using TLS/SSL encryption on the data stream itself, that data is exposed for anybody within range who’s listening. This means that many websites which do encryption properly, such as almost all banking websites, are not vulnerable to authentication theft in the way that Firesheep accomplishes it.
The second weakness is in the way the vulnerable websites perform authentication and authorization. Without getting too technical, these sites rely on cookies, and merely having that cookie implies both that you are who you say you are, and you’re allowed to access the content you’ve requested. Not all websites have this issue; as noted, banking websites don’t rely on this model. Other sites such as GMail and corporate applications at the University of Waterloo don’t either, and so your credentials there are safe from this attack.
What can I do to protect myself?
The simplest thing you can do is not use open wireless networks. These are ones which do not require a key or password to access. uw-wireless is one such network, but many coffee shops and restaurants and other companies provide such networks for the convenience of their customers. This also affords attackers certain conveniences.
The next-best thing is to not use sites vulnerable to Firesheep attack whilst connected to an open wireless network. Be aware that some sites you visit may effectively force you to give up your credentials anyway, as noted under How Does It Work?
Some clients customized for use by some social networks, such as Tweetdeck, may allow these networks to be used in a manner that does not expose authorization credentials to Firesheep. That does not necessarily mean that these clients always operate in a safe manner.
Some people have authored Firefox extensions which could potentially warn you about the use of Firesheep on your network segment. The use of these extensions (the most commonly mentioned are FireShepherd and BlackSheep) is prohibited on University wireless networks, as they have a deleterious effect on the operation of the campus network and, not incidentally, of the remote service. | <urn:uuid:107a28bd-119d-4e94-aa02-1d142d3a2732> | CC-MAIN-2017-09 | http://snowcrash.ca/tag/social-media/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00532-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.94599 | 1,117 | 2.703125 | 3 |
Apparently, solar flares – giant explosions on the sun that send energy, light and high speed particles into space – are back, with a vengeance. There are numerous reports that an intense solar storm – which began when a solar flare exploded from the sun – could affect airline flights and satellite operations. You can see the flare in this movie from the Solar Dynamics Observatory in a combination of light wavelengths. The flare created what NASA calls an earth-directed coronal mass ejection (CME), and NASA's Space Weather Services estimated that it travelled at over 630 miles per second to Earth, causing the solar storm. The storm is driving a wave of charged particles, and these particles are creating interference.
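As a rough sanity check on that 630-miles-per-second figure, here is a back-of-the-envelope travel-time estimate. The mean Sun-Earth distance of roughly 93 million miles is my assumption, not a number given in the reports:

```python
# Rough arrival-time estimate for a CME traveling at the speed cited above.
# The mean Sun-Earth distance (~93 million miles) is an assumed figure.
distance_miles = 93_000_000
speed_miles_per_sec = 630

travel_seconds = distance_miles / speed_miles_per_sec
travel_hours = travel_seconds / 3600
print(f"about {travel_hours:.0f} hours")  # roughly 41 hours, under two days
```

That day-and-three-quarters window is part of why space-weather agencies can issue useful warnings before a storm arrives.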
By the way, you can see video of the CME here, as captured by the Solar Heliospheric Observatory's LASCO C2 camera. This CME was associated with an M3.2 class solar flare.
In this Space.com article, it’s reported that the solar flare unleashed “a plasma wave that may supercharge the northern lights for skywatchers in high latitudes.” And boy did it.
Check out this video on space.com, which captures amazing footage of the aurora borealis in Sweden on January 24th, 2012.
But back to the havoc such storms and flares can wreak…
In this article by msnbc.com staff, it’s reported that airlines were having to change routes for some of their scheduled flights. A strong CME aimed directly at Earth can also cause disruptions to satellites in orbit, as well as power grids and communications infrastructures on the ground. And yes, even data centers.
Back in August 2011, I wrote this blog about how solar explosions could impact earth and, in turn, data centers. I had come across this article, which states that particularly high-powered flares directed at Earth, and their associated solar magnetic storms, known as CMEs, can “create long lasting radiation storms that can harm satellites, communications systems, and even ground-based technologies and power grids.”
NASA and others (such as the National Oceanic and Atmospheric Administration) do monitor for X-class flares and their associated magnetic storms. Another comforting thought is that while the very recent solar flare set off an extremely fast-moving CME, the ejected cloud of plasma and charged particles was not aimed directly at Earth and hit the planet at an angle instead. In short, this CME’s impact was lessened.
But it is yet another reminder that regular and vigilant disaster recovery planning is necessary.
No one wants to have to make the call to his or her boss, alerting them to the fact that a solar flare and resulting CME has taken down the data center and data has been lost, mainly because there were no robust backup power supplies or automated fail-overs. | <urn:uuid:8e8e1e53-17f0-4ec9-9eb4-6367be5b8107> | CC-MAIN-2017-09 | http://www.itworld.com/article/2731835/data-center/intense-solar-storm-electrifies-the-air.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170569.99/warc/CC-MAIN-20170219104610-00177-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.95721 | 591 | 3.109375 | 3 |
Strong Signals: The Ultranet
By John Parkinson | Posted 11-01-2003
The radio frequency identification infrastructure developed at the Auto-ID Center at MIT and being steadily rolled out by early adopters is an interesting departure from recent technology architectures, including both the Internet and the Web. It uses essentially dumb endpoints (the tags) that are cheap to build in huge volume, and a network of sensors (the readers) and servers that contain the intelligence and information (and hence the cost) for making sense of tag data.
That makes RFID an early example of a mesh network, in which network components close together in space are moderately to highly connected to each other (in this case wirelessly), but only loosely connected to components further away. In this architecture, distance matters and the essentially flat connection model of IP (in which you can't tell where anything is physically by looking at its address) gives way to a layered interconnection scheme (in which you route "local" traffic differently from "long-distance" traffic). Think of these differences as similar to the differences between the wired and wireless phone networks.
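One way to picture that layered scheme is a toy routing rule over invented (region, mesh, node) addresses; nothing here reflects the actual Auto-ID implementation, it just illustrates how "local" and "long-distance" traffic get different treatment:

```python
# Toy model of layered mesh routing. Addresses are (region, mesh, node)
# tuples -- an invented scheme, purely to show that traffic between nearby
# nodes is handled differently from traffic between distant ones.
def route(src, dst):
    if src[0] == dst[0] and src[1] == dst[1]:
        return "local mesh hop"        # highly connected neighbors
    if src[0] == dst[0]:
        return "regional relay"        # hand off to a smarter parent mesh
    return "long-distance backbone"    # loose connections between regions

print(route(("east", 3, 7), ("east", 3, 9)))   # local mesh hop
print(route(("east", 3, 7), ("east", 8, 2)))   # regional relay
print(route(("east", 3, 7), ("west", 1, 4)))   # long-distance backbone
```

Contrast this with flat IP addressing, where nothing about an address tells you where the node physically sits.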
The Auto-ID architecture is the purest example to date of an ultrathin client, fat-network model, cleverly architected so that it is essentially fractal (it looks topologically similar at all resolutions) because a local mesh needs only a limited amount of intelligence (computing power) and information to function. By treating a local mesh as a relatively dumb node on a larger-scale, smarter and richer mesh, we can build a very reliable, inherently scalable system architecture without having to know in advance what capabilities and capacity every node eventually will need.
Mesh networks contain a lot of aggregate bandwidth, but most of the individual connections don't provide or consume much of it. Lots of potential local connections make a mesh node hard to destroy (although generally not so hard to disconnect) because there may be few connections to more distant nodes. However, the nature of the aggregate connectedness also makes connection damage easy to locate and repair.
Interestingly, the infrastructure architects at Auto-ID did not require that the endpoints be dumb; indeed, they are happy to accommodate any kind of endpoint, and the RFID world already contains many smart endpoints such as the tracking units on railcars and shipping containers. Smart endpoints also exist outside the RFID world, and it's interesting to speculate how one such endpoint, an emerging class of ultrapowerful, ultraportable computing devices, might fit into the mesh model.
Ever since IBM showed us the Meta Pad ultracompact personal computer in 2001, we've been tracking efforts to bring such devices to the market, and extrapolating what changes in IT infrastructure might result. Two years later we still don't have a commercially viable ultracompact PC, despite a lot of hype and some prototypes from IBM, Tiqit, OQO and others. But it's inevitable that we will get one, perhaps in the first quarter of 2004.
We recently completed a scenario exercise on the capabilities of such a device, most likely to appear in 2006, the outer edge of our weak-signals scenario horizon. We started by assuming a few things: that we wouldn't need much more processing capability than we could get from today's fastest Intel Pentium M processor, and that the device would have only moderate graphics processing requirements. And we made some conservative extrapolations about features, memory density, storage capacity, battery life and weight.
We came up with a device about the size of a paperback book that would fit into a large pocket, include handwriting and voice-recognition, and store around two terabytes of data. It would have 802.11g, Bluetooth and some form of cellular connection as well as audio- and video-processing capabilities and it may be able to act as both a phone and a camera. In basic form (without any special add-ons or software) it would cost less than $1,000.
The device would also have a biometric-based signature that is identity-linked to its user (and probably a restricted-capability, public-use mode as well). It would be "dockable" via both function and capability-specific docking stations as well as an optical-fiber link to a generic docking port, for bulk data transfers that would consume too much wireless bandwidth. It would be its own Web site and transaction processor, blurring the distinction between client and server. And most important, it would be "mesh aware" because it would be its own RFID tag and, from time to time, a global positioning system locator. It could be a tag reader, too, but it doesn't have to be.
Two terabytes is a lot of storage, but is it enough? It's about equal to the total personal storage I have today, although mine, which is about half full, is spread over several highly nonportable NAS RAID 5 volume sets and a dozen desktop PCs, notebooks and tablets. I currently store transaction-level copies of all the external personal records that I have access to (every credit-card transaction I've ever made since 1991, for example), excluding 100GB of health data, but not much else. Judging from what I've stored so far, I can see how you could just about fit a lifetime of critical personal data into two terabytes. But there's a lot you couldn't fit: not every scanned document image (although my entire scanned document archive takes up just 15GB, so it might); not every digital image and home video clip (I've accumulated 200GB of images and video clips so far, and the volume is growing steadily as image quality improves); not every favorite DVD (my collection of digital movie files already adds up to half a terabyte); not every track of every CD I like (100GB of high-definition MP3 and WMA files so far). But many of my favorite 10 percent in most of these categories probably would fit.
If lots of people (say 100 million) had such devices, much of the need for persistent transactional network data storage, such as bank account records and credit bureau databases, could go away. We could simply replace it with a directory-driven, peer-to-peer architecture where transactions occur between small virtual networks of services, including demand (what's wanted), supply (who's got it), verification (who can be trusted) and archiving (maintaining a backup copy of the transaction). In this model I can keep an authoritative record of all my transactions on my own ultraportable device, reconciled periodically with the pertinent counter-parties as needed. Add in ad hoc connections for peer-to-peer workgroups that don't need a server (or in which any member can be the server when one is needed) and I can support collaborative work as well.
There probably will still be circumstances where economies of scale make conventional centralized capacity worthwhile, but these could become an exception rather than the rule.
If the new portable devices can hop from mesh node to mesh node, using whatever connection scheme makes the most sense at the moment, the majority of them will always be in touch with the mesh, and the mesh itself can use the spare capacity of all those connected devices as a dynamic computing grid. No more need for a lot of expensive, centralized computing resources either. And finally, because each "PC" is almost always close to a "fixed" mesh node, it knows where it is, without having to power up its GPS receiver and find its location directly from the satellites. Even if it can't find a fixed node on the mesh, it could probably find a nearby mobile node that can help out with location data. This would save a lot of battery capacity. And if for any reason a device drops off the mesh altogether, it could always get a quick fix from the GPS satellites in order to find its way back.
This is a very different infrastructure approach from anything we have in place today, yet lab-level efforts have been built and seem to work. It won't come into existence overnight, and it will have some engineering challenges that we don't have certain answers to yet. RFID may be the first step, but it won't be the last.
John Parkinson is chief technologist for the Americas at Cap Gemini Ernst & Young. | <urn:uuid:6526ee4c-2662-4664-b017-351b24913846> | CC-MAIN-2017-09 | http://www.cioinsight.com/print/c/a/Past-Opinions/Strong-Signals-The-Ultranet | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172831.37/warc/CC-MAIN-20170219104612-00581-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.956815 | 1,693 | 2.578125 | 3 |
What is Spam mail guide - Email address security
Spam mail is unwanted e-mail: e-mail that a user has not requested and usually knows nothing about. Spam mail, also known as junk or bulk e-mail, is sent by spammers, usually attempting to legally or illegally sell their products or services. Spammers send copies of an e-mail to as many users as they can. Today over 90% of e-mails are spam. This has become a major concern around the world: it takes up individuals' time deleting spam mail, and it also consumes business resources such as bandwidth. Laws have been put into place to prosecute spammers; however, spam mail seems to be on the rise all the time. Spam mail is also known as unsolicited (not looked for or requested) e-mail.
How did they get my e-mail address?
E-mail addresses are collected from a number of sources as below;
- Chat rooms,
- Websites, most commonly when a user purchases a product or service and specifies their e-mail address as part of an online transaction.
- Newsgroups, Forums and Blogs
- viruses which harvest users address books
- And other sources
E-mail addresses are sold on to other spammers. Much spam mail is also sent to unverified, possibly invalid e-mail addresses, in the hope that some of them turn out to be legitimate.
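The harvesting step from websites, forums, and blogs is usually nothing more sophisticated than pattern-matching over scraped page text. A minimal sketch (the addresses below are made up):

```python
import re

# Minimal sketch of address harvesting: scan scraped page text for anything
# shaped like an e-mail address. The page content and addresses are invented.
page = 'Contact <a href="mailto:alice@example.com">Alice</a> or bob@example.org.'
pattern = r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"

print(re.findall(pattern, page))  # ['alice@example.com', 'bob@example.org']
```

This is why guides often suggest obfuscating addresses on public pages (for example writing "name [at] example [dot] com"): it defeats exactly this kind of naive pattern match.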
Basics into avoiding Spam mail
Avoid giving out your e-mail address to anyone, especially the sources mentioned above.
Avoid signing up for free offers and ensure you tick the option, not to participate in receiving future promotions/offers.
Create a second e-mail address that you can use where spam is likely.
Purchase spam protection for your computer. See my Spam product guide.
Check with your ISP, they may provide spam protection.
Also read: Wikipedia's guide to Spam
What is cyber security?
That’s the question that GTreasury, a company that offers a Treasury Management System (TMS), attempts to answer in a recent infographic.
First, a primer on what some of the terms surrounding cyber security mean:
- Spam is electronic junk mail. The term refers to unsolicited, and often unwanted, email.
- Spear Phishing is a type of highly specialized attack against specific targets, used to collect information and/or gain access to systems. A common example of spear phishing is when a cybercriminal initiates an attack on a business to gain access to the network. From there, the criminal is able to send emails that look authentic and can easily make it through spam filters.
- Social Engineering attacks involve manipulating users and employees into offering confidential and sensitive data. These attacks usually use email to create fear and urgency, pushing the unsuspecting victim to share information by opening a malicious file or clicking a harmful link. It is a type of confidence trick for the purpose of information gathering, fraud, or system access. It differs from a traditional “con” in that it is often one of many steps in a more complex fraud scheme.
- Confederates Inside the Target Institutions refers to a group of cybercriminals of which one or more are secretly working from within the targeted institution. These ‘inside men’ can use social engineering, theft, or other nefarious means to aid their colleagues outside the institution.
- Black-Hat Tool Kits are a set of tools whose initial purpose was to provide researchers with the means to test the security of networks and devices. However, they are often used by hackers, “black hats”, to break into networks.
- Single sign-on (SSO) is a method of access control for multiple related, but independent software systems. With SSO a user logs in with a single ID and password to gain access to a connected system or systems without using different usernames or passwords, or, in some configurations, to sign on seamlessly at each system. Conversely, single sign-off is the property whereby a single action of signing out terminates access to multiple software systems.
- IP White Listing is a term for a list of email addresses or IP addresses that are deemed spam free. The list overrides blacklists and spam filters, allowing emails to be delivered to a user's inbox rather than filtered out as spam.
- Multi-Factor Authentication is a type of access control where a user is only granted access after providing several separate pieces of information for authentication. Typically at least two of the following categories: knowledge (something they know), possession (something they have), and inherence (something they are).
- A firewall is a network security system that monitors and controls the incoming and outgoing network traffic based on predetermined security rules. A firewall typically establishes a barrier between a trusted, secure internal network and another outside network, such as the Internet, that is assumed not to be secure or trusted. | <urn:uuid:aed52ffa-5abf-4e1c-a1a7-367e03432972> | CC-MAIN-2017-09 | https://techdecisions.co/it-infrastructure/what-is-cyber-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00349-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.938444 | 610 | 3.46875 | 3 |
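The "possession" factor in multi-factor authentication is very often a one-time-code generator. As an illustration of how such codes work, here is a sketch of the HOTP algorithm from RFC 4226, which underlies most authenticator apps; the secret below is the RFC's published test value, not anything a real service would issue:

```python
import hashlib, hmac, struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """One-time code from a shared secret plus a moving counter (RFC 4226).
    Time-based codes (TOTP) simply use the current 30-second interval as
    the counter value."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226's test secret; a real deployment uses a per-user random key.
print(hotp(b"12345678901234567890", 0))  # 755224
print(hotp(b"12345678901234567890", 1))  # 287082
```

Because the server and the user's device share the secret and compute the same code independently, intercepting one code gives an attacker almost nothing: it expires as soon as the counter (or clock) moves on.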
A biometric is a safety feature based on using “something you are” to confirm your identity. Fingerprints, hand prints and iris scans are all examples of a biometric. All in all, biometrics help to ensure someone is who they claim to be, because they tie identification to one’s physical body, which is the true root of our identities.
The international agreements on electronic passports require a biometric identifier, which is used to verify that the person presenting the passport is really its owner. There is a digital version of the photograph, and, following a European Union directive, all EU countries are working to add protected fingerprint biometrics to the epassport. Germany, for example, already has two fingerprints, one from each hand, in the country’s passport. EU countries are currently conducting cross-border tests of these more advanced epassports.
The best way to find out how you can obtain a biometric passport is to visit the website of your country’s Government. From there, you can find out if your country is issuing biometric passports, and the specific URL for its passport office (outside of the U.S. only). The Web site for the United Kingdom’s passport office, for example, is http://www.ips.gov.uk.
In the United States, visit the government website and look at the “Apply for a US passport” section.
Password Police Cite Evernote Mistakes
Evernote used the wrong security method to store passwords, cryptography experts say. Unfortunately, it's a common error.
Cryptography warning: Knowing how to secure passwords remains tough. Really tough. As in: Businesses should seek ultra-professional help whenever they approach password storage.
More evidence of that fact came to light this week with the announcement from online note-taking service Evernote that attackers had breached its systems and stolen email addresses, usernames, as well as hashed and salted versions of customers' passwords. As a result, the company opted to proactively reset all 50 million users' passwords, and apparently did so before attackers were able to use the stolen data to access any accounts. Furthermore, Evernote says it will accelerate plans to offer users optional two-factor authentication.
Those may be strong measures, but then time wasn't on Evernote's side. Notably, Evernote used the MD5 cryptographic algorithm to secure its passwords, despite numerous security experts saying that MD5 isn't fit for that purpose -- no matter how well it might be salted. That's because MD5 is a cryptographic hash designed for quick data verification, which makes it child's play for an attacker to compromise through brute-force guessing. "When you can do five billion [guesses] per second on one GPU [graphics processing unit], the salting doesn't make that much of a difference," Adam Caudill, a security consultant and software developer, told Ars Technica. "You need something else, something like bcrypt, scrypt, or PBKDF2 to slow things down so you can't do 5 billion [guesses] per second."
Caudill's reference to password hashing refers to the fact that websites don't store encrypted copies of passwords. Instead, they need to run the password -- after adding random data called a salt -- through a one-way cryptographic password hashing algorithm that produces a hash value, which gets stored. The original copy of the password is then discarded. Whenever a user later enters their password into a website or application, the input gets run through the password hash, and the resulting hash is compared to what's been stored. If they match, it means the password entered by the user is legit.
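That store-and-verify flow can be sketched as follows. MD5 appears here only to mirror what the article describes; as the experts quoted above make clear, it is far too fast to be safe for real password storage:

```python
import hashlib, hmac, os

def store_password(password: str) -> tuple[bytes, bytes]:
    """Hash a new password for storage; the plaintext is then discarded.
    MD5 is used only to mirror the scheme described in the article --
    it should never be used for real password storage."""
    salt = os.urandom(16)                 # random per-user salt
    return salt, hashlib.md5(salt + password.encode()).digest()

def check_password(password: str, salt: bytes, stored: bytes) -> bool:
    # Re-run the same hash over the login attempt and compare the results.
    candidate = hashlib.md5(salt + password.encode()).digest()
    return hmac.compare_digest(candidate, stored)   # constant-time compare

salt, stored = store_password("hunter2")
print(check_password("hunter2", salt, stored))   # True
print(check_password("hunter3", salt, stored))   # False
```

The salt stops attackers from using one precomputed table against every user at once, but as Caudill notes, it does nothing to slow down per-password brute forcing when the hash itself is this cheap.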
When Evernote CTO Dave Engberg was confronted in 2011 with criticism, posted in response to a blog he'd written about Evernote's security architecture, that using MD5 wasn't a secure way to handle passwords, he disagreed, saying that "we salt the passwords with a large random value, but the MD5 flaws aren't really relevant to internal password storage," and noting that "the hashed password is never exposed outside of our data center." Except, of course, if attackers breach Evernote's website or database and steal them. At that point, MD5-hashed passwords are trivially easy to crack.
Cryptography may be sexy, but password security too often remains an abstract concept -- until it's too late. As Mozilla software engineer and architect Ben Adida noted on Twitter, "Evernote story is critically important in one respect: security doesn't matter until all of a sudden it does, and then it *really* matters." Indeed, once attackers come calling, any password-implementation failure can facilitate a complete password-security compromise.
More confirmation that password security is routinely mishandled abounds. In June 2012, for example, both LinkedIn and Last.fm disclosed breaches in which attackers obtained their users' passwords, which the sites hashed using SHA1 and no salt. In the case of LinkedIn, attackers obtained 6.5 million password hashes, reported being able to quickly crack 163,267 of them, and were hard at work on the rest.
In January 2012, meanwhile, Zappos warned its 24 million customers to reset their passwords, warning that their personal details and account information, including passwords, had likely been stolen by attackers. But the company said the passwords had been stored "cryptographically scrambled" -- a phrase that security experts dismissed as marketing-speak. A Zappos spokeswoman, however, declined to specify exactly how the passwords had been secured.
"Nobody gets this right," password expert Thomas H. Ptacek, a security researcher with Matasano Security, told security reporter Brian Krebs last year. "I think it's a problem of generalist developers writing password storage systems. They may be good developers, but they're almost never security domain specialists."
What's the problem? "In LinkedIn's case, and with many other sites, the problem is they're using the wrong kind of algorithm," Ptacek said. "They use a cryptographic hash, when they need to use a password hash." Cryptographic hashes are designed to secure environments in which data must move at ultra-fast speeds, for example with IPsec, which encrypts individual packets. In those situations, latency isn't tolerable.
But when it comes to password security, some latency -- even if only measured in milliseconds -- is acceptable, because it means that even if it slightly slows a website's password-verification process for users, it can completely deny an attacker's attempt to decrypt stolen passwords.
"For data integrity, fast algorithms are preferred to allow quick verification," said security architect Brian Keefer (aka Chort) via email. "For password storage there needs to be [a] defense against offline cracking, which means the algorithm needs to be slow."
For attackers, "if you make a password hash take longer, that's murder on them," Ptacek said. | <urn:uuid:581fbccf-4fd1-456a-8b0b-ebac0d3eb365> | CC-MAIN-2017-09 | http://www.darkreading.com/risk-management/password-police-cite-evernote-mistakes/d/d-id/1108987?cid=sbx_byte_related_commentary_byte_news_will_olympics_streaming_video_take_your&itc=sbx_byte_related_commentary_byte_news_will_olympics_streaming_video_take_your&piddl_msgorder=thrd | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171706.94/warc/CC-MAIN-20170219104611-00577-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.958071 | 1,175 | 2.6875 | 3 |
Quartz is the umbrella term for the Mac OS X display layer through which all drawing and screen display is done. Conceptually, the most important parts of Quartz are:
- Quartz 2D is a vector-based drawing API. The functions it provides map nicely to the drawing commands that make up Adobe's PDF page description language. Quartz takes advantage of this to enable nearly anything drawn to the screen to be written to a PDF file, and to help unify printing and screen display under a single API.
- The Quartz Compositor is the engine that combines ("composites") the rasterized output of each application's drawing commands into a single scene.
- The Window Server manages all the windows on the system, tracking the input focus and routing events (e.g., mouse or keyboard actions) to the appropriate applications.
Practically speaking, the Quartz Compositor exists inside the window server, and Quartz 2D is implemented as a series of libraries. Mac OS X has other drawing APIs in addition to Quartz 2D. OpenGL, QuickDraw, and QuickTime are all ways to produce rasterized content that the Quartz Compositor must then add to the screen.
Since the Quartz Compositor only works with rasterized data (i.e., bitmaps), the result of every drawing command must be stored as a bitmap in a memory buffer somewhere. These memory buffers are also called "backing stores," and they exist for every window on the system.
The Quartz Compositor reads all of these backing stores and combines them into a single scene, which is then moved into the frame buffer of the video card. The frame buffer is essentially the backing store for the single "image" that makes up the entire screen.
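The core of that compositing step is the classic "over" blend applied per pixel. This toy function illustrates the operation itself, not Quartz's actual implementation:

```python
def composite_over(dst, src, alpha):
    """Blend one window's pixel over the frame buffer pixel.

    dst and src are (r, g, b) tuples; alpha in [0.0, 1.0] is the window's
    opacity. A compositor applies an operation like this for every pixel
    of every window, back to front, to build the finished scene.
    """
    return tuple(round(alpha * s + (1.0 - alpha) * d) for s, d in zip(src, dst))

# A dark blue background pixel under a 50%-opaque white window:
print(composite_over((0, 0, 128), (255, 255, 255), 0.5))  # -> (128, 128, 192)
```

Doing this for millions of pixels, dozens of times per second, is exactly the kind of bulk arithmetic that later moved to the GPU.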
Conceptually, the system looks like this:
Things get more interesting when we start to consider which pieces of hardware are involved in which parts of the system. Although the conceptual model has remained the same, the implementation has changed radically as Mac OS X has developed.
OpenGL has been completely hardware accelerated from the start. It is, after all, one of the two APIs that modern video cards are designed to accelerate (DirectX being the other one). We'll discuss QuickTime acceleration in a later section. That leaves Quartz 2D and QuickDraw, the two general purpose drawing APIs in Mac OS X.
Quartz 2D is Mac OS X's "native" drawing API. It was created pretty much from scratch for Mac OS X 10.0, and it remains the recommended way to draw to the screen. QuickDraw is the venerable, revolutionary drawing API from classic Mac OS that enabled the first mass-market GUI operating system to run on an 8MHz Motorola 68000 CPU attached to 128KB of RAM. QuickDraw is supported in Mac OS X in order to provide backward compatibility for Carbon applications ported from classic Mac OS.
When Mac OS X was introduced in 2001, the division of labor looked like this.
Quartz in Mac OS X 10.0 - 10.1
The bandwidth numbers are slightly misleading because they represent the state of the art in 2005: a Power Mac G5 with an ATI X800 video card. No such beast existed when Mac OS X 10.0 was introduced. I chose to keep the bandwidth numbers the same to highlight the architectural differences of the various implementations of Quartz. The hardware differences will be a factor eventually, but in the meantime just consider the ratios between the bandwidth numbers rather than their absolute values.
In the Mac OS X 10.0 Quartz implementation, it's clear that the CPU is doing a lot of work. Every major component of the system executes on the CPU, and the CPU is involved with nearly every transfer of data. The sole exception is the hardware-accelerated final drawing to the screen. (Well, to the frame buffer really. The video card handles flushing the frame buffer to the screen.)
Note that only the Quartz Compositor gets direct, hardware-accelerated access to the frame buffer. Applications can't have direct access to the frame buffer because the Quartz conceptual model requires the output from all the applications to be blended together to create the final scene. Everything has to go through the Quartz Compositor.
Since the Quartz Compositor's primary job is to blend the backing stores for each window into a single, cohesive scene, accelerated access to the frame buffer isn't much good until the actual work of compositing is completed. Then, yes, the finished scene can be squirted into the frame buffer with no CPU involvement.
This level of hardware acceleration is actually a step down from what existed in classic Mac OS. The QuickDraw display model used in classic Mac OS was not a compositing engine. Instead, individual applications "owned" the portions of the screen where their windows were visible, and could flip pixels on and off in those areas without worrying about what other applications were doing. Direct frame buffer access was therefore granted to any application that wanted it.
On top of that, contemporary video cards supported what was called "QuickDraw acceleration." QuickDraw drawing commands were translated by the video driver and then sent directly to the video card. The video card would then do the actual drawing, modifying its frame buffer as appropriate to draw the lines, shapes, or whatever else the commands dictated.
Applications running in Mac OS X 10.0 had none of those advantages, and it showed. Mac OS X 10.0 felt slow. The CPU had to work overtime, first to decide what to draw in response to each application's drawing commands, and then to composite the backing stores for each window into a final scene. That's a lot of drawing, a lot of blending, and a lot of traffic going between the CPU and main memory (RAM). Also remember that in the days of 10.0, the bus between the CPU and RAM was 100MHz—if you were lucky! It was painful.
The Quartz software was further optimized in Mac OS X 10.1, and the hardware got marginally faster. (Woo! A 133MHz bus between CPU and RAM!) But the division of labor remained the same until Mac OS X 10.2 Jaguar in 2002. Jaguar's implementation of Quartz looked like this.
Quartz in Mac OS X 10.2 - 10.3
Jaguar effectively moved the Quartz Compositor onto the video card. It did this by turning the window server into an OpenGL application. Each window was treated like a 2D surface, with the backing store as its texture map. Since the video card was already designed to accelerate OpenGL, this refactoring made the Quartz Compositor a perfect fit for the abilities of the GPU.
This hardware acceleration of the Quartz Compositor was called Quartz Extreme. It only worked on video cards that supported arbitrary texture dimensions (since a window backing store can potentially be any size) and required a 2x AGP bus in order for the video card to efficiently read backing stores directly from RAM.
Moving the Quartz Compositor onto the video card reduced Quartz's CPU load tremendously. All that tedious blending of pixels is exactly what the GPU is good at. It also got the CPU out of the business of ferrying pixels from the backing store into the video card. Pushing pixel data from place to place is definitely not a good use of CPU power, especially when front-side bus speeds are still stuck below 200MHz.
The effects of Quartz Extreme were easy to feel. My favorite test was to shake a transparent window on top of a playing QuickTime movie. In order to comply with the conceptual model of Quartz, the area under the transparent window had to be re-composited every time either the window moved or another frame of video was displayed.
Without Quartz Extreme, on what was a fast Mac for the time (a Power Mac G4 800MHz), shaking the transparent window used so much CPU time that the QuickTime movie, starved for cycles, dropped to less than half its full frame rate. Layering five more stationary transparent windows on top of the QuickTime movie while still shaking the top window effectively used 100% of the CPU. The QuickTime movie simply stopped playing entirely: 0fps.
With Quartz Extreme enabled on the same Mac, the QuickTime movie stayed at a rock-solid 24fps the entire time. I actually tried layering on up to 25 windows before I gave up. The frame rate of the QuickTime movie never wavered.
Like Mac OS X 10.1, version 10.3 Panther added software optimizations to Quartz but did not change the division of labor. So, what's next?
With the advent of Quartz Extreme, the CPU no longer had to blend window buffers or push them to the video card. Unfortunately, window buffers don't spring fully formed from the application into the backing store in RAM. Something actually has to draw the contents of each window into the backing store.
A Safari window, for example, is full of text, graphics, buttons, and all manner of things. Sure, it's great to have the finished, fully drawn Safari window blended onto the screen by Quartz Extreme without any involvement from the CPU. But unless the Safari window is transparent (or buried under one or more transparent windows), that task pales in comparison to the task of actually drawing the window contents in the first place.
Just think of the text alone. All those little characters need to be rasterized from their vector font definitions, then sub-pixel antialiased and blended onto the background. This all has to be done by the CPU—even the blending. Remember, Quartz Extreme only accelerates the Quartz Compositor, and the Quartz Compositor only deals with completed window buffers. Any blending that has to be done in order to actually draw one of those window buffers is up to the CPU. Only after all drawing is complete is a window buffer passed off to the Quartz Compositor.
Drawing takes place whenever a window's contents change. Aside from the initial drawing of a window's contents, two other common cases spring to mind: scrolling and resizing windows. Scrolling can cause the entire contents of the window to be redrawn many times a second. Depending on how the contents of a window reflow, resizing can cause the same thing to happen.
Again, Quartz Extreme does nothing to help here. It's just twiddling its virtual thumbs over on the GPU, waiting for the CPU to finish drawing the contents of the window buffer. Then it quickly blends the completed window buffer onto the screen without breaking a sweat and begins waiting for the next iteration of the window's contents to be drawn.
In this situation, drawing speed can be quantified by a "frame rate" (number of images drawn per second) much like in a 3D game. To borrow (and slightly modify the meaning of) some more 3D gaming terminology, in the days before Quartz Extreme, Quartz was often "fill-rate limited." In situations that required large areas of multi-layered blending, the Quartz Compositor simply couldn't compose the scene fast enough to maintain a good frame rate for the entire display.
In the post Quartz Extreme era, drawing is often "CPU bound." Composing the finished scene by blending all the window buffers is no longer the bottleneck. The GPU eats that task for breakfast. Instead, the system spends most of its time waiting for the contents of those window buffers to be produced in the first place.
Clearly, the solution is to dramatically increase drawing speed, but how? To answer that question, take another look at the two earlier Quartz implementation diagrams. In addition to bandwidth and CPU involvement, there's something else to consider about the data flow lines that connect the various parts of the system: the size of the data that must flow down those lines.
The results of the drawing process (rasterized images) are big and require lots of bandwidth. Drawing commands are small. The best solution is to make sure the biggest data travels over the biggest pipes, leaving the small pipes to deal with the smaller data.
The Jaguar/Panther Quartz implementation doesn't look too great in this light.
Quartz in Mac OS X 10.2 - 10.3
The CPU is used to produce rasterized drawing results that are stored in RAM. That process has about 5GB/s of effective bandwidth on a fast Power Mac G5. When the drawing is complete, the video card has to pull the finished image from the backing store in RAM. Although this does not require CPU involvement, the video card can only pull data from RAM at about 2.1GB/s over an 8x AGP bus. This is the lowest bandwidth connection in the diagram, but it is tasked with carrying the largest pieces of data.
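To see why that 2.1GB/s pipe matters, a back-of-envelope calculation helps. The display size, pixel depth, and redraw rate below are assumed purely for illustration:

```python
# Rough, illustrative numbers: a full-screen window buffer on a
# 1920x1200 display at 4 bytes per pixel, redrawn 30 times per second
# (e.g., during live scrolling), pulled across an 8x AGP bus.
width, height, bytes_per_pixel = 1920, 1200, 4
frames_per_second = 30

buffer_bytes = width * height * bytes_per_pixel    # ~9.2 MB per frame
traffic = buffer_bytes * frames_per_second         # bytes per second
agp_bandwidth = 2.1e9                              # 8x AGP, ~2.1 GB/s

print(f"{traffic / 1e6:.0f} MB/s of window-buffer traffic")
print(f"{traffic / agp_bandwidth:.1%} of the AGP bus, for one window")
```

One busy full-screen window already eats a noticeable slice of the bus; several of them, plus synchronization stalls, and the narrow pipe becomes the bottleneck.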
In addition to bandwidth constraints, there is significant synchronization overhead inherent in the Jaguar/Panther implementation of Quartz. When an application finishes drawing to a window backing store, it signals to the Quartz Compositor that its window is ready to be drawn ("flushed") to the screen. To flush a window, the Quartz Compositor has to read the window's backing store from RAM (over that narrow pipe) and then blend it into the scene.
While the window buffer is being read by the Quartz Compositor, the application can't write to it. In other words, the next "frame" of window contents cannot be drawn until the Quartz Compositor is done slurping up the last frame. The reverse is also true: the Quartz Compositor cannot begin reading pixels from the window backing store until the application has finished drawing to it.
The upshot is that, with drawing and compositing running on separate hardware (CPU/GPU) with separate local memory (RAM/VRAM), the resources shared between them (window backing stores in RAM) cause a significant synchronization bottleneck in the drawing system. Combined with the narrow path to the shared resource (relative to the large data it is asked to carry), this effectively limits the maximum frame rate of the drawing system in Jaguar and Panther.
The solution is simple: eliminate both the synchronization and bandwidth bottlenecks by moving Quartz 2D into the part of the block diagram with the fat, red lines: the video card. Enter Quartz 2D Extreme (Q2DE). | <urn:uuid:ebf65977-6d95-4916-a10d-577fd63c6f49> | CC-MAIN-2017-09 | https://arstechnica.com/apple/2005/04/macosx-10-4/13/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170613.8/warc/CC-MAIN-20170219104610-00221-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.944154 | 2,902 | 3.171875 | 3 |
Researchers: Stuxnet Virus Origin Dates to 2007
Duqu, Other Malware Believed Created on Same Platform
The roots of the Stuxnet virus that crippled Iran's nuclear program in 2010 and the related Duqu worm discovered this fall date back to 2007, new research suggests.
Researchers from Kaspersky Lab say at least two other pieces of malware may have been developed on the same computing platform, perhaps by the same individuals.
"Despite the large volume of data obtained - most of which has yet to be published - we still lack the answer to the fundamental question: Who is behind Duqu?" Kaspersky Lab researchers Alexander Gostev and Igor Soumenkov asked in a blog posted Wednesday.
"We believe Duqu and Stuxnet were simultaneous projects supported by the same team of developers," the researchers said.
In terms of architecture, the platform used to create Duqu [dyü-kyü] and Stuxnet is the same, the blog said. The platform is known as Tilded. "Its authors are, for some reason, inclined to use file names which start with '~d,'" Gostev and Soumenkov wrote.
The researchers said they uncovered several other details suggesting that at least one piece of spyware based on the Tilded platform was created in 2007 or 2008, along with other programs developed between 2008 and 2010 whose function remains unclear.
"From the data we have at our disposal, we can say with a fair degree of certainty that the Tilded platform was created around the end of 2007 or early 2008 before undergoing its most significant changes in summer/autumn 2010," the bloggers wrote. "Those changes were sparked by advances in code and the need to avoid detection by antivirus solutions.
"There were a number of projects involving programs based on the Tilded platform throughout the period 2007-2011. Stuxnet and Duqu are two of them - there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing: we're likely to see more modifications in the future."
In October, IT security provider Symantec reported a research lab had discovered on computers in Europe a worm very similar to Stuxnet dubbed Duqu (see New Stuxnet-Like Worm Discovered). The Stuxnet virus was believed to have crippled centrifuges Iran uses to produce enriched uranium that could be used in a nuclear weapon. Speculation is that Israel and/or the United States are behind the development of Stuxnet.
Google Tests Accessible Search Page
Google is testing a version of its popular search engine that would allow those with vision problems to more easily use the site. Called Accessible Search, the page is different from normal Google Web search in that it also evaluates site usability in ranking results.
Complex site designs can make web surfing very difficult for those with disabilities. For example, individuals that use devices to convert text to speech may find it hard to find what they're looking for, says Google Research Scientist T.V. Raman.
"If the information I'm after is on a visually busy page, I have to sort through that page to find the text I want--an extra step that can sometimes be very time-consuming," Raman said. The Google employee, who is blind, leads the project at the Mountain View, Calif. company.
The tweaked Google search engine can be found on Google Labs, the company's testbed for new products. Along with the traditional ranking algorithm, Raman has added an additional layer that inspects the site for usability issues.
To do this, the feature looks at the HTML code behind the page for specific attributes that make it easier for devices like page readers. "It tends to favor pages that degrade gracefully--that is, pages with few visual distractions, and pages that are likely to render well with images turned off," Raman explained.
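Google has not published exactly which attributes the usability layer inspects. Purely as an illustration, a scorer along these lines could penalize markup patterns that tend to trip up text-to-speech tools; the specific checks and weights here are hypothetical:

```python
from html.parser import HTMLParser

class AccessibilityScorer(HTMLParser):
    """Toy scorer: penalize markup that tends to trip up screen readers.

    The real ranking signals are not public; these checks (images
    without alt text, frames, tables used for layout) are just
    plausible examples of "visual busyness" heuristics.
    """
    def __init__(self):
        super().__init__()
        self.penalty = 0

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and not attrs.get("alt"):
            self.penalty += 2      # unreadable to text-to-speech devices
        elif tag in ("iframe", "frame"):
            self.penalty += 3      # hard to navigate linearly
        elif tag == "table":
            self.penalty += 1      # often abused for visual layout

scorer = AccessibilityScorer()
scorer.feed('<p>Hello</p><img src="logo.png"><img src="x.png" alt="logo">')
print(scorer.penalty)  # -> 2: only the alt-less image is penalized
```

A lower penalty would then nudge a page upward in the accessible ranking, alongside the traditional relevance score.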
The customized search engine is built on top of Google Co-op technology, which the company released to allow developers to build search engines that optimize results based on interests or other uses.
Millions could benefit from such work; a 2001 survey found that eight million people have visual impairments and would need some type of assistance in using the Web.
Lighthouse International, a national leader in print and online accessibility, applauded Google's recent efforts. "This is a very important step by Google and other Internet companies. It demonstrates an enlightened understanding of the need to apply sophisticated technology to meet the growing needs of the consumer," commented Lighthouse CEO Tara A. Cortes, PhD, RN. | <urn:uuid:466169e1-6547-4fd4-b2e8-c00a6c7a840e> | CC-MAIN-2017-09 | https://betanews.com/2006/07/20/google-tests-accessible-search-page/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171775.73/warc/CC-MAIN-20170219104611-00273-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.946215 | 421 | 2.875 | 3 |
With all the buzz about Facebook kumbaya-ing with Greenpeace (and announcing earlier this month it will collaborate with the environmental group on clean and renewable energy), and more and more companies heading north for chilly climes to help keep their data center operations more efficiently cooled and green, you’d think everyone has figured out the best way to cool the data center. Fact is, there are still plenty of folks operating with what they’ve got and don’t have any near-term plans to make big changes. But small changes still can and do make a difference.
According to Schneider Electric, basic design and configuration flaws are keeping a lot of data centers from achieving their optimal cooling capacity and preventing them from delivering cool air where it is needed. Recent increases in the power density of newer IT equipment are testing existing data center design limits. The global energy management company says, in a recent white paper, that typical mistakes fall into five areas: airflow in the rack itself; the layout of the racks; the distribution of load; the layout of air delivery vents; and the layout of return vents.
The first, airflow in the rack, relates to whether appropriately conditioned air is presented at the equipment air intake and whether airflow in and out of equipment is unrestricted. According to Schneider Electric, the two key problems that often occur are that the CRAC (computer room air conditioner) air gets mixed with hot exhaust air before it reaches the equipment air intake, and/or the equipment airflow is blocked by obstructions. For the former, the fix is often simple: the use of a blanking panel, which provides a natural barrier that increases the length of the air recirculation path and reduces the equipment's intake of hot exhaust air.
Interestingly, lots of data centers omit blanking panels, despite recommendations from all major IT equipment manufacturers.
Rack layout is another critical design element affecting cooling: it determines whether air of the appropriate temperature and quantity is available at the rack, and it should separate the hot exhaust air from the equipment intake air (much as blanking panels do within the rack). Schneider Electric says that by placing racks in rows and reversing the direction that alternate rows face (the hot-aisle/cold-aisle design), recirculation can be dramatically reduced. But there are folks that still put racks in rows all facing the same direction – a design flaw that causes significant recirculation and will most likely create hot spots.
The third area, load distribution, is well known. The location of loads can stress data center performance and give rise to hot spots where high-density, high-performance servers are packed into one or a few racks. To counteract those hot spots, operators often lower temperature set points or add CRAC units. It is better to spread the load out where feasible.
Finally, the layout of air delivery and return vents is critical. Air conditioning performance is maximized when the CRAC output air temperature is highest, according to Schneider Electric, and in an ideal-world data center with zero recirculation, the CRAC output temperature would be the same 68-77°F (20-25°C) desired for the computer equipment. But this doesn't happen in the real-world data center, so the CRAC set point must be set lower than would otherwise be necessary to maintain the desired equipment intake temperatures. Although the CRAC temperature set point is dictated by the design of the air distribution system, the vendor notes, the humidity may be set to any preferred value. Setting humidity too high detracts from the air cooling capacity of the CRAC unit – which must power up dehumidification functions that cut into its cooling abilities – and humidifiers then have to be added to replace the water removed from the air by dehumidification (and humidifiers are themselves a significant source of heat, which then needs to be cooled and further detracts from the capacity of the CRAC unit).
Schneider Electric offers a lot of other tips in the white paper, titled Power and Cooling Capacity Management for Data Centers, which can be viewed here. | <urn:uuid:622d5d2a-dd41-4ff7-8b5c-5eac1a73abd3> | CC-MAIN-2017-09 | http://www.itworld.com/article/2733381/data-center/are-you-cool-with-your-data-center-cool-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00449-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.925937 | 831 | 2.546875 | 3 |
There is a lot to say about MPLS and how it works. Here are some simple introductory notes about it, but from only one perspective: the customer side. There is nothing here about BGP and everything that must be configured for MPLS to function inside the ISP cloud. As an introduction, this text looks at MPLS from the central office and branch side of the configuration, which makes it simpler to explain and an easier entry into MPLS networking technology.
In MPLS networks, packets are sent with a special MPLS header inserted before the IP packet data. This header is sometimes called the label stack. It consists of one or more labels, and each label stack entry carries these fields:
- 20-bit label value
- 3-bit traffic-class field, important for quality of service – QoS
- 1-bit bottom-of-stack flag
- 8-bit time-to-live – TTL field
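Those fields pack into a fixed 32-bit label stack entry. A quick sketch of the layout defined in RFC 3032:

```python
def pack_label_entry(label: int, traffic_class: int, bottom: bool, ttl: int) -> int:
    """Pack one 32-bit MPLS label stack entry (RFC 3032 layout):
    20-bit label | 3-bit traffic class | 1-bit bottom-of-stack | 8-bit TTL."""
    assert 0 <= label < 2**20 and 0 <= traffic_class < 8 and 0 <= ttl < 256
    return (label << 12) | (traffic_class << 9) | (int(bottom) << 8) | ttl

def unpack_label_entry(entry: int):
    """Reverse the packing: return (label, traffic_class, bottom, ttl)."""
    return (entry >> 12, (entry >> 9) & 0b111, bool((entry >> 8) & 1), entry & 0xFF)

entry = pack_label_entry(label=16, traffic_class=5, bottom=True, ttl=64)
print(unpack_label_entry(entry))  # -> (16, 5, True, 64)
```

An MPLS switch only ever reads and rewrites these 32-bit entries; it never has to parse the IP header behind them.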
The only thing MPLS switches examine is the label stack; in the MPLS core there is no need to consult an IP routing table. MPLS has become more and more popular in networking, and these days it is used in almost every network that previously relied on Frame Relay or ATM for connecting remote branches. ATM is a thing of the past, mostly because of the overhead in its cell headers. Frame Relay virtual circuits (VCs), which can connect only two endpoints, have also become too expensive and consequently unpopular. MPLS, on the other hand, offers simplicity and speed at a lower price.

The technology behind MPLS is based on prefixing entire packets with an MPLS header. An MPLS network can connect an unlimited number of sites as virtual networks in one MPLS cloud. We can also say that, from the customer's perspective, there are no virtual circuits in an MPLS network.

MPLS is not only faster than other technologies but is a big improvement over Frame Relay and ATM in other ways. The biggest improvement is that each remote local network can be directly connected to all other locations without the need for PVCs – permanent virtual circuits. This means that every branch office connected with MPLS is able to communicate directly with every other branch office without routing the traffic through the central office location. If we want to implement a VoIP solution, this is a big deal. We all know that VoIP's biggest enemy is delay, even more than a slow link.
If you take a peek inside the ISP (or Internet) cloud, you will of course find MPLS paths within it. The communication paths in the cloud and their configuration can get pretty complicated, but from the other, customer perspective, there are no virtual circuits to support.

But the branches are communicating, and there are no dedicated links connecting them – is that normal, or have we missed something? If we send something from one place to another across the MPLS cloud, IP addressing tells us the next hop toward the destination. How is this possible when a line at the beginning of this text says that MPLS has no need for a routing table?
The truth is that your own routing table is not what carries the communication across the WAN. The WAN technology, the BGP routing, and everything else inside the cloud is provided by the ISP. That is what the name stands for: they are providing us a virtual network for communication between distant branches. The provider looks at labels to make MPLS work, and our own network has no labels.

The provider simply takes our packet and attaches a label to it. After that, the packet is forwarded through the cloud with the help of that label. Your private LAN can use the same subnet as some other company that uses the same provider and the same cloud for communication (for example, 192.168.10.0/24). Customers from that other company who are also using the 192.168.10.0/24 subnet are not able to connect to our routers, because their packets carry their own labels, and those labels are different from ours. It is somewhat similar to a VPN: customers can see their own equipment but nothing else, even though they are connected through the same shared infrastructure. Be careful, though – MPLS networks are not virtual private networks, or VPNs. MPLS involves no encryption.

The best thing about MPLS, besides its transparent functionality and simplicity, is the support for QoS. The traffic-class field in the MPLS label stack allows MPLS networks to support classes of service. The support for a priority queue makes MPLS the best choice for companies that run VoIP across their WAN links. Best of all, you don't need to configure MPLS itself for this to work – the provider's equipment is where that magic is done.
There are still some things you will need to know for most implementations of MPLS – at minimum, some knowledge of BGP and QoS and how to configure them. Here is a sample configuration for the router at our branch office that will use MPLS to connect our LAN segment to the LAN segments at other locations.

First, a little QoS configuration, in which VoIP RTP traffic and call control get first priority and everything else lands in the default queue. In this example the priority queue can use no more than 60 percent of the link, while call control gets 5 percent:
class-map match-any VoIP-RTP
 match ip dscp ef
class-map match-any VoIP-Call-Control
 match ip dscp cs3
 match ip dscp af31
policy-map MPLS-QoS
 class VoIP-RTP
  priority percent 60
 class VoIP-Call-Control
  bandwidth percent 5
 class class-default
  fair-queue
Here’s the configuration for the MPLS link. Notice that there’s nothing MPLS-specific in this configuration:
interface Serial0/2/0
 description [ Branch 10 MPLS ]
 ip address 10.255.10.2 255.255.255.252
 encapsulation ppp
 auto qos voip trust
 service-policy output MPLS-QoS
Here’s the inside Ethernet interface:
interface FastEthernet0/0
 description [ Branch 10 LAN ]
 ip address 10.10.10.1 255.255.255.128
 duplex auto
 speed auto
 auto qos voip trust
 service-policy output MPLS-QoS
Next, I’ll add a loopback address for routing, which will be useful for VPN failover (not shown here):
interface Loopback0
 description [ Branch 10 Loopback ]
 ip address 10.10.10.129 255.255.255.255
Finally, we need some BGP configuration so that we can learn and advertise routes through the cloud:
router bgp 65035
 no synchronization
 bgp log-neighbor-changes
 network 10.10.10.0 mask 255.255.255.128
 network 10.10.10.129 mask 255.255.255.255
 aggregate-address 10.10.10.0 255.255.255.0 summary-only
 neighbor 10.255.255.1 remote-as 65035
 neighbor 10.255.255.1 update-source Serial0/3/0
 neighbor 10.255.255.1 version 4
 neighbor 10.255.255.1 prefix-list Aggregate out
 no auto-summary
!
ip prefix-list Aggregate seq 5 permit 10.10.10.0/24
An attack using the SMB file sharing protocol that for over a decade has been believed to work only within local area networks can also be executed over the Internet, two researchers showed at the Black Hat security conference.
The attack, called an SMB relay, causes a Windows computer that’s part of an Active Directory domain to leak the user’s credentials to an attacker when visiting a Web page, reading an email in Outlook or opening a video in Windows Media Player.
Those credentials can then be used by the attacker to authenticate as the user on any Windows servers where the user has an account, including those hosted in the cloud.
In an Active Directory network, Windows computers automatically send their credentials when they want to access different types of services like remote file shares, Microsoft Exchange email servers or SharePoint enterprise collaboration tools. This is done using the NTLM version 2 (NTLMv2) authentication protocol, and the credentials that get sent are the computer and user name in plain text plus a cryptographic hash derived from the user's password.
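The shape of that exchange can be sketched as below. This is a simplified illustration of the published MS-NLMP computation, not a working client: the NT hash (MD4 of the UTF-16LE password) is taken as an input because MD4 is not reliably available in Python's hashlib, and the client blob format is glossed over.

```python
import hmac
import hashlib

def ntlmv2_response(nt_hash: bytes, user: str, domain: str,
                    server_challenge: bytes, blob: bytes) -> bytes:
    """Sketch of the NTLMv2 computation (per MS-NLMP).

    The point: what goes over the wire is an HMAC keyed by a
    password-derived hash -- exactly the value that relay and
    hash-cracking attacks target.
    """
    # NTLMv2 key: HMAC-MD5 over uppercase user + domain, keyed by the NT hash.
    v2_key = hmac.new(nt_hash,
                      (user.upper() + domain).encode("utf-16-le"),
                      hashlib.md5).digest()
    # Response: HMAC-MD5 over the server challenge and client blob.
    return hmac.new(v2_key, server_challenge + blob, hashlib.md5).digest()

resp = ntlmv2_response(b"\x00" * 16, "alice", "CORP", b"\x11" * 8, b"blob")
print(resp.hex())  # 16-byte HMAC-MD5 proof sent to the server
```

A relay attacker never needs to reverse this value; forwarding it to another server within the handshake window is enough to authenticate as the user.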
In 2001 security researchers devised an attack called SMB relay where attackers can position themselves between a Windows computer and a server to intercept credentials and then relay them back to the server in order to authenticate as the user.
It was believed that this attack worked only inside local networks. In fact, Internet Explorer has a user authentication option that is set by default to “automatic logon only in Intranet zone.”
However, security researchers Jonathan Brossard and Hormazd Billimoria found that this option is ignored and the browser can be tricked to silently send the user’s Active Directory credentials—the username and password hash—to a remote SMB server on the Internet controlled by the attackers.
They tracked the issue down to a Windows system DLL file that is used not just by Internet Explorer, but by many applications that can access URLs, including Microsoft Outlook, Windows Media Player, as well as third-party programs.
When an URL is queried by these applications, the DLL checks for the authentication setting in registry, but then ignores it, the researchers said in their presentation at the conference in Las Vegas.
This is true for all supported versions of Windows and Internet Explorer, making it the first remote attack for the newly released Windows 10 and Microsoft Edge browser, Brossard said.
“We’re aware of this matter and are looking into this further,” a Microsoft representative said Thursday via email.
Once attackers have the user’s credentials, there are several ways in which they can be used, according to Brossard.
In one scenario, they could use an SMB relay attack to authenticate as the victim on servers hosted outside of the user’s local network by using a feature known as NTLM over HTTP that was introduced to accommodate network expansions into cloud environments. In this way they could obtain a remote shell on the server which could then be used to install malware or execute other exploits.
If the remote server is an Exchange one, the attackers could download the user’s entire mailbox.
Another scenario involves cracking the hash and then using it to access a Remote Desktop Protocol server. This can be done using specialized hardware rigs or services that combine the power of multiple GPUs.
A password of eight characters or fewer can be cracked in around two days. Cracking an entire list of stolen hashes would take the same amount of time, because all possible character combinations are tried as part of the process, he said.
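The two-day figure can be sanity-checked with back-of-the-envelope arithmetic. The guess rate below is an assumption chosen to be plausible for a multi-GPU rig of the era, not a number from the researchers:

```python
# Rough brute-force estimate for an NTLM-family password hash.
charset = 95                # printable ASCII characters
max_len = 8                 # passwords of 8 characters or fewer
# Total candidates across lengths 1..8
total = sum(charset ** n for n in range(1, max_len + 1))
guesses_per_sec = 40e9      # assumed rate for a multi-GPU cracking rig
days = total / guesses_per_sec / 86400
print(f"{total:.3e} candidates, ~{days:.1f} days")
```

Because every candidate is hashed once and compared against all stolen hashes at the same time, a whole list takes no longer than a single hash, which is why bulk credential theft scales so well for attackers.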
Stealing Windows credentials over the Internet could also be useful for attackers who are already inside a local network, but don’t have administrator privileges. They could then send an email message to the administrator that would leak his credentials when viewed in Outlook. Attackers could then use the stolen hash to execute SMB relay attacks against servers on the local network.
There are several methods to limit such attacks, but some of them have significant drawbacks.
Enabling an SMB feature called packet signing would prevent relay attacks, but not the credential leaking itself or attacks that rely on cracking the hash, Brossard said. This feature also adds a significant performance impact.
Another feature that could help is called Extended Protection for Windows Authentication, but it is hard to configure, which is why it’s not usually enabled on corporate networks, the researcher said.
Microsoft recommends using a firewall to block SMB packets from leaving the local network. This would prevent credential leaks, but is not very practical in the age of employee mobility and cloud computing, according to Brossard. The researcher feels that a host-based filtering solution would be more appropriate.
The firewall integrated into Windows can be used to block SMB packets on ports 137, 138, 139 and 445 from going out on the Internet, but still allow them on the local network so it doesn’t break file sharing, he said. | <urn:uuid:b02772fe-b4c2-4b29-a455-42b062990c70> | CC-MAIN-2017-09 | http://www.itnews.com/article/2966135/researchers-find-way-to-steal-windows-active-directory-credentials-from-the-internet.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173761.96/warc/CC-MAIN-20170219104613-00625-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.942011 | 1,011 | 2.5625 | 3 |
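The host-based filtering idea Brossard favors can be illustrated as a small policy check: outbound traffic to the SMB/NetBIOS ports is permitted only when the destination lies on the local network. The subnet below is an example value; a real filter would read it from the host's configuration.

```python
import ipaddress

SMB_PORTS = {137, 138, 139, 445}
LOCAL_NET = ipaddress.ip_network("192.168.1.0/24")  # example local subnet

def allow_outbound(dst_ip: str, dst_port: int) -> bool:
    """Permit SMB/NetBIOS only inside the local network; all other traffic passes."""
    if dst_port not in SMB_PORTS:
        return True
    return ipaddress.ip_address(dst_ip) in LOCAL_NET

print(allow_outbound("192.168.1.20", 445))  # local file sharing still works
print(allow_outbound("203.0.113.7", 445))   # SMB toward the Internet is blocked
print(allow_outbound("203.0.113.7", 443))   # ordinary HTTPS is unaffected
```

This is the per-host equivalent of the perimeter rule Microsoft recommends, with the advantage that it travels with the laptop rather than staying behind at the office firewall.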
Now that we have reviewed the evolution of servers and data center technologies, we are ready to look more closely at the architecture of a typical data center. Pictured below are the various elements in a representative format, resembling some designs that I have created over the years.
As we have discussed in some detail, servers lie at the heart of the data center, performing numerous tasks and providing applications to the enterprise they serve. These may be a mix of rack-mounted and blade servers, though the trend toward greater server density favors a blade-based infrastructure.
Mentioned only briefly earlier, large-scale storage arrays provide massive data archiving capabilities, often on completely separate networks running Fibre Channel protocols. Data transfer in this type of network takes place at the block level, which is intolerant of loss or delay. With specialized Fibre Channel switches and unique server adapters called Host Bus Adapters (HBAs), this adds another layer of complexity to the overall infrastructure. This arrangement is referred to as a Storage Area Network (SAN).
From a design perspective, the preferred method of network management is to make use of an out-of-band (OOB) network that is logically, or even physically, separate from the network carrying user data (production traffic). Many network devices actually have dedicated Ethernet management ports to facilitate this exact structure. In a physically separate management network, a separate set of switches connects the OOB interfaces, in essence requiring another network for this purpose.
Satellites come to the rescue when ground systems fail
Data now drives satellite communications in disaster response
- By William Jackson
- Oct 28, 2010
Satellites deliver the fallback system for emergency responders in areas where disasters have destroyed or damaged terrestrial infrastructure.
In the past, satellite communications principally facilitated voice traffic, but that is rapidly changing. “We are definitely doing more data than voice,” said Jack Deasy, civil programs director at satcom provider Inmarsat. “Data is what is driving the industry.”
The first large-scale demonstration of that shift was during the response to the Haiti earthquake in January, which left much of the island nation’s communications infrastructure in ruins.
For the first two weeks, many response teams relied almost exclusively on mobile satellite terminals for communications using Inmarsat’s Broadband Global Area Network (BGAN) service. The terminals have a throughput of 200 to 400 kilobits/sec, which was adequate for voice and more than adequate for e-mail, text messages, tweets and other data services that rescuers relied on to share information and tap expertise across the world.
The shift to data is a reflection of the increasingly mobile, connected lives people live, Deasy said. “As the world moves toward wireless connectivity, people want that capability everywhere,” especially in disaster areas.
That includes government users. “The government is often the early adopters, and they are big users, especially for mobility,” he said. About 40 percent of Inmarsat’s revenue is from government customers, and the United States is its largest customer.
The satellite industry has a 10- to 15-year lead time for fielding new systems, and Inmarsat bet in the 1990s, when it began designing the fourth-generation satellites that support BGAN, that IP data connections would become increasingly important.
The BGAN satellites launched in 2005 and 2006, and the service became fully operational in 2008. Operating in the L Band spectrum, at 1.5 GHz, it enables voice and data communications through a laptop-sized terminal that a user can set up in minutes to establish a shared 500 kilobits/sec IP channel. Voice codecs use about 4 kilobits/sec, so there also is plenty of room for data in a channel.
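A quick calculation shows why that shared channel leaves so much room for data alongside voice. The simple division below ignores protocol overhead, so it is an upper bound rather than a field measurement:

```python
channel_kbps = 500        # shared BGAN IP channel
voice_codec_kbps = 4      # per-call voice codec rate
max_calls = channel_kbps // voice_codec_kbps
print(max_calls)          # theoretical concurrent voice streams
```

Even with a handful of simultaneous calls active, hundreds of kilobits per second remain for the e-mail, text and social-media traffic responders relied on in Haiti.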
BGAN uses three satellites in geosynchronous orbit over the equator, each about the size of a double-decker bus with solar panels about 100 yards across. Inmarsat is preparing to deliver more bandwidth for mobile IP in its fifth generation of satellites and services. It is spending $1.2 billion for an Earth station and a new fleet of satellites that Boeing is building.
The new satellites will operate in the Ka Band, from 26.5 GHz to 40 GHz, and are supposed to be able to support throughput of 50 megabits/sec to a small terminal. The satellites are expected to launch in 2013 and 2014.
William Jackson is a Maryland-based freelance writer. | <urn:uuid:dbf48a7e-1abd-478b-b3e6-7347ce099cf9> | CC-MAIN-2017-09 | https://gcn.com/articles/2010/11/01/telework-sidebar.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173872.97/warc/CC-MAIN-20170219104613-00321-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.947381 | 661 | 2.921875 | 3 |
The time-honored hierarchy of technology access security has always been:
- something you have (a physical key)
- something you know (a username/password)
- something you are (a fingerprint)
When it comes to education, academic integrity has always been of the utmost concern. How can we be sure that students who are taking (and passing!) tests are actually who they say they are? Student identity verification is more challenging in the online learning environment than in a face-to-face environment. Many programs still rely on the typical username/password protection. Others have implemented live remote proctoring via webcam, despite a number of logistical, financial, and in some cases legal challenges related to doing so.
Biometrics going mainstream via the iPhone 5S holds a lot of promise, not just for the owners of high-tech gadgets, but for academia as well. With Propero, our self-paced, online learning service, students in select courses will soon be asked to verify their identity by drawing a password using their mouse, stylus or touchscreen. This gesture biometrics technology works on any device (just as our students do!) and analyzes not just what is drawn, but how it’s drawn, including speed, angle, height, width and direction. | <urn:uuid:133b2a01-1879-483b-9747-02376a13a76d> | CC-MAIN-2017-09 | https://www.biosig-id.com/about/press-releases/169-biometrics-in-education-preserving-integrity-in-online-learning | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173872.97/warc/CC-MAIN-20170219104613-00321-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.946595 | 264 | 2.53125 | 3 |
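A simplified view of how gesture-biometric verification might score a login attempt: the drawing is reduced to a feature vector (speed, angle, height, width, direction) and compared to the enrolled template. The feature set, normalization, distance metric and threshold below are illustrative assumptions, not the vendor's actual algorithm.

```python
import math

def distance(sample, template):
    """Euclidean distance between normalized gesture feature vectors."""
    return math.sqrt(sum((s - t) ** 2 for s, t in zip(sample, template)))

def verify(sample, template, threshold=0.15):
    """Accept the login when the drawing is close enough to the enrolled template."""
    return distance(sample, template) <= threshold

# Feature order: [avg speed, avg angle, height, width, direction], normalized to 0..1
enrolled = [0.62, 0.30, 0.55, 0.70, 0.10]   # template captured at enrollment
genuine  = [0.60, 0.32, 0.54, 0.68, 0.11]   # same user, slight natural variation
impostor = [0.35, 0.70, 0.40, 0.90, 0.55]   # right shape, wrong dynamics

print(verify(genuine, enrolled))    # accepted
print(verify(impostor, enrolled))   # rejected
```

The key point the example captures is that an impostor who has seen the drawn password can reproduce *what* was drawn but not *how*, so the dynamic features still separate the two attempts.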